From 63f30b086a4c517c6f43467920678b094a79de65 Mon Sep 17 00:00:00 2001 From: Vineeth Pothulapati Date: Tue, 11 Feb 2020 04:15:53 +0530 Subject: [PATCH] Sync up between dev-1.18 and master branches (#19055) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Fixed outdated ECR credential debug message (#18631) * Fixed outdated ECR credential debug message The log message for troubleshooting the kubelet auto-fetching ECR credentials issue has changed (noticed since 1.14), and the new message reads like this when the verbose log level is set to 3: - `aws_credentials.go:109] unable to get ECR credentials from cache, checking ECR API` - `aws_credentials.go:116] Got ECR credentials from ECR API for .dkr.ecr.us-east-1.amazonaws.com` This is based on the kubelet source code: https://github.com/kubernetes/kubernetes/blob/release-1.14/pkg/credentialprovider/aws/aws_credentials.go#L91 This PR fixes that and avoids confusion for more people who are troubleshooting the kubelet ECR issue. * Update content/en/docs/concepts/containers/images.md Co-Authored-By: Tim Bannister Co-authored-by: Tim Bannister * Fix deployment name in docs/tasks/administer-cluster/dns-horizontal-autoscaling.md (#18772) * ru/docs/tutorials/hello-minikube.md: sync with English translation. (#18687) * content/ru/docs/concepts/_index.md: use English names for kinds. (#18613) * Fix French typo in "when" section (#18786) * First Japanese l10n work for release-1.16 (#18790) * Translate concepts/services-networking/connect-applications-service/ into Japanese (#17710) * Translate concepts/services-networking/connect-applications-service/ into Japanese * Apply review * Translate content/ja/docs/tasks/_index.md into Japanese (#17789) * add task index * huge page * ja-docs: Update kops Installation Steps (#17804) * Update /ja/docs/tasks/tools/install-minikube/ (#17711) * Update /ja/docs/tasks/tools/install-minikube/ * Apply review * Apply review * Update content/ja/docs/tasks/tools/install-minikube.md Co-Authored-By: inductor * Update content/ja/docs/tasks/tools/install-minikube.md Co-Authored-By: inductor * Translate tasks/configure-pod-container/assign-cpu-resource/ in Japanese (#16160) * copy from content/en/docs/tasks/configure-pod-container/ to ja * translate assign-cpu-resource.md in Japanese * Update content/ja/docs/tasks/configure-pod-container/assign-cpu-resource.md Co-Authored-By: inductor * Update content/ja/docs/tasks/configure-pod-container/assign-cpu-resource.md Co-Authored-By: Naoki Oketani * Update assign-cpu-resource.md Here, unlike the same words elsewhere in the text, *request* and *limit* refer to YAML fields, so they are left untranslated * fix translation "Pod scheduling is based on requests."
For that passage: it is true that scheduling is based on requests, but a literal translation leaves it ambiguous what the requests refer to, so the target is described concretely. * Translate concepts/workloads/controllers/deployment/ in Japanese #14848 (#17794) * ja-trans: Translate concepts/workloads/controllers/deployment/ into Japanese (#14848) * ja-trans: Improve Japanese translation in concepts/workloads/controllers/deployment/ (#14848) * ja-trans: Improve Japanese translation in concepts/workloads/controllers/deployment/ (#14848) * ja-trans: Improve Japanese translation in concepts/workloads/controllers/deployment/ (#14848) * little fix (#18135) * update index (#18136) * Update /ja/docs/setup/_index.md (#18139) * Update /ja/docs/tasks/tools/install-kubectl/ (#18137) * update /docs/ja/tasks/tools/install-kubectl/ * fix mongon * apply review * Update /ja/docs/reference/command-line-tools-reference/feature-gates/ (#18141) * Update feature gates * tidy up feature gates list * translate new lines * table caption * blank * する -> します * apply review * fix broken link * Update content/ja/docs/reference/command-line-tools-reference/feature-gates.md Co-Authored-By: Naoki Oketani * update translation * remove line * Update content/ja/docs/reference/command-line-tools-reference/feature-gates.md Co-Authored-By: Naoki Oketani * rollpack * Update /ja/docs/concepts/services-networking/service/ (#18138) * update /ja/docs/concepts/services-networking/service/ * Update content/ja/docs/concepts/services-networking/service.md Co-Authored-By: Naoki Oketani * Update content/ja/docs/concepts/services-networking/service.md Co-Authored-By: Naoki Oketani * Update content/ja/docs/concepts/services-networking/service.md Co-Authored-By: Naoki Oketani * Update content/ja/docs/concepts/services-networking/service.md Co-Authored-By: Naoki Oketani * consider Endpoints as a Kubernetes resource * full * Update content/ja/docs/concepts/_index.md (#18145) * Update concepts * control plane * apply review * fix bold (#18165) * Update /ja/docs/concepts/overview/components.md (#18153) * update /ja/docs/concepts/overview/components.md * some Japanese docs are already there * translate prepend * apply upstream changes (#18278) * Translate concepts/services-networking/ingress into Japanese #17741 (#18234) * ja-trans: Translate concepts/services-networking/ingress into Japanese (#17741) * ja-trans: Improve Japanese translation in concepts/services-networking/ingress (#17741) * ja-trans: Improve Japanese translation in concepts/services-networking/ingress (#17741) * Update pod overview in Japanese (#18277) * Update pod-overview * Update content/ja/docs/concepts/workloads/pods/pod-overview.md Co-Authored-By: Naoki Oketani * Update content/ja/docs/concepts/workloads/pods/pod-overview.md Co-Authored-By: Naoki Oketani * Update content/ja/docs/concepts/workloads/pods/pod-overview.md Co-Authored-By: Naoki Oketani * Update content/ja/docs/concepts/workloads/pods/pod-overview.md Co-Authored-By: Naoki Oketani * Update content/ja/docs/concepts/workloads/pods/pod-overview.md Co-Authored-By: Naoki Oketani * Update content/ja/docs/concepts/workloads/pods/pod-overview.md Co-Authored-By: Naoki Oketani * ノード * Update content/ja/docs/concepts/workloads/pods/pod-overview.md Co-Authored-By: Naoki Oketani * Update content/ja/docs/concepts/workloads/pods/pod-overview.md Co-Authored-By: Naoki Oketani Co-authored-by: Naoki Oketani * Translate concepts/scheduling/scheduler-perf-tuning/ in Japanese #17119 (#17796) * ja-trans: Translate concepts/scheduling/scheduler-perf-tuning/ into Japanese (#17119) * ja-trans: Improve Japanese translation in
concepts/scheduling/scheduler-perf-tuning/ (#17119) * ja-trans: Improve Japanese translation in concepts/scheduling/scheduler-perf-tuning/ (#17119) * ja-trans:conetent/ja/casestudies/nav (#18450) * Translate tasks/debug-application-cluster/debug-service/ in Japanese (#18395) * Translate tasks/debug-application-cluster/debug-service/ in Japanese * Update content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: inductor * Update content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: inductor * Update content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: inductor * Change all `Pods` to `Pod` and `Endpoints` to `Endpoint` * Update content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: Naoki Oketani * Update content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: Naoki Oketani * Update content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: Naoki Oketani * Update content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: Naoki Oketani * Update content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: Naoki Oketani * Update content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: Naoki Oketani * Update content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: Naoki Oketani * Update content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: inductor * Update content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: Naoki Oketani * Update content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: inductor * Update content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: inductor * Update content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: inductor * Update content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: Naoki Oketani * Updated content pointed out in review * Update content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: Naoki Oketani * Apply suggestions from code review Co-Authored-By: inductor * Apply suggestions from review * Apply suggestions form review * Apply suggestions from review * Apply suggestions from review * Apply suggestions from code review Co-Authored-By: Naoki Oketani * Update content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: Naoki Oketani Co-authored-by: inductor Co-authored-by: Naoki Oketani * Translate concepts/extend-kubernetes/api-extension/custom-resources/ into Japanese (#18200) * Translate concepts/extend-kubernetes/api-extension/custom-resources/ into Japanese * Apply suggestions from code review between L1 an L120 by oke-py Co-Authored-By: Naoki Oketani * Apply suggestions from code review by oke-py Co-Authored-By: Naoki Oketani * Update CustomResourceDefinition not to localize into Japanese * Revert the link to customresourcedefinitions to English Co-Authored-By: Naoki Oketani * Apply suggestions from code review by oke-py and inductor Co-Authored-By: Naoki Oketani Co-Authored-By: inductor * Apply a suggestion from review by inductor * Apply a suggestion from code review by oke-py Co-Authored-By: Naoki Oketani Co-authored-by: Naoki Oketani Co-authored-by: inductor * Translate tasks/configure-pod-container/quality-service-pod/ into Japanese (#16173) * copy from 
content/en/docs/tasks/configure-pod-container/quality-service-pod.md to Ja * Translate tasks/configure-pod-container/quality-service-pod/ into Japanese Guaranteed, Burstable, and BestEffort exist as established terms, so they are left untranslated Signed-off-by: Takuma Hashimoto * Update content/ja/docs/tasks/configure-pod-container/quality-service-pod.md Co-Authored-By: Naoki Oketani * Update content/ja/docs/tasks/configure-pod-container/quality-service-pod.md Co-Authored-By: Naoki Oketani * Update content/ja/docs/tasks/configure-pod-container/quality-service-pod.md Co-Authored-By: Naoki Oketani * Update content/ja/docs/tasks/configure-pod-container/quality-service-pod.md Co-Authored-By: Naoki Oketani Co-authored-by: Naoki Oketani * Translate content/ja/docs/reference/kubectl/cheatsheet.md (#17739) (#18285) * Translate content/ja/docs/reference/kubectl/cheatsheet.md (#17739) * Translated kubectl cheat sheet. * Fix typos in content/ja/docs/reference/kubectl/cheatsheet.md (#17739) * Fix Japanese style in content/ja/docs/reference/kubectl/cheatsheet.md * Fix typo in content/ja/docs/reference/kubectl/cheatsheet.md * Fix translation in content/ja/docs/reference/kubectl/cheatsheet.md * Fix typo in content/ja/docs/reference/kubectl/cheatsheet.md * Fix typo in content/ja/docs/reference/kubectl/cheatsheet.md * Modify translation for casestudies (#18767) * modify terminology * add ten * update translation * update * update * update * fix typo (#18769) * remove English comment (#18770) * ja-trans: content/ja/casestudies/spotify (#18451) * ja-trans: content/ja/case-studies/spotify * Update content/ja/case-studies/spotify/index.html Updated with the proposal from inductor Co-Authored-By: inductor * Update content/ja/case-studies/spotify/index.html Updated with inductor's proposal Co-Authored-By: inductor * ja-trans: content/ja/case-studies/spotify * Update content/ja/case-studies/spotify/index.html Co-Authored-By: inductor * Update content/ja/case-studies/spotify/index.html Co-Authored-By: inductor * Update content/ja/case-studies/spotify/index.html Co-Authored-By: inductor * Update content/ja/case-studies/spotify/index.html Co-Authored-By: inductor * Update content/ja/case-studies/spotify/index.html Co-Authored-By: inductor * Update content/ja/case-studies/spotify/index.html Co-Authored-By: inductor * Update content/ja/case-studies/spotify/index.html Co-Authored-By: inductor * Update content/ja/case-studies/spotify/index.html Co-Authored-By: inductor * Update content/ja/case-studies/spotify/index.html Co-Authored-By: inductor * Update content/ja/case-studies/spotify/index.html Co-Authored-By: inductor * Update content/ja/case-studies/spotify/index.html Co-Authored-By: inductor * Update content/ja/case-studies/spotify/index.html Co-Authored-By: inductor * Update content/ja/case-studies/spotify/index.html Co-Authored-By: inductor Co-authored-by: inductor * Translate Japanese headers (#18776) * translate headers * add index for references * Update content/ja/docs/setup/production-environment/tools/_index.md Co-Authored-By: Naoki Oketani * translate controller Co-authored-by: Naoki Oketani * ja-docs: translate install-kubeadm into Japanese (#18198) * ja-docs: translate install-kubeadm into Japanese * translate table title in install-kubeadm to Japanese * update kubeadm install doc * remove extra spaces * fix translation mistake * translate URL title into Japanese * fix translation mistake * remove line break in sentence and translate title * remove extra line break * remove extra line break * fix translation mistake Co-authored-by: Naoki Oketani
Co-authored-by: Samuel Kihahu Co-authored-by: Takuma Hashimoto Co-authored-by: Keita Akutsu Co-authored-by: Masa Taniguchi Co-authored-by: Soto Sugita Co-authored-by: Kozzy Hasebe <48105562+hasebe@users.noreply.github.com> Co-authored-by: kazuaki harada Co-authored-by: Shunsuke Miyoshi * delete zh SEE ALSO(51-54) (#18788) * Added missing brackets in markdown (#18783) * Fix broken links in api_changes doc (#18743) * fix jump (#18781) * fix redundant note (#18780) * Fix typo: default-manager -> default-scheduler (#18709) like #18649 #18708 * fix issue #18738 (#18773) Signed-off-by: Dominic Yin * Correct description of kubectl (#18172) * Correct description of kubectl Given that `kubectl` is not a [command line interface (CLI)](https://en.wikipedia.org/wiki/Command-line_interface), I suggest calling it what it is -- a control utility (ctl = control). The term "tool" is commonly used in place of "utility," including the `kubectl` docs. A CLI presents the user with a command prompt at which the user can enter multiple command lines that a command-line interpreter interprets and processes. Think of `bash`, `emacs`, or a SQL shell. Since `kubectl` is not run in a shell, it is not a CLI. Here are related docs that correctly refer to `kubectl` as a "command-line tool": - https://kubernetes.io/docs/reference/tools/#kubectl - https://kubernetes.io/docs/reference/glossary/?fundamental=true#term-kubectl - https://kubernetes.io/docs/tasks/tools/install-kubectl/ - https://kubernetes.io/docs/reference/kubectl/kubectl/ * Update content/en/docs/reference/kubectl/overview.md Co-Authored-By: Zach Corleissen Co-authored-by: Zach Corleissen * Add blog post: Reviewing 2019 in Docs (#18662) Tiny fix Feedback from onlydole Add missing link Incremental fixes Revise Jim's job title Update content/en/blog/_posts/2020-01-17-Docs-Review-2019.md Co-Authored-By: Celeste Horgan Feedback from celeste, change date * Update OWNERS_ALIASES (#18803) * Create Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md (#16869) * Create Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal * Update 
content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal * Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md * Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal * Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen * Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md * Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md * Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-authored-by: Bob Killen Co-authored-by: Taylor Dolezal * blog: introduce CSI support for ephemeral inline volumes (#16832) * csi-ephemeral-inline-volumes: introduce CSI support for ephemeral inline volumes This was alpha in Kubernetes 1.15 and became beta in 1.16. Several CSI drivers already support it (soon...). * csi-ephemeral-inline-volumes: bump date and address feedback (NodeUnpublishVolume) * csi-ephemeral-inline-volumes: add examples and next steps * csi-ephemeral-inline-volumes: rename file, minor edits * csi-ephemeral-inline-volumes: include Docker example * Create 2019-12-10-Gamified-Chaos-Engineering-Tool-for-Kubernetes.md (#18062) * Create 2019-12-10-Gamified-Chaos-Engineering-Tool-for-Kubernetes.md * Update and rename 2019-12-10-Gamified-Chaos-Engineering-Tool-for-Kubernetes.md to 2019-01-16-Gamified-Chaos-Engineering-Tool-for-Kubernetes.md * Update 2019-01-16-Gamified-Chaos-Engineering-Tool-for-Kubernetes.md * Update and rename 2019-01-16-Gamified-Chaos-Engineering-Tool-for-Kubernetes.md to 2019-01-22-Gamified-Chaos-Engineering-Tool-for-Kubernetes.md Co-authored-by: Kaitlyn Barnard * Revert "Create Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md (#16869)" (#18805) This reverts commit 2c4545e105570f76b74bfb6af35457c7e2c021d2. * add blog k8s on mips (#18795) * add blog k8s on mips * modify english title to chinese * modify some error * Remove user-journeys legacy content #18615 (#18779) * Use monospace for HostFolder and VM in the French Minikube setup guide. 
(#18749) * Add French version of persistent volume page concept page (#18706) * Add French version of persistent volume page concept page * Fix * Fix * Fix * Fix * sync content/zh/docs/reference/issues-security/ en zh (#18727) * update zh-translation: /docs/concepts/storage/volume-snapshots.md (#18650) * Clean up user journeys content for zh (#18815) * Followup fixes for: Add resource version section to api-concepts (#18069) * Followup fixes for: Add resource version section to api-concepts documentation * Apply feedback * Apply feedback * Switch paragraph to active voice * Add Community and Code of Conduct for ID (#18828) * Add additional ways to contribute part to update zh doc (#18762) * Add additional ways to contribute part to update zh doc * Add original English text * Update content/zh/docs/contribute/_index.md Co-Authored-By: chentanjun Co-authored-by: chentanjun * Clean up extensions/v1beta1 in docs (#18839) * fix an example path (#18848) * Translating network plugins (#17184) * Fix for a typo (#18822) * tą instalację -> tę instalację / (https://sjp.pwn.pl/poradnia/haslo/te-czy-ta;1598.html) (#18801) * Fix typo in Scalability section (#18866) The phrase `very larger` is not valid, it is supposed to be either `very large` or `larger`. Propose to have it `very large`. Signed-off-by: Mariyan Dimitrov * Add Polish translation of Contribute index page (#18775) Co-Authored-By: Michał Sochoń Co-authored-by: Michał Sochoń * Clean up extensions/v1beta1 in docs (#18838) * Add Indonesian Manage Compute Resources page (#18468) * Add Indonesian Manage Compute Resources page * Updates to id Manage Compute Resources page * Add DaemonSet docs ID localization (#18632) Signed-off-by: giovanism * Fix typo in en/docs/contribute/style/content-guilde.md (#18862) * partial fix for SEE ALSO section under content/zh/docs/reference/setup-tools/kubeadm/generated/ need to be deleted #18411 (#18875) * See Also removed file 31 * see also removed file 32 * see also removed file 33 * see also removed file 34 * see also removed file 35 * Modify pod.md (#18818) website/content/ko/docs/concepts/workloads/pods/pod.md 23 line 쿠버네티스는는 -> 쿠버네티스는 modify * remove $ following the style guide (#18855) * Add Hyperlink to Kubernetes API (#18852) * Drive by copy edit of blog post (#18881) * Medium copy edit. * more fixes * Translate Events Calendar (#18860) * Adding Bahasa Indonesia translation for Device Plugin page #18676 (#18676) Co-Authored-By: Gede Wahyu Adi Pramana Co-authored-by: Gede Wahyu Adi Pramana * change escaped chars to markdown (#18858) Helps to keep doc clean for long term * Fix header layout on Safari (#18888) * Fix references to sig-docs-l10n-admins (#18661) * Add French deployment concept page (#18516) * Add French deployment concept page * Fix * Fix * Fix * Update content/fr/docs/concepts/workloads/controllers/deployment.md Co-Authored-By: Tim Bannister * Fix * Fix * Fix * Update content/fr/docs/concepts/workloads/controllers/deployment.md Co-Authored-By: Tim Bannister * Update content/fr/docs/concepts/workloads/controllers/deployment.md Co-Authored-By: Tim Bannister * Update content/fr/docs/concepts/workloads/controllers/deployment.md Co-Authored-By: Tim Bannister * Update content/fr/docs/concepts/workloads/controllers/deployment.md Co-Authored-By: Tim Bannister Co-authored-by: Tim Bannister * Fix ZH security aliases (#18895) * disable simplytunde as an approver due to inactivity. 
(#18899) Always welcome to come back if able to become active again Signed-off-by: Brad Topol * install container runtimes without prompts (#18893) In Kubernetes docs, all of the packages that are required to set up the Kubernetes are installed without requiring any prompts through the package manager (like apt or yum) except for the container runtimes. https://kubernetes.io/docs/setup/production-environment/container-runtimes/ So, it would be better to have these installations with prompts (yes) disabled. * Fix small typos (#18886) * Fix small typos Small typos noticed and fixed in: - configure-upgrade-etcd.md - reconfigure-kubelet.md Signed-off-by: Mariyan Dimitrov * Rephrase a paragraph on etcd upgrade en\docs\tasks\administer-cluster\configure-upgrade-etcd.md Following a suggestion in #18886, I've rephrased a sentence on etcd upgrade prerequisites. Signed-off-by: Mariyan Dimitrov * Clean up extensions/v1beta1 in docs (#18841) * Update _index.md (#18825) * Run minikube docker-env in a shell-independent way (#18823) * doc: correct pv status for pv protection example. (#18816) * Small editorial fixes in glossary entries (#18807) * Small editorial fixes in glossary entries * Revert the wording in the glossary term for proxy * fix doc conflict regarding postStart (#18806) * kubeadm: improvements to the cert management documentation (#18397) - move the sections about custom certificates and external CA to the kubeadm-certs page - minor cleanups to the kubeadm-certs page, including updated output for the check-expiration command - link the implementation details page to the new locations for custom certs and external CA * fix doc conflict regarding postStart * Grammar (#18785) * grammar: 'to' distributes over 'or' * grammar: reword per app.grammarly.com * grammar: simplify from app.grammarly.com * spelling: etc. * feat: add ephermeral container approach inside pod debug page. (#18754) * doc: add pod security policy reference link to document. (#18729) * doc: add pod security policy reference link to document. * doc: add what's next for pod-security-policy ref. * Revise version requirements (#18688) Assume that the reader is running a version of Kubernetes that supports the Secret resource. * en: Remove kubectl duplicate example (#18656) With #16974 and the removal of --include-uninitialized flag, the second and third examples of kubectl delete become equal, thus leading to duplication and being confusing. Suggest to remove the duplicate and replace it with another example in the future if needed. Observed in v1.16 and v1.17 documentation. 
Signed-off-by: Mariyan Dimitrov * Fix typo for tasks/access-kubernetes-api/configure-aggregation-layer.md (#18652) * Unify runtime references (#18493) - Use the glossary to correctly reference runtimes - Updated runtime class documentation for CRI-O - Removed rktlet from runtimes since its EOL Signed-off-by: Sascha Grunert * Clean up admission controller deprecation example (#18399) * sync zh-trans content/zh/docs/concepts/workloads/pods/ephemeral-containers.md (#18883) * Remove redundant information when deploy flannel on kubernetes include windows node (#18272) * sync zh-trans content/zh/docs/concepts/workloads/pods/pod-overview.md (#18882) * partial fix for for SEE ALSO section under content/zh/docs/reference/setup-tools/kubeadm/generated/ need to be deleted (#18879) * see also removed from file 36 * see also removed from file 37 * see also removed from file 38 * see also removed from file 39 * see also removed from file 40 * update zh content/zh/docs/contribute/style/write-new-topic.md (#18859) * sync zh-trans /docs/concepts/_index.md and /docs/concepts/example-concept-template.md (#18863) * See also removed file 56 & 57 (#18912) * see also removed file 56 * see also removed file 57 * Third Korean L10n Work For Release 1.17 (#18915) * Changed some words in the IPv4/IPv6 dual-stack korean doc. (#18668) * Update to Outdated files in dev-1.17-ko.3 branch. (#18580) * Translate content/ko/docs/concepts/services-networking/service in Korean (#18195) * Translate docs/tasks/access-application-cluster/port-forward-access-application-cluster.md in Korean (#18721) * Translate controllers/garbage-collection.md in Korean. (#18595) Co-Authored-by: Seokho Son Co-Authored-by: Lawrence Kay Co-Authored-by: Jesang Myung Co-Authored-by: Claudia J.Kang Co-Authored-by: Yuk, Yongsu Co-Authored-By: June Yi Co-authored-by: Yuk, Yongsu Co-authored-by: Seokho Son Co-authored-by: Lawrence Kay Co-authored-by: Jesang Myung Co-authored-by: June Yi * clean up makefile, config (#18517) Added target for createversiondirs (shell script) in Makefile. updates for tagged release regenerate api ref, rm Makefile_temp add parens to pip check * Improve Russian translation of Home page (#17841) * Improve Russian translation of Home page * Update i18n/ru.toml Co-Authored-By: Slava Semushin * Update content/ru/_index.html Co-Authored-By: Slava Semushin * Update content/ru/_index.html Co-Authored-By: Slava Semushin Co-authored-by: Slava Semushin * update ref link for v1.16 (#18837) Related to issue #18820. remove links to prev API refs * Cleanup user journeys related configs and scripts (#18814) * See also removed file 81 to 85 (#18909) * see also removed file 81 * see also removed file 82 * see also removed file 83 * see also removed file 84 * see also removed file 85 * See also removed file 65 to 70 (#18908) * see also removed file 65 * see also removed file 66 * see also removed file 67 * see also removed file 68 * see also removed file 69 * see also removed file 70 * Translate Task index page into Polish (#18876) Co-Authored-By: Karol Pucyński Co-Authored-By: Michał Sochoń Co-authored-by: Karol Pucyński <9209870+kpucynski@users.noreply.github.com> Co-authored-by: Michał Sochoń * Document dry-run authorization requirements (#18235) * Document dry-run write access requirement. 
- Add section on dry-run authorization - Refer to dry-run authorization for diff - Consistently hyphenate dry-run * Update content/en/docs/reference/using-api/api-concepts.md Co-Authored-By: Tim Bannister Co-authored-by: Tim Bannister * reword storage release note to match the change in k/k PR #87090 (#18921) * sync zh-trans content/zh/docs/concepts/workloads/controllers/ttlafterfinished.md (#18868) * See also removed file 60 to 63 (#18907) * see also removed file 60 * see also removed file 61 * see also removed file 62 * see also removed file 63 * See also removed file 91 to 95 (#18910) * see also removed file 91 * see also removed file 93 * see also removed file 94 * see also removed file 95 * content/zh/docs/concepts/workloads/pods/podpreset.md (#18870) * fix: fixed eating initial 2 spaces inside code. (#18914) * Update Calico section of kubeadm install guide (#18821) * Update Calico section of kubeadm install guide * Address review feedback * See also removed file 96 to 100 (#18911) * see also removed file 96 * see also removed file 97 * see also removed file 98 * see also removed file 99 * see also removed file 100 * repair zh docs in kubeadm (#18949) * repair zh docs about kubeadm (#18950) * Update apparmor.md (#18951) * Update basic-stateful-set.md (#18952) * Add missing hyperlink for pod-overhead (#18936) * Update service.md (#18480) make article reads more smoothly * zh-trans update content/zh/docs/concepts/workloads/controllers/deploy… (#18657) * zh-trans update content/zh/docs/concepts/workloads/controllers/deployment.md * zh-trans update content\zh\docs\concepts\workloads\controllers\deployment.md * Update source-ip documentation (#18760) * sync zh-trans /docs/concepts/workloads/pods/pod.md (#18880) * sync zh-trans /docs/concepts/workloads/controllers/cron-jobs.md and /docs/concepts/workloads/controllers/daemonset.md (#18864) * sync zh-trans content/zh/docs/concepts/workloads/controllers/ttlafterfinished.md (#18867) * Add a French version of Secret concept page (#18604) * Add a French version of Secret concept page * Fix * Fix * Update content/fr/docs/concepts/configuration/secret.md Co-Authored-By: Tim Bannister * Fix * Update content/fr/docs/concepts/configuration/secret.md Co-Authored-By: Aurélien Perrier * Fix Co-authored-by: Tim Bannister Co-authored-by: Aurélien Perrier * (refactor): Corrections (grammatical) in service.md file (#18944) * Update service.md * Fixed the invaild changes Signed-off-by: Udit Gaurav * Update container-runtimes.md (#18608) for debian install of docker, also install gnupg2 for apt-key add to work * Fix that dual-stack does not require Kubenet specifically (#18924) * Fix that dual-stack does not require Kubenet specifically Rather it requires a network plugin that supports dual-stack, and others are available, including Calico. * Update content/en/docs/tasks/network/validate-dual-stack.md Added link to doc about network plugins Co-Authored-By: Tim Bannister Co-authored-by: Tim Bannister * Revert "Configurable Scaling for the HPA (#18157)" (#18963) This reverts commit 5dbfaafe1ac8875e09ea4ef05390ebc47ad290cb. * Update horizontal-pod-autoscale-walkthrough.md (#18960) Update command for creating php-apache deployment due to the following warning: `kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.` * doc: add link for type=LoadBalancer service in tutorial. 
(#18916) * Typo fix (#18830) * sync zh-trans content/zh/docs/concepts/workloads/controllers/statefulset.md (#18869) * Revise pull request template (#18744) * Revise pull request template * Reference compiled docs in PR template Refer readers to https://k8s.io/contribute/start/ This keeps the template short, and it lets Hugo use templating for the current version. * Update certificates.md (#18970) * Add web-ui-dashboard to French (#17974) * Add web-ui-dashboard to French * Update content/fr/docs/tasks/access-application-cluster/web-ui-dashboard.md Co-Authored-By: Tim Bannister * Update content/fr/docs/tasks/access-application-cluster/web-ui-dashboard.md Co-Authored-By: Tim Bannister * Fix * Fix * Fix * Update content/fr/docs/tasks/access-application-cluster/web-ui-dashboard.md Co-Authored-By: Tim Bannister * Fix * Fix * Fix * Fix * Fix * Fix * Fix * Fix * Fix * Fix * Fix * Fix * Fix * Fix * Fix * Fix * Fix Co-authored-by: Tim Bannister * Added a translated code of conduct (#18981) * Added a translated code of conduct * fixed some minor mistakes and capitalization * Moved to informal speech * Translate the contribute advanced page to French (#13425) * Translate the contribute advanced page to French * Corrections * Correction * Correction * Correction * Correction * Correction * Fix typo in hello-minikube.md (#18991) * Add note for LB behaviour for cordoned nodes. (#18784) * Add note for LB behaviour for cordoned nodes. See also https://github.com/kubernetes/kubernetes/issues/65013 This is a reasonably common pitfall: `kubectl cordon <node>` will also drop all LB traffic to the cluster, but this is not documented anywhere but in issues, and when found it is usually already too late. * Update with feedback * Add KIND as an option for spinning up a test Kubernetes environment (#17860) * fix typo in /ja/docs/concepts/workloads/pods/init-containers (#18997) * hide some original comments in translated docs (#18986) * hide original comment * hide some original comments * Fix code of conduct title (#19006) * Added a note about built-in priority-classes (#18979) * Added a note about built-in priority-classes * Update content/en/docs/concepts/configuration/pod-priority-preemption.md Co-Authored-By: Tim Bannister Co-authored-by: Tim Bannister * Add description for TTL (#19001) * Fix whitespace on deployment page (#18990) * Add details to the API deprecations blog post (#19014) * Document list/map/structType and listMapKeys (#18977) These markers were introduced to describe the topology of lists, maps, and structs, primarily in support of server-side apply. Secondarily, a small typo fix :) * Remove "Unschedulable" pod condition type from the pod lifecycle docs (#18956) The pod lifecycle documentation erroneously indicated `Unschedulable` as a possible `type` of pod condition. That's not true. Only four condition types exist. The `Unschedulable` value is not a type, but one of the possible reasons for the `PodScheduled` condition type. * Revise “Encrypting Secret Data at Rest” (#18810) * Drop reference to old Kubernetes versions At the time of writing, Kubernetes v1.13 is the oldest supported version, and encryption-at-rest is no longer alpha.
* Tidy whitespace * Add table caption * Set metadata for required Kubernetes version * maintain the current relative path when switching to other site versions (#18871) * Update kubectl create configmap section (#18885) * Add common examples to Service Topology documentation (#18712) * service topology: add missing 'enabling service topology' page Signed-off-by: Andrew Sy Kim * service topology: add common examples Signed-off-by: Andrew Sy Kim * updating contrib for ref docs (#18787) more cleanup * fix translate docs format (#19018) * Update nodes.md (#19019) * Translate Contribute index page into Russian (#19022) * Added german translation for Addons page (#19010) * Added german translation for Addons page * Smaller adjustments * removed a english leftover-sentence * consistent spelling of "Add-Ons" * Removed english entry for CoreDNS * Update content/de/docs/concepts/cluster-administration/addons.md Co-Authored-By: Tim Bannister * Translated a heading Co-authored-by: Tim Bannister * (fix) Removed `-n test` from `kubectl get pv` command (#18877) - PV are cluster scoped rather than namespaced scope - So, there is no need to list it by namespace Signed-off-by: Aman Gupta * Link to setup page about Kind (#18996) Link from /docs/setup/ to /docs/setup/learning-environment/kind/ now that the target page exists. * Create Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md (#18808) * Create Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal * Update 
Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md * Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal * Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen * Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md * Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md * Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Tim Bannister * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Tim Bannister * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Tim Bannister * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Tim Bannister * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Tim Bannister * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Tim Bannister * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Tim Bannister * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Tim Bannister * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Tim Bannister * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Tim Bannister * Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal * Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: 
Bob Killen * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen * Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md * Update and rename Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md to 2020-02-07-Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-authored-by: Bob Killen Co-authored-by: Taylor Dolezal Co-authored-by: Tim Bannister Co-authored-by: Kaitlyn Barnard * Revise glossary entry for Device Plugin (#16291) * Document control plane monitoring (#17578) * Document control plane monitoring * Update content/en/docs/concepts/cluster-administration/monitoring.md Co-Authored-By: Tim Bannister * Update content/en/docs/concepts/cluster-administration/monitoring.md Co-Authored-By: Tim Bannister * Merge controller-metrics.md into monitoring.md Co-authored-by: Tim Bannister * Document none driver compatibility with non docker runtime. (#17952) * Refined unclear sentence on 3rd party dependencies (#18015) * Refined unclear sentence on 3rd party dependencies I reworded the sentence on third party dependencies a bit in order to make it more sound * Update content/en/docs/concepts/security/overview.md Sounds much better Co-Authored-By: Tim Bannister Co-authored-by: Tim Bannister * Improve network policies concept (#18091) * Adopt website style guidelines * Tweak wording Co-Authored-By: cmluciano * Make sample NetworkPolicies downloadable Co-authored-by: cmluciano * clean up secret generators (#18320) * Use built-in version check & metadata (#18542) * Reword kubelet live reconfiguration task (#18629) - Revise version requirements - Use glossary tooltips in summary - Use sentence case for headings - Write kubelet in lowercase where appropriate - Add “What's next” section * fix: add dns search record limit note. (#18913) * Remove duplicate content: Roles & Responsibilities (#18920) * Remove duplicate content: Roles & Responsibilities Signed-off-by: Celeste Address feedback Signed-off-by: Celeste * Apply suggestions from review Co-Authored-By: Zach Corleissen * Link to contribution guidelines Signed-off-by: Celeste Horgan * Address PR feedback Signed-off-by: Celeste Horgan Co-authored-by: Zach Corleissen * Fix of pull request #18960 (#18974) * Fix of pull request #18960 * Add yaml configuration file snippets * Remove redundant code snippet for command * Update cheatsheet.md (#18975) * Update cheatsheet.md "List all pods in the namespace, with more details" command corrected by adding --all-namespaces * Update content/en/docs/reference/kubectl/cheatsheet.md Co-Authored-By: Tim Bannister Co-authored-by: Tim Bannister * Correct description of Knitter CNI plugin (#18983) * Add Elastic metricbeat to examples of DaemonSets and rename logstash (#19024) * Add Elastic metricbeat to examples of DaemonSets The URL points to the docs related to how to configure metricbeat on k8s * Filebeat is the next thing * Separated commands from output (#19023) * Update KubeCon URLs (#19027) The URLs had changed (and were being redirected). Also, added parameters to better identify the traffic source. 
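A quick aside on the cheat sheet correction above (#18975): a minimal sketch of the before and after as I read that entry (the flags are standard kubectl, but the exact cheat sheet wording is paraphrased here, not quoted):

```shell
# Before: despite the "all pods" description, this only lists pods in the current namespace
kubectl get pods -o wide

# After: --all-namespaces makes the command match "List all pods in all namespaces, with more details"
kubectl get pods -o wide --all-namespaces
```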
* remove see also and close issue (#19032) * sync zh-trans content/zh/docs/concepts/workloads/controllers/garbage-collection.md (#18865) * zh trans /docs/reference/access-authn-authz/extensible-admission-controllers.md (#18856) * Update zh/docs/concepts/services-networking/dns-pod-service.md#pods (#18992) * Adding contribution best practice in contribute docs (#18059) * Add kubectl patch example with quotes on Windows (#18853) * Add kubectl patch example with quotes on Windows When running the `kubectl patch` example, on Windows systems you get an error when passing the patch request in single quotes. Passing it in double quotes with the inner ones escaped produced the desired behavior as is in the example given for Linux systems. I've added a small note for Windows users to keep that in mind. Signed-off-by: Mariyan Dimitrov * Use Hugo note shortcode Windows note is placed inside a [shortcode](https://kubernetes.io/docs/contribute/style/style-guide/#shortcodes) to be consistent with the style guide. Signed-off-by: Mariyan Dimitrov * Remove shell Markdown syntax I've removed the shell syntax from the Windows example and have changed the description to be the same as the one used in the [jsonpath](https://kubernetes.io/docs/reference/kubectl/jsonpath/) document to be more consistent. The jsonpath example uses cmd syntax, though it is not inside a note shortcode, therefore I've opted out of using any syntax as it seems to break rendering inside the shortcode. Signed-off-by: Mariyan Dimitrov * Add cmd markdown syntax and fix ordered list I've tested this locally with `make docker-serve` on my Linux machine and finally things are looking better, I've managed to address these two issues: - the Windows example is now inside the `note` shortcode and the cmd syntax renders correctly on the page - the list of steps broke after the first one, I've indented a paragraph and now the steps are in the expected order Signed-off-by: Mariyan Dimitrov * Remove command prompt from example According to the [style guide](https://kubernetes.io/docs/contribute/style/style-guide/#don-t-include-the-command-prompt), the command prompt should not be included when showing an example. This commit removes it for consistency with the style guide. Signed-off-by: Mariyan Dimitrov * cleanup /docs/concepts/workloads/pods/pod-lifecycle/ (#19009) * update nodes.md (#18987) change “用量低” (low usage) to “可用量低” (low available amount) to avoid ambiguity * Remove command prompt from Windows example (#18906) * Remove command prompt from Windows example According to the [style guide](https://kubernetes.io/docs/contribute/style/style-guide/#don-t-include-the-command-prompt), the command prompt should not be included in the examples. Removing the Windows command prompt from the jsonpath example. Signed-off-by: Mariyan Dimitrov * Put Windows example inside note shortcode I'm putting the Windows example in a Hugo note shortcode to be consistent with the rest of the documentation. Signed-off-by: Mariyan Dimitrov * Updated CHANGELOG-11 link (#19036) * update command used to create deployment (#19005) The previous one was showing a deprecation warning when used.
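On the entries above that drop the deprecated generator form of `kubectl run` (#18960, #19005), a hedged sketch of the kind of substitution involved; the `php-apache` name and `k8s.gcr.io/hpa-example` image are assumptions mirroring the HPA walkthrough, and the docs themselves ultimately moved to an applied YAML manifest (see `content/en/examples/application/php-apache.yaml` in the file list below) rather than these exact commands:

```shell
# Old form, which now prints the deprecation warning quoted above
kubectl run php-apache --image=k8s.gcr.io/hpa-example --expose --port=80

# One possible replacement: create the Deployment and Service explicitly
kubectl create deployment php-apache --image=k8s.gcr.io/hpa-example
kubectl expose deployment php-apache --port=80
```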
* Update Korean localization guide (#19004) rev1-Update Korean localization guide * docs: fix broken etcd's official documents link (#19021) * Update automated-tasks-with-cron-jobs.md (#19043) Co-authored-by: Xin Chen Co-authored-by: Tim Bannister Co-authored-by: lemon Co-authored-by: Slava Semushin Co-authored-by: Olivier Cloirec <5033885+clook@users.noreply.github.com> Co-authored-by: inductor Co-authored-by: Naoki Oketani Co-authored-by: Samuel Kihahu Co-authored-by: Takuma Hashimoto Co-authored-by: Keita Akutsu Co-authored-by: Masa Taniguchi Co-authored-by: Soto Sugita Co-authored-by: Kozzy Hasebe <48105562+hasebe@users.noreply.github.com> Co-authored-by: kazuaki harada Co-authored-by: Shunsuke Miyoshi Co-authored-by: hato wang <26351545+wyyxd2017@users.noreply.github.com> Co-authored-by: xieyanker Co-authored-by: zhouya0 <50729202+zhouya0@users.noreply.github.com> Co-authored-by: littleboy Co-authored-by: camper42 Co-authored-by: Dominic Yin Co-authored-by: Steve Bang Co-authored-by: Zach Corleissen Co-authored-by: Ryan McGinnis Co-authored-by: Shunde Zhang Co-authored-by: Bob Killen Co-authored-by: Taylor Dolezal Co-authored-by: Patrick Ohly Co-authored-by: Eugenio Marzo Co-authored-by: Kaitlyn Barnard Co-authored-by: TimYin Co-authored-by: Shivang Goswami Co-authored-by: Fabian Baumanis Co-authored-by: Rémy Léone Co-authored-by: chentanjun Co-authored-by: helight Co-authored-by: Jie Shen Co-authored-by: Joe Betz Co-authored-by: Danni Setiawan Co-authored-by: GoodGameZoo Co-authored-by: makocchi Co-authored-by: babang Co-authored-by: Sharjeel Aziz Co-authored-by: Wojtek Cichoń Co-authored-by: Mariyan Dimitrov Co-authored-by: Maciej Filocha <12587791+mfilocha@users.noreply.github.com> Co-authored-by: Michał Sochoń Co-authored-by: Yudi A Phanama <11147376+phanama@users.noreply.github.com> Co-authored-by: Giovan Isa Musthofa Co-authored-by: Park Sung Taek Co-authored-by: Kyle Smith Co-authored-by: craigbox Co-authored-by: Afrizal Fikri Co-authored-by: Gede Wahyu Adi Pramana Co-authored-by: Anshu Prateek <333902+anshprat@users.noreply.github.com> Co-authored-by: Sergei Zyubin Co-authored-by: Christoph Blecker Co-authored-by: Brad Topol Co-authored-by: Venkata Harshavardhan Reddy Allu Co-authored-by: KYamani Co-authored-by: Trishank Karthik Kuppusamy <33133073+trishankatdatadog@users.noreply.github.com> Co-authored-by: Jacky Wu Co-authored-by: Gerasimos Dimitriadis Co-authored-by: Rajat Toshniwal Co-authored-by: Josh Soref Co-authored-by: Sascha Grunert Co-authored-by: wawa Co-authored-by: Claudia J.Kang Co-authored-by: Yuk, Yongsu Co-authored-by: Seokho Son Co-authored-by: Lawrence Kay Co-authored-by: Jesang Myung Co-authored-by: June Yi Co-authored-by: Karen Bradshaw Co-authored-by: Alexey Pyltsyn Co-authored-by: Karol Pucyński <9209870+kpucynski@users.noreply.github.com> Co-authored-by: Julian V. Modesto Co-authored-by: Jeremy L. 
Morris Co-authored-by: Casey Davenport Co-authored-by: zhanwang Co-authored-by: wwgfhf <51694849+wwgfhf@users.noreply.github.com> Co-authored-by: harleyliao <357857613@qq.com> Co-authored-by: ten2ton <50288981+ten2ton@users.noreply.github.com> Co-authored-by: Aurélien Perrier Co-authored-by: UDIT GAURAV <35391335+uditgaurav@users.noreply.github.com> Co-authored-by: Rene Luria Co-authored-by: Neil Jerram Co-authored-by: Arjun Co-authored-by: Katarzyna Kańska Co-authored-by: Laurens Versluis Co-authored-by: Ray76 Co-authored-by: Alexander Zimmermann <7714821+alexzimmer96@users.noreply.github.com> Co-authored-by: Christian Meter Co-authored-by: MMeent Co-authored-by: RA489 Co-authored-by: Akira Tanimura Co-authored-by: Patouche Co-authored-by: Jordan Liggitt Co-authored-by: Maria Ntalla Co-authored-by: Marko Lukša Co-authored-by: John Morrissey Co-authored-by: Andrew Sy Kim Co-authored-by: ngsw Co-authored-by: Aman Gupta Co-authored-by: Marek Siarkowicz Co-authored-by: tom1299 Co-authored-by: cmluciano Co-authored-by: Celeste Horgan Co-authored-by: Prasad Honavar Co-authored-by: Sam Co-authored-by: Victor Martinez Co-authored-by: Dan Kohn Co-authored-by: vishakha <54327666+vishakhanihore@users.noreply.github.com> Co-authored-by: liyinda246 Co-authored-by: Kabir Kwatra Co-authored-by: Armand Grillet <2117580+armandgrillet@users.noreply.github.com> Co-authored-by: Junwoo Ji Co-authored-by: rm --- .github/PULL_REQUEST_TEMPLATE.md | 37 +- OWNERS_ALIASES | 3 - assets/sass/_base.sass | 3 +- content/de/community/code-of-conduct.md | 26 + content/de/community/static/README.md | 2 + .../community/static/cncf-code-of-conduct.md | 30 + .../concepts/cluster-administration/addons.md | 56 + content/de/docs/reference/glossary/etcd.md | 2 +- content/de/docs/tutorials/hello-minikube.md | 2 +- content/en/_index.html | 4 +- ...19-07-18-some-apis-are-being-deprecated.md | 46 +- .../_posts/2020-01-21-Docs-Review-2019.md | 99 + ...2020-01-21-csi-ephemeral-inline-volumes.md | 251 ++ ...d-Chaos-Engineering-Tool-for-Kubernetes.md | 110 + ...l-OpenStack-Cloud-Provider-With-Kubeadm.md | 760 ++++++ .../en/docs/concepts/architecture/nodes.md | 6 + .../concepts/cluster-administration/addons.md | 2 +- .../controller-metrics.md | 50 - .../cluster-administration/monitoring.md | 132 + .../configuration/pod-priority-preemption.md | 6 + .../en/docs/concepts/configuration/secret.md | 553 ++-- .../configuration/taint-and-toleration.md | 4 +- content/en/docs/concepts/containers/images.md | 6 +- .../docs/concepts/containers/runtime-class.md | 13 +- .../api-extension/custom-resources.md | 10 +- .../en/docs/concepts/policy/limit-range.md | 4 +- .../concepts/policy/pod-security-policy.md | 8 +- content/en/docs/concepts/security/overview.md | 2 +- .../services-networking/dual-stack.md | 5 +- .../services-networking/network-policies.md | 99 +- .../services-networking/service-topology.md | 109 +- .../concepts/services-networking/service.md | 21 +- .../concepts/storage/persistent-volumes.md | 2 +- .../workloads/controllers/daemonset.md | 4 +- .../workloads/controllers/replicaset.md | 6 +- .../workloads/controllers/ttlafterfinished.md | 2 +- .../concepts/workloads/pods/pod-lifecycle.md | 30 +- content/en/docs/contribute/_index.md | 96 +- .../contribute/generate-ref-docs/_index.md | 11 +- .../generate-ref-docs/contribute-upstream.md | 40 +- .../contribute/generate-ref-docs/kubectl.md | 101 +- .../generate-ref-docs/kubernetes-api.md | 140 +- .../kubernetes-components.md | 210 +- .../prerequisites-ref-docs.md | 21 + 
.../generate-ref-docs/quickstart.md | 260 ++ content/en/docs/contribute/localization.md | 16 +- content/en/docs/contribute/participating.md | 173 +- content/en/docs/contribute/start.md | 33 +- .../en/docs/contribute/style/content-guide.md | 4 +- .../docs/contribute/style/write-new-topic.md | 5 +- content/en/docs/reference/_index.md | 24 +- .../extensible-admission-controllers.md | 7 +- .../reference/glossary/container-runtime.md | 8 +- .../docs/reference/glossary/device-plugin.md | 18 +- .../reference/glossary/service-account.md | 2 +- .../en/docs/reference/glossary/upstream.md | 2 +- .../en/docs/reference/kubectl/cheatsheet.md | 8 +- content/en/docs/reference/kubectl/jsonpath.md | 6 +- content/en/docs/reference/kubectl/overview.md | 5 +- .../docs/reference/using-api/api-concepts.md | 75 +- content/en/docs/setup/_index.md | 4 +- .../docs/setup/best-practices/certificates.md | 2 +- .../docs/setup/learning-environment/kind.md | 23 + .../setup/learning-environment/minikube.md | 12 +- .../container-runtimes.md | 20 +- .../tools/kubeadm/create-cluster-kubeadm.md | 9 +- .../tools/kubeadm/install-kubeadm.md | 2 +- .../production-environment/tools/kubespray.md | 6 +- content/en/docs/setup/release/notes.md | 2 +- .../configure-aggregation-layer.md | 2 +- .../custom-resource-definition-versioning.md | 9 +- .../http-proxy-access-api.md | 4 + .../change-pv-reclaim-policy.md | 11 +- .../configure-upgrade-etcd.md | 4 +- .../dns-debugging-resolution.md | 9 +- .../dns-horizontal-autoscaling.md | 2 +- .../enabling-service-topology.md | 54 + .../tasks/administer-cluster/encrypt-data.md | 27 +- .../administer-cluster/reconfigure-kubelet.md | 164 +- .../running-cloud-controller.md | 2 +- .../configure-pod-configmap.md | 95 +- .../debug-pod-replication-controller.md | 2 + .../distribute-credentials-secure.md | 3 +- .../job/automated-tasks-with-cron-jobs.md | 2 +- .../declarative-config.md | 13 +- .../imperative-config.md | 4 +- .../docs/tasks/network/validate-dual-stack.md | 7 +- .../horizontal-pod-autoscale-walkthrough.md | 11 +- .../horizontal-pod-autoscale.md | 25 +- .../create-cluster/cluster-intro.html | 2 +- .../update/update-intro.html | 2 +- .../en/docs/tutorials/services/source-ip.md | 2 +- .../expose-external-ip-address.md | 13 +- .../users/application-developer/advanced.md | 120 - .../application-developer/foundational.md | 260 -- .../application-developer/intermediate.md | 166 -- .../users/cluster-operator/foundational.md | 96 - .../users/cluster-operator/intermediate.md | 109 - .../en/examples/application/php-apache.yaml | 39 + .../network-policy-allow-all-egress.yaml | 11 + .../network-policy-allow-all-ingress.yaml | 11 + .../network-policy-default-deny-all.yaml | 9 + .../network-policy-default-deny-egress.yaml | 9 + .../network-policy-default-deny-ingress.yaml | 9 + .../fr/docs/concepts/configuration/secret.md | 981 +++++++ .../concepts/storage/persistent-volumes.md | 756 ++++++ .../workloads/controllers/deployment.md | 1225 +++++++++ .../concepts/workloads/pods/pod-lifecycle.md | 2 +- content/fr/docs/contribute/advanced.md | 94 + .../setup/learning-environment/minikube.md | 14 +- .../web-ui-dashboard.md | 221 ++ .../controllers/nginx-deployment.yaml | 21 + content/id/community/_index.html | 236 ++ content/id/community/code-of-conduct.md | 24 + .../community/static/cncf-code-of-conduct.md | 31 + .../manage-compute-resources-container.md | 631 +++++ .../compute-storage-net/_index.md | 4 + .../compute-storage-net/device-plugins.md | 234 ++ .../compute-storage-net/network-plugins.md | 158 ++ 
.../workloads/controllers/daemonset.md | 236 ++ content/id/docs/reference/glossary/etcd.md | 2 +- .../id/examples/controllers/daemonset.yaml | 42 + content/ja/_index.html | 7 +- content/ja/case-studies/appdirect/index.html | 4 +- .../ja/case-studies/chinaunicom/index.html | 18 +- content/ja/case-studies/nav/index.html | 93 + .../ja/case-studies/nav/nav_featured_logo.png | Bin 0 -> 4218 bytes content/ja/case-studies/nordstrom/index.html | 2 +- content/ja/case-studies/sos/index.html | 24 +- content/ja/case-studies/spotify/index.html | 120 + .../case-studies/spotify/spotify-featured.svg | 1 + .../spotify/spotify_featured_logo.png | Bin 0 -> 6383 bytes content/ja/docs/concepts/_index.md | 21 +- .../ja/docs/concepts/architecture/_index.md | 2 +- .../ja/docs/concepts/architecture/nodes.md | 2 +- .../concepts/cluster-administration/_index.md | 5 + .../ja/docs/concepts/configuration/_index.md | 5 + content/ja/docs/concepts/containers/_index.md | 3 +- .../api-extension/custom-resources.md | 223 ++ content/ja/docs/concepts/overview/_index.md | 3 +- .../ja/docs/concepts/overview/components.md | 15 +- .../docs/concepts/overview/kubernetes-api.md | 2 +- .../overview/working-with-objects/_index.md | 2 +- .../scheduling/scheduler-perf-tuning.md | 74 + .../connect-applications-service.md | 420 +++ .../concepts/services-networking/ingress.md | 403 +++ .../concepts/services-networking/service.md | 75 +- content/ja/docs/concepts/storage/_index.md | 2 +- content/ja/docs/concepts/workloads/_index.md | 2 +- .../concepts/workloads/controllers/_index.md | 3 +- .../workloads/controllers/deployment.md | 999 ++++++++ .../ja/docs/concepts/workloads/pods/_index.md | 3 +- .../workloads/pods/init-containers.md | 2 +- .../concepts/workloads/pods/pod-overview.md | 29 +- content/ja/docs/reference/_index.md | 8 +- .../command-line-tools-reference/_index.md | 5 + .../feature-gates.md | 237 +- content/ja/docs/reference/glossary/cluster.md | 18 + content/ja/docs/reference/glossary/ingress.md | 19 + content/ja/docs/reference/kubectl/_index.md | 5 + .../ja/docs/reference/kubectl/cheatsheet.md | 384 +++ content/ja/docs/setup/_index.md | 23 +- .../setup/production-environment/_index.md | 2 +- .../production-environment/tools/_index.md | 2 +- .../production-environment/tools/kops.md | 70 +- .../tools/kubeadm/_index.md | 2 +- .../tools/kubeadm/install-kubeadm.md | 192 +- .../production-environment/turnkey/_index.md | 2 +- content/ja/docs/setup/release/_index.md | 2 +- content/ja/docs/tasks/_index.md | 85 + .../access-application-cluster/_index.md | 4 + .../docs/tasks/administer-cluster/_index.md | 4 + .../assign-cpu-resource.md | 237 ++ .../quality-service-pod.md | 256 ++ .../tasks/debug-application-cluster/_index.md | 4 + .../debug-service.md | 597 +++++ .../ja/docs/tasks/run-application/_index.md | 5 + content/ja/docs/tasks/tools/_index.md | 3 +- .../ja/docs/tasks/tools/install-kubectl.md | 56 +- .../ja/docs/tasks/tools/install-minikube.md | 164 +- .../connect-applications-service.md | 2 +- .../services-networking/dual-stack.md | 8 +- .../concepts/services-networking/service.md | 1201 +++++++++ .../workloads/controllers/cron-jobs.md | 5 + .../controllers/garbage-collection.md | 182 ++ .../concepts/workloads/pods/pod-overview.md | 2 +- .../ko/docs/concepts/workloads/pods/pod.md | 2 +- content/ko/docs/contribute/localization_ko.md | 305 ++- .../glossary/customresourcedefinition.md | 2 +- content/ko/docs/reference/glossary/etcd.md | 2 +- .../ko/docs/reference/kubectl/cheatsheet.md | 5 +- content/ko/docs/setup/_index.md | 6 +- 
.../docs/setup/best-practices/certificates.md | 2 +- .../windows/user-guide-windows-nodes.md | 6 - ...port-forward-access-application-cluster.md | 153 ++ .../ko/examples/controllers/replicaset.yaml | 17 + content/pl/docs/contribute/_index.md | 81 + content/pl/docs/tasks/_index.md | 87 + .../tutorials/kubernetes-basics/_index.html | 2 +- content/ru/_index.html | 24 +- content/ru/docs/concepts/_index.md | 10 +- content/ru/docs/contribute/_index.md | 61 + content/ru/docs/tutorials/hello-minikube.md | 21 +- .../_posts/2020-01-15-Kubernetes-on-MIPS.md | 272 ++ content/zh/docs/concepts/_index.md | 29 +- .../zh/docs/concepts/architecture/nodes.md | 2 +- .../organize-cluster-access-kubeconfig.md | 2 +- .../concepts/configuration/pod-overhead.md | 2 +- .../docs/concepts/example-concept-template.md | 48 +- .../docs/concepts/overview/kubernetes-api.md | 2 +- .../services-networking/dns-pod-service.md | 12 +- .../concepts/services-networking/service.md | 8 +- .../docs/concepts/storage/storage-classes.md | 5 + .../docs/concepts/storage/volume-snapshots.md | 8 +- .../workloads/controllers/cron-jobs.md | 204 +- .../workloads/controllers/daemonset.md | 143 +- .../workloads/controllers/deployment.md | 2269 ++++++++++++----- .../controllers/garbage-collection.md | 75 +- .../controllers/replicationcontroller.md | 185 +- .../workloads/controllers/statefulset.md | 209 +- .../workloads/controllers/ttlafterfinished.md | 61 +- .../workloads/pods/ephemeral-containers.md | 37 +- .../concepts/workloads/pods/pod-overview.md | 134 +- .../zh/docs/concepts/workloads/pods/pod.md | 318 ++- .../docs/concepts/workloads/pods/podpreset.md | 64 +- content/zh/docs/contribute/_index.md | 18 + content/zh/docs/contribute/intermediate.md | 4 +- .../docs/contribute/style/page-templates.md | 2 +- .../docs/contribute/style/write-new-topic.md | 75 +- .../extensible-admission-controllers.md | 1253 ++++----- .../docs/reference/access-authn-authz/node.md | 3 + content/zh/docs/reference/glossary/etcd.md | 4 +- .../docs/reference/issues-security/issues.md | 21 +- .../reference/issues-security/security.md | 20 +- content/zh/docs/reference/kubectl/kubectl.md | 6 + .../kubeadm_config_print_init-defaults.md | 12 - .../kubeadm_config_print_join-defaults.md | 13 - .../kubeadm/generated/kubeadm_config_view.md | 12 - .../kubeadm/generated/kubeadm_init.md | 13 - .../kubeadm/generated/kubeadm_init_phase.md | 34 - .../generated/kubeadm_init_phase_addon.md | 17 - .../generated/kubeadm_init_phase_addon_all.md | 12 - .../kubeadm_init_phase_addon_coredns.md | 11 - .../kubeadm_init_phase_addon_kube-proxy.md | 12 - .../kubeadm_init_phase_bootstrap-token.md | 12 - ...kubeadm_init_phase_certs_front-proxy-ca.md | 12 - ...adm_init_phase_certs_front-proxy-client.md | 14 +- .../generated/kubeadm_init_phase_certs_sa.md | 12 - .../kubeadm_init_phase_control-plane.md | 20 - .../kubeadm_init_phase_control-plane_all.md | 13 +- ...eadm_init_phase_control-plane_apiserver.md | 12 - ..._phase_control-plane_controller-manager.md | 12 - ...eadm_init_phase_control-plane_scheduler.md | 12 - .../kubeadm_init_phase_kubeconfig.md | 22 - .../kubeadm_init_phase_kubeconfig_admin.md | 13 - .../kubeadm_init_phase_kubeconfig_all.md | 13 - ...nit_phase_kubeconfig_controller-manager.md | 10 - ...kubeadm_init_phase_kubeconfig_scheduler.md | 12 - .../kubeadm_init_phase_kubelet-start.md | 12 - .../kubeadm_init_phase_mark-control-plane.md | 9 - .../generated/kubeadm_init_phase_preflight.md | 12 - .../kubeadm_init_phase_upload-certs.md | 12 - .../kubeadm_init_phase_upload-config.md | 15 - 
...ubeadm_init_phase_upload-config_kubelet.md | 243 +- ...dm_join_phase_control-plane-prepare_all.md | 12 - ..._join_phase_control-plane-prepare_certs.md | 12 - ...ase_control-plane-prepare_control-plane.md | 12 - ...se_control-plane-prepare_download-certs.md | 12 - ..._phase_control-plane-prepare_kubeconfig.md | 12 - .../kubeadm_reset_phase_preflight.md | 12 - ...beadm_reset_phase_update-cluster-status.md | 12 - .../kubeadm/generated/kubeadm_token.md | 19 - .../kubeadm/generated/kubeadm_token_create.md | 12 - .../kubeadm/generated/kubeadm_token_delete.md | 12 - .../generated/kubeadm_token_generate.md | 10 - .../kubeadm/generated/kubeadm_token_list.md | 11 - .../kubeadm/generated/kubeadm_upgrade.md | 18 - .../generated/kubeadm_upgrade_apply.md | 11 - .../independent/create-cluster-kubeadm.md | 6 +- .../configure-access-multiple-clusters.md | 2 +- .../create-external-load-balancer.md | 1 + .../web-ui-dashboard.md | 2 +- .../dns-horizontal-autoscaling.md | 4 +- .../tasks/administer-cluster/ip-masq-agent.md | 5 + .../tasks/administer-cluster/nodelocaldns.md | 1 + .../configure-volume-storage.md | 7 +- .../federation/administer-federation/job.md | 1 + .../distribute-credentials-secure.md | 2 +- ...nward-api-volume-expose-pod-information.md | 2 +- ...ronment-variable-expose-pod-information.md | 2 +- .../zh/docs/tutorials/clusters/apparmor.md | 2 +- .../zh/docs/tutorials/services/source-ip.md | 168 +- .../basic-stateful-set.md | 2 +- .../users/cluster-operator/foundational.md | 194 -- .../users/cluster-operator/intermediate.md | 221 -- data/concepts.yml | 2 +- data/setup.yml | 21 - data/user-personas/users/app-developer.yaml | 40 - .../user-personas/users/cluster-operator.yaml | 26 - i18n/ko.toml | 2 +- i18n/ru.toml | 6 +- layouts/docs/docsportal.html | 12 - layouts/partials/docs/user-journey.html | 22 - layouts/partials/header.html | 4 +- .../templates/user-journey-content.html | 23 - .../reference/generated/kubectl/navData.js | 2 +- .../generated/kubernetes-api/v1.17/index.html | 160 +- static/js/user-journeys/home.js | 339 --- static/js/user-journeys/toc.js | 30 - update-imported-docs/Makefile_temp | 128 - update-imported-docs/README.md | 17 +- update-imported-docs/reference.yml | 9 +- update-imported-docs/update-imported-docs.py | 6 +- 313 files changed, 18433 insertions(+), 6544 deletions(-) create mode 100644 content/de/community/code-of-conduct.md create mode 100644 content/de/community/static/README.md create mode 100644 content/de/community/static/cncf-code-of-conduct.md create mode 100644 content/de/docs/concepts/cluster-administration/addons.md create mode 100644 content/en/blog/_posts/2020-01-21-Docs-Review-2019.md create mode 100644 content/en/blog/_posts/2020-01-21-csi-ephemeral-inline-volumes.md create mode 100644 content/en/blog/_posts/2020-01-22-Gamified-Chaos-Engineering-Tool-for-Kubernetes.md create mode 100644 content/en/blog/_posts/2020-02-07-Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md delete mode 100644 content/en/docs/concepts/cluster-administration/controller-metrics.md create mode 100644 content/en/docs/concepts/cluster-administration/monitoring.md create mode 100644 content/en/docs/contribute/generate-ref-docs/prerequisites-ref-docs.md create mode 100644 content/en/docs/contribute/generate-ref-docs/quickstart.md create mode 100644 content/en/docs/setup/learning-environment/kind.md create mode 100644 content/en/docs/tasks/administer-cluster/enabling-service-topology.md delete mode 100644 
content/en/docs/user-journeys/users/application-developer/advanced.md delete mode 100644 content/en/docs/user-journeys/users/application-developer/foundational.md delete mode 100644 content/en/docs/user-journeys/users/application-developer/intermediate.md delete mode 100644 content/en/docs/user-journeys/users/cluster-operator/foundational.md delete mode 100644 content/en/docs/user-journeys/users/cluster-operator/intermediate.md create mode 100644 content/en/examples/application/php-apache.yaml create mode 100644 content/en/examples/service/networking/network-policy-allow-all-egress.yaml create mode 100644 content/en/examples/service/networking/network-policy-allow-all-ingress.yaml create mode 100644 content/en/examples/service/networking/network-policy-default-deny-all.yaml create mode 100644 content/en/examples/service/networking/network-policy-default-deny-egress.yaml create mode 100644 content/en/examples/service/networking/network-policy-default-deny-ingress.yaml create mode 100644 content/fr/docs/concepts/configuration/secret.md create mode 100644 content/fr/docs/concepts/storage/persistent-volumes.md create mode 100644 content/fr/docs/concepts/workloads/controllers/deployment.md create mode 100644 content/fr/docs/contribute/advanced.md create mode 100644 content/fr/docs/tasks/access-application-cluster/web-ui-dashboard.md create mode 100644 content/fr/examples/controllers/nginx-deployment.yaml create mode 100644 content/id/community/_index.html create mode 100644 content/id/community/code-of-conduct.md create mode 100644 content/id/community/static/cncf-code-of-conduct.md create mode 100644 content/id/docs/concepts/configuration/manage-compute-resources-container.md create mode 100644 content/id/docs/concepts/extend-kubernetes/compute-storage-net/_index.md create mode 100644 content/id/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md create mode 100644 content/id/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md create mode 100644 content/id/docs/concepts/workloads/controllers/daemonset.md create mode 100644 content/id/examples/controllers/daemonset.yaml create mode 100644 content/ja/case-studies/nav/index.html create mode 100644 content/ja/case-studies/nav/nav_featured_logo.png create mode 100644 content/ja/case-studies/spotify/index.html create mode 100644 content/ja/case-studies/spotify/spotify-featured.svg create mode 100644 content/ja/case-studies/spotify/spotify_featured_logo.png create mode 100755 content/ja/docs/concepts/cluster-administration/_index.md create mode 100755 content/ja/docs/concepts/configuration/_index.md create mode 100644 content/ja/docs/concepts/extend-kubernetes/api-extension/custom-resources.md create mode 100644 content/ja/docs/concepts/scheduling/scheduler-perf-tuning.md create mode 100644 content/ja/docs/concepts/services-networking/connect-applications-service.md create mode 100644 content/ja/docs/concepts/services-networking/ingress.md create mode 100644 content/ja/docs/concepts/workloads/controllers/deployment.md create mode 100644 content/ja/docs/reference/command-line-tools-reference/_index.md create mode 100644 content/ja/docs/reference/glossary/cluster.md create mode 100755 content/ja/docs/reference/glossary/ingress.md create mode 100755 content/ja/docs/reference/kubectl/_index.md create mode 100644 content/ja/docs/reference/kubectl/cheatsheet.md create mode 100644 content/ja/docs/tasks/_index.md create mode 100755 content/ja/docs/tasks/access-application-cluster/_index.md create mode 100755 
content/ja/docs/tasks/administer-cluster/_index.md create mode 100644 content/ja/docs/tasks/configure-pod-container/assign-cpu-resource.md create mode 100644 content/ja/docs/tasks/configure-pod-container/quality-service-pod.md create mode 100755 content/ja/docs/tasks/debug-application-cluster/_index.md create mode 100644 content/ja/docs/tasks/debug-application-cluster/debug-service.md create mode 100755 content/ja/docs/tasks/run-application/_index.md create mode 100644 content/ko/docs/concepts/services-networking/service.md create mode 100644 content/ko/docs/concepts/workloads/controllers/garbage-collection.md create mode 100644 content/ko/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md create mode 100644 content/ko/examples/controllers/replicaset.yaml create mode 100644 content/pl/docs/contribute/_index.md create mode 100644 content/pl/docs/tasks/_index.md create mode 100644 content/ru/docs/contribute/_index.md create mode 100644 content/zh/blog/_posts/2020-01-15-Kubernetes-on-MIPS.md delete mode 100644 content/zh/docs/user-journeys/users/cluster-operator/foundational.md delete mode 100644 content/zh/docs/user-journeys/users/cluster-operator/intermediate.md delete mode 100644 data/user-personas/users/app-developer.yaml delete mode 100644 layouts/docs/docsportal.html delete mode 100644 layouts/partials/docs/user-journey.html delete mode 100644 layouts/partials/templates/user-journey-content.html delete mode 100644 static/js/user-journeys/home.js delete mode 100644 static/js/user-journeys/toc.js delete mode 100644 update-imported-docs/Makefile_temp diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md index 0f1c347cfcd48..c7040f7fa8133 100644 --- a/.github/PULL_REQUEST_TEMPLATE.md +++ b/.github/PULL_REQUEST_TEMPLATE.md @@ -1,17 +1,20 @@ ->^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -> Remember to delete this note before submitting your pull request. -> -> For pull requests on 1.18 Features: set Milestone to 1.18 and Base Branch to dev-1.18 -> -> For pull requests on Chinese localization, set Base Branch to release-1.16 -> Feel free to ask questions in #kubernetes-docs-zh -> -> For pull requests on Korean Localization: set Base Branch to dev-1.16-ko.\ -> -> If you need Help on editing and submitting pull requests, visit: -> https://kubernetes.io/docs/contribute/start/#improve-existing-content. -> -> If you need Help on choosing which branch to use, visit: -> https://kubernetes.io/docs/contribute/start#choose-which-git-branch-to-use. 
->^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -> + diff --git a/OWNERS_ALIASES b/OWNERS_ALIASES index d91daf4ce5b17..6a04a87b77d7d 100644 --- a/OWNERS_ALIASES +++ b/OWNERS_ALIASES @@ -52,7 +52,6 @@ aliases: - Rajakavitha1 - ryanmcginnis - sftim - - simplytunde - steveperry-53 - tengqm - vineethreddy02 @@ -70,9 +69,7 @@ aliases: - kbhawkey - makoscafee - rajakavitha1 - - ryanmcginnis - sftim - - simplytunde - steveperry-53 - tengqm - xiangpengzhao diff --git a/assets/sass/_base.sass b/assets/sass/_base.sass index 55d38ac0a93d9..2315ae5e94e85 100644 --- a/assets/sass/_base.sass +++ b/assets/sass/_base.sass @@ -107,7 +107,6 @@ header z-index: 8888 background-color: transparent box-shadow: 0 0 0 transparent - overflow: hidden transition: 0.3s text-align: center @@ -245,6 +244,8 @@ header background-color: white #mainNav + display: none + h5 color: $blue font-weight: normal diff --git a/content/de/community/code-of-conduct.md b/content/de/community/code-of-conduct.md new file mode 100644 index 0000000000000..ac0532a1af0ec --- /dev/null +++ b/content/de/community/code-of-conduct.md @@ -0,0 +1,26 @@ +--- +title: Community +layout: basic +cid: community +css: /css/community.css +--- + +
+

Kubernetes Community Code of Conduct

+ +Kubernetes folgt dem +CNCF Verhaltenskodex. +Der Kodex befindet sich weiter unten auf der Seite, wie er auch in +Commit 214585e gefunden werden kann. +Wenn dir auffällt, dass die hier gezeigte Version nicht mehr aktuell ist, +eröffne bitte ein Issue. + +Wenn dir bei einem Event, einem Meeting, in Slack oder einem anderen +Kommunikationskanal ein Verstoß gegen den Verhaltenskodex auffällt, wende dich an das Kubernetes Code of Conduct Committee. +Du kannst das Komitee über E-Mail erreichen: conduct@kubernetes.io. +Deine Anonymität wird geschützt. + +
+{{< include "/static/cncf-code-of-conduct.md" >}} +
+
diff --git a/content/de/community/static/README.md b/content/de/community/static/README.md new file mode 100644 index 0000000000000..ef8e8d5a3e6bc --- /dev/null +++ b/content/de/community/static/README.md @@ -0,0 +1,2 @@ +The files in this directory have been imported from other sources. Do not +edit them directly, except by replacing them with new versions. \ No newline at end of file diff --git a/content/de/community/static/cncf-code-of-conduct.md b/content/de/community/static/cncf-code-of-conduct.md new file mode 100644 index 0000000000000..e94bc7b7face9 --- /dev/null +++ b/content/de/community/static/cncf-code-of-conduct.md @@ -0,0 +1,30 @@ + +## CNCF Gemeinschafts-Verhaltenskodex v1.0 + +### Verhaltenskodex für Mitwirkende + +Als Mitwirkende und Betreuer dieses Projekts und im Interesse der Förderung einer offenen und einladenden Gemeinschaft verpflichten wir uns dazu, alle Menschen zu respektieren, die durch Berichterstattung, Veröffentlichung von Eigenschaftsanfragen, Aktualisierung der Dokumentation, Einreichung von Pull-Anfragen oder Patches und anderen Aktivitäten einen Beitrag leisten. + +Wir sind bestrebt, die Teilnahme an diesem Projekt für alle zu einer belästigungsfreien Erfahrung zu machen, unabhängig von Erfahrungsstand, Geschlecht, geschlechtsspezifischer Identität und Ausdruck, sexueller Orientierung, Behinderung, persönlichem Aussehen, Körpergröße, Rasse, ethnischer Herkunft, Alter, Religion oder Nationalität. + +Beispiele für unzumutbares Verhalten der Teilnehmer sind: + +- Der Gebrauch von sexualisierter Sprache oder Bildern +- Persönliche Angriffe +- Trolling oder beleidigende/herabwürdigende Kommentare +- Öffentliche oder private Belästigungen +- Veröffentlichung privater Informationen anderer, wie z.B. physischer oder elektronischer Adressen, ohne ausdrückliche Genehmigung +- Anderes unethisches oder unprofessionelles Verhalten. + +Projektbetreuer haben das Recht und die Verantwortung, Kommentare, Commits, Code, Wiki-Bearbeitungen, Probleme und andere Beiträge zu entfernen, zu bearbeiten oder abzulehnen, die nicht mit diesem Verhaltenskodex übereinstimmen. Mit der Annahme dieses Verhaltenskodex verpflichten sich die Projektbetreuer, diese Grundsätze fair und konsequent auf jeden Aspekt der Projektleitung anzuwenden. Projektbetreuer, die den Verhaltenskodex nicht befolgen oder durchsetzen, können dauerhaft vom Projektteam ausgeschlossen werden. + +Dieser Verhaltenskodex gilt sowohl innerhalb von Projekträumen als auch in öffentlichen Räumen, wenn eine Person das Projekt oder seine Gemeinschaft vertritt. + +Fälle von missbräuchlichem, belästigendem oder anderweitig unzumutbarem Verhalten in Kubernetes können gemeldet werden, indem Sie sich an das [Kubernetes Komitee für Verhaltenskodex](https://git.k8s.io/community/committee-code-of-conduct) wenden unter . Für andere Projekte wenden Sie sich bitte an einen CNCF-Projektbetreuer oder an unseren Mediator, Mishi Choudhary . + +Dieser Verhaltenskodex wurde aus dem Contributor Covenant übernommen (http://contributor-covenant.org), Version 1.2.0, verfügbar unter http://contributor-covenant.org/version/1/2/0/ + +### CNCF Verhaltenskodex für Veranstaltungen + +Für CNCF Veranstaltungen gilt der Verhaltenskodex der Linux Foundation, der auf der Veranstaltungsseite verfügbar ist. Diese ist so konzipiert, dass sie mit der oben genannten Richtlinie kompatibel ist und enthält auch weitere Details zur Reaktion auf Vorfälle. 
diff --git a/content/de/docs/concepts/cluster-administration/addons.md b/content/de/docs/concepts/cluster-administration/addons.md new file mode 100644 index 0000000000000..4d26b57da8bce --- /dev/null +++ b/content/de/docs/concepts/cluster-administration/addons.md @@ -0,0 +1,56 @@ +--- +title: Addons Installieren +content_template: templates/concept +--- + +{{% capture overview %}} + + +Add-Ons erweitern die Funktionalität von Kubernetes. + +Diese Seite gibt eine Übersicht über einige verfügbare Add-Ons und verweist auf die entsprechenden Installationsanleitungen. + +Die Add-Ons in den einzelnen Kategorien sind alphabetisch sortiert - Die Reihenfolge impliziert keine bevorzugung einzelner Projekte. + +{{% /capture %}} + + +{{% capture body %}} + +## Networking und Network Policy + +* [ACI](https://www.github.com/noironetworks/aci-containers) bietet Container-Networking und Network-Security mit Cisco ACI. +* [Calico](https://docs.projectcalico.org/latest/introduction/) ist ein Networking- und Network-Policy-Provider. Calico unterstützt eine Reihe von Networking-Optionen, damit Du die richtige für deinen Use-Case wählen kannst. Dies beinhaltet Non-Overlaying and Overlaying-Networks mit oder ohne BGP. Calico nutzt die gleiche Engine um Network-Policies für Hosts, Pods und (falls Du Istio & Envoy benutzt) Anwendungen auf Service-Mesh-Ebene durchzusetzen. +* [Canal](https://github.com/tigera/canal/tree/master/k8s-install) vereint Flannel und Calico um Networking- und Network-Policies bereitzustellen. +* [Cilium](https://github.com/cilium/cilium) ist ein L3 Network- and Network-Policy-Plugin welches das transparent HTTP/API/L7-Policies durchsetzen kann. Sowohl Routing- als auch Overlay/Encapsulation-Modes werden uterstützt. Außerdem kann Cilium auf andere CNI-Plugins aufsetzen. +* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) ermöglicht das nahtlose Verbinden von Kubernetes mit einer Reihe an CNI-Plugins wie z.B. Calico, Canal, Flannel, Romana, oder Weave. +* [Contiv](http://contiv.github.io) bietet konfigurierbares Networking (Native L3 auf BGP, Overlay mit vxlan, Klassisches L2, Cisco-SDN/ACI) für verschiedene Anwendungszwecke und auch umfangreiches Policy-Framework. Das Contiv-Projekt ist vollständig [Open Source](http://github.com/contiv). Der [installer](http://github.com/contiv/install) bietet sowohl kubeadm als auch nicht-kubeadm basierte Installationen. +* [Contrail](http://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), basierend auf [Tungsten Fabric](https://tungsten.io), ist eine Open Source, multi-Cloud Netzwerkvirtualisierungs- und Policy-Management Plattform. Contrail und Tungsten Fabric sind mit Orechstratoren wie z.B. Kubernetes, OpenShift, OpenStack und Mesos integriert und bieten Isolationsmodi für Virtuelle Maschinen, Container (bzw. Pods) und Bare Metal workloads. +* [Flannel](https://github.com/coreos/flannel/blob/master/Documentation/kubernetes.md) ist ein Overlay-Network-Provider der mit Kubernetes genutzt werden kann. +* [Knitter](https://github.com/ZTE/Knitter/) ist eine Network-Lösung die Mehrfach-Network in Kubernetes ermöglicht. +* [Multus](https://github.com/Intel-Corp/multus-cni) ist ein Multi-Plugin für Mehrfachnetzwerk-Unterstützung um alle CNI-Plugins (z.B. Calico, Cilium, Contiv, Flannel), zusätzlich zu SRIOV-, DPDK-, OVS-DPDK- und VPP-Basierten Workloads in Kubernetes zu unterstützen. 
+* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) bietet eine Integration zwischen VMware NSX-T und einem Orchestator wie z.B. Kubernetes. Außerdem bietet es eine Integration zwischen NSX-T und Containerbasierten CaaS/PaaS-Plattformen wie z.B. Pivotal Container Service (PKS) und OpenShift. +* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) ist eine SDN-Plattform die Policy-Basiertes Networking zwischen Kubernetes Pods und nicht-Kubernetes Umgebungen inklusive Sichtbarkeit und Security-Monitoring bereitstellt. +* [Romana](http://romana.io) ist eine Layer 3 Network-Lösung für Pod-Netzwerke welche auch die [NetworkPolicy API](/docs/concepts/services-networking/network-policies/) unterstützt. Details zur Installation als kubeadm Add-On sind [hier](https://github.com/romana/romana/tree/master/containerize) verfügbar. +* [Weave Net](https://www.weave.works/docs/net/latest/kube-addon/) bietet Networking and Network-Policies und arbeitet auf beiden Seiten der Network-Partition ohne auf eine externe Datenbank angwiesen zu sein. + +## Service-Discovery + +* [CoreDNS](https://coredns.io) ist ein flexibler, erweiterbarer DNS-Server der in einem Cluster [installiert](https://github.com/coredns/deployment/tree/master/kubernetes) werden kann und das Cluster-interne DNS für Pods bereitzustellen. + +## Visualisierung & Überwachung + +* [Dashboard](https://github.com/kubernetes/dashboard#kubernetes-dashboard) ist ein Dashboard Web Interface für Kubernetes. +* [Weave Scope](https://www.weave.works/documentation/scope-latest-installing/#k8s) ist ein Tool um Container, Pods, Services usw. Grafisch zu visualieren. Kann in Verbindung mit einem [Weave Cloud Account](https://cloud.weave.works/) genutzt oder selbst gehosted werden. + +## Infrastruktur + +* [KubeVirt](https://kubevirt.io/user-guide/docs/latest/administration/intro.html#cluster-side-add-on-deployment) ist ein Add-On um Virtuelle Maschinen in Kubernetes auszuführen. Wird typischer auf Bare-Metal Clustern eingesetzt. + +## Legacy Add-Ons + +Es gibt einige weitere Add-Ons die in dem abgekündigten [cluster/addons](https://git.k8s.io/kubernetes/cluster/addons)-Verzeichnis dokumentiert sind. + +Add-Ons die ordentlich gewartet werden dürfen gerne hier aufgezählt werden. Wir freuen uns auf PRs! + +{{% /capture %}} diff --git a/content/de/docs/reference/glossary/etcd.md b/content/de/docs/reference/glossary/etcd.md index e98215a688558..147d4ce4bfb74 100755 --- a/content/de/docs/reference/glossary/etcd.md +++ b/content/de/docs/reference/glossary/etcd.md @@ -15,5 +15,5 @@ tags: -Halten Sie immer einen Sicherungsplan für etcds Daten für Ihren Kubernetes-Cluster bereit. Ausführliche Informationen zu etcd finden Sie in der [etcd Dokumentation](https://github.com/coreos/etcd/blob/master/Documentation/docs.md). +Halten Sie immer einen Sicherungsplan für etcds Daten für Ihren Kubernetes-Cluster bereit. Ausführliche Informationen zu etcd finden Sie in der [etcd Dokumentation](https://etcd.io/docs). 
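The glossary entry above only advises keeping a backup plan for etcd's data. As an illustration, here is a minimal sketch of taking a snapshot with `etcdctl`; the endpoint and the kubeadm-style certificate paths are assumptions and will differ per cluster:

```shell
# Take a point-in-time snapshot of etcd (v3 API); adjust the endpoint and cert paths to your cluster
ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Verify the snapshot was written correctly
ETCDCTL_API=3 etcdctl snapshot status /var/backups/etcd-snapshot.db
```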
diff --git a/content/de/docs/tutorials/hello-minikube.md b/content/de/docs/tutorials/hello-minikube.md index 3532b91f39d01..2e244c3f599a0 100644 --- a/content/de/docs/tutorials/hello-minikube.md +++ b/content/de/docs/tutorials/hello-minikube.md @@ -145,7 +145,7 @@ Um den "Hallo-Welt"-Container außerhalb des virtuellen Netzwerks von Kubernetes ``` Bei Cloud-Anbietern, die Load-Balancer unterstützen, wird eine externe IP-Adresse für den Zugriff auf den Dienst bereitgestellt. - Bei Minikube ermöglicht der Typ `LoadBalancer` den Dienst über den Befehl `minikube service` verfuügbar zu machen. + Bei Minikube ermöglicht der Typ `LoadBalancer` den Dienst über den Befehl `minikube service` verfügbar zu machen. 3. Führen Sie den folgenden Befehl aus: diff --git a/content/en/_index.html b/content/en/_index.html index 6882036d47a1a..0be8e7137501f 100644 --- a/content/en/_index.html +++ b/content/en/_index.html @@ -45,12 +45,12 @@

The Challenges of Migrating 150+ Microservices to Kubernetes




- Attend KubeCon in Amsterdam on Mar. 30-Apr. 2, 2020 + Attend KubeCon in Amsterdam on Mar. 30-Apr. 2, 2020



- Attend KubeCon in Shanghai on July 28-30, 2020 + Attend KubeCon in Shanghai on July 28-30, 2020
diff --git a/content/en/blog/_posts/2019-07-18-some-apis-are-being-deprecated.md b/content/en/blog/_posts/2019-07-18-some-apis-are-being-deprecated.md index 68bd0dcae13ab..9639e87a5366a 100644 --- a/content/en/blog/_posts/2019-07-18-some-apis-are-being-deprecated.md +++ b/content/en/blog/_posts/2019-07-18-some-apis-are-being-deprecated.md @@ -12,21 +12,45 @@ When APIs evolve, the old API is deprecated and eventually removed. The **v1.16** release will stop serving the following deprecated API versions in favor of newer and more stable API versions: -* NetworkPolicy (in the **extensions/v1beta1** API group) - * Migrate to use the **networking.k8s.io/v1** API, available since v1.8. - Existing persisted data can be retrieved/updated via the **networking.k8s.io/v1** API. -* PodSecurityPolicy (in the **extensions/v1beta1** API group) +* NetworkPolicy in the **extensions/v1beta1** API version is no longer served + * Migrate to use the **networking.k8s.io/v1** API version, available since v1.8. + Existing persisted data can be retrieved/updated via the new version. +* PodSecurityPolicy in the **extensions/v1beta1** API version * Migrate to use the **policy/v1beta1** API, available since v1.10. - Existing persisted data can be retrieved/updated via the **policy/v1beta1** API. -* DaemonSet, Deployment, StatefulSet, and ReplicaSet (in the **extensions/v1beta1** and **apps/v1beta2** API groups) - * Migrate to use the **apps/v1** API, available since v1.9. - Existing persisted data can be retrieved/updated via the **apps/v1** API. + Existing persisted data can be retrieved/updated via the new version. +* DaemonSet in the **extensions/v1beta1** and **apps/v1beta2** API versions is no longer served + * Migrate to use the **apps/v1** API version, available since v1.9. + Existing persisted data can be retrieved/updated via the new version. + * Notable changes: + * `spec.templateGeneration` is removed + * `spec.selector` is now required and immutable after creation + * `spec.updateStrategy.type` now defaults to `RollingUpdate` +* Deployment in the **extensions/v1beta1**, **apps/v1beta1**, and **apps/v1beta2** API versions is no longer served + * Migrate to use the **apps/v1** API version, available since v1.9. + Existing persisted data can be retrieved/updated via the new version. + * Notable changes: + * `spec.rollbackTo` is removed + * `spec.selector` is now required and immutable after creation + * `spec.progressDeadlineSeconds` now defaults to `600` seconds + * `spec.revisionHistoryLimit` now defaults to `10` + * `maxSurge` and `maxUnavailable` now default to `25%` +* StatefulSet in the **apps/v1beta1** and **apps/v1beta2** API versions is no longer served + * Migrate to use the **apps/v1** API version, available since v1.9. + Existing persisted data can be retrieved/updated via the new version. + * Notable changes: + * `spec.selector` is now required and immutable after creation + * `spec.updateStrategy.type` now defaults to `RollingUpdate` +* ReplicaSet in the **extensions/v1beta1**, **apps/v1beta1**, and **apps/v1beta2** API versions is no longer served + * Migrate to use the **apps/v1** API version, available since v1.9. + Existing persisted data can be retrieved/updated via the new version. 
+ * Notable changes: + * `spec.selector` is now required and immutable after creation The **v1.20** release will stop serving the following deprecated API versions in favor of newer and more stable API versions: -* Ingress (in the **extensions/v1beta1** API group) - * Migrate to use the **networking.k8s.io/v1beta1** API, serving Ingress since v1.14. - Existing persisted data can be retrieved/updated via the **networking.k8s.io/v1beta1** API. +* Ingress in the **extensions/v1beta1** API version will no longer be served + * Migrate to use the **networking.k8s.io/v1beta1** API version, available since v1.14. + Existing persisted data can be retrieved/updated via the new version. # What To Do diff --git a/content/en/blog/_posts/2020-01-21-Docs-Review-2019.md b/content/en/blog/_posts/2020-01-21-Docs-Review-2019.md new file mode 100644 index 0000000000000..cf163c4b1e2b9 --- /dev/null +++ b/content/en/blog/_posts/2020-01-21-Docs-Review-2019.md @@ -0,0 +1,99 @@ +--- +layout: blog +title: "Reviewing 2019 in Docs" +date: 2020-01-21 +slug: reviewing-2019-in-docs +--- + +**Author:** Zach Corleissen (Cloud Native Computing Foundation) + +Hi, folks! I'm one of the co-chairs for the Kubernetes documentation special interest group (SIG Docs). This blog post is a review of SIG Docs in 2019. Our contributors did amazing work last year, and I want to highlight their successes. + +Although I review 2019 in this post, my goal is to point forward to 2020. I observe some trends in SIG Docs–some good, others troubling. I want to raise visibility before those challenges increase in severity. + +## The good + +There was much to celebrate in SIG Docs in 2019. + +Kubernetes docs started the year with three localizations in progress. By the end of the year, we ended with ten localizations available, four of which (Chinese, French, Japanese, Korean) are reasonably complete. The Korean and French teams deserve special mentions for their contributions to git best practices across all localizations (Korean team) and help bootstrapping other localizations (French team). + +Despite significant transition over the year, SIG Docs [improved its review velocity](https://k8s.devstats.cncf.io/d/44/pr-time-to-approve-and-merge?orgId=1&var-period=w&var-repogroup_name=SIG%20Docs&var-apichange=All&var-size_name=All&var-kind_name=All), with a median review time from PR open to merge of just over 24 hours. + +Issue triage improved significantly in both volume and speed, largely due to the efforts of GitHub users @sftim, @tengqm, and @kbhawkey. + +Doc sprints remain valuable at KubeCon contributor days, introducing new contributors to Kubernetes documentation. + +The docs component of Kubernetes quarterly releases improved over 2019, thanks to iterative playbook improvements from release leads and their teams. + +Site traffic increased over the year. The website ended the year with ~6 million page views per month in December, up from ~5M page views in January. The kubernetes.io website had 851k site visitors in October, a new all-time high. Reader satisfaction [remains general](https://kubernetes.io/blog/2019/10/29/kubernetes-documentation-end-user-survey/). + +We onboarded a new SIG chair: @jimangel, a Cloud Architect at General Motors. Jim was a docs contributor for a year, during which he led the 1.14 docs release, before stepping up as chair. + + + +## The not so good + +While reader satisfaction is decent, **most respondents indicated dissatisfaction with stale content** in every area: concepts, tasks, tutorials, and reference. 
Additionally, readers requested more diagrams, advanced conceptual content, and code samples—things that technical writers excel at providing. + +SIG Docs continues to solve how best to handle [third-party content](https://github.com/kubernetes/enhancements/pull/1327). **There's too much vendor content on kubernetes.io**, and guidelines for adding or rejecting third-party content remain unclear. The discussion so far has been powerful, including pushback demanding greater collaborative input—a powerful reminder that Kubernetes is in all ways a communal effort. + + +We're in the middle of our third chair transition in 18 months. Each chair transition has been healthy and collegial, but it's still a lot of turnover in a short time. Chairing any open source project is difficult, but especially so with SIG Docs. Chairship of SIG Docs requires a steep learning curve across multiple domains: docs (both written and generated from spec), information architecture, specialized contribution paths (for example, localization), how to run a release cycle, website development, CI/CD, community management, on and on. It's a role that requires multiple people to function successfully without burning people out. Training replacements is time-intensive. + +Perhaps most pressing in the Not So Good category is that SIG Docs currently has only one technical writer dedicated full-time to Kubernetes docs. This has impacts on Kubernetes docs: some obvious, some less so. + +## Impacts of understaffing on Kubernetes docs + + + +If Kubernetes continues through 2020 without more technical writers dedicated to the docs, here's what I see as the most likely possibilities. + +### But first, a disclaimer + +{{< caution >}} + +It is very hard to predict, especially the future. +-Niels Bohr + +{{< /caution >}} + + +Some of my predictions are almost certainly wrong. Any errors are mine alone. + +That said... + +### Effects in 2020 + +Current levels of function aren't self-sustaining. Even with a strong playbook, the release cycle still requires expert support from at least one (and usually two) chairs during every cycle. Without fail, each release breaks in new and unexpected ways, and it requires familiarity and expertise to diagnose and resolve. As chairs continue to cycle—and to be clear, regular transitions are part of a healthy project—we accrue the risks associated with a pool lacking sufficient professional depth and employer support. + +Oddly enough, one of the challenges to staffing is that the docs appear good enough. Based on site analytics and survey responses, readers are pleased with the quality of the docs. When folks visit the site, they generally find what they need and behave like satisfied visitors. + +The danger is that this will change over time: slowly with occasional losses of function, annoying at first, then increasingly critical. The more time passes without adequate staffing, the more difficult and costly fixes will become. + +I suspect this is true because the challenges we face now at decent levels of reader satisfaction are already difficult to fix. API reference generation is complex and brittle; the site's UI is outdated; and our most consistent requests are for more tutorials, advanced concepts, diagrams, and code samples, all of which require ongoing, dedicated time to create. + +**Release support remains strong.** + +The release team continues a solid habit of leaving each successive team with better support than the previous release. 
This mostly takes the form of iterative improvements to the [docs release playbook](https://github.com/kubernetes/community/tree/master/sig-release#docs-lead), producing better documentation and reducing siloed knowledge. + +**Staleness accelerates.** + +Conceptual content becomes less accurate or relevant as features change or deprecate. Tutorial content degrades for the same reason. + +The content structure will also degrade: the categories of concepts, tasks, and tutorials are legacy categories that may not best fit the needs of current readers, let alone future ones. + +Cruft accumulates for both readers and contributors. Reference docs become increasingly brittle without intervention. + +**Critical knowledge vanishes.** + +As I mentioned previously, SIG Docs has a wide range of functions, some with a steep learning curve. As contributors change roles or jobs, their expertise and availability will diminish or reduce to zero. Contributors with specific knowledge may not be available for consultation, exposing critical vulnerabilities in docs function. Specific examples include reference generation and chair leadership. + +### That's a lot to take in + +It's difficult to strike a balance between the importance of SIG Docs' work to the community and our users, the joy it brings me personally, and the fact that things can't remain as they are without significant negative impacts (eventually). SIG Docs is by no means dying; it's a vibrant community with active contributors doing cool things. It's also a community with some critical knowledge and capacity shortages that can only be remedied with trained, paid staff dedicated to documentation. + +## What the community can do for healthy docs + +Hire technical writers dedicated to Kubernetes docs. Support advanced content creation, not just release docs and incremental feature updates. + +Thanks, and Happy 2020. diff --git a/content/en/blog/_posts/2020-01-21-csi-ephemeral-inline-volumes.md b/content/en/blog/_posts/2020-01-21-csi-ephemeral-inline-volumes.md new file mode 100644 index 0000000000000..46570a3e5a91f --- /dev/null +++ b/content/en/blog/_posts/2020-01-21-csi-ephemeral-inline-volumes.md @@ -0,0 +1,251 @@ +--- +title: CSI Ephemeral Inline Volumes +date: 2020-01-21 +--- + +**Author:** Patrick Ohly (Intel) + +Typically, volumes provided by an external storage driver in +Kubernetes are *persistent*, with a lifecycle that is completely +independent of pods or (as a special case) loosely coupled to the +first pod which uses a volume ([late binding +mode](https://kubernetes.io/docs/concepts/storage/storage-classes/#volume-binding-mode)). +The mechanism for requesting and defining such volumes in Kubernetes +are [Persistent Volume Claim (PVC) and Persistent Volume +(PV)](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) +objects. Originally, volumes that are backed by a Container Storage Interface +(CSI) driver could only be used via this PVC/PV mechanism. + +But there are also use cases for data volumes whose content and +lifecycle is tied to a pod. For example, a driver might populate a +volume with dynamically created secrets that are specific to the +application running in the pod. Such volumes need to be created +together with a pod and can be deleted as part of pod termination +(*ephemeral*). They get defined as part of the pod spec (*inline*). + +Since Kubernetes 1.15, CSI drivers can also be used for such +*ephemeral inline* volumes. 
The [CSIInlineVolume feature +gate](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/) +had to be set to enable it in 1.15 because support was still in alpha +state. In 1.16, the feature reached beta state, which typically means +that it is enabled in clusters by default. + +CSI drivers have to be adapted to support this because although two +existing CSI gRPC calls are used (`NodePublishVolume` and `NodeUnpublishVolume`), +the way they are +used is different and not covered by the CSI spec: for ephemeral +volumes, only `NodePublishVolume` is invoked by `kubelet` when asking +the CSI driver for a volume. All other calls +(like `CreateVolume`, `NodeStageVolume`, etc.) are skipped. The volume +parameters are provided in the pod spec and from there copied into the +`NodePublishVolumeRequest.volume_context` field. There are currently +no standardized parameters; even common ones like size must be +provided in a format that is defined by the CSI driver. Likewise, only +`NodeUnpublishVolume` gets called after the pod has terminated and the +volume needs to be removed. + +Initially, the assumption was that CSI drivers would be specifically +written to provide either persistent or ephemeral volumes. But there +are also drivers which provide storage that is useful in both modes: +for example, [PMEM-CSI](https://github.com/intel/pmem-csi) manages +persistent memory (PMEM), a new kind of local storage that is provided +by [Intel® Optane™ DC Persistent +Memory](https://www.intel.com/content/www/us/en/architecture-and-technology/optane-dc-persistent-memory.html). Such +memory is useful both as persistent data storage (faster than normal SSDs) +and as ephemeral scratch space (higher capacity than DRAM). + +Therefore the support in Kubernetes 1.16 was extended: +* Kubernetes and users can determine which kind of volumes a driver + supports via the `volumeLifecycleModes` field in the [`CSIDriver` + object](https://kubernetes-csi.github.io/docs/csi-driver-object.html#what-fields-does-the-csidriver-object-have). +* Drivers can get information about the volume mode by enabling the + ["pod info on + mount"](https://kubernetes-csi.github.io/docs/pod-info.html) feature + which will then add the new `csi.storage.k8s.io/ephemeral` entry to + the `NodePublishVolumeRequest.volume_context`. + +For more information about implementing support of ephemeral inline +volumes in a CSI driver, see the [Kubernetes-CSI +documentation](https://kubernetes-csi.github.io/docs/ephemeral-local-volumes.html) +and the [original design +document](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/20190122-csi-inline-volumes.md). + +What follows in this blog post are usage examples based on real drivers +and a summary at the end. + +# Examples + +## [PMEM-CSI](https://github.com/intel/pmem-csi) + +Support for ephemeral inline volumes was added in [release +v0.6.0](https://github.com/intel/pmem-csi/releases/tag/v0.6.0). The +driver can be used on hosts with real Intel® Optane™ DC Persistent +Memory, on [special machines in +GCE](https://github.com/intel/pmem-csi/blob/v0.6.0/examples/gce.md) or +with hardware emulated by QEMU.
The latter is fully [integrated into +the +makefile](https://github.com/intel/pmem-csi/tree/v0.6.0#qemu-and-kubernetes) +and only needs Go, Docker and KVM, so that approach was used for this +example: + +```sh +git clone --branch release-0.6 https://github.com/intel/pmem-csi +cd pmem-csi +TEST_DISTRO=clear TEST_DISTRO_VERSION=32080 TEST_PMEM_REGISTRY=intel make start +``` + +Bringing up the four-node cluster can take a while but eventually should end with: + +``` +The test cluster is ready. Log in with /work/pmem-csi/_work/pmem-govm/ssh-pmem-govm, run kubectl once logged in. +Alternatively, KUBECONFIG=/work/pmem-csi/_work/pmem-govm/kube.config can also be used directly. + +To try out the pmem-csi driver persistent volumes: +... + +To try out the pmem-csi driver ephemeral volumes: + cat deploy/kubernetes-1.17/pmem-app-ephemeral.yaml | /work/pmem-csi/_work/pmem-govm/ssh-pmem-govm kubectl create -f - +``` + +`deploy/kubernetes-1.17/pmem-app-ephemeral.yaml` specifies one volume: + +``` +kind: Pod +apiVersion: v1 +metadata: + name: my-csi-app-inline-volume +spec: + containers: + - name: my-frontend + image: busybox + command: [ "sleep", "100000" ] + volumeMounts: + - mountPath: "/data" + name: my-csi-volume + volumes: + - name: my-csi-volume + csi: + driver: pmem-csi.intel.com + fsType: "xfs" + volumeAttributes: + size: "2Gi" + nsmode: "fsdax" +``` + +Once we have created that pod, we can inspect the result: + +```sh +kubectl describe pods/my-csi-app-inline-volume +``` + +``` +Name: my-csi-app-inline-volume +... +Volumes: + my-csi-volume: + Type: CSI (a Container Storage Interface (CSI) volume source) + Driver: pmem-csi.intel.com + FSType: xfs + ReadOnly: false + VolumeAttributes: nsmode=fsdax + size=2Gi +``` + +```sh +kubectl exec my-csi-app-inline-volume -- df -h /data +``` + +``` +Filesystem Size Used Available Use% Mounted on +/dev/ndbus0region0fsdax/d7eb073f2ab1937b88531fce28e19aa385e93696 + 1.9G 34.2M 1.8G 2% /data +``` + + +## [Image Populator](https://github.com/kubernetes-csi/csi-driver-image-populator) + +The image populator automatically unpacks a container image and makes +its content available as an ephemeral volume. It's still in +development, but canary images are already available which can be +installed with: + +```sh +kubectl create -f https://github.com/kubernetes-csi/csi-driver-image-populator/raw/master/deploy/kubernetes-1.16/csi-image-csidriverinfo.yaml +kubectl create -f https://github.com/kubernetes-csi/csi-driver-image-populator/raw/master/deploy/kubernetes-1.16/csi-image-daemonset.yaml +``` + +This example pod will run nginx and have it serve data that +comes from the `kfox1111/misc:test` image: + +```sh +kubectl create -f - <> /etc/hosts + +hostnamectl set-hostname master1 +``` +### Install Docker and Kubernetes + +Next, we'll follow the official documents to install docker and Kubernetes using kubeadm. + +Install Docker following the steps from the [container runtime](/docs/setup/production-environment/container-runtimes/) documentation. + +Note that it is a [best practice to use systemd as the cgroup driver](/docs/setup/production-environment/container-runtimes/#cgroup-drivers) for Kubernetes. +If you use an internal container registry, add them to the docker config. +```shell +# Install Docker CE +## Set up the repository +### Install required packages. + +yum install yum-utils device-mapper-persistent-data lvm2 + +### Add Docker repository. + +yum-config-manager \ + --add-repo \ + https://download.docker.com/linux/centos/docker-ce.repo + +## Install Docker CE. 
+ +yum update && yum install docker-ce-18.06.2.ce + +## Create /etc/docker directory. + +mkdir /etc/docker + +# Configure the Docker daemon + +cat > /etc/docker/daemon.json < /etc/yum.repos.d/kubernetes.repo +[kubernetes] +name=Kubernetes +baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 +enabled=1 +gpgcheck=1 +repo_gpgcheck=1 +gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg +EOF + +# Set SELinux in permissive mode (effectively disabling it) +# Caveat: In a production environment you may not want to disable SELinux, please refer to Kubernetes documents about SELinux +setenforce 0 +sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config + +yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes + +systemctl enable --now kubelet + +cat < /etc/sysctl.d/k8s.conf +net.bridge.bridge-nf-call-ip6tables = 1 +net.bridge.bridge-nf-call-iptables = 1 +EOF +sysctl --system + +# check if br_netfilter module is loaded +lsmod | grep br_netfilter + +# if not, load it explicitly with +modprobe br_netfilter +``` + +The official document about how to create a single control-plane cluster can be found from the [Creating a single control-plane cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) documentation. + +We'll largely follow that document but also add additional things for the cloud provider. +To make things more clear, we'll use a `kubeadm-config.yml` for the control-plane node. +In this config we specify to use an external OpenStack cloud provider, and where to find its config. +We also enable storage API in API server's runtime config so we can use OpenStack volumes as persistent volumes in Kubernetes. + +```yaml +apiVersion: kubeadm.k8s.io/v1beta1 +kind: InitConfiguration +nodeRegistration: + kubeletExtraArgs: + cloud-provider: "external" +--- +apiVersion: kubeadm.k8s.io/v1beta2 +kind: ClusterConfiguration +kubernetesVersion: "v1.15.1" +apiServer: + extraArgs: + enable-admission-plugins: NodeRestriction + runtime-config: "storage.k8s.io/v1=true" +controllerManager: + extraArgs: + external-cloud-volume-plugin: openstack + extraVolumes: + - name: "cloud-config" + hostPath: "/etc/kubernetes/cloud-config" + mountPath: "/etc/kubernetes/cloud-config" + readOnly: true + pathType: File +networking: + serviceSubnet: "10.96.0.0/12" + podSubnet: "10.224.0.0/16" + dnsDomain: "cluster.local" +``` + +Now we'll create the cloud config, `/etc/kubernetes/cloud-config`, for OpenStack. +Note that the tenant here is the one we created for all Kubernetes VMs in the beginning. +All VMs should be launched in this project/tenant. +In addition you need to create a user in this tenant for Kubernetes to do queries. +The ca-file is the CA root certificate for OpenStack's API endpoint, for example `https://openstack.cloud:5000/v3` +At the time of writing the cloud provider doesn't allow insecure connections (skip CA check). 
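Before writing the file, it can help to double-check the IDs you are about to use. A rough sketch with the OpenStack CLI follows, assuming the client is installed and an RC file for the Kubernetes project has been sourced; the placeholder project name is illustrative. The subnet and floating network IDs found here are the ones referenced in the `[LoadBalancer]` section of the cloud config below.

```shell
# Confirm the credentials work at all
openstack token issue

# Look up the IDs referenced in the cloud config
openstack project show <kubernetes-project-name> -f value -c id   # tenant-id
openstack subnet list                                             # subnet-id for [LoadBalancer]
openstack network list --external                                 # floating-network-id
```

With those values at hand, the cloud config itself looks like this: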
+ +```ini +[Global] +region=RegionOne +username=username +password=password +auth-url=https://openstack.cloud:5000/v3 +tenant-id=14ba698c0aec4fd6b7dc8c310f664009 +domain-id=default +ca-file=/etc/kubernetes/ca.pem + +[LoadBalancer] +subnet-id=b4a9a292-ea48-4125-9fb2-8be2628cb7a1 +floating-network-id=bc8a590a-5d65-4525-98f3-f7ef29c727d5 + +[BlockStorage] +bs-version=v2 + +[Networking] +public-network-name=public +ipv6-support-disabled=false +``` + +Next run kubeadm to initiate the control-plane node +```shell +kubeadm init --config=kubeadm-config.yml +``` + +With the initialization completed, copy admin config to .kube +```shell + mkdir -p $HOME/.kube + sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config + sudo chown $(id -u):$(id -g) $HOME/.kube/config +``` + +At this stage, the control-plane node is created but not ready. All the nodes have the taint `node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule` and are waiting to be initialized by the cloud-controller-manager. +```console +# kubectl describe no master1 +Name: master1 +Roles: master +...... +Taints: node-role.kubernetes.io/master:NoSchedule + node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule + node.kubernetes.io/not-ready:NoSchedule +...... +``` +Now deploy the OpenStack cloud controller manager into the cluster, following [using controller manager with kubeadm](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-controller-manager-with-kubeadm.md). + +Create a secret with the cloud-config for the openstack cloud provider. +```shell +kubectl create secret -n kube-system generic cloud-config --from-literal=cloud.conf="$(cat /etc/kubernetes/cloud-config)" --dry-run -o yaml > cloud-config-secret.yaml +kubectl apply -f cloud-config-secret.yaml +``` + +Get the CA certificate for OpenStack API endpoints and put that into `/etc/kubernetes/ca.pem`. + +Create RBAC resources. +```shell +kubectl apply -f https://github.com/kubernetes/cloud-provider-openstack/raw/release-1.15/cluster/addons/rbac/cloud-controller-manager-roles.yaml +kubectl apply -f https://github.com/kubernetes/cloud-provider-openstack/raw/release-1.15/cluster/addons/rbac/cloud-controller-manager-role-bindings.yaml +``` + +We'll run the OpenStack cloud controller manager as a DaemonSet rather than a pod. +The manager will only run on the control-plane node, so if there are multiple control-plane nodes, multiple pods will be run for high availability. +Create `openstack-cloud-controller-manager-ds.yaml` containing the following manifests, then apply it. 
+ +```yaml +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: cloud-controller-manager + namespace: kube-system +--- +apiVersion: apps/v1 +kind: DaemonSet +metadata: + name: openstack-cloud-controller-manager + namespace: kube-system + labels: + k8s-app: openstack-cloud-controller-manager +spec: + selector: + matchLabels: + k8s-app: openstack-cloud-controller-manager + updateStrategy: + type: RollingUpdate + template: + metadata: + labels: + k8s-app: openstack-cloud-controller-manager + spec: + nodeSelector: + node-role.kubernetes.io/master: "" + securityContext: + runAsUser: 1001 + tolerations: + - key: node.cloudprovider.kubernetes.io/uninitialized + value: "true" + effect: NoSchedule + - key: node-role.kubernetes.io/master + effect: NoSchedule + - effect: NoSchedule + key: node.kubernetes.io/not-ready + serviceAccountName: cloud-controller-manager + containers: + - name: openstack-cloud-controller-manager + image: docker.io/k8scloudprovider/openstack-cloud-controller-manager:v1.15.0 + args: + - /bin/openstack-cloud-controller-manager + - --v=1 + - --cloud-config=$(CLOUD_CONFIG) + - --cloud-provider=openstack + - --use-service-account-credentials=true + - --address=127.0.0.1 + volumeMounts: + - mountPath: /etc/kubernetes/pki + name: k8s-certs + readOnly: true + - mountPath: /etc/ssl/certs + name: ca-certs + readOnly: true + - mountPath: /etc/config + name: cloud-config-volume + readOnly: true + - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec + name: flexvolume-dir + - mountPath: /etc/kubernetes + name: ca-cert + readOnly: true + resources: + requests: + cpu: 200m + env: + - name: CLOUD_CONFIG + value: /etc/config/cloud.conf + hostNetwork: true + volumes: + - hostPath: + path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec + type: DirectoryOrCreate + name: flexvolume-dir + - hostPath: + path: /etc/kubernetes/pki + type: DirectoryOrCreate + name: k8s-certs + - hostPath: + path: /etc/ssl/certs + type: DirectoryOrCreate + name: ca-certs + - name: cloud-config-volume + secret: + secretName: cloud-config + - name: ca-cert + secret: + secretName: openstack-ca-cert +``` + +When the controller manager is running, it will query OpenStack to get information about the nodes and remove the taint. In the node info you'll see the VM's UUID in OpenStack. +```console +# kubectl describe no master1 +Name: master1 +Roles: master +...... +Taints: node-role.kubernetes.io/master:NoSchedule + node.kubernetes.io/not-ready:NoSchedule +...... +sage:docker: network plugin is not ready: cni config uninitialized +...... +PodCIDR: 10.224.0.0/24 +ProviderID: openstack:///548e3c46-2477-4ce2-968b-3de1314560a5 + +``` +Now install your favourite CNI and the control-plane node will become ready. + +For example, to install Weave Net, run this command: +```shell +kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" +``` + +Next we'll set up worker nodes. + +Firstly, install docker and kubeadm in the same way as how they were installed in the control-plane node. +To join them to the cluster we need a token and ca cert hash from the output of control-plane node installation. +If it is expired or lost we can recreate it using these commands. + +```shell +# check if token is expired +kubeadm token list + +# re-create token and show join command +kubeadm token create --print-join-command + +``` + +Create `kubeadm-config.yml` for worker nodes with the above token and ca cert hash. 
+```yaml
+apiVersion: kubeadm.k8s.io/v1beta2
+discovery:
+  bootstrapToken:
+    apiServerEndpoint: 192.168.1.7:6443
+    token: 0c0z4p.dnafh6vnmouus569
+    caCertHashes: ["sha256:fcb3e956a6880c05fc9d09714424b827f57a6fdc8afc44497180905946527adf"]
+kind: JoinConfiguration
+nodeRegistration:
+  kubeletExtraArgs:
+    cloud-provider: "external"
+```
+
+The `apiServerEndpoint` is the control-plane node; `token` and `caCertHashes` can be taken from the join command printed in the output of the `kubeadm token create` command.
+
+Run kubeadm and the worker nodes will be joined to the cluster:
+```shell
+kubeadm join --config kubeadm-config.yml
+```
+
+At this stage, we'll have a working Kubernetes cluster with an external OpenStack cloud provider.
+The provider tells Kubernetes about the mapping between Kubernetes nodes and OpenStack VMs.
+If Kubernetes wants to attach a persistent volume to a pod, it can find out from the mapping which OpenStack VM the pod is running on, and attach the underlying OpenStack volume to the VM accordingly.
+
+### Deploy Cinder CSI
+
+The integration with Cinder is provided by an external Cinder CSI plugin, as described in the [Cinder CSI](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-cinder-csi-plugin.md) documentation.
+
+We'll perform the following steps to install the Cinder CSI plugin.
+First, create a secret with the CA certificate for OpenStack's API endpoints. This is the same certificate file we used for the cloud provider above.
+```shell
+kubectl create secret -n kube-system generic openstack-ca-cert --from-literal=ca.pem="$(cat /etc/kubernetes/ca.pem)" --dry-run -o yaml > openstack-ca-cert.yaml
+kubectl apply -f openstack-ca-cert.yaml
+```
+Then create the RBAC resources:
+```shell
+kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/release-1.15/manifests/cinder-csi-plugin/cinder-csi-controllerplugin-rbac.yaml
+kubectl apply -f https://github.com/kubernetes/cloud-provider-openstack/raw/release-1.15/manifests/cinder-csi-plugin/cinder-csi-nodeplugin-rbac.yaml
+```
+
+The Cinder CSI plugin includes a controller plugin and a node plugin.
+The controller communicates with the Kubernetes APIs and the Cinder APIs to create/attach/detach/delete Cinder volumes. The node plugin in turn runs on each worker node to bind a storage device (attached volume) to a pod, and unbind it during deletion.
+Create `cinder-csi-controllerplugin.yaml` and apply it to create the CSI controller.
+```yaml +kind: Service +apiVersion: v1 +metadata: + name: csi-cinder-controller-service + namespace: kube-system + labels: + app: csi-cinder-controllerplugin +spec: + selector: + app: csi-cinder-controllerplugin + ports: + - name: dummy + port: 12345 + +--- +kind: StatefulSet +apiVersion: apps/v1 +metadata: + name: csi-cinder-controllerplugin + namespace: kube-system +spec: + serviceName: "csi-cinder-controller-service" + replicas: 1 + selector: + matchLabels: + app: csi-cinder-controllerplugin + template: + metadata: + labels: + app: csi-cinder-controllerplugin + spec: + serviceAccount: csi-cinder-controller-sa + containers: + - name: csi-attacher + image: quay.io/k8scsi/csi-attacher:v1.0.1 + args: + - "--v=5" + - "--csi-address=$(ADDRESS)" + env: + - name: ADDRESS + value: /var/lib/csi/sockets/pluginproxy/csi.sock + imagePullPolicy: "IfNotPresent" + volumeMounts: + - name: socket-dir + mountPath: /var/lib/csi/sockets/pluginproxy/ + - name: csi-provisioner + image: quay.io/k8scsi/csi-provisioner:v1.0.1 + args: + - "--provisioner=csi-cinderplugin" + - "--csi-address=$(ADDRESS)" + env: + - name: ADDRESS + value: /var/lib/csi/sockets/pluginproxy/csi.sock + imagePullPolicy: "IfNotPresent" + volumeMounts: + - name: socket-dir + mountPath: /var/lib/csi/sockets/pluginproxy/ + - name: csi-snapshotter + image: quay.io/k8scsi/csi-snapshotter:v1.0.1 + args: + - "--connection-timeout=15s" + - "--csi-address=$(ADDRESS)" + env: + - name: ADDRESS + value: /var/lib/csi/sockets/pluginproxy/csi.sock + imagePullPolicy: Always + volumeMounts: + - mountPath: /var/lib/csi/sockets/pluginproxy/ + name: socket-dir + - name: cinder-csi-plugin + image: docker.io/k8scloudprovider/cinder-csi-plugin:v1.15.0 + args : + - /bin/cinder-csi-plugin + - "--v=5" + - "--nodeid=$(NODE_ID)" + - "--endpoint=$(CSI_ENDPOINT)" + - "--cloud-config=$(CLOUD_CONFIG)" + - "--cluster=$(CLUSTER_NAME)" + env: + - name: NODE_ID + valueFrom: + fieldRef: + fieldPath: spec.nodeName + - name: CSI_ENDPOINT + value: unix://csi/csi.sock + - name: CLOUD_CONFIG + value: /etc/config/cloud.conf + - name: CLUSTER_NAME + value: kubernetes + imagePullPolicy: "IfNotPresent" + volumeMounts: + - name: socket-dir + mountPath: /csi + - name: secret-cinderplugin + mountPath: /etc/config + readOnly: true + - mountPath: /etc/kubernetes + name: ca-cert + readOnly: true + volumes: + - name: socket-dir + hostPath: + path: /var/lib/csi/sockets/pluginproxy/ + type: DirectoryOrCreate + - name: secret-cinderplugin + secret: + secretName: cloud-config + - name: ca-cert + secret: + secretName: openstack-ca-cert +``` + + +Create `cinder-csi-nodeplugin.yaml` and apply it to create csi node. 
+```yaml +kind: DaemonSet +apiVersion: apps/v1 +metadata: + name: csi-cinder-nodeplugin + namespace: kube-system +spec: + selector: + matchLabels: + app: csi-cinder-nodeplugin + template: + metadata: + labels: + app: csi-cinder-nodeplugin + spec: + serviceAccount: csi-cinder-node-sa + hostNetwork: true + containers: + - name: node-driver-registrar + image: quay.io/k8scsi/csi-node-driver-registrar:v1.1.0 + args: + - "--v=5" + - "--csi-address=$(ADDRESS)" + - "--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)" + lifecycle: + preStop: + exec: + command: ["/bin/sh", "-c", "rm -rf /registration/cinder.csi.openstack.org /registration/cinder.csi.openstack.org-reg.sock"] + env: + - name: ADDRESS + value: /csi/csi.sock + - name: DRIVER_REG_SOCK_PATH + value: /var/lib/kubelet/plugins/cinder.csi.openstack.org/csi.sock + - name: KUBE_NODE_NAME + valueFrom: + fieldRef: + fieldPath: spec.nodeName + imagePullPolicy: "IfNotPresent" + volumeMounts: + - name: socket-dir + mountPath: /csi + - name: registration-dir + mountPath: /registration + - name: cinder-csi-plugin + securityContext: + privileged: true + capabilities: + add: ["SYS_ADMIN"] + allowPrivilegeEscalation: true + image: docker.io/k8scloudprovider/cinder-csi-plugin:v1.15.0 + args : + - /bin/cinder-csi-plugin + - "--nodeid=$(NODE_ID)" + - "--endpoint=$(CSI_ENDPOINT)" + - "--cloud-config=$(CLOUD_CONFIG)" + env: + - name: NODE_ID + valueFrom: + fieldRef: + fieldPath: spec.nodeName + - name: CSI_ENDPOINT + value: unix://csi/csi.sock + - name: CLOUD_CONFIG + value: /etc/config/cloud.conf + imagePullPolicy: "IfNotPresent" + volumeMounts: + - name: socket-dir + mountPath: /csi + - name: pods-mount-dir + mountPath: /var/lib/kubelet/pods + mountPropagation: "Bidirectional" + - name: kubelet-dir + mountPath: /var/lib/kubelet + mountPropagation: "Bidirectional" + - name: pods-cloud-data + mountPath: /var/lib/cloud/data + readOnly: true + - name: pods-probe-dir + mountPath: /dev + mountPropagation: "HostToContainer" + - name: secret-cinderplugin + mountPath: /etc/config + readOnly: true + - mountPath: /etc/kubernetes + name: ca-cert + readOnly: true + volumes: + - name: socket-dir + hostPath: + path: /var/lib/kubelet/plugins/cinder.csi.openstack.org + type: DirectoryOrCreate + - name: registration-dir + hostPath: + path: /var/lib/kubelet/plugins_registry/ + type: Directory + - name: kubelet-dir + hostPath: + path: /var/lib/kubelet + type: Directory + - name: pods-mount-dir + hostPath: + path: /var/lib/kubelet/pods + type: Directory + - name: pods-cloud-data + hostPath: + path: /var/lib/cloud/data + type: Directory + - name: pods-probe-dir + hostPath: + path: /dev + type: Directory + - name: secret-cinderplugin + secret: + secretName: cloud-config + - name: ca-cert + secret: + secretName: openstack-ca-cert + +``` +When they are both running, create a storage class for Cinder. + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: csi-sc-cinderplugin +provisioner: csi-cinderplugin +``` +Then we can create a PVC with this class. +```yaml +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: myvol +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: csi-sc-cinderplugin + +``` + +When the PVC is created, a Cinder volume is created correspondingly. 
+```console +# kubectl get pvc +NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE +myvol Bound pvc-14b8bc68-6c4c-4dc6-ad79-4cb29a81faad 1Gi RWO csi-sc-cinderplugin 3s + +``` +In OpenStack the volume name will match the Kubernetes persistent volume generated name. In this example it would be: _pvc-14b8bc68-6c4c-4dc6-ad79-4cb29a81faad_ + +Now we can create a pod with the PVC. +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: web +spec: + containers: + - name: web + image: nginx + ports: + - name: web + containerPort: 80 + hostPort: 8081 + protocol: TCP + volumeMounts: + - mountPath: "/usr/share/nginx/html" + name: mypd + volumes: + - name: mypd + persistentVolumeClaim: + claimName: myvol +``` +When the pod is running, the volume will be attached to the pod. +If we go back to OpenStack, we can see the Cinder volume is mounted to the worker node where the pod is running on. +```console +# openstack volume show 6b5f3296-b0eb-40cd-bd4f-2067a0d6287f ++--------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| Field | Value | ++--------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| attachments | [{u'server_id': u'1c5e1439-edfa-40ed-91fe-2a0e12bc7eb4', u'attachment_id': u'11a15b30-5c24-41d4-86d9-d92823983a32', u'attached_at': u'2019-07-24T05:02:34.000000', u'host_name': u'compute-6', u'volume_id': u'6b5f3296-b0eb-40cd-bd4f-2067a0d6287f', u'device': u'/dev/vdb', u'id': u'6b5f3296-b0eb-40cd-bd4f-2067a0d6287f'}] | +| availability_zone | nova | +| bootable | false | +| consistencygroup_id | None | +| created_at | 2019-07-24T05:02:18.000000 | +| description | Created by OpenStack Cinder CSI driver | +| encrypted | False | +| id | 6b5f3296-b0eb-40cd-bd4f-2067a0d6287f | +| migration_status | None | +| multiattach | False | +| name | pvc-14b8bc68-6c4c-4dc6-ad79-4cb29a81faad | +| os-vol-host-attr:host | rbd:volumes@rbd#rbd | +| os-vol-mig-status-attr:migstat | None | +| os-vol-mig-status-attr:name_id | None | +| os-vol-tenant-attr:tenant_id | 14ba698c0aec4fd6b7dc8c310f664009 | +| properties | attached_mode='rw', cinder.csi.openstack.org/cluster='kubernetes' | +| replication_status | None | +| size | 1 | +| snapshot_id | None | +| source_volid | None | +| status | in-use | +| type | rbd | +| updated_at | 2019-07-24T05:02:35.000000 | +| user_id | 5f6a7a06f4e3456c890130d56babf591 | ++--------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +``` + +### Summary + +In this walk-through, we deployed a Kubernetes cluster on OpenStack VMs and integrated it with OpenStack using an external OpenStack cloud provider. 
Then on this Kubernetes cluster we deployed Cinder CSI plugin which can create Cinder volumes and expose them in Kubernetes as persistent volumes. diff --git a/content/en/docs/concepts/architecture/nodes.md b/content/en/docs/concepts/architecture/nodes.md index 2288fdc1c48f5..cb5b78d55ec63 100644 --- a/content/en/docs/concepts/architecture/nodes.md +++ b/content/en/docs/concepts/architecture/nodes.md @@ -275,6 +275,12 @@ and do not respect the unschedulable attribute on a node. This assumes that daem the machine even if it is being drained of applications while it prepares for a reboot. {{< /note >}} +{{< caution >}} +`kubectl cordon` marks a node as 'unschedulable', which has the side effect of the service +controller removing the node from any LoadBalancer node target lists it was previously +eligible for, effectively removing incoming load balancer traffic from the cordoned node(s). +{{< /caution >}} + ### Node capacity The capacity of the node (number of cpus and amount of memory) is part of the node object. diff --git a/content/en/docs/concepts/cluster-administration/addons.md b/content/en/docs/concepts/cluster-administration/addons.md index b82edcd2755a5..75bf1b6a223c1 100644 --- a/content/en/docs/concepts/cluster-administration/addons.md +++ b/content/en/docs/concepts/cluster-administration/addons.md @@ -28,7 +28,7 @@ Add-ons in each section are sorted alphabetically - the ordering does not imply * [Contiv](http://contiv.github.io) provides configurable networking (native L3 using BGP, overlay using vxlan, classic L2, and Cisco-SDN/ACI) for various use cases and a rich policy framework. Contiv project is fully [open sourced](http://github.com/contiv). The [installer](http://github.com/contiv/install) provides both kubeadm and non-kubeadm based installation options. * [Contrail](http://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is an open source, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with orchestration systems such as Kubernetes, OpenShift, OpenStack and Mesos, and provide isolation modes for virtual machines, containers/pods and bare metal workloads. * [Flannel](https://github.com/coreos/flannel/blob/master/Documentation/kubernetes.md) is an overlay network provider that can be used with Kubernetes. -* [Knitter](https://github.com/ZTE/Knitter/) is a network solution supporting multiple networking in Kubernetes. +* [Knitter](https://github.com/ZTE/Knitter/) is a plugin to support multiple network interfaces in a Kubernetes pod. * [Multus](https://github.com/Intel-Corp/multus-cni) is a Multi plugin for multiple network support in Kubernetes to support all CNI plugins (e.g. Calico, Cilium, Contiv, Flannel), in addition to SRIOV, DPDK, OVS-DPDK and VPP based workloads in Kubernetes. * [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) provides integration between VMware NSX-T and container orchestrators such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and OpenShift. * [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) is an SDN platform that provides policy-based networking between Kubernetes Pods and non-Kubernetes environments with visibility and security monitoring. 
diff --git a/content/en/docs/concepts/cluster-administration/controller-metrics.md b/content/en/docs/concepts/cluster-administration/controller-metrics.md deleted file mode 100644 index 57ed5c16d657a..0000000000000 --- a/content/en/docs/concepts/cluster-administration/controller-metrics.md +++ /dev/null @@ -1,50 +0,0 @@ ---- -title: Controller manager metrics -content_template: templates/concept -weight: 100 ---- - -{{% capture overview %}} -Controller manager metrics provide important insight into the performance and health of -the controller manager. - -{{% /capture %}} - -{{% capture body %}} -## What are controller manager metrics - -Controller manager metrics provide important insight into the performance and health of the controller manager. -These metrics include common Go language runtime metrics such as go_routine count and controller specific metrics such as -etcd request latencies or Cloudprovider (AWS, GCE, OpenStack) API latencies that can be used -to gauge the health of a cluster. - -Starting from Kubernetes 1.7, detailed Cloudprovider metrics are available for storage operations for GCE, AWS, Vsphere and OpenStack. -These metrics can be used to monitor health of persistent volume operations. - -For example, for GCE these metrics are called: - -``` -cloudprovider_gce_api_request_duration_seconds { request = "instance_list"} -cloudprovider_gce_api_request_duration_seconds { request = "disk_insert"} -cloudprovider_gce_api_request_duration_seconds { request = "disk_delete"} -cloudprovider_gce_api_request_duration_seconds { request = "attach_disk"} -cloudprovider_gce_api_request_duration_seconds { request = "detach_disk"} -cloudprovider_gce_api_request_duration_seconds { request = "list_disk"} -``` - - - -## Configuration - - -In a cluster, controller-manager metrics are available from `http://localhost:10252/metrics` -from the host where the controller-manager is running. - -The metrics are emitted in [prometheus format](https://prometheus.io/docs/instrumenting/exposition_formats/) and are human readable. - -In a production environment you may want to configure prometheus or some other metrics scraper -to periodically gather these metrics and make them available in some kind of time series database. - -{{% /capture %}} - - diff --git a/content/en/docs/concepts/cluster-administration/monitoring.md b/content/en/docs/concepts/cluster-administration/monitoring.md new file mode 100644 index 0000000000000..92b74b6634c22 --- /dev/null +++ b/content/en/docs/concepts/cluster-administration/monitoring.md @@ -0,0 +1,132 @@ +--- +title: Metrics For The Kubernetes Control Plane +reviewers: +- brancz +- logicalhan +- RainbowMango +content_template: templates/concept +weight: 60 +aliases: +- controller-metrics.md +--- + +{{% capture overview %}} + +System component metrics can give a better look into what is happening inside them. Metrics are particularly useful for building dashboards and alerts. + +Metrics in Kubernetes control plane are emitted in [prometheus format](https://prometheus.io/docs/instrumenting/exposition_formats/) and are human readable. + +{{% /capture %}} + +{{% capture body %}} + +## Metrics in Kubernetes + +In most cases metrics are available on `/metrics` endpoint of the HTTP server. For components that doesn't expose endpoint by default it can be enabled using `--bind-address` flag. 
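+As a quick check, you can fetch one of these endpoints directly and look at the raw metrics. The snippet below is only a sketch: it assumes you are on the host running the kube-controller-manager and that the component still serves metrics on its historical local endpoint `http://localhost:10252/metrics`; the address, port and authentication requirements depend on the component and how it is configured.
+
+```shell
+# Print the first few metric lines exposed by the controller manager
+# (assumes the legacy local metrics port 10252 is still in use on this host).
+curl -s http://localhost:10252/metrics | head
+```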
+
+Examples of those components:
+* {{< glossary_tooltip term_id="kube-controller-manager" text="kube-controller-manager" >}}
+* {{< glossary_tooltip term_id="kube-proxy" text="kube-proxy" >}}
+* {{< glossary_tooltip term_id="kube-apiserver" text="kube-apiserver" >}}
+* {{< glossary_tooltip term_id="kube-scheduler" text="kube-scheduler" >}}
+* {{< glossary_tooltip term_id="kubelet" text="kubelet" >}}
+
+In a production environment you may want to configure [Prometheus Server](https://prometheus.io/) or some other metrics scraper
+to periodically gather these metrics and make them available in some kind of time series database.
+
+Note that the {{< glossary_tooltip term_id="kubelet" text="kubelet" >}} also exposes metrics on the `/metrics/cadvisor`, `/metrics/resource` and `/metrics/probes` endpoints. Those metrics do not have the same lifecycle.
+
+If your cluster uses {{< glossary_tooltip term_id="rbac" text="RBAC" >}}, reading metrics requires authorization via a user, group or ServiceAccount with a ClusterRole that allows accessing `/metrics`.
+For example:
+```
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: prometheus
+rules:
+  - nonResourceURLs:
+    - "/metrics"
+    verbs:
+    - get
+```
+
+## Metric lifecycle
+
+Alpha metric → Stable metric → Deprecated metric → Hidden metric → Deletion
+
+Alpha metrics have no stability guarantees; as such they can be modified or deleted at any time.
+
+Stable metrics are guaranteed not to change. Specifically, stability means:
+
+* the metric itself will not be deleted (or renamed)
+* the type of metric will not be modified
+
+A deprecated metric signals that the metric will eventually be deleted; to find out in which version, you need to check its annotation, which states the Kubernetes version from which the metric is considered deprecated.
+
+Before deprecation:
+
+```
+# HELP some_counter this counts things
+# TYPE some_counter counter
+some_counter 0
+```
+
+After deprecation:
+
+```
+# HELP some_counter (Deprecated since 1.15.0) this counts things
+# TYPE some_counter counter
+some_counter 0
+```
+
+Once a metric is hidden, it is not published for scraping by default. To use a hidden metric, you need to override the configuration for the relevant cluster component.
+
+Once a metric is deleted, the metric is not published. You cannot change this using an override.
+
+## Show hidden metrics
+
+As described above, admins can enable hidden metrics through a command-line flag on a specific binary. This is intended to be used as an escape hatch for admins who missed the migration of metrics deprecated in the last release.
+
+The flag `show-hidden-metrics-for-version` takes a version for which you want to show metrics deprecated in that release. The version is expressed as x.y, where x is the major version and y is the minor version. The patch version is not needed even though a metric can be deprecated in a patch release, because the metrics deprecation policy runs against minor releases.
+
+The flag can only take the previous minor version as its value. All metrics hidden in the previous minor release will be emitted if admins set that version in `show-hidden-metrics-for-version`. Versions older than that are not allowed because this would violate the metrics deprecation policy.
+
+Take metric `A` as an example, and assume that `A` is deprecated in release `1.n`. According to the metrics deprecation policy, we can reach the following conclusions:
+
+* In release `1.n`, the metric is deprecated, and it can be emitted by default.
+* In release `1.n+1`, the metric is hidden by default and it can be emitted via the command line flag `show-hidden-metrics-for-version=1.n`.
+* In release `1.n+2`, the metric should be removed from the codebase. No escape hatch anymore.
+
+If you're upgrading from release `1.12` to `1.13`, but still depend on a metric `A` deprecated in `1.12`, you should set the hidden metrics via the command line flag `--show-hidden-metrics-for-version=1.12`, and remember to remove this metric dependency before upgrading to `1.14`.
+
+## Component metrics
+
+### kube-controller-manager metrics
+
+Controller manager metrics provide important insight into the performance and health of the controller manager.
+These metrics include common Go language runtime metrics such as go_routine count and controller specific metrics such as
+etcd request latencies or Cloudprovider (AWS, GCE, OpenStack) API latencies that can be used
+to gauge the health of a cluster.
+
+Starting from Kubernetes 1.7, detailed Cloudprovider metrics are available for storage operations for GCE, AWS, vSphere and OpenStack.
+These metrics can be used to monitor the health of persistent volume operations.
+
+For example, for GCE these metrics are called:
+
+```
+cloudprovider_gce_api_request_duration_seconds { request = "instance_list"}
+cloudprovider_gce_api_request_duration_seconds { request = "disk_insert"}
+cloudprovider_gce_api_request_duration_seconds { request = "disk_delete"}
+cloudprovider_gce_api_request_duration_seconds { request = "attach_disk"}
+cloudprovider_gce_api_request_duration_seconds { request = "detach_disk"}
+cloudprovider_gce_api_request_duration_seconds { request = "list_disk"}
+```
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+* Read about the [Prometheus text format](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md#text-based-format) for metrics
+* See the list of [stable Kubernetes metrics](https://github.com/kubernetes/kubernetes/blob/master/test/instrumentation/testdata/stable-metrics-list.yaml)
+* Read about the [Kubernetes deprecation policy](https://kubernetes.io/docs/reference/using-api/deprecation-policy/#deprecating-a-feature-or-behavior)
+{{% /capture %}}
diff --git a/content/en/docs/concepts/configuration/pod-priority-preemption.md b/content/en/docs/concepts/configuration/pod-priority-preemption.md
index 4b490b827eaa9..399f8b4e22d7f 100644
--- a/content/en/docs/concepts/configuration/pod-priority-preemption.md
+++ b/content/en/docs/concepts/configuration/pod-priority-preemption.md
@@ -62,6 +62,12 @@ To use priority and preemption in Kubernetes 1.11 and later, follow these steps:
 Keep reading for more information about these steps.
+{{< note >}}
+Kubernetes already ships with two PriorityClasses:
+`system-cluster-critical` and `system-node-critical`.
+These are common classes and are used to [ensure that critical components are always scheduled first](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/).
+{{< /note >}}
+
 If you try the feature and then decide to disable it, you must remove the PodPriority command-line flag or set it to `false`, and then restart the API server and scheduler.
After the feature is disabled, the existing Pods keep diff --git a/content/en/docs/concepts/configuration/secret.md b/content/en/docs/concepts/configuration/secret.md index 471f5e6c0f495..61995234e3db7 100644 --- a/content/en/docs/concepts/configuration/secret.md +++ b/content/en/docs/concepts/configuration/secret.md @@ -10,11 +10,10 @@ feature: weight: 50 --- - {{% capture overview %}} -Kubernetes `secret` objects let you store and manage sensitive information, such -as passwords, OAuth tokens, and ssh keys. Putting this information in a `secret` +Kubernetes Secrets let you store and manage sensitive information, such +as passwords, OAuth tokens, and ssh keys. Storing confidential information in a Secret is safer and more flexible than putting it verbatim in a {{< glossary_tooltip term_id="pod" >}} definition or in a {{< glossary_tooltip text="container image" term_id="image" >}}. See [Secrets design document](https://git.k8s.io/community/contributors/design-proposals/auth/secrets.md) for more information. @@ -25,78 +24,94 @@ is safer and more flexible than putting it verbatim in a ## Overview of Secrets A Secret is an object that contains a small amount of sensitive data such as -a password, a token, or a key. Such information might otherwise be put in a -Pod specification or in an image; putting it in a Secret object allows for -more control over how it is used, and reduces the risk of accidental exposure. +a password, a token, or a key. Such information might otherwise be put in a +Pod specification or in an image. Users can create secrets and the system +also creates some secrets. -Users can create secrets, and the system also creates some secrets. +To use a secret, a Pod needs to reference the secret. +A secret can be used with a Pod in two ways: -To use a secret, a pod needs to reference the secret. -A secret can be used with a pod in two ways: as files in a +- As files in a {{< glossary_tooltip text="volume" term_id="volume" >}} mounted on one or more of -its containers, or used by kubelet when pulling images for the pod. +its containers. +- By the kubelet when pulling images for the Pod. ### Built-in Secrets -#### Service Accounts Automatically Create and Attach Secrets with API Credentials +#### Service accounts automatically create and attach Secrets with API credentials Kubernetes automatically creates secrets which contain credentials for -accessing the API and it automatically modifies your pods to use this type of +accessing the API and automatically modifies your Pods to use this type of secret. The automatic creation and use of API credentials can be disabled or overridden -if desired. However, if all you need to do is securely access the apiserver, +if desired. However, if all you need to do is securely access the API server, this is the recommended workflow. -See the [Service Account](/docs/tasks/configure-pod-container/configure-service-account/) documentation for more -information on how Service Accounts work. +See the [ServiceAccount](/docs/tasks/configure-pod-container/configure-service-account/) +documentation for more information on how service accounts work. ### Creating your own Secrets -#### Creating a Secret Using kubectl create secret +#### Creating a Secret Using `kubectl` -Say that some pods need to access a database. The -username and password that the pods should use is in the files -`./username.txt` and `./password.txt` on your local machine. +Secrets can contain user credentials required by Pods to access a database. 
+For example, a database connection string +consists of a username and password. You can store the username in a file `./username.txt` +and the password in a file `./password.txt` on your local machine. ```shell -# Create files needed for rest of example. +# Create files needed for the rest of the example. echo -n 'admin' > ./username.txt echo -n '1f2d1e2e67df' > ./password.txt ``` -The `kubectl create secret` command -packages these files into a Secret and creates -the object on the Apiserver. +The `kubectl create secret` command packages these files into a Secret and creates +the object on the API server. ```shell kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt ``` + +The output is similar to: + ``` secret "db-user-pass" created ``` + {{< note >}} -Special characters such as `$`, `\`, `*`, and `!` will be interpreted by your [shell](https://en.wikipedia.org/wiki/Shell_\(computing\)) and require escaping. In most common shells, the easiest way to escape the password is to surround it with single quotes (`'`). For example, if your actual password is `S!B\*d$zDsb`, you should execute the command this way: +Special characters such as `$`, `\`, `*`, and `!` will be interpreted by your [shell](https://en.wikipedia.org/wiki/Shell_(computing)) and require escaping. +In most shells, the easiest way to escape the password is to surround it with single quotes (`'`). +For example, if your actual password is `S!B\*d$zDsb`, you should execute the command this way: -``` +```shell kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password='S!B\*d$zDsb' ``` - You do not need to escape special characters in passwords from files (`--from-file`). +You do not need to escape special characters in passwords from files (`--from-file`). {{< /note >}} -You can check that the secret was created like this: +You can check that the secret was created: ```shell kubectl get secrets ``` + +The output is similar to: + ``` NAME TYPE DATA AGE db-user-pass Opaque 2 51s ``` + +You can view a description of the secret: + ```shell kubectl describe secrets/db-user-pass ``` + +The output is similar to: + ``` Name: db-user-pass Namespace: default @@ -112,30 +127,43 @@ username.txt: 5 bytes ``` {{< note >}} -`kubectl get` and `kubectl describe` avoid showing the contents of a secret by -default. -This is to protect the secret from being exposed accidentally to an onlooker, +The commands `kubectl get` and `kubectl describe` avoid showing the contents of a secret by +default. This is to protect the secret from being exposed accidentally to an onlooker, or from being stored in a terminal log. {{< /note >}} -See [decoding a secret](#decoding-a-secret) for how to see the contents of a secret. +See [decoding a secret](#decoding-a-secret) to learn how to view the contents of a secret. -#### Creating a Secret Manually +#### Creating a Secret manually -You can also create a Secret in a file first, in json or yaml format, +You can also create a Secret in a file first, in JSON or YAML format, and then create that object. The -[Secret](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core) contains two maps: -data and stringData. The data field is used to store arbitrary data, encoded using -base64. The stringData field is provided for convenience, and allows you to provide +[Secret](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core) +contains two maps: +`data` and `stringData`. 
The `data` field is used to store arbitrary data, encoded using +base64. The `stringData` field is provided for convenience, and allows you to provide secret data as unencoded strings. -For example, to store two strings in a Secret using the data field, convert -them to base64 as follows: +For example, to store two strings in a Secret using the `data` field, convert +the strings to base64 as follows: ```shell echo -n 'admin' | base64 +``` + +The output is similar to: + +``` YWRtaW4= +``` + +```shell echo -n '1f2d1e2e67df' | base64 +``` + +The output is similar to: + +``` MWYyZDFlMmU2N2Rm ``` @@ -157,11 +185,14 @@ Now create the Secret using [`kubectl apply`](/docs/reference/generated/kubectl/ ```shell kubectl apply -f ./secret.yaml ``` + +The output is similar to: + ``` secret "mysecret" created ``` -For certain scenarios, you may wish to use the stringData field instead. This +For certain scenarios, you may wish to use the `stringData` field instead. This field allows you to put a non-base64 encoded string directly into the Secret, and the string will be encoded for you when the Secret is created or updated. @@ -169,7 +200,7 @@ A practical example of this might be where you are deploying an application that uses a Secret to store a configuration file, and you want to populate parts of that configuration file during your deployment process. -If your application uses the following configuration file: +For example, if your application uses the following configuration file: ```yaml apiUrl: "https://my.api.com/api/v1" @@ -177,7 +208,7 @@ username: "user" password: "password" ``` -You could store this in a Secret using the following: +You could store this in a Secret using the following definition: ```yaml apiVersion: v1 @@ -195,14 +226,14 @@ stringData: Your deployment tool could then replace the `{{username}}` and `{{password}}` template variables before running `kubectl apply`. -stringData is a write-only convenience field. It is never output when +The `stringData` field is a write-only convenience field. It is never output when retrieving Secrets. For example, if you run the following command: ```shell kubectl get secret mysecret -o yaml ``` -The output will be similar to: +The output is similar to: ```yaml apiVersion: v1 @@ -218,8 +249,8 @@ data: config.yaml: YXBpVXJsOiAiaHR0cHM6Ly9teS5hcGkuY29tL2FwaS92MSIKdXNlcm5hbWU6IHt7dXNlcm5hbWV9fQpwYXNzd29yZDoge3twYXNzd29yZH19 ``` -If a field is specified in both data and stringData, the value from stringData -is used. For example, the following Secret definition: +If a field, such as `username`, is specified in both `data` and `stringData`, +the value from `stringData` is used. For example, the following Secret definition: ```yaml apiVersion: v1 @@ -233,7 +264,7 @@ stringData: username: administrator ``` -Results in the following secret: +Results in the following Secret: ```yaml apiVersion: v1 @@ -251,26 +282,31 @@ data: Where `YWRtaW5pc3RyYXRvcg==` decodes to `administrator`. -The keys of data and stringData must consist of alphanumeric characters, +The keys of `data` and `stringData` must consist of alphanumeric characters, '-', '_' or '.'. -**Encoding Note:** The serialized JSON and YAML values of secret data are -encoded as base64 strings. Newlines are not valid within these strings and must -be omitted. When using the `base64` utility on Darwin/macOS users should avoid -using the `-b` option to split long lines. Conversely Linux users *should* add +{{< note >}} +The serialized JSON and YAML values of secret data are +encoded as base64 strings. 
Newlines are not valid within these strings and must +be omitted. When using the `base64` utility on Darwin/macOS, users should avoid +using the `-b` option to split long lines. Conversely, Linux users *should* add the option `-w 0` to `base64` commands or the pipeline `base64 | tr -d '\n'` if -`-w` option is not available. +the `-w` option is not available. +{{< /note >}} + +#### Creating a Secret from a generator + +Since Kubernetes v1.14, `kubectl` supports [managing objects using Kustomize](/docs/tasks/manage-kubernetes-objects/kustomization/). Kustomize provides resource Generators to +create Secrets and ConfigMaps. The Kustomize generators should be specified in a +`kustomization.yaml` file inside a directory. After generating the Secret, +you can create the Secret on the API server with `kubectl apply`. + +#### Generating a Secret from files -#### Creating a Secret from Generator -Kubectl supports [managing objects using Kustomize](/docs/tasks/manage-kubernetes-objects/kustomization/) -since 1.14. With this new feature, -you can also create a Secret from generators and then apply it to create the object on -the Apiserver. The generators -should be specified in a `kustomization.yaml` inside a directory. +You can generate a Secret by defining a `secretGenerator` from the +files ./username.txt and ./password.txt: -For example, to generate a Secret from files `./username.txt` and `./password.txt` ```shell -# Create a kustomization.yaml file with SecretGenerator cat <./kustomization.yaml secretGenerator: - name: db-user-pass @@ -279,20 +315,39 @@ secretGenerator: - password.txt EOF ``` -Apply the kustomization directory to create the Secret object. + +Apply the directory, containing the `kustomization.yaml`, to create the Secret. + ```shell -$ kubectl apply -k . +kubectl apply -k . +``` + +The output is similar to: + +``` secret/db-user-pass-96mffmfh4k created ``` -You can check that the secret was created like this: +You can check that the secret was created: ```shell -$ kubectl get secrets +kubectl get secrets +``` + +The output is similar to: + +``` NAME TYPE DATA AGE db-user-pass-96mffmfh4k Opaque 2 51s +``` + +```shell +kubectl describe secrets/db-user-pass-96mffmfh4k +``` -$ kubectl describe secrets/db-user-pass-96mffmfh4k +The output is similar to: + +``` Name: db-user-pass Namespace: default Labels: @@ -306,11 +361,13 @@ password.txt: 12 bytes username.txt: 5 bytes ``` -For example, to generate a Secret from literals `username=admin` and `password=secret`, -you can specify the secret generator in `kustomization.yaml` as +#### Generating a Secret from string literals + +You can create a Secret by defining a `secretGenerator` +from literals `username=admin` and `password=secret`: + ```shell -# Create a kustomization.yaml file with SecretGenerator -$ cat <./kustomization.yaml +cat <./kustomization.yaml secretGenerator: - name: db-user-pass literals: @@ -318,24 +375,38 @@ secretGenerator: - password=secret EOF ``` -Apply the kustomization directory to create the Secret object. + +Apply the directory, containing the `kustomization.yaml`, to create the Secret. + ```shell -$ kubectl apply -k . +kubectl apply -k . +``` + +The output is similar to: + +``` secret/db-user-pass-dddghtt9b5 created ``` + {{< note >}} -The generated Secrets name has a suffix appended by hashing the contents. This ensures that a new -Secret is generated each time the contents is modified. +When a Secret is generated, the Secret name is created by hashing +the Secret data and appending this value to the name. 
This ensures that +a new Secret is generated each time the data is modified. {{< /note >}} #### Decoding a Secret -Secrets can be retrieved via the `kubectl get secret` command. For example, to retrieve the secret created in the previous section: +Secrets can be retrieved by running `kubectl get secret`. +For example, you can view the Secret created in the previous section by +running the following command: ```shell kubectl get secret mysecret -o yaml ``` -``` + +The output is similar to: + +```yaml apiVersion: v1 kind: Secret metadata: @@ -350,26 +421,29 @@ data: password: MWYyZDFlMmU2N2Rm ``` -Decode the password field: +Decode the `password` field: ```shell echo 'MWYyZDFlMmU2N2Rm' | base64 --decode ``` + +The output is similar to: + ``` 1f2d1e2e67df ``` #### Editing a Secret -An existing secret may be edited with the following command: +An existing Secret may be edited with the following command: ```shell kubectl edit secrets mysecret ``` -This will open the default configured editor and allow for updating the base64 encoded secret values in the `data` field: +This will open the default configured editor and allow for updating the base64 encoded Secret values in the `data` field: -``` +```yaml # Please edit the object below. Lines beginning with a '#' will be ignored, # and an empty file will abort the edit. If an error occurs while saving this file will be # reopened with the relevant failures. @@ -392,23 +466,23 @@ type: Opaque ## Using Secrets -Secrets can be mounted as data volumes or be exposed as +Secrets can be mounted as data volumes or exposed as {{< glossary_tooltip text="environment variables" term_id="container-env-variables" >}} -to be used by a container in a pod. They can also be used by other parts of the -system, without being directly exposed to the pod. For example, they can hold +to be used by a container in a Pod. Secrets can also be used by other parts of the +system, without being directly exposed to the Pod. For example, Secrets can hold credentials that other parts of the system should use to interact with external systems on your behalf. -### Using Secrets as Files from a Pod +### Using Secrets as files from a Pod To consume a Secret in a volume in a Pod: -1. Create a secret or use an existing one. Multiple pods can reference the same secret. -1. Modify your Pod definition to add a volume under `.spec.volumes[]`. Name the volume anything, and have a `.spec.volumes[].secret.secretName` field equal to the name of the secret object. -1. Add a `.spec.containers[].volumeMounts[]` to each container that needs the secret. Specify `.spec.containers[].volumeMounts[].readOnly = true` and `.spec.containers[].volumeMounts[].mountPath` to an unused directory name where you would like the secrets to appear. -1. Modify your image and/or command line so that the program looks for files in that directory. Each key in the secret `data` map becomes the filename under `mountPath`. +1. Create a secret or use an existing one. Multiple Pods can reference the same secret. +1. Modify your Pod definition to add a volume under `.spec.volumes[]`. Name the volume anything, and have a `.spec.volumes[].secret.secretName` field equal to the name of the Secret object. +1. Add a `.spec.containers[].volumeMounts[]` to each container that needs the secret. Specify `.spec.containers[].volumeMounts[].readOnly = true` and `.spec.containers[].volumeMounts[].mountPath` to an unused directory name where you would like the secrets to appear. +1. 
Modify your image or command line so that the program looks for files in that directory. Each key in the secret `data` map becomes the filename under `mountPath`. -This is an example of a pod that mounts a secret in a volume: +This is an example of a Pod that mounts a Secret in a volume: ```yaml apiVersion: v1 @@ -429,17 +503,17 @@ spec: secretName: mysecret ``` -Each secret you want to use needs to be referred to in `.spec.volumes`. +Each Secret you want to use needs to be referred to in `.spec.volumes`. -If there are multiple containers in the pod, then each container needs its -own `volumeMounts` block, but only one `.spec.volumes` is needed per secret. +If there are multiple containers in the Pod, then each container needs its +own `volumeMounts` block, but only one `.spec.volumes` is needed per Secret. You can package many files into one secret, or use many secrets, whichever is convenient. -**Projection of secret keys to specific paths** +#### Projection of Secret keys to specific paths -We can also control the paths within the volume where Secret keys are projected. -You can use `.spec.volumes[].secret.items` field to change target path of each key: +You can also control the paths within the volume where Secret keys are projected. +You can use the `.spec.volumes[].secret.items` field to change the target path of each key: ```yaml apiVersion: v1 @@ -466,17 +540,17 @@ spec: What will happen: * `username` secret is stored under `/etc/foo/my-group/my-username` file instead of `/etc/foo/username`. -* `password` secret is not projected +* `password` secret is not projected. If `.spec.volumes[].secret.items` is used, only keys specified in `items` are projected. To consume all keys from the secret, all of them must be listed in the `items` field. All listed keys must exist in the corresponding secret. Otherwise, the volume is not created. -**Secret files permissions** +#### Secret files permissions -You can also specify the permission mode bits files part of a secret will have. -If you don't specify any, `0644` is used by default. You can specify a default -mode for the whole secret volume and override per key if needed. +You can set the file access permission bits for a single Secret key. +If you don't specify any permissions, `0644` is used by default. +You can also set a default mode for the entire Secret volume and override per key if needed. For example, you can specify a default mode like this: @@ -503,11 +577,11 @@ Then, the secret will be mounted on `/etc/foo` and all the files created by the secret volume mount will have permission `0400`. Note that the JSON spec doesn't support octal notation, so use the value 256 for -0400 permissions. If you use yaml instead of json for the pod, you can use octal +0400 permissions. If you use YAML instead of JSON for the Pod, you can use octal notation to specify permissions in a more natural way. You can also use mapping, as in the previous example, and specify different -permission for different files like this: +permissions for different files like this: ```yaml apiVersion: v1 @@ -538,16 +612,18 @@ in decimal notation. Note that this permission value might be displayed in decimal notation if you read it later. -**Consuming Secret Values from Volumes** +#### Consuming Secret values from volumes Inside the container that mounts a secret volume, the secret keys appear as -files and the secret values are base-64 decoded and stored inside these files. 
-This is the result of commands -executed inside the container from the example above: +files and the secret values are base64 decoded and stored inside these files. +This is the result of commands executed inside the container from the example above: ```shell ls /etc/foo/ ``` + +The output is similar to: + ``` username password @@ -556,14 +632,19 @@ password ```shell cat /etc/foo/username ``` + +The output is similar to: + ``` admin ``` - ```shell cat /etc/foo/password ``` + +The output is similar to: + ``` 1f2d1e2e67df ``` @@ -571,19 +652,19 @@ cat /etc/foo/password The program in a container is responsible for reading the secrets from the files. -**Mounted Secrets are updated automatically** +#### Mounted Secrets are updated automatically -When a secret being already consumed in a volume is updated, projected keys are eventually updated as well. -Kubelet is checking whether the mounted secret is fresh on every periodic sync. -However, it is using its local cache for getting the current value of the Secret. -The type of the cache is configurable using the (`ConfigMapAndSecretChangeDetectionStrategy` field in -[KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go)). -It can be either propagated via watch (default), ttl-based, or simply redirecting -all requests to directly kube-apiserver. +When a secret currently consumed in a volume is updated, projected keys are eventually updated as well. +The kubelet checks whether the mounted secret is fresh on every periodic sync. +However, the kubelet uses its local cache for getting the current value of the Secret. +The type of the cache is configurable using the `ConfigMapAndSecretChangeDetectionStrategy` field in +the [KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go). +A Secret can be either propagated by watch (default), ttl-based, or simply redirecting +all requests directly to the API server. As a result, the total delay from the moment when the Secret is updated to the moment -when new keys are projected to the Pod can be as long as kubelet sync period + cache -propagation delay, where cache propagation delay depends on the chosen cache type -(it equals to watch propagation delay, ttl of cache, or zero corespondingly). +when new keys are projected to the Pod can be as long as the kubelet sync period + cache +propagation delay, where the cache propagation delay depends on the chosen cache type +(it equals to watch propagation delay, ttl of cache, or zero correspondingly). {{< note >}} A container using a Secret as a @@ -591,16 +672,16 @@ A container using a Secret as a Secret updates. {{< /note >}} -### Using Secrets as Environment Variables +### Using Secrets as environment variables To use a secret in an {{< glossary_tooltip text="environment variable" term_id="container-env-variables" >}} -in a pod: +in a Pod: -1. Create a secret or use an existing one. Multiple pods can reference the same secret. -1. Modify your Pod definition in each container that you wish to consume the value of a secret key to add an environment variable for each secret key you wish to consume. The environment variable that consumes the secret key should populate the secret's name and key in `env[].valueFrom.secretKeyRef`. -1. Modify your image and/or command line so that the program looks for values in the specified environment variables +1. 
Create a secret or use an existing one. Multiple Pods can reference the same secret. +1. Modify your Pod definition in each container that you wish to consume the value of a secret key to add an environment variable for each secret key you wish to consume. The environment variable that consumes the secret key should populate the secret's name and key in `env[].valueFrom.secretKeyRef`. +1. Modify your image and/or command line so that the program looks for values in the specified environment variables. -This is an example of a pod that uses secrets from environment variables: +This is an example of a Pod that uses secrets from environment variables: ```yaml apiVersion: v1 @@ -625,46 +706,55 @@ spec: restartPolicy: Never ``` -**Consuming Secret Values from Environment Variables** +#### Consuming Secret Values from environment variables Inside a container that consumes a secret in an environment variables, the secret keys appear as -normal environment variables containing the base-64 decoded values of the secret data. +normal environment variables containing the base64 decoded values of the secret data. This is the result of commands executed inside the container from the example above: ```shell echo $SECRET_USERNAME ``` + +The output is similar to: + ``` admin ``` + ```shell echo $SECRET_PASSWORD ``` + +The output is similar to: + ``` 1f2d1e2e67df ``` ### Using imagePullSecrets -An imagePullSecret is a way to pass a secret that contains a Docker (or other) image registry -password to the Kubelet so it can pull a private image on behalf of your Pod. +The `imagePullSecrets` field is a list of references to secrets in the same namespace. +You can use an `imagePullSecrets` to pass a secret that contains a Docker (or other) image registry +password to the kubelet. The kubelet uses this information to pull a private image on behalf of your Pod. +See the [PodSpec API](/docs/reference/generated/kubernetes-api/{{< latest-version >}}/#podspec-v1-core) for more information about the `imagePullSecrets` field. -**Manually specifying an imagePullSecret** +#### Manually specifying an imagePullSecret -Use of imagePullSecrets is described in the [images documentation](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod) +You can learn how to specify `ImagePullSecrets` from the [container images documentation](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod). -### Arranging for imagePullSecrets to be Automatically Attached +### Arranging for imagePullSecrets to be automatically attached -You can manually create an imagePullSecret, and reference it from -a serviceAccount. Any pods created with that serviceAccount -or that default to use that serviceAccount, will get their imagePullSecret +You can manually create `imagePullSecrets`, and reference it from +a ServiceAccount. Any Pods created with that ServiceAccount +or created with that ServiceAccount by default, will get their `imagePullSecrets` field set to that of the service account. See [Add ImagePullSecrets to a service account](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account) for a detailed explanation of that process. -### Automatic Mounting of Manually Created Secrets +### Automatic mounting of manually created Secrets -Manually created secrets (e.g. 
one containing a token for accessing a github account) +Manually created secrets (for example, one containing a token for accessing a GitHub account) can be automatically attached to pods based on their service account. See [Injecting Information into Pods Using a PodPreset](/docs/tasks/inject-data-application/podpreset/) for a detailed explanation of that process. @@ -673,76 +763,83 @@ See [Injecting Information into Pods Using a PodPreset](/docs/tasks/inject-data- ### Restrictions Secret volume sources are validated to ensure that the specified object -reference actually points to an object of type `Secret`. Therefore, a secret -needs to be created before any pods that depend on it. +reference actually points to an object of type Secret. Therefore, a secret +needs to be created before any Pods that depend on it. -Secret API objects reside in a {{< glossary_tooltip text="namespace" term_id="namespace" >}}. -They can only be referenced by pods in that same namespace. +Secret resources reside in a {{< glossary_tooltip text="namespace" term_id="namespace" >}}. +Secrets can only be referenced by Pods in that same namespace. -Individual secrets are limited to 1MiB in size. This is to discourage creation -of very large secrets which would exhaust apiserver and kubelet memory. -However, creation of many smaller secrets could also exhaust memory. More +Individual secrets are limited to 1MiB in size. This is to discourage creation +of very large secrets which would exhaust the API server and kubelet memory. +However, creation of many smaller secrets could also exhaust memory. More comprehensive limits on memory usage due to secrets is a planned feature. -Kubelet only supports use of secrets for Pods it gets from the API server. -This includes any pods created using kubectl, or indirectly via a replication -controller. It does not include pods created via the kubelets +The kubelet only supports the use of secrets for Pods where the secrets +are obtained from the API server. +This includes any Pods created using `kubectl`, or indirectly via a replication +controller. It does not include Pods created as a result of the kubelet `--manifest-url` flag, its `--config` flag, or its REST API (these are -not common ways to create pods.) +not common ways to create Pods.) -Secrets must be created before they are consumed in pods as environment -variables unless they are marked as optional. References to Secrets that do -not exist will prevent the pod from starting. +Secrets must be created before they are consumed in Pods as environment +variables unless they are marked as optional. References to secrets that do +not exist will prevent the Pod from starting. -References via `secretKeyRef` to keys that do not exist in a named Secret -will prevent the pod from starting. +References (`secretKeyRef` field) to keys that do not exist in a named Secret +will prevent the Pod from starting. -Secrets used to populate environment variables via `envFrom` that have keys +Secrets used to populate environment variables by the `envFrom` field that have keys that are considered invalid environment variable names will have those keys -skipped. The pod will be allowed to start. There will be an event whose +skipped. The Pod will be allowed to start. There will be an event whose reason is `InvalidVariableNames` and the message will contain the list of invalid keys that were skipped. The example shows a pod which refers to the -default/mysecret that contains 2 invalid keys, 1badkey and 2alsobad. 
+default/mysecret that contains 2 invalid keys: `1badkey` and `2alsobad`. ```shell kubectl get events ``` + +The output is similar to: + ``` LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON 0s 0s 1 dapi-test-pod Pod Warning InvalidEnvironmentVariableNames kubelet, 127.0.0.1 Keys [1badkey, 2alsobad] from the EnvFrom secret default/mysecret were skipped since they are considered invalid environment variable names. ``` -### Secret and Pod Lifetime interaction +### Secret and Pod lifetime interaction -When a pod is created via the API, there is no check whether a referenced -secret exists. Once a pod is scheduled, the kubelet will try to fetch the -secret value. If the secret cannot be fetched because it does not exist or -because of a temporary lack of connection to the API server, kubelet will -periodically retry. It will report an event about the pod explaining the -reason it is not started yet. Once the secret is fetched, the kubelet will -create and mount a volume containing it. None of the pod's containers will -start until all the pod's volumes are mounted. +When a Pod is created by calling the Kubernetes API, there is no check if a referenced +secret exists. Once a Pod is scheduled, the kubelet will try to fetch the +secret value. If the secret cannot be fetched because it does not exist or +because of a temporary lack of connection to the API server, the kubelet will +periodically retry. It will report an event about the Pod explaining the +reason it is not started yet. Once the secret is fetched, the kubelet will +create and mount a volume containing it. None of the Pod's containers will +start until all the Pod's volumes are mounted. ## Use cases ### Use-Case: Pod with ssh keys -Create a kustomization.yaml with SecretGenerator containing some ssh keys: +Create a secret containing some ssh keys: ```shell kubectl create secret generic ssh-key-secret --from-file=ssh-privatekey=/path/to/.ssh/id_rsa --from-file=ssh-publickey=/path/to/.ssh/id_rsa.pub ``` +The output is similar to: + ``` secret "ssh-key-secret" created ``` +You can also create a `kustomization.yaml` with a `secretGenerator` field containing ssh keys. + {{< caution >}} -Think carefully before sending your own ssh keys: other users of the cluster may have access to the secret. Use a service account which you want to be accessible to all the users with whom you share the Kubernetes cluster, and can revoke if they are compromised. +Think carefully before sending your own ssh keys: other users of the cluster may have access to the secret. Use a service account which you want to be accessible to all the users with whom you share the Kubernetes cluster, and can revoke this account if the users are compromised. 
{{< /caution >}} - -Now we can create a pod which references the secret with the ssh key and +Now you can create a Pod which references the secret with the ssh key and consumes it in a volume: ```yaml @@ -768,7 +865,7 @@ spec: When the container's command runs, the pieces of the key will be available in: -```shell +``` /etc/secret-volume/ssh-publickey /etc/secret-volume/ssh-privatekey ``` @@ -777,15 +874,19 @@ The container is then free to use the secret data to establish an ssh connection ### Use-Case: Pods with prod / test credentials -This example illustrates a pod which consumes a secret containing prod -credentials and another pod which consumes a secret with test environment +This example illustrates a Pod which consumes a secret containing production +credentials and another Pod which consumes a secret with test environment credentials. -Make the kustomization.yaml with SecretGenerator +You can create a `kustomization.yaml` with a `secretGenerator` field or run +`kubectl create secret`. ```shell kubectl create secret generic prod-db-secret --from-literal=username=produser --from-literal=password=Y4nys7f11 ``` + +The output is similar to: + ``` secret "prod-db-secret" created ``` @@ -793,23 +894,29 @@ secret "prod-db-secret" created ```shell kubectl create secret generic test-db-secret --from-literal=username=testuser --from-literal=password=iluvtests ``` + +The output is similar to: + ``` secret "test-db-secret" created ``` + {{< note >}} -Special characters such as `$`, `\`, `*`, and `!` will be interpreted by your [shell](https://en.wikipedia.org/wiki/Shell_\(computing\)) and require escaping. In most common shells, the easiest way to escape the password is to surround it with single quotes (`'`). For example, if your actual password is `S!B\*d$zDsb`, you should execute the command this way: +Special characters such as `$`, `\`, `*`, and `!` will be interpreted by your [shell](https://en.wikipedia.org/wiki/Shell_(computing)) and require escaping. +In most shells, the easiest way to escape the password is to surround it with single quotes (`'`). +For example, if your actual password is `S!B\*d$zDsb`, you should execute the command this way: -``` +```shell kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password='S!B\*d$zDsb' ``` You do not need to escape special characters in passwords from files (`--from-file`). {{< /note >}} -Now make the pods: +Now make the Pods: ```shell -$ cat < pod.yaml +cat < pod.yaml apiVersion: v1 kind: List items: @@ -852,15 +959,16 @@ items: EOF ``` -Add the pods to the same kustomization.yaml +Add the pods to the same kustomization.yaml: + ```shell -$ cat <> kustomization.yaml +cat <> kustomization.yaml resources: - pod.yaml EOF ``` -Apply all those objects on the Apiserver by +Apply all those objects on the API server by running: ```shell kubectl apply -k . @@ -868,17 +976,20 @@ kubectl apply -k . Both containers will have the following files present on their filesystems with the values for each container's environment: -```shell +``` /etc/secret-volume/username /etc/secret-volume/password ``` -Note how the specs for the two pods differ only in one field; this facilitates -creating pods with different capabilities from a common pod config template. +Note how the specs for the two Pods differ only in one field; this facilitates +creating Pods with different capabilities from a common Pod template. + +You could further simplify the base Pod specification by using two service accounts: + +1. 
`prod-user` with the `prod-db-secret` +1. `test-user` with the `test-db-secret` -You could further simplify the base pod specification by using two Service Accounts: -one called, say, `prod-user` with the `prod-db-secret`, and one called, say, -`test-user` with the `test-db-secret`. Then, the pod spec can be shortened to, for example: +The Pod specification is shortened to: ```yaml apiVersion: v1 @@ -894,10 +1005,11 @@ spec: image: myClientImage ``` -### Use-case: Dotfiles in secret volume +### Use-case: dotfiles in a secret volume -In order to make piece of data 'hidden' (i.e., in a file whose name begins with a dot character), simply -make that key begin with a dot. For example, when the following secret is mounted into a volume: +You can make your data "hidden" by defining a key that begins with a dot. +This key represents a dotfile or "hidden" file. For example, when the following secret +is mounted into a volume, `secret-volume`: ```yaml apiVersion: v1 @@ -929,8 +1041,7 @@ spec: mountPath: "/etc/secret-volume" ``` - -The `secret-volume` will contain a single file, called `.secret-file`, and +The volume will contain a single file, called `.secret-file`, and the `dotfile-test-container` will have this file present at the path `/etc/secret-volume/.secret-file`. @@ -939,17 +1050,17 @@ Files beginning with dot characters are hidden from the output of `ls -l`; you must use `ls -la` to see them when listing directory contents. {{< /note >}} -### Use-case: Secret visible to one container in a pod +### Use-case: Secret visible to one container in a Pod Consider a program that needs to handle HTTP requests, do some complex business -logic, and then sign some messages with an HMAC. Because it has complex +logic, and then sign some messages with an HMAC. Because it has complex application logic, there might be an unnoticed remote file reading exploit in the server, which could expose the private key to an attacker. This could be divided into two processes in two containers: a frontend container which handles user interaction and business logic, but which cannot see the private key; and a signer container that can see the private key, and responds -to simple signing requests from the frontend (e.g. over localhost networking). +to simple signing requests from the frontend (for example, over localhost networking). With this partitioned approach, an attacker now has to trick the application server into doing something rather arbitrary, which may be harder than getting @@ -959,10 +1070,10 @@ it to read a file. ## Best practices -### Clients that use the secrets API +### Clients that use the Secret API -When deploying applications that interact with the secrets API, access should be -limited using [authorization policies]( +When deploying applications that interact with the Secret API, you should +limit access using [authorization policies]( /docs/reference/access-authn-authz/authorization/) such as [RBAC]( /docs/reference/access-authn-authz/rbac/). @@ -978,7 +1089,7 @@ the clients to inspect the values of all secrets that are in that namespace. The `watch` and `list` all secrets in a cluster should be reserved for only the most privileged, system-level components. -Applications that need to access the secrets API should perform `get` requests on +Applications that need to access the Secret API should perform `get` requests on the secrets they need. 
This lets administrators restrict access to all secrets while [white-listing access to individual instances]( /docs/reference/access-authn-authz/rbac/#referring-to-resources) that @@ -991,33 +1102,32 @@ https://github.com/kubernetes/community/blob/master/contributors/design-proposal to let clients `watch` individual resources has also been proposed, and will likely be available in future releases of Kubernetes. -## Security Properties - +## Security properties ### Protections -Because `secret` objects can be created independently of the `pods` that use +Because secrets can be created independently of the Pods that use them, there is less risk of the secret being exposed during the workflow of -creating, viewing, and editing pods. The system can also take additional -precautions with `secret` objects, such as avoiding writing them to disk where +creating, viewing, and editing Pods. The system can also take additional +precautions with Secrets, such as avoiding writing them to disk where possible. -A secret is only sent to a node if a pod on that node requires it. -Kubelet stores the secret into a `tmpfs` so that the secret is not written -to disk storage. Once the Pod that depends on the secret is deleted, kubelet +A secret is only sent to a node if a Pod on that node requires it. +The kubelet stores the secret into a `tmpfs` so that the secret is not written +to disk storage. Once the Pod that depends on the secret is deleted, the kubelet will delete its local copy of the secret data as well. -There may be secrets for several pods on the same node. However, only the -secrets that a pod requests are potentially visible within its containers. +There may be secrets for several Pods on the same node. However, only the +secrets that a Pod requests are potentially visible within its containers. Therefore, one Pod does not have access to the secrets of another Pod. -There may be several containers in a pod. However, each container in a pod has +There may be several containers in a Pod. However, each container in a Pod has to request the secret volume in its `volumeMounts` for it to be visible within -the container. This can be used to construct useful [security partitions at the +the container. This can be used to construct useful [security partitions at the Pod level](#use-case-secret-visible-to-one-container-in-a-pod). -On most Kubernetes-project-maintained distributions, communication between user -to the apiserver, and from apiserver to the kubelets, is protected by SSL/TLS. +On most Kubernetes distributions, communication between users +and the API server, and from the API server to the kubelets, is protected by SSL/TLS. Secrets are protected when transmitted over these channels. {{< feature-state for_k8s_version="v1.13" state="beta" >}} @@ -1027,11 +1137,11 @@ for secret data, so that the secrets are not stored in the clear into {{< glossa ### Risks - - In the API server secret data is stored in {{< glossary_tooltip term_id="etcd" >}}; + - In the API server, secret data is stored in {{< glossary_tooltip term_id="etcd" >}}; therefore: - - Administrators should enable encryption at rest for cluster data (requires v1.13 or later) - - Administrators should limit access to etcd to admin users - - Administrators may want to wipe/shred disks used by etcd when no longer in use + - Administrators should enable encryption at rest for cluster data (requires v1.13 or later). + - Administrators should limit access to etcd to admin users. 
+ - Administrators may want to wipe/shred disks used by etcd when no longer in use. - If running etcd in a cluster, administrators should make sure to use SSL/TLS for etcd peer-to-peer communication. - If you configure the secret through a manifest (JSON or YAML) file which has @@ -1040,15 +1150,10 @@ for secret data, so that the secrets are not stored in the clear into {{< glossa encryption method and is considered the same as plain text. - Applications still need to protect the value of secret after reading it from the volume, such as not accidentally logging it or transmitting it to an untrusted party. - - A user who can create a pod that uses a secret can also see the value of that secret. Even - if apiserver policy does not allow that user to read the secret object, the user could - run a pod which exposes the secret. - - Currently, anyone with root on any node can read _any_ secret from the apiserver, - by impersonating the kubelet. It is a planned feature to only send secrets to + - A user who can create a Pod that uses a secret can also see the value of that secret. Even + if the API server policy does not allow that user to read the Secret, the user could + run a Pod which exposes the secret. + - Currently, anyone with root permission on any node can read _any_ secret from the API server, + by impersonating the kubelet. It is a planned feature to only send secrets to nodes that actually require them, to restrict the impact of a root exploit on a single node. - - -{{% capture whatsnext %}} - -{{% /capture %}} diff --git a/content/en/docs/concepts/configuration/taint-and-toleration.md b/content/en/docs/concepts/configuration/taint-and-toleration.md index 67a5574ac117a..eac6267e799bd 100644 --- a/content/en/docs/concepts/configuration/taint-and-toleration.md +++ b/content/en/docs/concepts/configuration/taint-and-toleration.md @@ -73,6 +73,7 @@ A toleration "matches" a taint if the keys are the same and the effects are the `Operator` defaults to `Equal` if not specified. {{< note >}} + There are two special cases: * An empty `key` with operator `Exists` matches all keys, values and effects which means this @@ -88,8 +89,9 @@ tolerations: ```yaml tolerations: - key: "key" - operator: "Exists" + operator: "Exists" ``` + {{< /note >}} The above example used `effect` of `NoSchedule`. Alternatively, you can use `effect` of `PreferNoSchedule`. diff --git a/content/en/docs/concepts/containers/images.md b/content/en/docs/concepts/containers/images.md index 4e95d273acdd6..e22a742d36689 100644 --- a/content/en/docs/concepts/containers/images.md +++ b/content/en/docs/concepts/containers/images.md @@ -124,9 +124,9 @@ Troubleshooting: - Verify all requirements above. - Get $REGION (e.g. `us-west-2`) credentials on your workstation. SSH into the host and run Docker manually with those creds. Does it work? - Verify kubelet is running with `--cloud-provider=aws`. -- Check kubelet logs (e.g. `journalctl -u kubelet`) for log lines like: - - `plugins.go:56] Registering credential provider: aws-ecr-key` - - `provider.go:91] Refreshing cache for provider: *aws_credentials.ecrProvider` +- Increase kubelet log level verbosity to at least 3 and check kubelet logs (e.g. 
`journalctl -u kubelet`) for log lines like: + - `aws_credentials.go:109] unable to get ECR credentials from cache, checking ECR API` + - `aws_credentials.go:116] Got ECR credentials from ECR API for .dkr.ecr..amazonaws.com` ### Using Azure Container Registry (ACR) When using [Azure Container Registry](https://azure.microsoft.com/en-us/services/container-registry/) diff --git a/content/en/docs/concepts/containers/runtime-class.md b/content/en/docs/concepts/containers/runtime-class.md index 4177d203a4bf0..00bd9fae34a4f 100644 --- a/content/en/docs/concepts/containers/runtime-class.md +++ b/content/en/docs/concepts/containers/runtime-class.md @@ -120,7 +120,7 @@ For more details on setting up CRI runtimes, see [CRI installation](/docs/setup/ Kubernetes built-in dockershim CRI does not support runtime handlers. -#### [containerd](https://containerd.io/) +#### {{< glossary_tooltip term_id="containerd" >}} Runtime handlers are configured through containerd's configuration at `/etc/containerd/config.toml`. Valid handlers are configured under the runtimes section: @@ -132,19 +132,20 @@ Runtime handlers are configured through containerd's configuration at See containerd's config documentation for more details: https://github.com/containerd/cri/blob/master/docs/config.md -#### [cri-o](https://cri-o.io/) +#### {{< glossary_tooltip term_id="cri-o" >}} -Runtime handlers are configured through cri-o's configuration at `/etc/crio/crio.conf`. Valid +Runtime handlers are configured through CRI-O's configuration at `/etc/crio/crio.conf`. Valid handlers are configured under the [crio.runtime -table](https://github.com/kubernetes-sigs/cri-o/blob/master/docs/crio.conf.5.md#crioruntime-table): +table](https://github.com/cri-o/cri-o/blob/master/docs/crio.conf.5.md#crioruntime-table): ``` [crio.runtime.runtimes.${HANDLER_NAME}] runtime_path = "${PATH_TO_BINARY}" ``` -See cri-o's config documentation for more details: -https://github.com/kubernetes-sigs/cri-o/blob/master/cmd/crio/config.go +See CRI-O's [config documentation][100] for more details. + +[100]: https://raw.githubusercontent.com/cri-o/cri-o/9f11d1d/docs/crio.conf.5.md ### Scheduling diff --git a/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md b/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md index e9e7069fce98b..4d3da6ad118f2 100644 --- a/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md +++ b/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md @@ -44,7 +44,7 @@ The controller interprets the structured data as a record of the user's desired state, and continually maintains this state. You can deploy and update a custom controller on a running cluster, independently -of the cluster's own lifecycle. Custom controllers can work with any kind of resource, +of the cluster's lifecycle. Custom controllers can work with any kind of resource, but they are especially effective when combined with custom resources. The [Operator pattern](https://coreos.com/blog/introducing-operators.html) combines custom resources and custom controllers. You can use custom controllers to encode domain knowledge @@ -61,7 +61,7 @@ When creating a new API, consider whether to [aggregate your API with the Kubern | You want to view your new types in a Kubernetes UI, such as dashboard, alongside built-in types. | Kubernetes UI support is not required. | | You are developing a new API. | You already have a program that serves your API and works well. 
| | You are willing to accept the format restriction that Kubernetes puts on REST resource paths, such as API Groups and Namespaces. (See the [API Overview](/docs/concepts/overview/kubernetes-api/).) | You need to have specific REST paths to be compatible with an already defined REST API. | -| Your resources are naturally scoped to a cluster or to namespaces of a cluster. | Cluster or namespace scoped resources are a poor fit; you need control over the specifics of resource paths. | +| Your resources are naturally scoped to a cluster or namespaces of a cluster. | Cluster or namespace scoped resources are a poor fit; you need control over the specifics of resource paths. | | You want to reuse [Kubernetes API support features](#common-features). | You don't need those features. | ### Declarative APIs @@ -83,7 +83,7 @@ Signs that your API might not be declarative include: - You talk about Remote Procedure Calls (RPCs). - Directly storing large amounts of data (e.g. > a few kB per object, or >1000s of objects). - High bandwidth access (10s of requests per second sustained) needed. - - Store end-user data (such as images, PII, etc) or other large-scale data processed by applications. + - Store end-user data (such as images, PII, etc.) or other large-scale data processed by applications. - The natural operations on the objects are not CRUD-y. - The API is not easily modeled as objects. - You chose to represent pending operations with an operation ID or an operation object. @@ -96,7 +96,7 @@ Use a ConfigMap if any of the following apply: * You want to put the entire config file into one key of a configMap. * The main use of the config file is for a program running in a Pod on your cluster to consume the file to configure itself. * Consumers of the file prefer to consume via file in a Pod or environment variable in a pod, rather than the Kubernetes API. -* You want to perform rolling updates via Deployment, etc, when the file is updated. +* You want to perform rolling updates via Deployment, etc., when the file is updated. {{< note >}} Use a [secret](/docs/concepts/configuration/secret/) for sensitive data, which is similar to a configMap but more secure. @@ -140,7 +140,7 @@ and use a controller to handle events. ## API server aggregation -Usually, each resource in the Kubernetes API requires code that handles REST requests and manages persistent storage of objects. The main Kubernetes API server handles built-in resources like *pods* and *services*, and can also handle custom resources in a generic way through [CRDs](#customresourcedefinitions). +Usually, each resource in the Kubernetes API requires code that handles REST requests and manages persistent storage of objects. The main Kubernetes API server handles built-in resources like *pods* and *services*, and can also generically handle custom resources through [CRDs](#customresourcedefinitions). The [aggregation layer](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) allows you to provide specialized implementations for your custom resources by writing and deploying your own standalone API server. 
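For orientation, a minimal CustomResourceDefinition is itself only a short manifest. The sketch below is illustrative and not taken from this page: the `stable.example.com` group and the `CronTab` kind are placeholder names, and the schema validates just two string fields.

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # the name must match <plural>.<group>
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
    shortNames:
      - ct
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
                image:
                  type: string
```

Once such a CRD is created, the API server serves the new endpoint (for example `/apis/stable.example.com/v1/namespaces/*/crontabs/...`) and you can manage `CronTab` objects with `kubectl` like any built-in resource, without writing a separate API server.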
diff --git a/content/en/docs/concepts/policy/limit-range.md b/content/en/docs/concepts/policy/limit-range.md index 92b32796febec..b4a9579a36308 100644 --- a/content/en/docs/concepts/policy/limit-range.md +++ b/content/en/docs/concepts/policy/limit-range.md @@ -305,7 +305,7 @@ PersistentVolumeClaim storage 1Gi 2Gi - - - {{< codenew file="admin/resource/pvc-limit-lower.yaml" >}} ```shell -kubectl create -f https://k8s.io/examples/admin/resource//pvc-limit-lower.yaml -n limitrange-demo +kubectl create -f https://k8s.io/examples/admin/resource/pvc-limit-lower.yaml -n limitrange-demo ``` While creating a PVC with `requests.storage` lower than the Min value in the LimitRange, an Error thrown by the server: @@ -341,7 +341,7 @@ kubectl apply -f https://k8s.io/examples/admin/resource/limit-memory-ratio-pod.y Describe the LimitRange with the following kubectl command: ```shell -$ kubectl describe limitrange/limit-memory-ratio-pod +kubectl describe limitrange/limit-memory-ratio-pod ``` ```shell diff --git a/content/en/docs/concepts/policy/pod-security-policy.md b/content/en/docs/concepts/policy/pod-security-policy.md index fcf7ff493a559..45b48f62aea94 100644 --- a/content/en/docs/concepts/policy/pod-security-policy.md +++ b/content/en/docs/concepts/policy/pod-security-policy.md @@ -22,7 +22,7 @@ updates. ## What is a Pod Security Policy? A _Pod Security Policy_ is a cluster-level resource that controls security -sensitive aspects of the pod specification. The `PodSecurityPolicy` objects +sensitive aspects of the pod specification. The [PodSecurityPolicy](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritypolicy-v1beta1-policy) objects define a set of conditions that a pod must run with in order to be accepted into the system, as well as defaults for the related fields. They allow an administrator to control the following: @@ -626,3 +626,9 @@ Refer to the [Sysctl documentation]( /docs/concepts/cluster-administration/sysctl-cluster/#podsecuritypolicy). {{% /capture %}} + +{{% capture whatsnext %}} + +Refer to [Pod Security Policy Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritypolicy-v1beta1-policy) for the api details. + +{{% /capture %}} diff --git a/content/en/docs/concepts/security/overview.md b/content/en/docs/concepts/security/overview.md index bcec8727f9a4a..8c06308afc721 100644 --- a/content/en/docs/concepts/security/overview.md +++ b/content/en/docs/concepts/security/overview.md @@ -142,7 +142,7 @@ Area of Concern for Code | Recommendation | --------------------------------------------- | ------------ | Access over TLS only | If your code needs to communicate via TCP, ideally it would be performing a TLS handshake with the client ahead of time. With the exception of a few cases, the default behavior should be to encrypt everything in transit. Going one step further, even "behind the firewall" in our VPC's it's still a good idea to encrypt network traffic between services. This can be done through a process known as mutual or [mTLS](https://en.wikipedia.org/wiki/Mutual_authentication) which performs a two sided verification of communication between two certificate holding services. There are numerous tools that can be used to accomplish this in Kubernetes such as [Linkerd](https://linkerd.io/) and [Istio](https://istio.io/). 
| Limiting port ranges of communication | This recommendation may be a bit self-explanatory, but wherever possible you should only expose the ports on your service that are absolutely essential for communication or metric gathering. | -3rd Party Dependency Security | Since our applications tend to have dependencies outside of our own codebases, it is a good practice to ensure that a regular scan of the code's dependencies are still secure with no CVE's currently filed against them. Each language has a tool for performing this check automatically. | +3rd Party Dependency Security | Since our applications tend to have dependencies outside of our own codebases, it is a good practice to regularly scan the code's dependencies to ensure that they are still secure with no vulnerabilities currently filed against them. Each language has a tool for performing this check automatically. | Static Code Analysis | Most languages provide a way for a snippet of code to be analyzed for any potentially unsafe coding practices. Whenever possible you should perform checks using automated tooling that can scan codebases for common security errors. Some of the tools can be found here: https://www.owasp.org/index.php/Source_Code_Analysis_Tools | Dynamic probing attacks | There are a few automated tools that are able to be run against your service to try some of the well known attacks that commonly befall services. These include SQL injection, CSRF, and XSS. One of the most popular dynamic analysis tools is the OWASP Zed Attack proxy https://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project | diff --git a/content/en/docs/concepts/services-networking/dual-stack.md b/content/en/docs/concepts/services-networking/dual-stack.md index 3b660db019cc5..0e34fa926f8a7 100644 --- a/content/en/docs/concepts/services-networking/dual-stack.md +++ b/content/en/docs/concepts/services-networking/dual-stack.md @@ -31,7 +31,6 @@ Enabling IPv4/IPv6 dual-stack on your Kubernetes cluster provides the following * Dual-stack Pod networking (a single IPv4 and IPv6 address assignment per Pod) * IPv4 and IPv6 enabled Services (each Service must be for a single address family) - * Kubenet multi address family support (IPv4 and IPv6) * Pod off-cluster egress routing (eg. 
the Internet) via both IPv4 and IPv6 interfaces ## Prerequisites @@ -40,7 +39,7 @@ The following prerequisites are needed in order to utilize IPv4/IPv6 dual-stack * Kubernetes 1.16 or later * Provider support for dual-stack networking (Cloud provider or otherwise must be able to provide Kubernetes nodes with routable IPv4/IPv6 network interfaces) - * Kubenet network plugin + * A network plugin that supports dual-stack (such as Kubenet or Calico) * Kube-proxy running in mode IPVS ## Enable IPv4/IPv6 dual-stack @@ -56,7 +55,7 @@ To enable IPv4/IPv6 dual-stack, enable the `IPv6DualStack` [feature gate](/docs/ * `--feature-gates="IPv6DualStack=true"` * kube-proxy: * `--proxy-mode=ipvs` - * `--cluster-cidrs=,` + * `--cluster-cidrs=,` * `--feature-gates="IPv6DualStack=true"` {{< caution >}} diff --git a/content/en/docs/concepts/services-networking/network-policies.md b/content/en/docs/concepts/services-networking/network-policies.md index 5c085bcddc2dc..989e31992061c 100644 --- a/content/en/docs/concepts/services-networking/network-policies.md +++ b/content/en/docs/concepts/services-networking/network-policies.md @@ -11,16 +11,16 @@ weight: 50 {{< toc >}} {{% capture overview %}} -A network policy is a specification of how groups of pods are allowed to communicate with each other and other network endpoints. +A network policy is a specification of how groups of {{< glossary_tooltip text="pods" term_id="pod">}} are allowed to communicate with each other and other network endpoints. -`NetworkPolicy` resources use labels to select pods and define rules which specify what traffic is allowed to the selected pods. +NetworkPolicy resources use {{< glossary_tooltip text="labels" term_id="label">}} to select pods and define rules which specify what traffic is allowed to the selected pods. {{% /capture %}} {{% capture body %}} ## Prerequisites -Network policies are implemented by the network plugin, so you must be using a networking solution which supports `NetworkPolicy` - simply creating the resource without a controller to implement it will have no effect. +Network policies are implemented by the [network plugin](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/). To use network policies, you must be using a networking solution which supports NetworkPolicy. Creating a NetworkPolicy resource without a controller that implements it will have no effect. ## Isolated and Non-isolated Pods @@ -30,11 +30,11 @@ Pods become isolated by having a NetworkPolicy that selects them. Once there is Network policies do not conflict, they are additive. If any policy or policies select a pod, the pod is restricted to what is allowed by the union of those policies' ingress/egress rules. Thus, order of evaluation does not affect the policy result. -## The `NetworkPolicy` Resource +## The NetworkPolicy resource {#networkpolicy-resource} -See the [NetworkPolicy](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#networkpolicy-v1-networking-k8s-io) for a full definition of the resource. +See the [NetworkPolicy](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#networkpolicy-v1-networking-k8s-io) reference for a full definition of the resource. 
-An example `NetworkPolicy` might look like this: +An example NetworkPolicy might look like this: ```yaml apiVersion: networking.k8s.io/v1 @@ -73,23 +73,25 @@ spec: port: 5978 ``` -*POSTing this to the API server will have no effect unless your chosen networking solution supports network policy.* +{{< note >}} +POSTing this to the API server for your cluster will have no effect unless your chosen networking solution supports network policy. +{{< /note >}} -__Mandatory Fields__: As with all other Kubernetes config, a `NetworkPolicy` +__Mandatory Fields__: As with all other Kubernetes config, a NetworkPolicy needs `apiVersion`, `kind`, and `metadata` fields. For general information about working with config files, see [Configure Containers Using a ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/), and [Object Management](/docs/concepts/overview/working-with-objects/object-management). -__spec__: `NetworkPolicy` [spec](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status) has all the information needed to define a particular network policy in the given namespace. +__spec__: NetworkPolicy [spec](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status) has all the information needed to define a particular network policy in the given namespace. -__podSelector__: Each `NetworkPolicy` includes a `podSelector` which selects the grouping of pods to which the policy applies. The example policy selects pods with the label "role=db". An empty `podSelector` selects all pods in the namespace. +__podSelector__: Each NetworkPolicy includes a `podSelector` which selects the grouping of pods to which the policy applies. The example policy selects pods with the label "role=db". An empty `podSelector` selects all pods in the namespace. -__policyTypes__: Each `NetworkPolicy` includes a `policyTypes` list which may include either `Ingress`, `Egress`, or both. The `policyTypes` field indicates whether or not the given policy applies to ingress traffic to selected pod, egress traffic from selected pods, or both. If no `policyTypes` are specified on a NetworkPolicy then by default `Ingress` will always be set and `Egress` will be set if the NetworkPolicy has any egress rules. +__policyTypes__: Each NetworkPolicy includes a `policyTypes` list which may include either `Ingress`, `Egress`, or both. The `policyTypes` field indicates whether or not the given policy applies to ingress traffic to selected pod, egress traffic from selected pods, or both. If no `policyTypes` are specified on a NetworkPolicy then by default `Ingress` will always be set and `Egress` will be set if the NetworkPolicy has any egress rules. -__ingress__: Each `NetworkPolicy` may include a list of whitelist `ingress` rules. Each rule allows traffic which matches both the `from` and `ports` sections. The example policy contains a single rule, which matches traffic on a single port, from one of three sources, the first specified via an `ipBlock`, the second via a `namespaceSelector` and the third via a `podSelector`. +__ingress__: Each NetworkPolicy may include a list of whitelist `ingress` rules. Each rule allows traffic which matches both the `from` and `ports` sections. The example policy contains a single rule, which matches traffic on a single port, from one of three sources, the first specified via an `ipBlock`, the second via a `namespaceSelector` and the third via a `podSelector`. 
-__egress__: Each `NetworkPolicy` may include a list of whitelist `egress` rules. Each rule allows traffic which matches both the `to` and `ports` sections. The example policy contains a single rule, which matches traffic on a single port to any destination in `10.0.0.0/24`. +__egress__: Each NetworkPolicy may include a list of whitelist `egress` rules. Each rule allows traffic which matches both the `to` and `ports` sections. The example policy contains a single rule, which matches traffic on a single port to any destination in `10.0.0.0/24`. So, the example NetworkPolicy: @@ -107,7 +109,7 @@ See the [Declare Network Policy](/docs/tasks/administer-cluster/declare-network- There are four kinds of selectors that can be specified in an `ingress` `from` section or `egress` `to` section: -__podSelector__: This selects particular Pods in the same namespace as the `NetworkPolicy` which should be allowed as ingress sources or egress destinations. +__podSelector__: This selects particular Pods in the same namespace as the NetworkPolicy which should be allowed as ingress sources or egress destinations. __namespaceSelector__: This selects particular namespaces for which all Pods should be allowed as ingress sources or egress destinations. @@ -168,16 +170,7 @@ in that namespace. You can create a "default" isolation policy for a namespace by creating a NetworkPolicy that selects all pods but does not allow any ingress traffic to those pods. -```yaml -apiVersion: networking.k8s.io/v1 -kind: NetworkPolicy -metadata: - name: default-deny -spec: - podSelector: {} - policyTypes: - - Ingress -``` +{{< codenew file="service/networking/network-policy-default-deny-ingress.yaml" >}} This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated. This policy does not change the default egress isolation behavior. @@ -185,33 +178,13 @@ This ensures that even pods that aren't selected by any other NetworkPolicy will If you want to allow all traffic to all pods in a namespace (even if policies are added that cause some pods to be treated as "isolated"), you can create a policy that explicitly allows all traffic in that namespace. -```yaml -apiVersion: networking.k8s.io/v1 -kind: NetworkPolicy -metadata: - name: allow-all -spec: - podSelector: {} - ingress: - - {} - policyTypes: - - Ingress -``` +{{< codenew file="service/networking/network-policy-allow-all-ingress.yaml" >}} ### Default deny all egress traffic You can create a "default" egress isolation policy for a namespace by creating a NetworkPolicy that selects all pods but does not allow any egress traffic from those pods. -```yaml -apiVersion: networking.k8s.io/v1 -kind: NetworkPolicy -metadata: - name: default-deny -spec: - podSelector: {} - policyTypes: - - Egress -``` +{{< codenew file="service/networking/network-policy-default-deny-egress.yaml" >}} This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not change the default ingress isolation behavior. @@ -220,34 +193,13 @@ change the default ingress isolation behavior. If you want to allow all traffic from all pods in a namespace (even if policies are added that cause some pods to be treated as "isolated"), you can create a policy that explicitly allows all egress traffic in that namespace. 
-```yaml -apiVersion: networking.k8s.io/v1 -kind: NetworkPolicy -metadata: - name: allow-all -spec: - podSelector: {} - egress: - - {} - policyTypes: - - Egress -``` +{{< codenew file="service/networking/network-policy-allow-all-egress.yaml" >}} ### Default deny all ingress and all egress traffic You can create a "default" policy for a namespace which prevents all ingress AND egress traffic by creating the following NetworkPolicy in that namespace. -```yaml -apiVersion: networking.k8s.io/v1 -kind: NetworkPolicy -metadata: - name: default-deny -spec: - podSelector: {} - policyTypes: - - Ingress - - Egress -``` +{{< codenew file="service/networking/network-policy-default-deny-egress.yaml" >}} This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic. @@ -255,9 +207,12 @@ This ensures that even pods that aren't selected by any other NetworkPolicy will {{< feature-state for_k8s_version="v1.12" state="alpha" >}} -Kubernetes supports SCTP as a `protocol` value in `NetworkPolicy` definitions as an alpha feature. To enable this feature, the cluster administrator needs to enable the `SCTPSupport` feature gate on the apiserver, for example, `“--feature-gates=SCTPSupport=true,...”`. When the feature gate is enabled, users can set the `protocol` field of a `NetworkPolicy` to `SCTP`. Kubernetes sets up the network accordingly for the SCTP associations, just like it does for TCP connections. +To use this feature, you (or your cluster administrator) will need to enable the `SCTPSupport` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) for the API server with `--feature-gates=SCTPSupport=true,…`. +When the feature gate is enabled, you can set the `protocol` field of a NetworkPolicy to `SCTP`. -The CNI plugin has to support SCTP as `protocol` value in `NetworkPolicy`. +{{< note >}} +You must be using a {{< glossary_tooltip text="CNI" term_id="cni" >}} plugin that supports SCTP protocol NetworkPolicies. +{{< /note >}} {{% /capture %}} @@ -266,6 +221,6 @@ The CNI plugin has to support SCTP as `protocol` value in `NetworkPolicy`. - See the [Declare Network Policy](/docs/tasks/administer-cluster/declare-network-policy/) walkthrough for further examples. -- See more [Recipes](https://github.com/ahmetb/kubernetes-network-policy-recipes) for common scenarios enabled by the NetworkPolicy resource. +- See more [recipes](https://github.com/ahmetb/kubernetes-network-policy-recipes) for common scenarios enabled by the NetworkPolicy resource. {{% /capture %}} diff --git a/content/en/docs/concepts/services-networking/service-topology.md b/content/en/docs/concepts/services-networking/service-topology.md index 223cf86c3eef4..7b3c58a84a546 100644 --- a/content/en/docs/concepts/services-networking/service-topology.md +++ b/content/en/docs/concepts/services-networking/service-topology.md @@ -46,23 +46,6 @@ with it, while intrazonal traffic does not. Other common needs include being abl to route traffic to a local Pod managed by a DaemonSet, or keeping traffic to Nodes connected to the same top-of-rack switch for the lowest latency. 
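As a quick sketch of the node-local case (the Examples section later on this page lists more variations), a Service whose traffic should only ever reach an agent Pod on the same node, such as one managed by a DaemonSet, can list only the hostname label as its topology key. The `my-agent` selector below is a placeholder:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: node-local-agent
spec:
  selector:
    app: my-agent   # placeholder label for the agent Pods
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  topologyKeys:
    - "kubernetes.io/hostname"
```

With no fallback entry such as `"*"`, traffic is dropped when the node has no matching endpoint rather than being sent to another node.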
-## Prerequisites - -The following prerequisites are needed in order to enable topology aware service -routing: - - * Kubernetes 1.17 or later - * Kube-proxy running in iptables mode or IPVS mode - * Enable [Endpoint Slices](/docs/concepts/services-networking/endpoint-slices/) - -## Enable Service Topology - -To enable service topology, enable the `ServiceTopology` feature gate for -kube-apiserver and kube-proxy: - -``` ---feature-gates="ServiceTopology=true" -``` ## Using Service Topology @@ -117,6 +100,98 @@ traffic as follows. it is used. +## Examples + +The following are common examples of using the Service Topology feature. + +### Only Node Local Endpoints + +A Service that only routes to node local endpoints. If no endpoints exist on the node, traffic is dropped: + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: my-service +spec: + selector: + app: my-app + ports: + - protocol: TCP + port: 80 + targetPort: 9376 + topologyKeys: + - "kubernetes.io/hostname" +``` + +### Prefer Node Local Endpoints + +A Service that prefers node local Endpoints but falls back to cluster wide endpoints if node local endpoints do not exist: + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: my-service +spec: + selector: + app: my-app + ports: + - protocol: TCP + port: 80 + targetPort: 9376 + topologyKeys: + - "kubernetes.io/hostname" + - "*" +``` + + +### Only Zonal or Regional Endpoints + +A Service that prefers zonal then regional endpoints. If no endpoints exist in either, traffic is dropped. + + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: my-service +spec: + selector: + app: my-app + ports: + - protocol: TCP + port: 80 + targetPort: 9376 + topologyKeys: + - "topology.kubernetes.io/zone" + - "topology.kubernetes.io/region" +``` + +### Prefer Node Local, Zonal, then Regional Endpoints + +A Service that prefers node local, zonal, then regional endpoints but falls back to cluster wide endpoints. + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: my-service +spec: + selector: + app: my-app + ports: + - protocol: TCP + port: 80 + targetPort: 9376 + topologyKeys: + - "kubernetes.io/hostname" + - "topology.kubernetes.io/zone" + - "topology.kubernetes.io/region" + - "*" +``` + + {{% /capture %}} {{% capture whatsnext %}} diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md index f1a210d56bc56..c568b36231fde 100644 --- a/content/en/docs/concepts/services-networking/service.md +++ b/content/en/docs/concepts/services-networking/service.md @@ -225,7 +225,8 @@ There are a few reasons for using proxying for Services: In this mode, kube-proxy watches the Kubernetes master for the addition and removal of Service and Endpoint objects. For each Service it opens a port (randomly chosen) on the local node. Any connections to this "proxy port" -is proxied to one of the Service's backend Pods (as reported via +are +proxied to one of the Service's backend Pods (as reported via Endpoints). kube-proxy takes the `SessionAffinity` setting of the Service into account when deciding which backend Pod to use. @@ -276,9 +277,9 @@ state. When accessing a Service, IPVS directs traffic to one of the backend Pods. The IPVS proxy mode is based on netfilter hook function that is similar to -iptables mode, but uses hash table as the underlying data structure and works +iptables mode, but uses a hash table as the underlying data structure and works in the kernel space. 
-That means kube-proxy in IPVS mode redirects traffic with a lower latency than +That means kube-proxy in IPVS mode redirects traffic with lower latency than kube-proxy in iptables mode, with much better performance when synchronising proxy rules. Compared to the other proxy modes, IPVS mode also supports a higher throughput of network traffic. @@ -310,7 +311,7 @@ about Kubernetes or Services or Pods. If you want to make sure that connections from a particular client are passed to the same Pod each time, you can select the session affinity based -on client's IP addresses by setting `service.spec.sessionAffinity` to "ClientIP" +on the client's IP addresses by setting `service.spec.sessionAffinity` to "ClientIP" (the default is "None"). You can also set the maximum session sticky time by setting `service.spec.sessionAffinityConfig.clientIP.timeoutSeconds` appropriately. @@ -421,7 +422,7 @@ Pods in other Namespaces must qualify the name as `my-service.my-ns`. These name will resolve to the cluster IP assigned for the Service. Kubernetes also supports DNS SRV (Service) records for named ports. If the -`"my-service.my-ns"` Service has a port named `"http"` with protocol set to +`"my-service.my-ns"` Service has a port named `"http"` with the protocol set to `TCP`, you can do a DNS SRV query for `_http._tcp.my-service.my-ns` to discover the port number for `"http"`, as well as the IP address. @@ -506,7 +507,7 @@ For example, if you start kube-proxy with the `--nodeport-addresses=127.0.0.0/8` If you want a specific port number, you can specify a value in the `nodePort` field. The control plane will either allocate you that port or report that the API transaction failed. -This means that you need to take care about possible port collisions yourself. +This means that you need to take care of possible port collisions yourself. You also have to use a valid port number, one that's inside the range configured for NodePort use. @@ -549,7 +550,7 @@ status: Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced. For LoadBalancer type of Services, when there is more than one port defined, all -ports must have the same protocol and the protocol must be one of `TCP`, `UDP` +ports must have the same protocol and the protocol must be one of `TCP`, `UDP`, and `SCTP`. Some cloud providers allow you to specify the `loadBalancerIP`. In those cases, the load-balancer is created @@ -677,7 +678,7 @@ SSL, the ELB expects the Pod to authenticate itself over the encrypted connection, using a certificate. HTTP and HTTPS selects layer 7 proxying: the ELB terminates -the connection with the user, parse headers and inject the `X-Forwarded-For` +the connection with the user, parses headers, and injects the `X-Forwarded-For` header with the user's IP address (Pods only see the IP address of the ELB at the other end of its connection) when forwarding requests. @@ -849,7 +850,7 @@ traffic. Nodes without any Pods for a particular LoadBalancer Service will fail the NLB Target Group's health check on the auto-assigned `.spec.healthCheckNodePort` and not receive any traffic. -In order to achieve even traffic, either use a DaemonSet, or specify a +In order to achieve even traffic, either use a DaemonSet or specify a [pod anti-affinity](/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity) to not locate on the same node. @@ -1182,7 +1183,7 @@ virtual IP address will simply transport the packets there. 
The Kubernetes project intends to improve support for L7 (HTTP) Services. The Kubernetes project intends to have more flexible ingress modes for Services -which encompass the current ClusterIP, NodePort, and LoadBalancer modes and more. +that encompass the current ClusterIP, NodePort, and LoadBalancer modes and more. {{% /capture %}} diff --git a/content/en/docs/concepts/storage/persistent-volumes.md b/content/en/docs/concepts/storage/persistent-volumes.md index a0ec08063a6dc..8346fc562ce9e 100644 --- a/content/en/docs/concepts/storage/persistent-volumes.md +++ b/content/en/docs/concepts/storage/persistent-volumes.md @@ -115,7 +115,7 @@ Labels: type=local Annotations: Finalizers: [kubernetes.io/pv-protection] StorageClass: standard -Status: Available +Status: Terminating Claim: Reclaim Policy: Delete Access Modes: RWO diff --git a/content/en/docs/concepts/workloads/controllers/daemonset.md b/content/en/docs/concepts/workloads/controllers/daemonset.md index 72118108a0c27..86279784272ee 100644 --- a/content/en/docs/concepts/workloads/controllers/daemonset.md +++ b/content/en/docs/concepts/workloads/controllers/daemonset.md @@ -19,8 +19,8 @@ collected. Deleting a DaemonSet will clean up the Pods it created. Some typical uses of a DaemonSet are: - running a cluster storage daemon, such as `glusterd`, `ceph`, on each node. -- running a logs collection daemon on every node, such as `fluentd` or `logstash`. -- running a node monitoring daemon on every node, such as [Prometheus Node Exporter](https://github.com/prometheus/node_exporter), [Flowmill](https://github.com/Flowmill/flowmill-k8s/), [Sysdig Agent](https://docs.sysdig.com), `collectd`, [Dynatrace OneAgent](https://www.dynatrace.com/technologies/kubernetes-monitoring/), [AppDynamics Agent](https://docs.appdynamics.com/display/CLOUD/Container+Visibility+with+Kubernetes), [Datadog agent](https://docs.datadoghq.com/agent/kubernetes/daemonset_setup/), [New Relic agent](https://docs.newrelic.com/docs/integrations/kubernetes-integration/installation/kubernetes-installation-configuration), Ganglia `gmond` or [Instana Agent](https://www.instana.com/supported-integrations/kubernetes-monitoring/). +- running a logs collection daemon on every node, such as `fluentd` or `filebeat`. +- running a node monitoring daemon on every node, such as [Prometheus Node Exporter](https://github.com/prometheus/node_exporter), [Flowmill](https://github.com/Flowmill/flowmill-k8s/), [Sysdig Agent](https://docs.sysdig.com), `collectd`, [Dynatrace OneAgent](https://www.dynatrace.com/technologies/kubernetes-monitoring/), [AppDynamics Agent](https://docs.appdynamics.com/display/CLOUD/Container+Visibility+with+Kubernetes), [Datadog agent](https://docs.datadoghq.com/agent/kubernetes/daemonset_setup/), [New Relic agent](https://docs.newrelic.com/docs/integrations/kubernetes-integration/installation/kubernetes-installation-configuration), Ganglia `gmond`, [Instana Agent](https://www.instana.com/supported-integrations/kubernetes-monitoring/) or [Elastic Metricbeat](https://www.elastic.co/guide/en/beats/metricbeat/current/running-on-kubernetes.html). In a simple case, one DaemonSet, covering all nodes, would be used for each type of daemon. 
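For example, a minimal sketch of such a single DaemonSet for log collection is shown below; the `log-collector` name and the `fluentd:v1.9` image tag are placeholders for whichever agent you actually run.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
  namespace: kube-system
  labels:
    k8s-app: log-collector
spec:
  selector:
    matchLabels:
      name: log-collector
  template:
    metadata:
      labels:
        name: log-collector
    spec:
      tolerations:
        # also run on control plane (master) nodes, which carry this taint
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: fluentd
          image: fluentd:v1.9   # placeholder image; substitute your agent
          resources:
            requests:
              cpu: 100m
              memory: 200Mi
            limits:
              memory: 200Mi
```

Because the Pod template has no node selector or affinity rules, the DaemonSet controller places one copy of this Pod on every schedulable node.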
A more complex setup might use multiple DaemonSets for a single type of daemon, but with diff --git a/content/en/docs/concepts/workloads/controllers/replicaset.md b/content/en/docs/concepts/workloads/controllers/replicaset.md index 59c77a4ae4a27..7077bd5ad3ca8 100644 --- a/content/en/docs/concepts/workloads/controllers/replicaset.md +++ b/content/en/docs/concepts/workloads/controllers/replicaset.md @@ -136,7 +136,7 @@ metadata: name: frontend-9si5l namespace: default ownerReferences: - - apiVersion: extensions/v1beta1 + - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet @@ -261,7 +261,7 @@ the -d option. For example: ```shell kubectl proxy --port=8080 -curl -X DELETE 'localhost:8080/apis/extensions/v1beta1/namespaces/default/replicasets/frontend' \ +curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \ > -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \ > -H "Content-Type: application/json" ``` @@ -273,7 +273,7 @@ When using the REST API or the `client-go` library, you must set `propagationPol For example: ```shell kubectl proxy --port=8080 -curl -X DELETE 'localhost:8080/apis/extensions/v1beta1/namespaces/default/replicasets/frontend' \ +curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \ > -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}' \ > -H "Content-Type: application/json" ``` diff --git a/content/en/docs/concepts/workloads/controllers/ttlafterfinished.md b/content/en/docs/concepts/workloads/controllers/ttlafterfinished.md index 3359616009fc9..5547a977eb9c1 100644 --- a/content/en/docs/concepts/workloads/controllers/ttlafterfinished.md +++ b/content/en/docs/concepts/workloads/controllers/ttlafterfinished.md @@ -10,7 +10,7 @@ weight: 65 {{< feature-state for_k8s_version="v1.12" state="alpha" >}} -The TTL controller provides a TTL mechanism to limit the lifetime of resource +The TTL controller provides a TTL (time to live) mechanism to limit the lifetime of resource objects that have finished execution. TTL controller only handles [Jobs](/docs/concepts/workloads/controllers/jobs-run-to-completion/) for now, and may be expanded to handle other resources that will finish execution, diff --git a/content/en/docs/concepts/workloads/pods/pod-lifecycle.md b/content/en/docs/concepts/workloads/pods/pod-lifecycle.md index a55378ba3c1fc..b54a8b6ca8bcc 100644 --- a/content/en/docs/concepts/workloads/pods/pod-lifecycle.md +++ b/content/en/docs/concepts/workloads/pods/pod-lifecycle.md @@ -55,7 +55,7 @@ array has six possible fields: * The `message` field is a human-readable message indicating details about the transition. - + * The `reason` field is a unique, one-word, CamelCase reason for the condition's last transition. * The `status` field is a string, with possible values "`True`", "`False`", and "`Unknown`". @@ -67,8 +67,6 @@ array has six possible fields: balancing pools of all matching Services; * `Initialized`: all [init containers](/docs/concepts/workloads/pods/init-containers) have started successfully; - * `Unschedulable`: the scheduler cannot schedule the Pod right now, for example - due to lack of resources or other constraints; * `ContainersReady`: all containers in the Pod are ready. @@ -185,18 +183,18 @@ Once Pod is assigned to a node by scheduler, kubelet starts creating containers Reason: ErrImagePull ... ``` - -* `Running`: Indicates that the container is executing without issues. 
Once a container enters into Running, `postStart` hook (if any) is executed. This state also displays the time when the container entered Running state. - + +* `Running`: Indicates that the container is executing without issues. The `postStart` hook (if any) is executed prior to the container entering a Running state. This state also displays the time when the container entered Running state. + ```yaml ... State: Running Started: Wed, 30 Jan 2019 16:46:38 +0530 ... - ``` - + ``` + * `Terminated`: Indicates that the container completed its execution and has stopped running. A container enters into this when it has successfully completed execution or when it has failed for some reason. Regardless, a reason and exit code is displayed, as well as the container's start and finish time. Before a container enters into Terminated, `preStop` hook (if any) is executed. - + ```yaml ... State: Terminated @@ -205,7 +203,7 @@ Once Pod is assigned to a node by scheduler, kubelet starts creating containers Started: Wed, 30 Jan 2019 11:45:26 +0530 Finished: Wed, 30 Jan 2019 11:45:26 +0530 ... - ``` + ``` ## Pod readiness gate @@ -216,7 +214,7 @@ extra feedback or signals into `PodStatus`, Kubernetes 1.11 introduced a feature named [Pod ready++](https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/0007-pod-ready%2B%2B.md). You can use the new field `ReadinessGate` in the `PodSpec` to specify additional conditions to be evaluated for Pod readiness. If Kubernetes cannot find such a -condition in the `status.conditions` field of a Pod, the status of the condition +condition in the `status.conditions` field of a Pod, the status of the condition is default to "`False`". Below is an example: ```yaml @@ -255,12 +253,6 @@ when both the following statements are true: To facilitate this change to Pod readiness evaluation, a new Pod condition `ContainersReady` is introduced to capture the old Pod `Ready` condition. -In K8s 1.11, as an alpha feature, the "Pod Ready++" feature has to be explicitly enabled by -setting the `PodReadinessGates` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) -to true. - -In K8s 1.12, the feature is enabled by default. - ## Restart policy A PodSpec has a `restartPolicy` field with possible values Always, OnFailure, @@ -277,8 +269,8 @@ once bound to a node, a Pod will never be rebound to another node. ## Pod lifetime In general, Pods remain until a human or controller process explicitly removes them. -The control plane cleans up terminated Pods (with a phase of `Succeeded` or -`Failed`), when the number of Pods exceeds the configured threshold +The control plane cleans up terminated Pods (with a phase of `Succeeded` or +`Failed`), when the number of Pods exceeds the configured threshold (determined by `terminated-pod-gc-threshold` in the kube-controller-manager). This avoids a resource leak as Pods are created and terminated over time. diff --git a/content/en/docs/contribute/_index.md b/content/en/docs/contribute/_index.md index 64abcae3f1486..c58c72f28f818 100644 --- a/content/en/docs/contribute/_index.md +++ b/content/en/docs/contribute/_index.md @@ -13,68 +13,50 @@ we're happy to have your help! Anyone can contribute, whether you're new to the project or you've been around a long time, and whether you self-identify as a developer, an end user, or someone who just can't stand seeing typos. -For information on the Kubernetes documentation - content and style, see the - [Documentation style overview](/docs/contribute/style/). 
+{{% /capture %}} {{% capture body %}} -## Types of docs contributors - -- A _member_ of the Kubernetes organization who has [signed the CLA](/docs/contribute/start#sign-the-cla) - and contributed some time and effort to the project. See - [Community membership](https://github.com/kubernetes/community/blob/master/community-membership.md) - for specific criteria for membership. -- A SIG Docs _reviewer_ is a member of the Kubernetes organization who has - expressed interest in reviewing documentation pull requests and who has been - added to the appropriate GitHub group and `OWNERS` files in the GitHub - repository, by a SIG Docs Approver. -- A SIG Docs _approver_ is a member in good standing who has shown a continued - commitment to the project. An approver can merge pull requests - and publish content on behalf of the Kubernetes organization. - Approvers can also represent SIG Docs in the larger Kubernetes community. - Some of the duties of a SIG Docs approver, such as coordinating a release, - require a significant time commitment. - -## Ways to contribute to documentation - -This list is divided into things anyone can do, things Kubernetes organization -members can do, and things that require a higher level of access and familiarity -with SIG Docs processes. Contributing consistently over time can help you -understand some of the tooling and organizational decisions that have already -been made. - -This is not an exhaustive list of ways you can contribute to the Kubernetes -documentation, but it should help you get started. - -- [Anyone](/docs/contribute/start/) - - Open actionable issues -- [Member](/docs/contribute/start/) - - Improve existing docs - - Bring up ideas for improvement on [Slack](http://slack.k8s.io/) or the [SIG docs mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs) - - Improve docs accessibility - - Provide non-binding feedback on PRs - - Write a blog post or case study -- [Reviewer](/docs/contribute/intermediate/) - - Document new features - - Triage and categorize issues - - Review PRs - - Create diagrams, graphics assets, and embeddable screencasts / videos - - Localization - - Contribute to other repos as a docs representative - - Edit user-facing strings in code - - Improve code comments, Godoc -- [Approver](/docs/contribute/advanced/) - - Publish contributor content by approving and merging PRs - - Participate in a Kubernetes release team as a docs representative - - Propose improvements to the style guide - - Propose improvements to docs tests - - Propose improvements to the Kubernetes website or other tooling - - -## Additional ways to contribute +## Getting Started + +Anyone can open an issue describing problems or desired improvements with documentation, or contribute a change with a pull request (PR). +Some tasks require more trust and need more access in the Kubernetes organization. +See [Participating in SIG Docs](/docs/contribute/participating/) for more details about +of roles and permissions. + +Kubernetes documentation resides in a GitHub repository. While we welcome +contributions from anyone, you do need basic comfort with git and GitHub to +operate effectively in the Kubernetes community. + +To get involved with documentation: + +1. Sign the CNCF [Contributor License Agreement](https://github.com/kubernetes/community/blob/master/CLA.md). +2. Familiarize yourself with the [documentation repository](https://github.com/kubernetes/website) and the website's [static site generator](https://gohugo.io). +3. 
Make sure you understand the basic processes for [improving content](https://kubernetes.io/docs/contribute/start/#improve-existing-content) and [reviewing changes](https://kubernetes.io/docs/contribute/start/#review-docs-pull-requests). + +## Contribution best practices + +- Write clear and meaningful Git commit messages. +- Include the _GitHub special keywords_ that reference the issue, so that the issue is closed automatically when the PR is merged. +- When you make a small change to a PR, such as fixing a typo, changing style, or correcting grammar, squash your commits so that you don't end up with a large number of commits for a relatively small change. +- Write a clear PR description that explains what you changed and why, and gives the reviewer enough information to understand your PR. +- Additional reading: + - [chris.beams.io/posts/git-commit/](https://chris.beams.io/posts/git-commit/) + - [github.com/blog/1506-closing-issues-via-pull-requests](https://github.com/blog/1506-closing-issues-via-pull-requests) + - [davidwalsh.name/squash-commits-git](https://davidwalsh.name/squash-commits-git) + +## Other ways to contribute - To contribute to the Kubernetes community through online forums like Twitter or Stack Overflow, or learn about local meetups and Kubernetes events, visit the [Kubernetes community site](/community/). - To contribute to feature development, read the [contributor cheatsheet](https://github.com/kubernetes/community/tree/master/contributors/guide/contributor-cheatsheet) to get started. {{% /capture %}} + +{{% capture whatsnext %}} + +- For more information about the basics of contributing to documentation, read [Start contributing](/docs/contribute/start/). +- Follow the [Kubernetes documentation style guide](/docs/contribute/style/style-guide/) when proposing changes. +- For more information about SIG Docs, read [Participating in SIG Docs](/docs/contribute/participating/). +- For more information about localizing Kubernetes docs, read [Localizing Kubernetes documentation](/docs/contribute/localization/). + +{{% /capture %}} diff --git a/content/en/docs/contribute/generate-ref-docs/_index.md b/content/en/docs/contribute/generate-ref-docs/_index.md index cf058d98ff106..5720f0fe51633 100644 --- a/content/en/docs/contribute/generate-ref-docs/_index.md +++ b/content/en/docs/contribute/generate-ref-docs/_index.md @@ -1,9 +1,12 @@ --- -title: Reference docs overview +title: Reference Docs Overview main_menu: true weight: 80 --- -Much of the Kubernetes reference documentation is generated from Kubernetes -source code, using scripts. The topics in this section document how to generate -this type of content. +The topics in this section document how to generate the Kubernetes +reference guides.
+ +To build the reference documentation, see the following guide: + +* [Generating Reference Documentation Quickstart](/docs/contribute/generate-ref-docs/quickstart/) diff --git a/content/en/docs/contribute/generate-ref-docs/contribute-upstream.md b/content/en/docs/contribute/generate-ref-docs/contribute-upstream.md index ec96a56c39600..6c4d93cd401ed 100644 --- a/content/en/docs/contribute/generate-ref-docs/contribute-upstream.md +++ b/content/en/docs/contribute/generate-ref-docs/contribute-upstream.md @@ -1,13 +1,14 @@ --- title: Contributing to the Upstream Kubernetes Code content_template: templates/task +weight: 20 --- {{% capture overview %}} -This page shows how to contribute to the upstream kubernetes/kubernetes project -to fix bugs found in the Kubernetes API documentation or the `kube-*` -components such as `kube-apiserver`, `kube-controller-manager`, etc. +This page shows how to contribute to the upstream `kubernetes/kubernetes` project. +You can fix bugs found in the Kubernetes API documentation or the content of +the Kubernetes components such as `kubeadm`, `kube-apiserver`, and `kube-controller-manager`. If you instead want to regenerate the reference documentation for the Kubernetes API or the `kube-*` components from the upstream code, see the following instructions: @@ -17,28 +18,25 @@ API or the `kube-*` components from the upstream code, see the following instruc {{% /capture %}} - {{% capture prerequisites %}} -You need to have these tools installed: +- You need to have these tools installed: -* [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) -* [Golang](https://golang.org/doc/install) version 1.9.1 or later -* [Docker](https://docs.docker.com/engine/installation/) -* [etcd](https://github.com/coreos/etcd/) + - [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) + - [Golang](https://golang.org/doc/install) version 1.13+ + - [Docker](https://docs.docker.com/engine/installation/) + - [etcd](https://github.com/coreos/etcd/) -Your $GOPATH environment variable must be set, and the location of `etcd` -must be in your $PATH environment variable. +- Your `GOPATH` environment variable must be set, and the location of `etcd` + must be in your `PATH` environment variable. -You need to know how to create a pull request to a GitHub repository. -Typically, this involves creating a fork of the repository. For more -information, see -[Creating a Pull Request](https://help.github.com/articles/creating-a-pull-request/) and -[GitHub Standard Fork & Pull Request Workflow](https://gist.github.com/Chaser324/ce0505fbed06b947d962). +- You need to know how to create a pull request to a GitHub repository. + Typically, this involves creating a fork of the repository. + For more information, see [Creating a Pull Request](https://help.github.com/articles/creating-a-pull-request/) + and [GitHub Standard Fork & Pull Request Workflow](https://gist.github.com/Chaser324/ce0505fbed06b947d962). {{% /capture %}} - {{% capture steps %}} ## The big picture @@ -221,11 +219,10 @@ the same as the generated files in the master branch. The generated files in the contain API elements only from Kubernetes 1.9. The generated files in the master branch might contain API elements that are not in 1.9, but are under development for 1.10. 
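If you want to see this branch difference for yourself, one option is to diff the generated OpenAPI spec between the release branch and master. This is only a sketch: it assumes you have a local clone of `kubernetes/kubernetes` under `$GOPATH` with an `origin` remote, and the `release-1.9` ref simply mirrors the example version used above.

```shell
# Compare the generated OpenAPI spec between a release branch and master.
# Assumes a local clone of kubernetes/kubernetes with an "origin" remote.
cd $GOPATH/src/k8s.io/kubernetes
git fetch origin release-1.9 master
git diff origin/release-1.9 origin/master -- api/openapi-spec/swagger.json
```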
- ## Generating the published reference docs The preceding section showed how to edit a source file and then generate -several files, including `api/openapi-spec/swagger.json` in the +several files, including `api/openapi-spec/swagger.json` in the `kubernetes/kubernetes` repository. The `swagger.json` file is the OpenAPI definition file to use for generating the API reference documentation. @@ -238,8 +235,7 @@ You are now ready to follow the [Generating Reference Documentation for the Kube {{% capture whatsnext %}} * [Generating Reference Documentation for the Kubernetes API](/docs/contribute/generate-ref-docs/kubernetes-api/) -* [Generating Reference Docs for Kubernetes Components and Tools](/docs/home/contribute/generated-reference/kubernetes-components/) -* [Generating Reference Documentation for kubectl Commands](/docs/home/contribute/generated-reference/kubectl/) +* [Generating Reference Docs for Kubernetes Components and Tools](/docs/contribute/generate-ref-docs/kubernetes-components/) +* [Generating Reference Documentation for kubectl Commands](/docs/contribute/generate-ref-docs/kubectl/) {{% /capture %}} - diff --git a/content/en/docs/contribute/generate-ref-docs/kubectl.md b/content/en/docs/contribute/generate-ref-docs/kubectl.md index dafe7571c761b..797a0f537144d 100644 --- a/content/en/docs/contribute/generate-ref-docs/kubectl.md +++ b/content/en/docs/contribute/generate-ref-docs/kubectl.md @@ -1,12 +1,12 @@ --- title: Generating Reference Documentation for kubectl Commands content_template: templates/task +weight: 90 --- {{% capture overview %}} -This page shows how to automatically generate reference pages for the -commands provided by the `kubectl` tool. +This page shows how to generate the `kubectl` command reference. {{< note >}} This topic shows how to generate reference documentation for @@ -23,29 +23,12 @@ reference page, see {{% /capture %}} - {{% capture prerequisites %}} -* You need to have -[Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) -installed. - -* You need to have -[Golang](https://golang.org/doc/install) version 1.9.1 or later installed, -and your `$GOPATH` environment variable must be set. - -* You need to have -[Docker](https://docs.docker.com/engine/installation/) installed. - -* You need to know how to create a pull request to a GitHub repository. -Typically, this involves creating a fork of the repository. For more -information, see -[Creating a Documentation Pull Request](/docs/home/contribute/create-pull-request/) and -[GitHub Standard Fork & Pull Request Workflow](https://gist.github.com/Chaser324/ce0505fbed06b947d962). +{{< include "prerequisites-ref-docs.md" >}} {{% /capture %}} - {{% capture steps %}} ## Setting up the local repositories @@ -85,8 +68,7 @@ Remove the spf13 package from `$GOPATH/src/k8s.io/kubernetes/vendor/github.com`. rm -rf $GOPATH/src/k8s.io/kubernetes/vendor/github.com/spf13 ``` -The kubernetes/kubernetes repository provides access to the kubectl and kustomize source code. - +The kubernetes/kubernetes repository provides the `kubectl` and `kustomize` source code. * Determine the base directory of your clone of the [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) repository. @@ -108,15 +90,16 @@ The remaining steps refer to your base directory as ``. In your local k8s.io/kubernetes repository, check out the branch of interest, and make sure it is up to date. 
For example, if you want to generate docs for -Kubernetes 1.15, you could use these commands: +Kubernetes 1.17, you could use these commands: ```shell cd -git checkout release-1.15 -git pull https://github.com/kubernetes/kubernetes release-1.15 +git checkout v1.17.0 +git pull https://github.com/kubernetes/kubernetes v1.17.0 ``` -If you do not need to edit the kubectl source code, follow the instructions to [Edit the Makefile](#editing-makefile). +If you do not need to edit the `kubectl` source code, follow the instructions for +[Setting build variables](#setting-build-variables). ## Editing the kubectl source code @@ -152,65 +135,60 @@ milestone in your pull request. If you don’t have those permissions, you will need to work with someone who can set the label and milestone for you. {{< /note >}} -## Editing Makefile +## Setting build variables -Go to ``, and open the `Makefile` for editing: +Go to ``. On your command line, set the following environment variables. -* Set `K8SROOT` to ``. -* Set `WEBROOT` to ``. -* Set `MINOR_VERSION` to the minor version of the docs you want to build. For example, -if you want to build docs for Kubernetes 1.15, set `MINOR_VERSION` to 15. Save and close the `Makefile`. +* Set `K8S_ROOT` to ``. +* Set `WEB_ROOT` to ``. +* Set `K8S_RELEASE` to the version of the docs you want to build. + For example, if you want to build docs for Kubernetes 1.17, set `K8S_RELEASE` to 1.17. -For example, update the following variables: -``` -WEBROOT=$(GOPATH)/src/github.com//website -K8SROOT=$(GOPATH)/src/k8s.io/kubernetes -MINOR_VERSION=15 +For example: +```shell +export WEB_ROOT=$(GOPATH)/src/github.com//website +export K8S_ROOT=$(GOPATH)/src/k8s.io/kubernetes +export K8S_RELEASE=1.17 ``` -## Creating a version directory +## Creating a versioned directory -The version directory is a staging area for the kubectl command reference build. -The YAML files in this directory are used to create the structure and navigation -of the kubectl command reference. +The `createversiondirs` build target creates a versioned directory +and copies the kubectl reference configuration files to the versioned directory. +The versioned directory name follows the pattern of `v_`. -In the `/gen-kubectldocs/generators` directory, if you do not already -have a directory named `v1_`, create one now by copying the directory -for the previous version. For example, suppose you want to generate docs for -Kubernetes 1.15, but you don't already have a `v1_15` directory. Then you could -create and populate a `v1_15` directory by running these commands: +In the `` directory, run the following build target: ```shell -mkdir gen-kubectldocs/generators/v1_15 -cp -r gen-kubectldocs/generators/v1_14/* gen-kubectldocs/generators/v1_15 +cd +make createversiondirs ``` -## Checking out a branch in k8s.io/kubernetes +## Checking out a release tag in k8s.io/kubernetes -In your local repository, checkout the branch that has +In your local `` repository, checkout the branch that has the version of Kubernetes that you want to document. For example, if you want -to generate docs for Kubernetes 1.15, checkout the release-1.15 branch. Make sure +to generate docs for Kubernetes 1.17, checkout the `v1.17.0` tag. Make sure your local branch is up to date.
```shell cd -git checkout release-1.15 -git pull https://github.com/kubernetes/kubernetes release-1.15 +git checkout v1.17.0 +git pull https://github.com/kubernetes/kubernetes v1.17.0 ``` ## Running the doc generation code -In your local kubernetes-sigs/reference-docs repository, build and run the -kubectl command reference generation code. You might need to run the command as root: +In your local ``, run the `copycli` build target. The command runs as `root`: ```shell cd make copycli ``` -The `copycli` command will clean the staging directories, generate the kubectl command files, -and copy the collated kubectl reference HTML page and assets to ``. +The `copycli` command cleans the temporary build directory, generates the kubectl command files, +and copies the collated kubectl command reference HTML page and assets to ``. ## Locate the generated files @@ -237,7 +215,7 @@ static/docs/reference/generated/kubectl/kubectl-commands.html static/docs/reference/generated/kubectl/navData.js ``` -Additionally, the output might show the modified files: +The output may also include: ``` static/docs/reference/generated/kubectl/scroll.js @@ -275,13 +253,12 @@ A few minutes after your pull request is merged, your updated reference topics will be visible in the [published documentation](/docs/home). - {{% /capture %}} {{% capture whatsnext %}} -* [Generating Reference Documentation for Kubernetes Components and Tools](/docs/home/contribute/generated-reference/kubernetes-components/) -* [Generating Reference Documentation for the Kubernetes API](/docs/home/contribute/generated-reference/kubernetes-api/) -* [Generating Reference Documentation for the Kubernetes Federation API](/docs/home/contribute/generated-reference/federation-api/) +* [Generating Reference Documentation Quickstart](/docs/contribute/generate-ref-docs/quickstart/) +* [Generating Reference Documentation for Kubernetes Components and Tools](/docs/contribute/generate-ref-docs/kubernetes-components/) +* [Generating Reference Documentation for the Kubernetes API](/docs/contribute/generate-ref-docs/kubernetes-api/) {{% /capture %}} diff --git a/content/en/docs/contribute/generate-ref-docs/kubernetes-api.md b/content/en/docs/contribute/generate-ref-docs/kubernetes-api.md index 60f4d18ec0be6..35bf166d2bd63 100644 --- a/content/en/docs/contribute/generate-ref-docs/kubernetes-api.md +++ b/content/en/docs/contribute/generate-ref-docs/kubernetes-api.md @@ -1,14 +1,16 @@ --- title: Generating Reference Documentation for the Kubernetes API content_template: templates/task +weight: 50 --- {{% capture overview %}} -This page shows how to update the generated reference docs for the Kubernetes API. +This page shows how to update the Kubernetes API reference documentation. + The Kubernetes API reference documentation is built from the [Kubernetes OpenAPI spec](https://github.com/kubernetes/kubernetes/blob/master/api/openapi-spec/swagger.json) -and tools from [kubernetes-sigs/reference-docs](https://github.com/kubernetes-sigs/reference-docs). +using the [kubernetes-sigs/reference-docs](https://github.com/kubernetes-sigs/reference-docs) generation code. If you find bugs in the generated documentation, you need to [fix them upstream](/docs/contribute/generate-ref-docs/contribute-upstream/). @@ -18,23 +20,12 @@ spec, continue reading this page. 
{{% /capture %}} - {{% capture prerequisites %}} -You need to have these tools installed: - -* [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) -* [Golang](https://golang.org/doc/install) version 1.9.1 or later - -You need to know how to create a pull request (PR) to a GitHub repository. -Typically, this involves creating a fork of the repository. For more -information, see -[Creating a Documentation Pull Request](/docs/contribute/start/) and -[GitHub Standard Fork & Pull Request Workflow](https://gist.github.com/Chaser324/ce0505fbed06b947d962). +{{< include "prerequisites-ref-docs.md" >}} {{% /capture %}} - {{% capture steps %}} ## Setting up the local repositories @@ -83,49 +74,50 @@ The remaining steps refer to your base directory as ``. repository is `$GOPATH/src/github.com/kubernetes-sigs/reference-docs.` The remaining steps refer to your base directory as ``. - ## Generating the API reference docs This section shows how to generate the [published Kubernetes API reference documentation](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/). -### Modifying the Makefile +### Setting build variables -Go to ``, and open the `Makefile` for editing: +* Set `K8S_ROOT` to ``. +* Set `WEB_ROOT` to ``. +* Set `K8S_RELEASE` to the version of the docs you want to build. + For example, if you want to build docs for Kubernetes 1.17, set `K8S_RELEASE` to 1.17. -* Set `K8SROOT` to ``. -* Set `WEBROOT` to ``. -* Set `MINOR_VERSION` to the minor version of the docs you want to build. For example, -if you want to build docs for Kubernetes 1.15, set `MINOR_VERSION` to 15. Save and close the `Makefile`. +For example: -For example, update the following variables: - -``` -WEBROOT=$(GOPATH)/src/github.com//website -K8SROOT=$(GOPATH)/src/k8s.io/kubernetes -MINOR_VERSION=15 +```shell +export WEB_ROOT=$(GOPATH)/src/github.com//website +export K8S_ROOT=$(GOPATH)/src/k8s.io/kubernetes +export K8S_RELEASE=1.17 ``` -### Copying the OpenAPI spec +### Creating versioned directory and fetching Open API spec -Run the following command in ``: +The `updateapispec` build target creates the versioned build directory. +After the directory is created, the Open API spec is fetched from the +`` repository. These steps ensure that the version +of the configuration files and Kubernetes Open API spec match the release version. +The versioned directory name follows the pattern of `v_`. -```shell -make updateapispec -``` - -The output shows that the file was copied: +In the `` directory, run the following build target: ```shell -cp ~/src/k8s.io/kubernetes/api/openapi-spec/swagger.json gen-apidocs/generators/openapi-spec/swagger.json +cd +make updateapispec ``` ### Building the API reference docs +The `copyapi` target builds the API reference and +copies the generated files to directories in ``. Run the following command in ``: ```shell -make api +cd +make copyapi ``` Verify that these two files have been generated: @@ -135,71 +127,57 @@ Verify that these two files have been generated: [ -e "/gen-apidocs/generators/build/navData.js" ] && echo "navData.js built" || echo "no navData.js" ``` -### Creating directories for published docs - -Create the directories in `` for the generated API reference files: - -```shell -mkdir -p /static/docs/reference/generated/kubernetes-api/v1. 
-mkdir -p /static/docs/reference/generated/kubernetes-api/v1./css -mkdir -p /static/docs/reference/generated/kubernetes-api/v1./fonts -``` - -## Copying the generated docs to the kubernetes/website repository - -Run the following command in `` to copy the generated files to -your local kubernetes/website repository: - -```shell -make copyapi -``` - -Go to the base of your local kubernetes/website repository, and -see which files have been modified: +Go to the base of your local ``, and +view which files have been modified: ```shell cd git status ``` -The output shows the modified files: +The output is similar to: ``` -static/docs/reference/generated/kubernetes-api/v1.15/css/bootstrap.min.css -static/docs/reference/generated/kubernetes-api/v1.15/css/font-awesome.min.css -static/docs/reference/generated/kubernetes-api/v1.15/css/stylesheet.css -static/docs/reference/generated/kubernetes-api/v1.15/fonts/FontAwesome.otf -static/docs/reference/generated/kubernetes-api/v1.15/fonts/fontawesome-webfont.eot -static/docs/reference/generated/kubernetes-api/v1.15/fonts/fontawesome-webfont.svg -static/docs/reference/generated/kubernetes-api/v1.15/fonts/fontawesome-webfont.ttf -static/docs/reference/generated/kubernetes-api/v1.15/fonts/fontawesome-webfont.woff -static/docs/reference/generated/kubernetes-api/v1.15/fonts/fontawesome-webfont.woff2 -static/docs/reference/generated/kubernetes-api/v1.15/index.html -static/docs/reference/generated/kubernetes-api/v1.15/jquery.scrollTo.min.js -static/docs/reference/generated/kubernetes-api/v1.15/navData.js -static/docs/reference/generated/kubernetes-api/v1.15/scroll.js +static/docs/reference/generated/kubernetes-api/v1.17/css/bootstrap.min.css +static/docs/reference/generated/kubernetes-api/v1.17/css/font-awesome.min.css +static/docs/reference/generated/kubernetes-api/v1.17/css/stylesheet.css +static/docs/reference/generated/kubernetes-api/v1.17/fonts/FontAwesome.otf +static/docs/reference/generated/kubernetes-api/v1.17/fonts/fontawesome-webfont.eot +static/docs/reference/generated/kubernetes-api/v1.17/fonts/fontawesome-webfont.svg +static/docs/reference/generated/kubernetes-api/v1.17/fonts/fontawesome-webfont.ttf +static/docs/reference/generated/kubernetes-api/v1.17/fonts/fontawesome-webfont.woff +static/docs/reference/generated/kubernetes-api/v1.17/fonts/fontawesome-webfont.woff2 +static/docs/reference/generated/kubernetes-api/v1.17/index.html +static/docs/reference/generated/kubernetes-api/v1.17/js/jquery.scrollTo.min.js +static/docs/reference/generated/kubernetes-api/v1.17/js/navData.js +static/docs/reference/generated/kubernetes-api/v1.17/js/scroll.js ``` ## Updating the API reference index pages -* Open `/content/en/docs/reference/kubernetes-api/api-index.md` for editing, and update the API reference version number. For example: +When generating reference documentation for a new release, update the file, +`/content/en/docs/reference/kubernetes-api/api-index.md` with the new +version number. + +* Open `/content/en/docs/reference/kubernetes-api/api-index.md` for editing, + and update the API reference version number. For example: - ```markdown + ``` --- - title: v1.15 + title: v1.17 --- - [Kubernetes API v1.15](/docs/reference/generated/kubernetes-api/v1.15/) + [Kubernetes API v1.17](/docs/reference/generated/kubernetes-api/v1.17/) ``` * Open `/content/en/docs/reference/_index.md` for editing, and add a - new link for the latest API reference. Remove the oldest API reference version. - There should be five links to the most recent API references. 
+ new link for the latest API reference. Remove the oldest API reference version. + There should be five links to the most recent API references. ## Locally test the API reference Publish a local version of the API reference. -Verify the [local preview](http://localhost:1313/docs/reference/generated/kubernetes-api/v1.15/). +Verify the [local preview](http://localhost:1313/docs/reference/generated/kubernetes-api/v1.17/). ```shell cd @@ -220,8 +198,8 @@ to monitor your pull request until it has been merged. {{% capture whatsnext %}} -* [Generating Reference Docs for Kubernetes Components and Tools](/docs/home/contribute/generated-reference/kubernetes-components/) -* [Generating Reference Documentation for kubectl Commands](/docs/home/contribute/generated-reference/kubectl/) -* [Generating Reference Documentation for the Kubernetes Federation API](/docs/home/contribute/generated-reference/federation-api/) +* [Generating Reference Documentation Quickstart](/docs/contribute/generate-ref-docs/quickstart/) +* [Generating Reference Docs for Kubernetes Components and Tools](/docs/contribute/generate-ref-docs/kubernetes-components/) +* [Generating Reference Documentation for kubectl Commands](/docs/contribute/generate-ref-docs/kubectl/) {{% /capture %}} diff --git a/content/en/docs/contribute/generate-ref-docs/kubernetes-components.md b/content/en/docs/contribute/generate-ref-docs/kubernetes-components.md index df0dfd56fa35b..f71db7afb1ae7 100644 --- a/content/en/docs/contribute/generate-ref-docs/kubernetes-components.md +++ b/content/en/docs/contribute/generate-ref-docs/kubernetes-components.md @@ -1,228 +1,34 @@ --- title: Generating Reference Pages for Kubernetes Components and Tools content_template: templates/task +weight: 120 --- {{% capture overview %}} -This page shows how to use the `update-imported-docs` tool to generate -reference documentation for tools and components in the -[Kubernetes](https://github.com/kubernetes/kubernetes) repository. +This page shows how to build the Kubernetes component and tool reference pages. {{% /capture %}} {{% capture prerequisites %}} -* You need a machine that is running Linux or macOS. - -* Install the following: - - * [Python](https://www.python.org/downloads/) v3.7.x - * [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) - * [Golang](https://golang.org/doc/install) version 1.13+ - * [Pip](https://pypi.org/project/pip/) used to install PyYAML - * [PyYAML](https://pyyaml.org/) v5.1.2 - * [make](https://www.gnu.org/software/make/) - * [gcc compiler/linker](https://gcc.gnu.org/) - -* The `Go` binary must be in your path. The `update-imported-docs` tool sets your GOPATH. - -* You need to know how to create a pull request to a GitHub repository. -This involves creating your own fork of the repository. For more -information, see [Work from a local clone](/docs/contribute/intermediate/#work_from_a_local_clone). +Start with the [Prerequisites section](/docs/contribute/generate-ref-docs/quickstart/#before-you-begin) +in the Reference Documentation Quickstart guide. {{% /capture %}} {{% capture steps %}} -## Getting the repository - -Make sure your `website` fork is up-to-date with the `kubernetes/website` master and then clone your `website` fork. - -```shell -mkdir github.com -cd github.com -git clone git@github.com:/website.git -``` - -Determine the base directory of your clone. 
For example, if you followed the -preceding step to get the repository, your base directory is -`github.com/website.` The remaining steps refer to your base directory as -``. - -The `update-imported-docs` tool generates the reference documentation for the -Kubernetes components from the Kubernetes source code. The tool automatically -clones the `kubernetes/kubernetes` repository. If you want to change the -reference documentation, please follow [this -guide](/docs/contribute/generate-ref-docs/contribute-upstream). - -## Overview of update-imported-docs - -The `update-imported-docs` tool is located in the `kubernetes/website/update-imported-docs/` -directory. The tool consists of a Python script that reads a YAML configuration file and performs the following steps: - -1. Clones the related repositories specified in a configuration file. For the - purpose of generating reference docs, the repository that is cloned by - default is `kubernetes-sigs/reference-docs`. -1. Runs commands under the cloned repositories to prepare the docs generator and - then generates the Markdown files. -1. Copies the generated Markdown files to a local clone of the `kubernetes/website` - repository under locations specified in the configuration file. -1. Updates `kubectl` command links from `kubectl`.md to the `kubectl` command reference. - -When the Markdown files are in your local clone of the `kubernetes/website` -repository, you can submit them in a [pull request](/docs/contribute/start/) -to `kubernetes/website`. - -## Configuration file format - -Each config file may contain multiple repos that will be imported together. When -necessary, you can customize the configuration file by manually editing it. You -may create new config files for importing other groups of documents. Imported -documents must follow these guidelines: - -1. Adhere to the [Documentation Style Guide](/docs/contribute/style/style-guide/). - -1. Have `title` defined in the front matter. For example: - - ``` - --- - title: Title Displayed in Table of Contents - --- - - Rest of the .md file... - ``` -1. Be listed in the `kubernetes/website/data/reference.yml` file - -The following is an example of the YAML configuration file: - -```yaml -repos: -- name: community - remote: https://github.com/kubernetes/community.git - branch: master - files: - - src: contributors/devel/README.md - dst: docs/imported/community/devel.md - - src: contributors/guide/README.md - dst: docs/imported/community/guide.md -``` - -Note: `generate-command` is an optional entry, which can be used to run a -given command or a short script to generate the docs from within a repo. - -## Customizing the reference.yml config file - -Open `/update-imported-docs/reference.yml` for editing. -Do not change the content for the `generate-command` entry unless you understand -what it is doing and need to change the specified release branch. 
- -```yaml -repos: -- name: reference-docs - remote: https://github.com/kubernetes-sigs/reference-docs.git - # This and the generate-command below needs a change when reference-docs has - # branches properly defined - branch: master - generate-command: | - cd $GOPATH - git clone https://github.com/kubernetes/kubernetes.git src/k8s.io/kubernetes - cd src/k8s.io/kubernetes - git checkout release-1.17 - make generated_files - cp -L -R vendor $GOPATH/src - rm -r vendor - cd $GOPATH - go get -v github.com/kubernetes-sigs/reference-docs/gen-compdocs - cd src/github.com/kubernetes-sigs/reference-docs/ - make comp -``` - -In reference.yml, the `files` field is a list of `src` and `dst` fields. The `src` field -specifies the location of a generated Markdown file, and the `dst` field specifies -where to copy this file in the cloned `kubernetes/website` repository. -For example: - -```yaml -repos: -- name: reference-docs - remote: https://github.com/kubernetes-sigs/reference-docs.git - files: - - src: gen-compdocs/build/kube-apiserver.md - dst: content/en/docs/reference/command-line-tools-reference/kube-apiserver.md - ... -``` - -Note that when there are many files to be copied from the same source directory -to the same destination directory, you can use wildcards in the value given to -`src` and you can just provide the directory name as the value for `dst`. -For example: - -```yaml - files: - - src: gen-compdocs/build/kubeadm*.md - dst: content/en/docs/reference/setup-tools/kubeadm/generated/ -``` - -## Running the update-imported-docs tool - -After having reviewed and/or customized the `reference.yaml` file, you can run -the `update-imported-docs` tool: - -```shell -cd /update-imported-docs -./update-imported-docs reference.yml -``` - -## Fixing Links - -To fix relative links within your imported files, set the repo config's -`gen-absolute-links` property to `true`. You can find an example of this in -[`release.yml`](https://github.com/kubernetes/website/blob/master/update-imported-docs/release.yml). - -## Adding and committing changes in kubernetes/website - -List the files that were generated and copied to the `kubernetes/website` -repository: - -``` -cd -git status -``` - -The output shows the new and modified files. For example, the output -might look like this: - -```shell -... - - modified: content/en/docs/reference/command-line-tools-reference/cloud-controller-manager.md - modified: content/en/docs/reference/command-line-tools-reference/kube-apiserver.md - modified: content/en/docs/reference/command-line-tools-reference/kube-controller-manager.md - modified: content/en/docs/reference/command-line-tools-reference/kube-proxy.md - modified: content/en/docs/reference/command-line-tools-reference/kube-scheduler.md - modified: content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm.md - modified: content/en/docs/reference/kubectl/kubectl.md -... -``` - -Run `git add` and `git commit` to commit the files. - -## Creating a pull request - -Create a pull request to the `kubernetes/website` repository. Monitor your -pull request, and respond to review comments as needed. Continue to monitor -your pull request until it is merged. - -A few minutes after your pull request is merged, your updated reference -topics will be visible in the -[published documentation](/docs/home/). +Follow the [Reference Documentation Quickstart](/docs/contribute/generate-ref-docs/quickstart/) +to generate the Kubernetes component and tool reference pages. 
{{% /capture %}} {{% capture whatsnext %}} +* [Generating Reference Documentation Quickstart](/docs/contribute/generate-ref-docs/quickstart/) * [Generating Reference Documentation for kubectl Commands](/docs/contribute/generate-ref-docs/kubectl/) * [Generating Reference Documentation for the Kubernetes API](/docs/contribute/generate-ref-docs/kubernetes-api/) * [Contributing to the Upstream Kubernetes Project for Documentation](/docs/contribute/generate-ref-docs/contribute-upstream/) + {{% /capture %}} diff --git a/content/en/docs/contribute/generate-ref-docs/prerequisites-ref-docs.md b/content/en/docs/contribute/generate-ref-docs/prerequisites-ref-docs.md new file mode 100644 index 0000000000000..a777fb77e5250 --- /dev/null +++ b/content/en/docs/contribute/generate-ref-docs/prerequisites-ref-docs.md @@ -0,0 +1,21 @@ + +### Requirements: + +- You need a machine that is running Linux or macOS. + +- You need to have these tools installed: + + - [Python](https://www.python.org/downloads/) v3.7.x + - [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) + - [Golang](https://golang.org/doc/install) version 1.13+ + - [Pip](https://pypi.org/project/pip/) used to install PyYAML + - [PyYAML](https://pyyaml.org/) v5.1.2 + - [make](https://www.gnu.org/software/make/) + - [gcc compiler/linker](https://gcc.gnu.org/) + - [Docker](https://docs.docker.com/engine/installation/) (Required only for `kubectl` command reference) + +- Your `PATH` environment variable must include the required build tools, such as the `Go` binary and `python`. + +- You need to know how to create a pull request to a GitHub repository. + This involves creating your own fork of the repository. For more + information, see [Work from a local clone](/docs/contribute/intermediate/#work_from_a_local_clone). diff --git a/content/en/docs/contribute/generate-ref-docs/quickstart.md b/content/en/docs/contribute/generate-ref-docs/quickstart.md new file mode 100644 index 0000000000000..095bc05c21725 --- /dev/null +++ b/content/en/docs/contribute/generate-ref-docs/quickstart.md @@ -0,0 +1,260 @@ +--- +title: Quickstart +content_template: templates/task +weight: 40 +--- + +{{% capture overview %}} + +This page shows how to use the `update-imported-docs` script to generate +the Kubernetes reference documentation. The script automates +the build setup and generates the reference documentation for a release. + +{{% /capture %}} + +{{% capture prerequisites %}} + +{{< include "prerequisites-ref-docs.md" >}} + +{{% /capture %}} + +{{% capture steps %}} + +## Getting the docs repository + +Make sure your `website` fork is up-to-date with the `kubernetes/website` master and clone +your `website` fork. + +```shell +mkdir github.com +cd github.com +git clone git@github.com:/website.git +``` + +Determine the base directory of your clone. For example, if you followed the +preceding step to get the repository, your base directory is +`github.com/website.` The remaining steps refer to your base directory as +``. + +{{< note>}} +If you want to change the content of the component tools and API reference, +see the [contributing upstream guide](/docs/contribute/generate-ref-docs/contribute-upstream). +{{< /note >}} + +## Overview of update-imported-docs + +The `update-imported-docs` script is located in the `/update-imported-docs/` +directory. 
+ +The script builds the following references: + +* Component and tool reference pages +* The `kubectl` command reference +* The Kubernetes API reference + +The `update-imported-docs` script generates the Kubernetes reference documentation +from the Kubernetes source code. The script creates a temporary directory +under `/tmp` on your machine and clones the required repositories: `kubernetes/kubernetes` and +`kubernetes-sigs/reference-docs` into this directory. +The script sets your `GOPATH` to this temporary directory. +Three additional environment variables are set: + +* `K8S_RELEASE` +* `K8S_ROOT` +* `K8S_WEBROOT` + +The script requires two arguments to run successfully: + +* A YAML configuration file (`reference.yml`) +* A release version, for example: `1.17` + +The configuration file contains a `generate-command` field. +The `generate-command` field defines a series of build instructions +from `kubernetes-sigs/reference-docs/Makefile`. The `K8S_RELEASE` variable +determines the version of the release. + +The `update-imported-docs` script performs the following steps: + +1. Clones the related repositories specified in a configuration file. For the + purpose of generating reference docs, the repository that is cloned by + default is `kubernetes-sigs/reference-docs`. +1. Runs commands under the cloned repositories to prepare the docs generator and + then generates the HTML and Markdown files. +1. Copies the generated HTML and Markdown files to a local clone of the `` + repository under locations specified in the configuration file. +1. Updates the `kubectl` command links in `kubectl.md` so that they refer to + the sections in the `kubectl` command reference. + +When the generated files are in your local clone of the `` +repository, you can submit them in a [pull request](/docs/contribute/start/) +to ``. + +## Configuration file format + +Each configuration file may contain multiple repos that will be imported together. When +necessary, you can customize the configuration file by manually editing it. You +may create new config files for importing other groups of documents. +The following is an example of the YAML configuration file: + +```yaml +repos: +- name: community + remote: https://github.com/kubernetes/community.git + branch: master + files: + - src: contributors/devel/README.md + dst: docs/imported/community/devel.md + - src: contributors/guide/README.md + dst: docs/imported/community/guide.md +``` + +Single-page Markdown documents, imported by the tool, must adhere to +the [Documentation Style Guide](/docs/contribute/style/style-guide/). + +## Customizing reference.yml + +Open `/update-imported-docs/reference.yml` for editing. +Do not change the content for the `generate-command` field unless you understand +how the command is used to build the references. +You should not need to update `reference.yml`. At times, changes in the +upstream source code may require changes to the configuration file +(for example, Golang version dependencies and third-party library changes). +If you encounter build issues, contact the SIG-Docs team on the +[#sig-docs Kubernetes Slack channel](https://kubernetes.slack.com). + +{{< note >}} +The `generate-command` is an optional entry, which can be used to run a +given command or a short script to generate the docs from within a repository. +{{< /note >}} + +In `reference.yml`, `files` contains a list of `src` and `dst` fields.
+The `src` field contains the location of a generated Markdown file in the cloned +`kubernetes-sigs/reference-docs` build directory, and the `dst` field specifies +where to copy this file in the cloned `kubernetes/website` repository. +For example: + +```yaml +repos: +- name: reference-docs + remote: https://github.com/kubernetes-sigs/reference-docs.git + files: + - src: gen-compdocs/build/kube-apiserver.md + dst: content/en/docs/reference/command-line-tools-reference/kube-apiserver.md + ... +``` + +Note that when there are many files to be copied from the same source directory +to the same destination directory, you can use wildcards in the value given to +`src`. You must provide the directory name as the value for `dst`. +For example: + +```yaml + files: + - src: gen-compdocs/build/kubeadm*.md + dst: content/en/docs/reference/setup-tools/kubeadm/generated/ +``` + +## Running the update-imported-docs tool + +You can run the `update-imported-docs` tool as follows: + +```shell +cd /update-imported-docs +./update-imported-docs +``` + +For example: + +```shell +./update-imported-docs reference.yml 1.17 +``` + + +## Fixing Links + +The `release.yml` configuration file contains instructions to fix relative links. +To fix relative links within your imported files, set the`gen-absolute-links` +property to `true`. You can find an example of this in +[`release.yml`](https://github.com/kubernetes/website/blob/master/update-imported-docs/release.yml). + +## Adding and committing changes in kubernetes/website + +List the files that were generated and copied to ``: + +```shell +cd +git status +``` + +The output shows the new and modified files. The generated output varies +depending upon changes made to the upstream source code. + +### Generated component tool files + +``` +content/en/docs/reference/command-line-tools-reference/cloud-controller-manager.md +content/en/docs/reference/command-line-tools-reference/kube-apiserver.md +content/en/docs/reference/command-line-tools-reference/kube-controller-manager.md +content/en/docs/reference/command-line-tools-reference/kube-proxy.md +content/en/docs/reference/command-line-tools-reference/kube-scheduler.md +content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm.md +content/en/docs/reference/kubectl/kubectl.md +``` + +### Generated kubectl command reference files + +``` +static/docs/reference/generated/kubectl/kubectl-commands.html +static/docs/reference/generated/kubectl/navData.js +static/docs/reference/generated/kubectl/scroll.js +static/docs/reference/generated/kubectl/stylesheet.css +static/docs/reference/generated/kubectl/tabvisibility.js +static/docs/reference/generated/kubectl/node_modules/bootstrap/dist/css/bootstrap.min.css +static/docs/reference/generated/kubectl/node_modules/highlight.js/styles/default.css +static/docs/reference/generated/kubectl/node_modules/jquery.scrollto/jquery.scrollTo.min.js +static/docs/reference/generated/kubectl/node_modules/jquery/dist/jquery.min.js +static/docs/reference/generated/kubectl/css/font-awesome.min.css +``` + +### Generated Kubernetes API reference directories and files + +``` +static/docs/reference/generated/kubernetes-api/v1.17/index.html +static/docs/reference/generated/kubernetes-api/v1.17/js/navData.js +static/docs/reference/generated/kubernetes-api/v1.17/js/scroll.js +static/docs/reference/generated/kubernetes-api/v1.17/js/query.scrollTo.min.js +static/docs/reference/generated/kubernetes-api/v1.17/css/font-awesome.min.css 
+static/docs/reference/generated/kubernetes-api/v1.17/css/bootstrap.min.css +static/docs/reference/generated/kubernetes-api/v1.17/css/stylesheet.css +static/docs/reference/generated/kubernetes-api/v1.17/fonts/FontAwesome.otf +static/docs/reference/generated/kubernetes-api/v1.17/fonts/fontawesome-webfont.eot +static/docs/reference/generated/kubernetes-api/v1.17/fonts/fontawesome-webfont.svg +static/docs/reference/generated/kubernetes-api/v1.17/fonts/fontawesome-webfont.ttf +static/docs/reference/generated/kubernetes-api/v1.17/fonts/fontawesome-webfont.woff +static/docs/reference/generated/kubernetes-api/v1.17/fonts/fontawesome-webfont.woff2 +``` + +Run `git add` and `git commit` to commit the files. + +## Creating a pull request + +Create a pull request to the `kubernetes/website` repository. Monitor your +pull request, and respond to review comments as needed. Continue to monitor +your pull request until it is merged. + +A few minutes after your pull request is merged, your updated reference +topics will be visible in the +[published documentation](/docs/home/). + +{{% /capture %}} + +{{% capture whatsnext %}} + +To generate the individual reference documentation by manually setting up the required build repositories and +running the build targets, see the following guides: + +* [Generating Reference Documentation for Kubernetes Components and Tools](/docs/contribute/generate-ref-docs/kubernetes-components/) +* [Generating Reference Documentation for kubectl Commands](/docs/contribute/generate-ref-docs/kubectl/) +* [Generating Reference Documentation for the Kubernetes API](/docs/contribute/generate-ref-docs/kubernetes-api/) + +{{% /capture %}} diff --git a/content/en/docs/contribute/localization.md b/content/en/docs/contribute/localization.md index 8982e29d99967..2522a832f3d5e 100644 --- a/content/en/docs/contribute/localization.md +++ b/content/en/docs/contribute/localization.md @@ -54,16 +54,16 @@ Once you've opened a localization PR, you can become members of the Kubernetes G ### Add your localization team in GitHub -Next, add your Kubernetes localization team to [`sig-docs/teams.yaml`](https://github.com/kubernetes/org/blob/master/config/kubernetes/sig-docs/teams.yaml). For an example of adding a localization team, see the PR to add the [Spanish localization team](https://github.com/kubernetes/org/pull/685). +Next, add your Kubernetes localization team to [`sig-docs/teams.yaml`](https://github.com/kubernetes/org/blob/master/config/kubernetes/sig-docs/teams.yaml). For an example of adding a localization team, see the PR to add the [Spanish localization team](https://github.com/kubernetes/org/pull/685). -Members of `sig-docs-**-owners` can approve PRs that change content within (and only within) your localization directory: `/content/**/`. +Members of `@kubernetes/sig-docs-**-owners` can approve PRs that change content within (and only within) your localization directory: `/content/**/`. -The `sig-docs-**-reviews` team automates review assignment for new PRs. +For each localization, The `@kubernetes/sig-docs-**-reviews` team automates review assignment for new PRs. -Members of `sig-docs-l10n-admins` can create new development branches to coordinate translation efforts. +Members of `@kubernetes/website-maintainers` can create new development branches to coordinate translation efforts. + +Members of `@kubernetes/website-milestone-maintainers` can use the `/milestone` [Prow command](https://prow.k8s.io/command-help) to assign a milestone to issues or PRs. 
-Members of `website-milestone-maintainers` can use the `/milestone` [Prow command](https://prow.k8s.io/command-help) to assign a milestone to issues or PRs. - ### Configure the workflow Next, add a GitHub label for your localization in the `kubernetes/test-infra` repository. A label lets you filter issues and pull requests for your specific language. @@ -240,9 +240,9 @@ Because localization projects are highly collaborative efforts, we encourage tea To collaborate on a development branch: -1. A team member of [@kubernetes/sig-docs-l10n-admins](https://github.com/orgs/kubernetes/teams/sig-docs-l10n-admins) opens a development branch from a source branch on https://github.com/kubernetes/website. +1. A team member of [@kubernetes/website-maintainers](https://github.com/orgs/kubernetes/teams/website-maintainers) opens a development branch from a source branch on https://github.com/kubernetes/website. - Your team approvers joined the `sig-docs-l10n-admins` team when you [added your localization team](#add-your-localization-team-in-github) to the `kubernetes/org` repository. + Your team approvers joined the `@kubernetes/website-maintainers` team when you [added your localization team](#add-your-localization-team-in-github) to the [`kubernetes/org`](https://github.com/kubernetes/org) repository. We recommend the following branch naming scheme: diff --git a/content/en/docs/contribute/participating.md b/content/en/docs/contribute/participating.md index c9785388abcac..b63c33e915383 100644 --- a/content/en/docs/contribute/participating.md +++ b/content/en/docs/contribute/participating.md @@ -19,7 +19,7 @@ SIG Docs welcomes content and reviews from all contributors. Anyone can open a pull request (PR), and anyone is welcome to file issues about content or comment on pull requests in progress. -Within SIG Docs, you may also become a [member](#members), +You can also become a [member](#members), [reviewer](#reviewers), or [approver](#approvers). These roles require greater access and entail certain responsibilities for approving and committing changes. See [community-membership](https://github.com/kubernetes/community/blob/master/community-membership.md) @@ -34,51 +34,47 @@ aspects of Kubernetes -- the Kubernetes website and documentation. ## Roles and responsibilities -When a pull request is merged to the branch used to publish content (currently -`master`), that content is published and available to the world. To ensure that -the quality of our published content is high, we limit merging pull requests to -SIG Docs approvers. Here's how it works. +- **Anyone** can contribute to Kubernetes documentation. To contribute, you must [sign the CLA](/docs/contribute/start#sign-the-cla) and have a GitHub account. +- **Members** of the Kubernetes organization are contributors who have spent time and effort on the Kubernetes project, usually by opening pull requests with accepted changes. See [Community membership](https://github.com/kubernetes/community/blob/master/community-membership.md) for membership criteria. +- A SIG Docs **Reviewer** is a member of the Kubernetes organization who has + expressed interest in reviewing documentation pull requests, and has been + added to the appropriate GitHub group and `OWNERS` files in the GitHub + repository by a SIG Docs Approver. +- A SIG Docs **Approver** is a member in good standing who has shown a continued + commitment to the project. An approver can merge pull requests + and publish content on behalf of the Kubernetes organization. 
+ Approvers can also represent SIG Docs in the larger Kubernetes community. + Some duties of a SIG Docs approver, such as coordinating a release, + require a significant time commitment. -- When a pull request has both the `lgtm` and `approve` labels and has no `hold` - labels, the pull request merges automatically. -- Kubernetes organization members and SIG Docs approvers can add comments to - prevent automatic merging of a given pull request (by adding a `/hold` comment - or withholding a `/lgtm` comment). -- Any Kubernetes member can add the `lgtm` label, by adding a `/lgtm` comment. -- Only an approver who is a member of SIG Docs can cause a pull request to merge - by adding an `/approve` comment. Some approvers also perform additional - specific roles, such as [PR Wrangler](#pr-wrangler) or - [SIG Docs chairperson](#sig-docs-chairperson). +## Anyone -For more information about expectations and differences between the roles of -Kubernetes organization member and SIG Docs approvers, see -[Types of contributor](/docs/contribute#types-of-contributor). The following -sections cover more details about these roles and how they work within -SIG Docs. +Anyone can do the following: -### Anyone +- Open a GitHub issue against any part of Kubernetes, including documentation. +- Provide non-binding feedback on a pull request/ +- Bring up ideas for improvement on [Slack](http://slack.k8s.io/) or the [SIG docs mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs). +- Use the `/lgtm` Prow command (short for "looks good to me") to recommend the changes in a pull request for merging. + {{< note >}} + If you are not a member of the Kubernetes organization, using `/lgtm` has no effect on automated systems. + {{< /note >}} -Anyone can file an issue against any part of Kubernetes, including documentation. +After [signing the CLA](/docs/contribute/start#sign-the-cla), anyone can also: +- Open a pull request to improve existing content, add new content, or write a blog post or case study. -Anyone who has signed the CLA can submit a pull request. If you cannot sign the -CLA, the Kubernetes project cannot accept your contribution. +## Members -### Members +Members are contributors to the Kubernetes project who meet the [membership criteria](https://github.com/kubernetes/community/blob/master/community-membership.md#member). SIG Docs welcomes contributions from all members of the Kubernetes community, +and frequently requests reviews from members of other SIGs for technical accuracy. -Any member of the [Kubernetes organization](https://github.com/kubernetes) can -review a pull request, and SIG Docs team members frequently request reviews from -members of other SIGs for technical accuracy. -SIG Docs also welcomes reviews and feedback regardless of a person's membership -status in the Kubernetes organization. You can indicate your approval by adding -a comment of `/lgtm` to a pull request. If you are not a member of the -Kubernetes organization, your `/lgtm` has no effect on automated systems. +Any member of the [Kubernetes organization](https://github.com/kubernetes) can do the following: -Any member of the Kubernetes organization can add a `/hold` comment to prevent -the pull request from being merged. Any member can also remove a `/hold` comment -to cause a PR to be merged if it already has both `/lgtm` and `/approve` applied -by appropriate people. +- Everything listed under [Anyone](#anyone) +- Use the `/lgtm` comment to add the LGTM (looks good to me) label to a pull request. 
+- Use the `/hold` command to prevent a pull request from being merged, if the pull request already has the LGTM and approve labels. +- Use the `/assign` comment to assign a reviewer to a pull request. -#### Becoming a member +### Becoming a member After you have successfully submitted at least 5 substantive pull requests, you can request [membership](https://github.com/kubernetes/community/blob/master/community-membership.md#member) @@ -86,11 +82,11 @@ in the Kubernetes organization. Follow these steps: 1. Find two reviewers or approvers to [sponsor](/docs/contribute/advanced#sponsor-a-new-contributor) your membership. - - Ask for sponsorship in the [#sig-docs channel on the + + Ask for sponsorship in the [#sig-docs channel on the Kubernetes Slack instance](https://kubernetes.slack.com) or on the [SIG Docs mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs). - + {{< note >}} Don't send a direct email or Slack direct message to an individual SIG Docs member. @@ -108,20 +104,29 @@ in the Kubernetes organization. Follow these steps: GitHub issue to show approval and then closes the GitHub issue. Congratulations, you are now a member! -If for some reason your membership request is not accepted right away, the +If your membership request is not accepted, the membership committee provides information or steps to take before applying again. -### Reviewers +## Reviewers Reviewers are members of the [@kubernetes/sig-docs-pr-reviews](https://github.com/orgs/kubernetes/teams/sig-docs-pr-reviews) -GitHub group. See [Teams and groups within SIG Docs](#teams-and-groups-within-sig-docs). +GitHub group. Reviewers review documentation pull requests and provide feedback on proposed +changes. Reviewers can: -Reviewers review documentation pull requests and provide feedback on proposed -changes. +- Do everything listed under [Anyone](#anyone) and [Members](#members) +- Document new features +- Triage and categorize issues +- Review pull requests and provide binding feedback +- Create diagrams, graphics assets, and embeddable screencasts and videos +- Localization +- Edit user-facing strings in code +- Improve code comments -Automation assigns reviewers to pull requests, and contributors can request a +### Assigning reviewers to pull requests + +Automation assigns reviewers to all pull requests. You can request a review from a specific reviewer with a comment on the pull request: `/assign [@_github_handle]`. To indicate that a pull request is technically accurate and requires no further changes, a reviewer adds a `/lgtm` comment to the pull @@ -129,18 +134,14 @@ request. If the assigned reviewer has not yet reviewed the content, another reviewer can step in. In addition, you can assign technical reviewers and wait for them to -provide `/lgtm`. - -For a trivial change or one that needs no technical review, the SIG Docs -[approver](#approvers) can provide the `/lgtm` as well. +provide a `/lgtm` comment. -A `/approve` comment from a reviewer is ignored by automation. +For a trivial change or one that needs no technical review, SIG Docs +[approvers](#approvers) can provide the `/lgtm` as well. -For more about how to become a SIG Docs reviewer and the responsibilities and -time commitment involved, see -[Becoming a reviewer or approver](#becoming-an-approver-or-reviewer). +An `/approve` comment from a reviewer is ignored by automation. 
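As a sketch of how the Prow commands above are used in practice, the commands below post them as pull request comments using the GitHub CLI; the PR number `12345` and the handle `@example-reviewer` are hypothetical, and typing the same text directly as a comment in the GitHub web UI works equally well.

```shell
# Hypothetical PR number and reviewer handle; each --body is a Prow command
# that the automation reads once it is posted as a comment on the pull request.
gh pr comment 12345 --repo kubernetes/website --body "/assign @example-reviewer"
gh pr comment 12345 --repo kubernetes/website --body "/lgtm"
gh pr comment 12345 --repo kubernetes/website --body "/hold"
```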
-#### Becoming a reviewer +### Becoming a reviewer When you meet the [requirements](https://github.com/kubernetes/community/blob/master/community-membership.md#reviewer), @@ -161,26 +162,27 @@ If you are approved, request that a current SIG Docs approver add you to the GitHub group. Only members of the `kubernetes-website-admins` GitHub group can add new members to a GitHub group. -### Approvers +## Approvers Approvers are members of the [@kubernetes/sig-docs-maintainers](https://github.com/orgs/kubernetes/teams/sig-docs-maintainers) GitHub group. See [Teams and groups within SIG Docs](#teams-and-groups-within-sig-docs). -Approvers have the ability to merge a PR, and thus, to publish content on the -Kubernetes website. To approve a PR, an approver leaves an `/approve` comment on -the PR. If someone who is not an approver leaves the approval comment, -automation ignores it. +Approvers can do the following: + +- Everything listed under [Anyone](#anyone), [Members](#members) and [Reviewers](#reviewers) +- Publish contributor content by approving and merging pull requests using the `/approve` comment. + If someone who is not an approver leaves the approval comment, automation ignores it. +- Participate in a Kubernetes release team as a docs representative +- Propose improvements to the style guide +- Propose improvements to docs tests +- Propose improvements to the Kubernetes website or other tooling If the PR already has a `/lgtm`, or if the approver also comments with `/lgtm`, the PR merges automatically. A SIG Docs approver should only leave a `/lgtm` on a change that doesn't need additional technical review. -For more about how to become a SIG Docs approver and the responsibilities and -time commitment involved, see -[Becoming a reviewer or approver](#becoming-an-approver-or-reviewer). - -#### Becoming an approver +### Becoming an approver When you meet the [requirements](https://github.com/kubernetes/community/blob/master/community-membership.md#approver), @@ -201,34 +203,29 @@ If you are approved, request that a current SIG Docs approver add you to the GitHub group. Only members of the `kubernetes-website-admins` GitHub group can add new members to a GitHub group. -#### Approver responsibilities +### Approver responsibilities Approvers improve the documentation by reviewing and merging pull requests into the website repository. Because this role carries additional privileges, approvers have additional responsibilities: - Approvers can use the `/approve` command, which merges PRs into the repo. A careless merge can break the site, so be sure that when you merge something, you mean it. - -- Make sure that proposed changes meet the contribution guidelines. + +- Make sure that proposed changes meet the [contribution guidelines](/docs/contribute/style/content-guide/#contributing-content). If you ever have a question, or you're not sure about something, feel free to call for additional review. -- Verify that netlify tests pass before you `/approve` a PR. +- Verify that Netlify tests pass before you `/approve` a PR. Netlify tests must pass before approving -- Visit the netlify page preview for a PR to make sure things look good before approving. - -#### PR Wrangler +- Visit the Netlify page preview for a PR to make sure things look good before approving. -SIG Docs approvers participate in the -[PR Wrangler rotation scheduler](https://github.com/kubernetes/website/wiki/PR-Wranglers) -for weekly rotations. SIG Docs expects all approvers to participate in this -rotation. 
See -[Be the PR Wrangler for a week](/docs/contribute/advanced#be-the-pr-wrangler-for-a-week) +- Participate in the [PR Wrangler rotation scheduler](https://github.com/kubernetes/website/wiki/PR-Wranglers) for weekly rotations. SIG Docs expects all approvers to participate in this +rotation. See [Be the PR Wrangler for a week](/docs/contribute/advanced#be-the-pr-wrangler-for-a-week) for more details. -#### SIG Docs chairperson +## SIG Docs chairperson Each SIG, including SIG Docs, selects one or more SIG members to act as chairpersons. These are points of contact between SIG Docs and other parts of @@ -285,6 +282,24 @@ The combination of OWNERS files and front-matter in Markdown files determines the advice PR owners get from automated systems about who to ask for technical and editorial review of their PR. +## How merging works + +When a pull request is merged to the branch used to publish content (currently +`master`), that content is published and available to the world. To ensure that +the quality of our published content is high, we limit merging pull requests to +SIG Docs approvers. Here's how it works. + +- When a pull request has both the `lgtm` and `approve` labels, has no `hold` + labels, and all tests are passing, the pull request merges automatically. +- Kubernetes organization members and SIG Docs approvers can add comments to + prevent automatic merging of a given pull request (by adding a `/hold` comment + or withholding a `/lgtm` comment). +- Any Kubernetes member can add the `lgtm` label by adding a `/lgtm` comment. +- Only SIG Docs approvers can merge a pull request + by adding an `/approve` comment. Some approvers also perform additional + specific roles, such as [PR Wrangler](#pr-wrangler) or + [SIG Docs chairperson](#sig-docs-chairperson). + {{% /capture %}} {{% capture whatsnext %}} @@ -295,5 +310,3 @@ For more information about contributing to the Kubernetes documentation, see: - [Documentation style](/docs/contribute/style/) {{% /capture %}} - - diff --git a/content/en/docs/contribute/start.md b/content/en/docs/contribute/start.md index ff1f62b149b9c..4af5f6d5cefd2 100644 --- a/content/en/docs/contribute/start.md +++ b/content/en/docs/contribute/start.md @@ -196,15 +196,38 @@ to base your work on. Use these guidelines to make the decision: - Use `master` for fixing problems in content that is already published, or making improvements to content that already exists. - - Use a release branch (such as `dev-{{< release-branch >}}` for the {{< release-branch >}} release) to document upcoming features - or changes for an upcoming release that is not yet published. -- Use a feature branch that has been agreed upon by SIG Docs to collaborate on - big improvements or changes to the existing documentation, including content - reorganization or changes to the look and feel of the website. +- Use `master` to document something that is already part of the current + Kubernetes release, but isn't yet documented. You should write this content + in English first, and then localization teams will pick that change up as a + localization task. +- If you're working on a localization, you should follow the convention for + that particular localization. 
To find this out, you can look at other
+  pull requests (tip: search for `is:pr is:merged label:language/xx`)
+  {{< comment >}}Localization note: when localizing that tip, replace `xx`
+  with the actual ISO3166 two-letter code for your target locale.{{< /comment >}}
+  - Some localization teams work with PRs that target `master`
+  - Some localization teams work with a series of long-lived branches, and
+    periodically merge these to `master`. This kind of branch has a name like
+    dev-\<version>-\<language code>.\<team milestone>; for example:
+    `dev-{{< release-branch >}}-ja.1`.
+- If you're writing or updating documentation for a feature change release,
+  then you need to know the major and minor version of Kubernetes that
+  the change will first appear in.
+  - For example, if the feature gate JustAnExample is going to move from alpha
+    to beta in the next minor version, you need to know what the next minor
+    version number is.
+  - Find the release branch named for that version. For example, features that
+    changed in the v{{< release-branch >}} release got documented in the branch
+    named `dev-{{< release-branch >}}`.

If you're still not sure which branch to choose, ask in `#sig-docs` on Slack or
attend a weekly SIG Docs meeting to get clarity.

+{{< note >}}
+If you already submitted your pull request and you know that the Base Branch
+was wrong, you (and only you, the submitter) can change it.
+{{< /note >}}
+
### Submit a pull request

Follow these steps to submit a pull request to improve the Kubernetes
diff --git a/content/en/docs/contribute/style/content-guide.md b/content/en/docs/contribute/style/content-guide.md
index 6c315ef0ad0b4..5d3f5790c7988 100644
--- a/content/en/docs/contribute/style/content-guide.md
+++ b/content/en/docs/contribute/style/content-guide.md
@@ -47,7 +47,7 @@ Before adding content, ask yourself this:
 - Is the content about an active CNCF project OR a project in the kubernetes or kubernetes-sigs GitHub organizations?
   - If yes, then:
     - Does the project have its own documentation?
-      - if yes, link to the project's documention from the Kubernetes documentation
+      - if yes, link to the project's documentation from the Kubernetes documentation
      - if no, add the content to the project's repository if possible and then link to it from the Kubernetes documentation
   - If no, then:
     - Stop!
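Returning to the branch guidance above, here is a minimal sketch of basing an upcoming-feature docs change on a release branch instead of `master`; the remote name `upstream` and the local branch name are assumptions, and `dev-1.18` stands in for the current `dev-{{< release-branch >}}` branch.

```shell
# Assumes your local clone has kubernetes/website configured as the `upstream` remote.
git fetch upstream
# Hypothetical local branch for documenting a feature that first ships in v1.18.
git checkout -b document-new-feature upstream/dev-1.18
```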
@@ -64,7 +64,7 @@ Below are general categories of non-Kubernetes project content along with guidel
 - Referring to or linking to existing documentation about a CNCF project or a project in the kubernetes or kubernetes-sigs GitHub organizations
   - Example: for installing Kubernetes in a learning environment, including a prerequisite stating that successful installation and configuration of minikube is required and linking to the relevant minikube documentation
 - Adding content for kubernetes or kubernetes-sigs projects that don't have their own instructional content
-  - Example: including [kubadm](https://github.com/kubernetes/kubeadm) installation and troubleshooting instructions
+  - Example: including [kubeadm](https://github.com/kubernetes/kubeadm) installation and troubleshooting instructions
 - Not Allowed:
   - Adding content that duplicates documentation in another repository
   - Examples:
diff --git a/content/en/docs/contribute/style/write-new-topic.md b/content/en/docs/contribute/style/write-new-topic.md
index 22e55e3da8157..50e49734a3b57 100644
--- a/content/en/docs/contribute/style/write-new-topic.md
+++ b/content/en/docs/contribute/style/write-new-topic.md
@@ -115,8 +115,9 @@ When adding a new standalone sample file, such as a YAML file, place the code in
one of the `<LANG>/examples/` subdirectories where `<LANG>` is the language for
the topic. In your topic file, use the `codenew` shortcode:
-
{{< codenew file="<RELPATH>/my-example-yaml>" >}}
-
+```none
+{{</* codenew file="<RELPATH>/my-example-yaml>" */>}}
+```
 where `<RELPATH>` is the path to the file to include, relative to the
 `examples` directory. The following Hugo shortcode references a YAML file
 located at `/content/en/examples/pods/storage/gce-volume.yaml`.
diff --git a/content/en/docs/reference/_index.md b/content/en/docs/reference/_index.md
index bdd6b78d24247..7efffa4ef683b 100644
--- a/content/en/docs/reference/_index.md
+++ b/content/en/docs/reference/_index.md
@@ -19,12 +19,7 @@ This section of the Kubernetes documentation contains references.

 ## API Reference

 * [Kubernetes API Overview](/docs/reference/using-api/api-overview/) - Overview of the API for Kubernetes.
-* Kubernetes API Versions
-  * [1.17](/docs/reference/generated/kubernetes-api/v1.17/)
-  * [1.16](/docs/reference/generated/kubernetes-api/v1.16/)
-  * [1.15](/docs/reference/generated/kubernetes-api/v1.15/)
-  * [1.14](/docs/reference/generated/kubernetes-api/v1.14/)
-  * [1.13](/docs/reference/generated/kubernetes-api/v1.13/)
+* [Kubernetes API Reference {{< latest-version >}}](/docs/reference/generated/kubernetes-api/{{< latest-version >}}/)

 ## API Client Libraries

@@ -39,18 +34,17 @@ client libraries:

 ## CLI Reference

-* [kubectl](/docs/user-guide/kubectl-overview) - Main CLI tool for running commands and managing Kubernetes clusters.
-  * [JSONPath](/docs/user-guide/jsonpath/) - Syntax guide for using [JSONPath expressions](http://goessner.net/articles/JsonPath/) with kubectl.
-* [kubeadm](/docs/admin/kubeadm/) - CLI tool to easily provision a secure Kubernetes cluster.
-* [kubefed](/docs/admin/kubefed/) - CLI tool to help you administrate your federated clusters.
+* [kubectl](/docs/reference/kubectl/overview/) - Main CLI tool for running commands and managing Kubernetes clusters.
+  * [JSONPath](/docs/reference/kubectl/jsonpath/) - Syntax guide for using [JSONPath expressions](http://goessner.net/articles/JsonPath/) with kubectl.
+* [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) - CLI tool to easily provision a secure Kubernetes cluster.

 ## Config Reference

-* [kubelet](/docs/admin/kubelet/) - The primary *node agent* that runs on each node. The kubelet takes a set of PodSpecs and ensures that the described containers are running and healthy.
-* [kube-apiserver](/docs/admin/kube-apiserver/) - REST API that validates and configures data for API objects such as pods, services, replication controllers.
-* [kube-controller-manager](/docs/admin/kube-controller-manager/) - Daemon that embeds the core control loops shipped with Kubernetes.
-* [kube-proxy](/docs/admin/kube-proxy/) - Can do simple TCP/UDP stream forwarding or round-robin TCP/UDP forwarding across a set of back-ends.
-* [kube-scheduler](/docs/admin/kube-scheduler/) - Scheduler that manages availability, performance, and capacity.
+* [kubelet](/docs/reference/command-line-tools-reference/kubelet/) - The primary *node agent* that runs on each node. The kubelet takes a set of PodSpecs and ensures that the described containers are running and healthy.
+* [kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/) - REST API that validates and configures data for API objects such as pods, services, replication controllers.
+* [kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) - Daemon that embeds the core control loops shipped with Kubernetes.
+* [kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/) - Can do simple TCP/UDP stream forwarding or round-robin TCP/UDP forwarding across a set of back-ends. +* [kube-scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/) - Scheduler that manages availability, performance, and capacity. ## Design Docs diff --git a/content/en/docs/reference/access-authn-authz/extensible-admission-controllers.md b/content/en/docs/reference/access-authn-authz/extensible-admission-controllers.md index 469d1267f0c5e..3500ed0c53bfe 100644 --- a/content/en/docs/reference/access-authn-authz/extensible-admission-controllers.md +++ b/content/en/docs/reference/access-authn-authz/extensible-admission-controllers.md @@ -969,9 +969,10 @@ Specifying `Equivalent` is recommended, and ensures that webhooks continue to in resources they expect when upgrades enable new versions of the resource in the API server. When a resource stops being served by the API server, it is no longer considered equivalent to other versions of that resource that are still served. -For example, deprecated `extensions/v1beta1` deployments are scheduled to stop being served by default in v1.16. -Once that occurs, a webhook with a `apiGroups:["extensions"], apiVersions:["v1beta1"], resources:["deployments"]` rule -would no longer intercept deployments created via `apps/v1` APIs. For that reason, webhooks should prefer registering +For example, `extensions/v1beta1` deployments were first deprecated and then removed (in Kubernetes v1.16). + +Since that removal, a webhook with a `apiGroups:["extensions"], apiVersions:["v1beta1"], resources:["deployments"]` rule +does not intercept deployments created via `apps/v1` APIs. For that reason, webhooks should prefer registering for stable versions of resources. This example shows a validating webhook that intercepts modifications to deployments (no matter the API group or version), diff --git a/content/en/docs/reference/glossary/container-runtime.md b/content/en/docs/reference/glossary/container-runtime.md index 38a11e1964990..c45bed0f7b303 100644 --- a/content/en/docs/reference/glossary/container-runtime.md +++ b/content/en/docs/reference/glossary/container-runtime.md @@ -15,7 +15,7 @@ tags: -Kubernetes supports several container runtimes: [Docker](http://www.docker.com), -[containerd](https://containerd.io), [cri-o](https://cri-o.io/), -[rktlet](https://github.com/kubernetes-incubator/rktlet) and any implementation of -the [Kubernetes CRI (Container Runtime Interface)](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/container-runtime-interface.md). +Kubernetes supports several container runtimes: {{< glossary_tooltip term_id="docker">}}, +{{< glossary_tooltip term_id="containerd" >}}, {{< glossary_tooltip term_id="cri-o" >}}, +and any implementation of the [Kubernetes CRI (Container Runtime +Interface)](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/container-runtime-interface.md). diff --git a/content/en/docs/reference/glossary/device-plugin.md b/content/en/docs/reference/glossary/device-plugin.md index d29b495953148..d1fb91cce4eee 100644 --- a/content/en/docs/reference/glossary/device-plugin.md +++ b/content/en/docs/reference/glossary/device-plugin.md @@ -4,14 +4,26 @@ id: device-plugin date: 2019-02-02 full_link: /docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/ short_description: > - Containers running in Kubernetes that provide access to a vendor specific resource. 
+ Software extensions to let Pods access devices that need vendor-specific initialization or setup aka: tags: - fundamental - extension --- - Device Plugins are containers running in Kubernetes that provide access to a vendor specific resource. + Device plugins run on worker +{{< glossary_tooltip term_id="node" text="Nodes">}} and provide +{{< glossary_tooltip term_id="pod" text="Pods ">}} with access to resources, +such as local hardware, that require vendor-specific initialization or setup +steps. -[Device Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/) are containers running in Kubernetes that provide access to a vendor-specific resource. Device Plugins advertise these resources to {{< glossary_tooltip term_id="kubelet" >}}. They can be deployed manually or as a {{< glossary_tooltip term_id="daemonset" >}}, rather than writing custom Kubernetes code. +Device plugins advertise resources to the +{{< glossary_tooltip term_id="kubelet" text="kubelet" >}}, so that workload +Pods can access hardware features that relate to the Node where that Pod is running. +You can deploy a device plugin as a {{< glossary_tooltip term_id="daemonset" >}}, +or install the device plugin software directly on each target Node. + +See +[Device Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/) +for more information. diff --git a/content/en/docs/reference/glossary/service-account.md b/content/en/docs/reference/glossary/service-account.md index f5d6854ad076b..aee24891c8b0f 100755 --- a/content/en/docs/reference/glossary/service-account.md +++ b/content/en/docs/reference/glossary/service-account.md @@ -15,5 +15,5 @@ tags: -When processes inside Pods access the cluster, they are authenticated by the API server as a particular service account, for example, `default`. When you create a Pod, if you do not specify a service account, it is automatically assigned the default service account in the same namespace {{< glossary_tooltip text="Namespace" term_id="namespace" >}}. +When processes inside Pods access the cluster, they are authenticated by the API server as a particular service account, for example, `default`. When you create a Pod, if you do not specify a service account, it is automatically assigned the default service account in the same {{< glossary_tooltip text="Namespace" term_id="namespace" >}}. diff --git a/content/en/docs/reference/glossary/upstream.md b/content/en/docs/reference/glossary/upstream.md index 860e002bfe2d8..880f43562b645 100755 --- a/content/en/docs/reference/glossary/upstream.md +++ b/content/en/docs/reference/glossary/upstream.md @@ -14,6 +14,6 @@ tags: -* In the **Kubernetes Community**: Conversations often use *upstream* to mean the core Kubernetes codebase, which the general ecosystem, other code, or third-party tools relies upon. For example, [community members](#term-member) may suggest that a feature is moved upstream so that it is in the core codebase instead of in a plugin or third-party tool. +* In the **Kubernetes Community**: Conversations often use *upstream* to mean the core Kubernetes codebase, which the general ecosystem, other code, or third-party tools rely upon. For example, [community members](#term-member) may suggest that a feature is moved upstream so that it is in the core codebase instead of in a plugin or third-party tool. * In **GitHub** or **git**: The convention is to refer to a source repo as *upstream*, whereas the forked repo is considered *downstream*. 
diff --git a/content/en/docs/reference/kubectl/cheatsheet.md b/content/en/docs/reference/kubectl/cheatsheet.md index ba3d48b638c8d..adb7fb8b6f339 100644 --- a/content/en/docs/reference/kubectl/cheatsheet.md +++ b/content/en/docs/reference/kubectl/cheatsheet.md @@ -144,7 +144,7 @@ EOF # Get commands with basic output kubectl get services # List all services in the namespace kubectl get pods --all-namespaces # List all pods in all namespaces -kubectl get pods -o wide # List all pods in the namespace, with more details +kubectl get pods -o wide # List all pods in the current namespace, with more details kubectl get deployment my-dep # List a particular deployment kubectl get pods # List all pods in the namespace kubectl get pod my-pod -o yaml # Get a pod's YAML @@ -160,8 +160,8 @@ kubectl get services --sort-by=.metadata.name # List pods Sorted by Restart Count kubectl get pods --sort-by='.status.containerStatuses[0].restartCount' -# List PersistentVolumes in test namespace sorted by capacity -kubectl get pv -n test --sort-by=.spec.capacity.storage +# List PersistentVolumes sorted by capacity +kubectl get pv --sort-by=.spec.capacity.storage # Get the version label of all pods with label app=cassandra kubectl get pods --selector=app=cassandra -o \ @@ -201,7 +201,7 @@ kubectl diff -f ./my-manifest.yaml ## Updating Resources -As of version 1.11 `rolling-update` have been deprecated (see [CHANGELOG-1.11.md](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md)), use `rollout` instead. +As of version 1.11 `rolling-update` have been deprecated (see [CHANGELOG-1.11.md](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.11.md)), use `rollout` instead. ```bash kubectl set image deployment/frontend www=image:v2 # Rolling update "www" containers of "frontend" deployment, updating the image diff --git a/content/en/docs/reference/kubectl/jsonpath.md b/content/en/docs/reference/kubectl/jsonpath.md index ffbe7103c84b8..731af0004e09f 100644 --- a/content/en/docs/reference/kubectl/jsonpath.md +++ b/content/en/docs/reference/kubectl/jsonpath.md @@ -89,11 +89,13 @@ kubectl get pods -o=jsonpath="{.items[*]['metadata.name', 'status.capacity']}" kubectl get pods -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.startTime}{"\n"}{end}' ``` +{{< note >}} On Windows, you must _double_ quote any JSONPath template that contains spaces (not single quote as shown above for bash). This in turn means that you must use a single quote or escaped double quote around any literals in the template. For example: ```cmd -C:\> kubectl get pods -o=jsonpath="{range .items[*]}{.metadata.name}{'\t'}{.status.startTime}{'\n'}{end}" -C:\> kubectl get pods -o=jsonpath="{range .items[*]}{.metadata.name}{\"\t\"}{.status.startTime}{\"\n\"}{end}" +kubectl get pods -o=jsonpath="{range .items[*]}{.metadata.name}{'\t'}{.status.startTime}{'\n'}{end}" +kubectl get pods -o=jsonpath="{range .items[*]}{.metadata.name}{\"\t\"}{.status.startTime}{\"\n\"}{end}" ``` +{{< /note >}} {{% /capture %}} diff --git a/content/en/docs/reference/kubectl/overview.md b/content/en/docs/reference/kubectl/overview.md index 67d73482ac114..6468a5ed7079c 100644 --- a/content/en/docs/reference/kubectl/overview.md +++ b/content/en/docs/reference/kubectl/overview.md @@ -10,7 +10,7 @@ card: --- {{% capture overview %}} -Kubectl is a command line interface for running commands against Kubernetes clusters. `kubectl` looks for a file named config in the $HOME/.kube directory. 
You can specify other [kubeconfig](/docs/concepts/configuration/organize-cluster-access-kubeconfig/) files by setting the KUBECONFIG environment variable or by setting the [`--kubeconfig`](/docs/concepts/configuration/organize-cluster-access-kubeconfig/) flag. +Kubectl is a command line tool for controlling Kubernetes clusters. `kubectl` looks for a file named config in the $HOME/.kube directory. You can specify other [kubeconfig](/docs/concepts/configuration/organize-cluster-access-kubeconfig/) files by setting the KUBECONFIG environment variable or by setting the [`--kubeconfig`](/docs/concepts/configuration/organize-cluster-access-kubeconfig/) flag. This overview covers `kubectl` syntax, describes the command operations, and provides common examples. For details about each command, including all the supported flags and subcommands, see the [kubectl](/docs/reference/generated/kubectl/kubectl-commands/) reference documentation. For installation instructions see [installing kubectl](/docs/tasks/kubectl/install/). @@ -343,9 +343,6 @@ kubectl delete -f pod.yaml # Delete all the pods and services that have the label name=. kubectl delete pods,services -l name= -# Delete all the pods and services that have the label name=. -kubectl delete pods,services -l name= - # Delete all pods, including uninitialized ones. kubectl delete pods --all ``` diff --git a/content/en/docs/reference/using-api/api-concepts.md b/content/en/docs/reference/using-api/api-concepts.md index c6dd77ee5b0b5..8b82d7da260f6 100644 --- a/content/en/docs/reference/using-api/api-concepts.md +++ b/content/en/docs/reference/using-api/api-concepts.md @@ -334,16 +334,16 @@ are not vulnerable to ordering changes in the list. Once the last finalizer is removed, the resource is actually removed from etcd. -## Dry run +## Dry-run -{{< feature-state for_k8s_version="v1.13" state="beta" >}} In version 1.13, the dry run beta feature is enabled by default. The modifying verbs (`POST`, `PUT`, `PATCH`, and `DELETE`) can accept requests in a dry run mode. Dry run mode helps to evaluate a request through the typical request stages (admission chain, validation, merge conflicts) up until persisting objects to storage. The response body for the request is as close as possible to a non dry run response. The system guarantees that dry run requests will not be persisted in storage or have any other side effects. +{{< feature-state for_k8s_version="v1.13" state="beta" >}} In version 1.13, the dry-run beta feature is enabled by default. The modifying verbs (`POST`, `PUT`, `PATCH`, and `DELETE`) can accept requests in a dry-run mode. DryRun mode helps to evaluate a request through the typical request stages (admission chain, validation, merge conflicts) up until persisting objects to storage. The response body for the request is as close as possible to a non-dry-run response. The system guarantees that dry-run requests will not be persisted in storage or have any other side effects. -### Make a dry run request +### Make a dry-run request -Dry run is triggered by setting the `dryRun` query parameter. This parameter is a string, working as an enum, and in 1.13 the only accepted values are: +Dry-run is triggered by setting the `dryRun` query parameter. This parameter is a string, working as an enum, and in 1.13 the only accepted values are: -* `All`: Every stage runs as normal, except for the final storage stage. 
Admission controllers are run to check that the request is valid, mutating controllers mutate the request, merge is performed on `PATCH`, fields are defaulted, and schema validation occurs. The changes are not persisted to the underlying storage, but the final object which would have been persisted is still returned to the user, along with the normal status code. If the request would trigger an admission controller which would have side effects, the request will be failed rather than risk an unwanted side effect. All built in admission control plugins support dry run. Additionally, admission webhooks can declare in their [configuration object](/docs/reference/generated/kubernetes-api/v1.13/#webhook-v1beta1-admissionregistration-k8s-io) that they do not have side effects by setting the sideEffects field to "None". If a webhook actually does have side effects, then the sideEffects field should be set to "NoneOnDryRun", and the webhook should also be modified to understand the `DryRun` field in AdmissionReview, and prevent side effects on dry run requests. +* `All`: Every stage runs as normal, except for the final storage stage. Admission controllers are run to check that the request is valid, mutating controllers mutate the request, merge is performed on `PATCH`, fields are defaulted, and schema validation occurs. The changes are not persisted to the underlying storage, but the final object which would have been persisted is still returned to the user, along with the normal status code. If the request would trigger an admission controller which would have side effects, the request will be failed rather than risk an unwanted side effect. All built in admission control plugins support dry-run. Additionally, admission webhooks can declare in their [configuration object](/docs/reference/generated/kubernetes-api/v1.13/#webhook-v1beta1-admissionregistration-k8s-io) that they do not have side effects by setting the sideEffects field to "None". If a webhook actually does have side effects, then the sideEffects field should be set to "NoneOnDryRun", and the webhook should also be modified to understand the `DryRun` field in AdmissionReview, and prevent side effects on dry-run requests. * Leave the value empty, which is also the default: Keep the default modifying behavior. For example: @@ -352,12 +352,28 @@ For example: Content-Type: application/json Accept: application/json -The response would look the same as for non dry run request, but the values of some generated fields may differ. +The response would look the same as for non-dry-run request, but the values of some generated fields may differ. +### Dry-run authorization + +Authorization for dry-run and non-dry-run requests is identical. Thus, to make +a dry-run request, the user must be authorized to make the non-dry-run request. + +For example, to run a dry-run `PATCH` for Deployments, you must have the +`PATCH` permission for Deployments, as in the example of the RBAC rule below. + +```yaml +rules: +- apiGroups: ["extensions", "apps"] + resources: ["deployments"] + verbs: ["patch"] +``` + +See [Authorization Overview](/docs/reference/access-authn-authz/authorization/). ### Generated values -Some values of an object are typically generated before the object is persisted. It is important not to rely upon the values of these fields set by a dry run request, since these values will likely be different in dry run mode from when the real request is made. 
Some of these fields are: +Some values of an object are typically generated before the object is persisted. It is important not to rely upon the values of these fields set by a dry-run request, since these values will likely be different in dry-run mode from when the real request is made. Some of these fields are: * `name`: if `generateName` is set, `name` will have a unique random name * `creationTimestamp`/`deletionTimestamp`: records the time of creation/deletion @@ -557,14 +573,22 @@ more information about how an object's schema is used to make decisions when merging, see [sigs.k8s.io/structured-merge-diff](https://sigs.k8s.io/structured-merge-diff). +A number of markers were added in Kubernetes 1.16 and 1.17, to allow API developers to describe the merge strategy supported by lists, maps, and structs. These markers can be applied to objects of the respective type, in Go files or OpenAPI specs. + +| Golang marker | OpenAPI extension | Accepted values | Description | Introduced in | +|---|---|---|---|---| +| `//+listType` | `x-kubernetes-list-type` | `atomic`/`set`/`map` | Applicable to lists. `atomic` and `set` apply to lists with scalar elements only. `map` applies to lists of nested types only. If configured as `atomic`, the entire list is replaced during merge; a single manager manages the list as a whole at any one time. If `granular`, different managers can manage entries separately. | 1.16 | +| `//+listMapKeys` | `x-kubernetes-list-map-keys` | Slice of map keys that uniquely identify entries e.g. `["port", "protocol"]` | Only applicable when `+listType=map`. A slice of strings whose values in combination must uniquely identify list entries. | 1.16 | +| `//+mapType` | `x-kubernetes-map-type` | `atomic`/`granular` | Applicable to maps. `atomic` means that the map can only be entirely replaced by a single manager. `granular` means that the map supports separate managers updating individual fields. | 1.17 | +| `//+structType` | `x-kubernetes-map-type` | `atomic`/`granular` | Applicable to structs; otherwise same usage and OpenAPI annotation as `//+mapType`.| 1.17 | + ### Custom Resources By default, Server Side Apply treats custom resources as unstructured data. All keys are treated the same as struct fields, and all lists are considered atomic. -If the validation field is specified in the Custom Rseource Definition, it is +If the validation field is specified in the Custom Resource Definition, it is used when merging objects of this type. 
- ### Using Server-Side Apply in a controller As a developer of a controller, you can use server-side apply as a way to @@ -667,32 +691,33 @@ For get and list, the semantics of resource version are: **Get:** -| resourceVersion unset | resourceVersion="0" | resourceVersion="{non-zero version}" | -|-----------------------|---------------------|--------------------------------------| -| Most Recent | Any | Not older than | +| resourceVersion unset | resourceVersion is `0` | resourceVersion is set but not `0` | +|-----------------------|------------------------|------------------------------------| +| Most Recent | Any | Not older than | **List:** -| paging | resourceVersion unset | resourceVersion="0" | resourceVersion="{non-zero version}" | -|-----------|-----------------------|---------------------|--------------------------------------| -| no limit | Most Recent | Any | Not older than | -| limit="n" | Most Recent | Any | Exact | - +| paging | resourceVersion unset | resourceVersion="0" | resourceVersion="{value other than 0}" | +|-------------------------------|-----------------------|------------------------------------------------|----------------------------------------| +| limit unset | Most Recent | Any | Not older than | +| limit="n", continue unset | Most Recent | Any | Exact | +| limit="n", continue="" | Continue Token, Exact | Invalid, but treated as Continue Token, Exact | Invalid, HTTP `400 Bad Request` | The meaning of the get and list semantics are: - **Most Recent:** Return data at the most recent resource version. The returned data must be consistent (i.e. served from etcd via a quorum read). -- **Any:** Return data at any resource version. The newest available resource version is preferred, but strong consistency is not required; data at any resource version may be served. It is possible for the request to return data at a much older resource version that the client has previously observed, particularly in high availability configurations, due to partitions or stale caches. Clients that cannot tolerate this should not use this semantic. -- **Not older than:** Return data at least as new as the provided resource version. The newest available resource version is preferred, but any data not older than this resource version may be served. +- **Any:** Return data at any resource version. The newest available resource version is preferred, but strong consistency is not required; data at any resource version may be served. It is possible for the request to return data at a much older resource version that the client has previously observed, particularly in high availabiliy configurations, due to partitions or stale caches. Clients that cannot tolerate this should not use this semantic. +- **Not older than:** Return data at least as new as the provided resource version. The newest available data is preferred, but any data not older than this resource version may be served. Note that this ensures only that the objects returned are no older than they were at the time of the provided resource version. The resource version in the `ObjectMeta` of individual object may be older than the provide resource version so long it is for the latest modification to the object at the time of the provided resource version. - **Exact:** Return data at the exact resource version provided. +- **Continue Token, Exact:** Return data at the resource version of the initial paginated list call. 
The returned Continue Tokens are responsible for keeping track of the initially provided resource version for all paginated list calls after the initial paginated list call.

 For watch, the semantics of resource version are:

 **Watch:**

-| resourceVersion unset               | resourceVersion="0"        | resourceVersion="{non-zero version}" |
-|-------------------------------------|----------------------------|--------------------------------------|
-| Get State and Start at Most Recent  | Get State and Start at Any | Start at Exact                       |
+| resourceVersion unset               | resourceVersion="0"        | resourceVersion="{value other than 0}" |
+|-------------------------------------|----------------------------|----------------------------------------|
+| Get State and Start at Most Recent  | Get State and Start at Any | Start at Exact                         |

 The meaning of the watch semantics are:

@@ -704,4 +729,8 @@ The meaning of the watch semantics are:

 Servers are not required to serve all older resource versions and may return a HTTP `410 (Gone)` status code if a client requests a resourceVersion older than the server has retained. Clients must be able to tolerate `410 (Gone)` responses. See [Efficient detection of changes](#efficient-detection-of-changes) for details on how to handle `410 (Gone)` responses when watching resources.

-For example, the kube-apiserver periodically compacts old resource versions from etcd based on its `--etcd-compaction-interval` setting. Also, the kube-apiserver's watch cache keeps `--watch-cache-sizes` resource versions in each resource cache. It depends on if a request is served from cache on which one of these limits applies, but if a resource version is unavailable in the one that applies, a `410 (Gone)` will be returned by the kube-apiserver.
+If you request a resourceVersion outside the applicable limit then, depending on whether a request is served from cache or not, the API server may reply with a `410 Gone` HTTP response.
+
+### Unavailable resource versions
+
+Servers are not required to serve unrecognized resource versions. List and Get requests for unrecognized resource versions may wait briefly for the resource version to become available, should time out with a `504 (Gateway Timeout)` if the provided resource version does not become available in a reasonable amount of time, and may respond with a `Retry-After` response header indicating how many seconds a client should wait before retrying the request. Currently the kube-apiserver also identifies these responses with a "Too large resource version" message. Watch requests for an unrecognized resource version may wait indefinitely (until the request timeout) for the resource version to become available.
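As a rough illustration of the list semantics above, the sketch below lists Pods through `kubectl proxy` with `resourceVersion=0`, which corresponds to the "Any" column: the API server may answer from its cache with potentially stale data. The port, namespace, and limit are arbitrary choices.

```shell
# Start a local proxy to the API server (assumes a working kubeconfig).
kubectl proxy --port=8001 &
# resourceVersion=0 asks for data at any resource version; the freshest cached
# data is preferred, but the response is not guaranteed to be consistent.
curl 'http://localhost:8001/api/v1/namespaces/default/pods?resourceVersion=0&limit=500'
```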
diff --git a/content/en/docs/setup/_index.md b/content/en/docs/setup/_index.md index c640a01890918..28ed6e1fcfac0 100644 --- a/content/en/docs/setup/_index.md +++ b/content/en/docs/setup/_index.md @@ -41,7 +41,7 @@ If you're learning Kubernetes, use the Docker-based solutions: tools supported b |Community |Ecosystem | | ------------ | -------- | | [Minikube](/docs/setup/learning-environment/minikube/) | [CDK on LXD](https://www.ubuntu.com/kubernetes/docs/install-local) | -| [kind (Kubernetes IN Docker)](https://github.com/kubernetes-sigs/kind) | [Docker Desktop](https://www.docker.com/products/docker-desktop)| +| [kind (Kubernetes IN Docker)](/docs/setup/learning-environment/kind/) | [Docker Desktop](https://www.docker.com/products/docker-desktop)| | | [Minishift](https://docs.okd.io/latest/minishift/)| | | [MicroK8s](https://microk8s.io/)| | | [IBM Cloud Private-CE (Community Edition)](https://github.com/IBM/deploy-ibm-cloud-private) | @@ -83,7 +83,7 @@ The following production environment solutions table lists the providers and the | [Gardener](https://gardener.cloud/) | ✔ | ✔ | ✔ | ✔ | ✔ | [Custom Extensions](https://github.com/gardener/gardener/blob/master/docs/extensions/overview.md) | | [Giant Swarm](https://www.giantswarm.io/) | ✔ | ✔ | ✔ | | | [Google](https://cloud.google.com/) | [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine/) | [Google Compute Engine (GCE)](https://cloud.google.com/compute/)|[GKE On-Prem](https://cloud.google.com/gke-on-prem/) | | | | | | | | -| [Hidora](https:/hidora.com/) | ✔ | ✔| ✔ | | | | | | | | +| [Hidora](https://hidora.com/) | ✔ | ✔| ✔ | | | | | | | | | [IBM](https://www.ibm.com/in-en/cloud) | [IBM Cloud Kubernetes Service](https://cloud.ibm.com/kubernetes/catalog/cluster)| |[IBM Cloud Private](https://www.ibm.com/in-en/cloud/private) | | | [Ionos](https://www.ionos.com/enterprise-cloud) | [Ionos Managed Kubernetes](https://www.ionos.com/enterprise-cloud/managed-kubernetes) | [Ionos Enterprise Cloud](https://www.ionos.com/enterprise-cloud) | | | [Kontena Pharos](https://www.kontena.io/pharos/) | |✔| ✔ | | | diff --git a/content/en/docs/setup/best-practices/certificates.md b/content/en/docs/setup/best-practices/certificates.md index 11113050685aa..6169b3f87266a 100644 --- a/content/en/docs/setup/best-practices/certificates.md +++ b/content/en/docs/setup/best-practices/certificates.md @@ -92,7 +92,7 @@ Hosts/SAN listed above are the recommended ones for getting a working cluster; i For kubeadm users only: * The scenario where you are copying to your cluster CA certificates without private keys is referred as external CA in the kubeadm documentation. -* If you are comparing the above list with a kubeadm geneerated PKI, please be aware that `kube-etcd`, `kube-etcd-peer` and `kube-etcd-healthcheck-client` certificates +* If you are comparing the above list with a kubeadm generated PKI, please be aware that `kube-etcd`, `kube-etcd-peer` and `kube-etcd-healthcheck-client` certificates are not generated in case of external etcd. {{< /note >}} diff --git a/content/en/docs/setup/learning-environment/kind.md b/content/en/docs/setup/learning-environment/kind.md new file mode 100644 index 0000000000000..e476d220d0ec0 --- /dev/null +++ b/content/en/docs/setup/learning-environment/kind.md @@ -0,0 +1,23 @@ +--- +title: Installing Kubernetes with Kind +weight: 40 +content_template: templates/concept +--- + +{{% capture overview %}} + +Kind is a tool for running local Kubernetes clusters using Docker container "nodes". 
+ +{{% /capture %}} + +{{% capture body %}} + +## Installation + +See [Installing Kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +{{% /capture %}} + + + + diff --git a/content/en/docs/setup/learning-environment/minikube.md b/content/en/docs/setup/learning-environment/minikube.md index 3135a6af30cd8..7233f98c0b72a 100644 --- a/content/en/docs/setup/learning-environment/minikube.md +++ b/content/en/docs/setup/learning-environment/minikube.md @@ -205,7 +205,11 @@ plugins. * hyperv ([driver installation](https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#hyperv-driver)) Note that the IP below is dynamic and can change. It can be retrieved with `minikube ip`. * vmware ([driver installation](https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#vmware-unified-driver)) (VMware unified driver) -* none (Runs the Kubernetes components on the host and not in a VM. It is not recommended to run the none driver on personal workstations. Using this driver requires Docker ([docker install](https://docs.docker.com/install/linux/docker-ce/ubuntu/)) and a Linux environment) +* none (Runs the Kubernetes components on the host and not in a virtual machine. You need to be running Linux and to have {{< glossary_tooltip term_id="docker" >}} installed.) + +{{< caution >}} +If you use the `none` driver, some Kubernetes components run as privileged containers that have side effects outside of the Minikube environment. Those side effects mean that the `none` driver is not recommended for personal workstations. +{{< /caution >}} #### Starting a cluster on alternative container runtimes You can start Minikube on the following container runtimes. @@ -263,11 +267,7 @@ When using a single VM for Kubernetes, it's useful to reuse Minikube's built-in Be sure to tag your Docker image with something other than latest and use that tag to pull the image. Because `:latest` is the default value, with a corresponding default image pull policy of `Always`, an image pull error (`ErrImagePull`) eventually results if you do not have the Docker image in the default Docker registry (usually DockerHub). {{< /note >}} -To work with the Docker daemon on your Mac/Linux host, use the `docker-env command` in your shell: - -```shell -eval $(minikube docker-env) -``` +To work with the Docker daemon on your Mac/Linux host, run the last line from `minikube docker-env`. You can now use Docker at the command line of your host Mac/Linux machine to communicate with the Docker daemon inside the Minikube VM: diff --git a/content/en/docs/setup/production-environment/container-runtimes.md b/content/en/docs/setup/production-environment/container-runtimes.md index 853256bfb6c02..704a2728c5e86 100644 --- a/content/en/docs/setup/production-environment/container-runtimes.md +++ b/content/en/docs/setup/production-environment/container-runtimes.md @@ -74,8 +74,8 @@ Use the following commands to install Docker on your system: # Install Docker CE ## Set up the repository: ### Install packages to allow apt to use a repository over HTTPS -apt-get update && apt-get install \ - apt-transport-https ca-certificates curl software-properties-common +apt-get update && apt-get install -y \ + apt-transport-https ca-certificates curl software-properties-common gnupg2 ### Add Docker’s official GPG key curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - @@ -87,7 +87,7 @@ add-apt-repository \ stable" ## Install Docker CE. 
-apt-get update && apt-get install \ +apt-get update && apt-get install -y \ containerd.io=1.2.10-3 \ docker-ce=5:19.03.4~3-0~ubuntu-$(lsb_release -cs) \ docker-ce-cli=5:19.03.4~3-0~ubuntu-$(lsb_release -cs) @@ -115,14 +115,14 @@ systemctl restart docker # Install Docker CE ## Set up the repository ### Install required packages. -yum install yum-utils device-mapper-persistent-data lvm2 +yum install -y yum-utils device-mapper-persistent-data lvm2 ### Add Docker repository. yum-config-manager --add-repo \ https://download.docker.com/linux/centos/docker-ce.repo ## Install Docker CE. -yum update && yum install \ +yum update -y && yum install -y \ containerd.io-1.2.10 \ docker-ce-19.03.4 \ docker-ce-cli-19.03.4 @@ -183,13 +183,13 @@ sysctl --system # Install prerequisites apt-get update -apt-get install software-properties-common +apt-get install -y software-properties-common add-apt-repository ppa:projectatomic/ppa apt-get update # Install CRI-O -apt-get install cri-o-1.15 +apt-get install -y cri-o-1.15 {{< /tab >}} {{< tab name="CentOS/RHEL 7.4+" codelang="bash" >}} @@ -198,7 +198,7 @@ apt-get install cri-o-1.15 yum-config-manager --add-repo=https://cbs.centos.org/repos/paas7-crio-115-release/x86_64/os/ # Install CRI-O -yum install --nogpgcheck cri-o +yum install --nogpgcheck -y cri-o {{< /tab >}} {{< /tabs >}} @@ -272,7 +272,7 @@ systemctl restart containerd # Install containerd ## Set up the repository ### Install required packages -yum install yum-utils device-mapper-persistent-data lvm2 +yum install -y yum-utils device-mapper-persistent-data lvm2 ### Add docker repository yum-config-manager \ @@ -280,7 +280,7 @@ yum-config-manager \ https://download.docker.com/linux/centos/docker-ce.repo ## Install containerd -yum update && yum install containerd.io +yum update -y && yum install -y containerd.io # Configure containerd mkdir -p /etc/containerd diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md index 46290693fcde3..bfb26fa1ca77e 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md @@ -269,8 +269,7 @@ kubeadm only supports Container Network Interface (CNI) based networks (and does Several projects provide Kubernetes Pod networks using CNI, some of which also support [Network Policy](/docs/concepts/services-networking/networkpolicies/). See the [add-ons page](/docs/concepts/cluster-administration/addons/) for a complete list of available network add-ons. -- IPv6 support was added in [CNI v0.6.0](https://github.com/containernetworking/cni/releases/tag/v0.6.0). -- [CNI bridge](https://github.com/containernetworking/plugins/blob/master/plugins/main/bridge/README.md) and [local-ipam](https://github.com/containernetworking/plugins/blob/master/plugins/ipam/host-local/README.md) are the only supported IPv6 network plugins in Kubernetes version 1.9. +- IPv6 support was added in [CNI v0.6.0](https://github.com/containernetworking/cni/releases/tag/v0.6.0). See each plugin's documentation to see if it supports IPv6. Note that kubeadm sets up a more secure cluster by default and enforces use of [RBAC](/docs/reference/access-authn-authz/rbac/). Make sure that your network manifest supports RBAC. 
@@ -290,12 +289,12 @@ Below you can find installation instructions for some popular Pod network plugin {{< tabs name="tabs-pod-install" >}} {{% tab name="Calico" %}} -For more information about using Calico, see [Quickstart for Calico on Kubernetes](https://docs.projectcalico.org/latest/getting-started/kubernetes/), [Installing Calico for policy and networking](https://docs.projectcalico.org/latest/getting-started/kubernetes/installation/calico), and other related resources. +[Calico](https://docs.projectcalico.org/latest/introduction/) is a networking and network policy provider. Calico supports a flexible set of networking options so you can choose the most efficient option for your situation, including non-overlay and overlay networks, with or without BGP. Calico uses the same engine to enforce network policy for hosts, pods, and (if using Istio & Envoy) applications at the service mesh layer. Calico works on several architectures, including `amd64`, `arm64`, and `ppc64le`. -For Calico to work correctly, you need to pass `--pod-network-cidr=192.168.0.0/16` to `kubeadm init` or update the `calico.yml` file to match your Pod network. Note that Calico works on `amd64`, `arm64`, and `ppc64le` only. +By default, Calico uses `192.168.0.0/16` as the Pod network CIDR, though this can be configured in the calico.yaml file. For Calico to work correctly, you need to pass this same CIDR to the kubeadm init command using the `--pod-network-cidr=192.168.0.0/16` flag or via the kubeadm configuration. ```shell -kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml +kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml ``` {{% /tab %}} diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md index ffa8229b6ed00..b768e13323dc0 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md @@ -140,7 +140,7 @@ If the container runtime of choice is Docker, it is used through the built-in Other CRI-based runtimes include: -- [containerd](https://github.com/containerd/cri) (CRI plugin built into containerd) +- [containerd/cri](https://github.com/containerd/cri) (CRI plugin built into containerd) - [cri-o](https://cri-o.io/) - [frakti](https://github.com/kubernetes/frakti) diff --git a/content/en/docs/setup/production-environment/tools/kubespray.md b/content/en/docs/setup/production-environment/tools/kubespray.md index 490cfef8c6e48..8187d5eac46ac 100644 --- a/content/en/docs/setup/production-environment/tools/kubespray.md +++ b/content/en/docs/setup/production-environment/tools/kubespray.md @@ -62,9 +62,9 @@ Kubespray provides the ability to customize many aspects of the deployment: * Component versions * Calico route reflectors * Component runtime options - * docker - * rkt - * cri-o + * {{< glossary_tooltip term_id="docker" >}} + * {{< glossary_tooltip term_id="rkt" >}} + * {{< glossary_tooltip term_id="cri-o" >}} * Certificate generation methods (**Vault being discontinued**) Kubespray customizations can be made to a [variable file](http://docs.ansible.com/ansible/playbooks_variables.html). If you are just getting started with Kubespray, consider using the Kubespray defaults to deploy your cluster and explore Kubernetes. 
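Pulling the Calico steps above together, here is a minimal sketch of the commands involved; it assumes a host that is already prepared for `kubeadm init` and uses the v3.11 manifest URL quoted above.

```shell
# Initialize the control plane with the Pod CIDR that Calico uses by default.
kubeadm init --pod-network-cidr=192.168.0.0/16
# After copying /etc/kubernetes/admin.conf to your kubeconfig (as kubeadm init
# instructs), install Calico and check that its pods and CoreDNS reach Running.
kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml
kubectl get pods --all-namespaces
```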
diff --git a/content/en/docs/setup/release/notes.md b/content/en/docs/setup/release/notes.md index 60d49c0f3ac23..c1ad709781b91 100644 --- a/content/en/docs/setup/release/notes.md +++ b/content/en/docs/setup/release/notes.md @@ -105,7 +105,7 @@ The Kubernetes in-tree storage plugin to Container Storage Interface (CSI) migra #### Storage -- All nodes need to be drained before upgrading Kubernetes cluster, because paths used for block volumes are changed in this release, so on-line upgrade of nodes aren't allowed. ([#74026](https://github.com/kubernetes/kubernetes/pull/74026), [@mkimuram](https://github.com/mkimuram)) +- A node that uses a CSI raw block volume needs to be drained before kubelet can be upgraded to 1.17. ([#74026](https://github.com/kubernetes/kubernetes/pull/74026), [@mkimuram](https://github.com/mkimuram)) #### Windows diff --git a/content/en/docs/tasks/access-kubernetes-api/configure-aggregation-layer.md b/content/en/docs/tasks/access-kubernetes-api/configure-aggregation-layer.md index 2760933c0a9e0..ef5f904079f7a 100644 --- a/content/en/docs/tasks/access-kubernetes-api/configure-aggregation-layer.md +++ b/content/en/docs/tasks/access-kubernetes-api/configure-aggregation-layer.md @@ -207,7 +207,7 @@ The Kubernetes apiserver has two client CA options: Each of these functions independently and can conflict with each other, if not used correctly. -* `--client-ca-file`: When a request arrives to the Kubernetes apiserver, if this option is enabled, the Kubernetes apiserver checks the certificate of the request. If it is signed by one of the CA certificates in the file referenced by `--client-ca-file`, then the request is treated as a legitimate request, and the user is the value of the common name `CN=`, while the group is the organization `O=`. See the [documentaton on TLS authentication](/docs/reference/access-authn-authz/authentication/#x509-client-certs). +* `--client-ca-file`: When a request arrives to the Kubernetes apiserver, if this option is enabled, the Kubernetes apiserver checks the certificate of the request. If it is signed by one of the CA certificates in the file referenced by `--client-ca-file`, then the request is treated as a legitimate request, and the user is the value of the common name `CN=`, while the group is the organization `O=`. See the [documentation on TLS authentication](/docs/reference/access-authn-authz/authentication/#x509-client-certs). * `--requestheader-client-ca-file`: When a request arrives to the Kubernetes apiserver, if this option is enabled, the Kubernetes apiserver checks the certificate of the request. If it is signed by one of the CA certificates in the file reference by `--requestheader-client-ca-file`, then the request is treated as a potentially legitimate request. The Kubernetes apiserver then checks if the common name `CN=` is one of the names in the list provided by `--requestheader-allowed-names`. If the name is allowed, the request is approved; if it is not, the request is not. If _both_ `--client-ca-file` and `--requestheader-client-ca-file` are provided, then the request first checks the `--requestheader-client-ca-file` CA and then the `--client-ca-file`. Normally, different CAs, either root CAs or intermediate CAs, are used for each of these options; regular client requests match against `--client-ca-file`, while aggregation requests match against `--requestheader-client-ca-file`. 
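For illustration, the distinct-CA setup typically looks like this on the kube-apiserver command line (the file paths here are examples only, and the many other flags a real apiserver needs are omitted):

```shell
# Example values only; each file should point at a different CA.
kube-apiserver \
  --client-ca-file=/etc/kubernetes/pki/ca.crt \
  --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt \
  --requestheader-allowed-names=front-proxy-client
```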
However, if both use the _same_ CA, then client requests that normally would pass via `--client-ca-file` will fail, because the CA will match the CA in `--requestheader-client-ca-file`, but the common name `CN=` will **not** match one of the acceptable common names in `--requestheader-allowed-names`. This can cause your kubelets and other control plane components, as well as end-users, to be unable to authenticate to the Kubernetes apiserver. diff --git a/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning.md b/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning.md index 35d0e2bb60c39..f730fb36609af 100644 --- a/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning.md +++ b/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning.md @@ -5,6 +5,7 @@ reviewers: - liggitt content_template: templates/task weight: 30 +min-kubernetes-server-version: v1.16 --- {{% capture overview %}} @@ -16,11 +17,11 @@ level of your CustomResourceDefinitions or advance your API to a new version wit {{% capture prerequisites %}} -{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} +{{< include "task-tutorial-prereqs.md" >}} -* Make sure your Kubernetes cluster has a master version of 1.16.0 or higher for `apiextensions.k8s.io/v1`, or 1.11.0 or higher for `apiextensions.k8s.io/v1beta1`. +You should have a initial understanding of [custom resources](/docs/concepts/api-extension/custom-resources/). -* Read about [custom resources](/docs/concepts/api-extension/custom-resources/). +{{< version-check >}} {{% /capture %}} @@ -28,8 +29,6 @@ level of your CustomResourceDefinitions or advance your API to a new version wit ## Overview -{{< feature-state state="stable" for_kubernetes_version="1.16" >}} - The CustomResourceDefinition API provides a workflow for introducing and upgrading to new versions of a CustomResourceDefinition. diff --git a/content/en/docs/tasks/access-kubernetes-api/http-proxy-access-api.md b/content/en/docs/tasks/access-kubernetes-api/http-proxy-access-api.md index 62a2fe560396f..be282a29c1d1a 100644 --- a/content/en/docs/tasks/access-kubernetes-api/http-proxy-access-api.md +++ b/content/en/docs/tasks/access-kubernetes-api/http-proxy-access-api.md @@ -38,6 +38,8 @@ Get the API versions: curl http://localhost:8080/api/ +The output should look similar to this: + { "kind": "APIVersions", "versions": [ @@ -55,6 +57,8 @@ Get a list of pods: curl http://localhost:8080/api/v1/namespaces/default/pods +The output should look similar to this: + { "kind": "PodList", "apiVersion": "v1", diff --git a/content/en/docs/tasks/administer-cluster/change-pv-reclaim-policy.md b/content/en/docs/tasks/administer-cluster/change-pv-reclaim-policy.md index 97aef1769d514..a7ac4d80c918f 100644 --- a/content/en/docs/tasks/administer-cluster/change-pv-reclaim-policy.md +++ b/content/en/docs/tasks/administer-cluster/change-pv-reclaim-policy.md @@ -43,7 +43,7 @@ the corresponding `PersistentVolume` is not be deleted. Instead, it is moved to pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim3 manual 3s - This list also includes the name of the claims that are bound to each volume + This list also includes the name of the claims that are bound to each volume for easier identification of dynamically provisioned volumes. 1. 
Choose one of your PersistentVolumes and change its reclaim policy: @@ -54,6 +54,15 @@ the corresponding `PersistentVolume` is not be deleted. Instead, it is moved to where `` is the name of your chosen PersistentVolume. + {{< note >}} + On Windows, you must _double_ quote any JSONPath template that contains spaces (not single quote as shown above for bash). This in turn means that you must use a single quote or escaped double quote around any literals in the template. For example: + +```cmd +kubectl patch pv -p "{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}" +``` + + {{< /note >}} + 1. Verify that your chosen PersistentVolume has the right policy: ```shell diff --git a/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md b/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md index d576493dd30ad..73cecd999b5a6 100644 --- a/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md +++ b/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md @@ -215,8 +215,8 @@ etcd2 and etcd3 is as follows: message `etcd2 is no longer a supported storage backend` Before upgrading a v1.12.x kube-apiserver using `--storage-backend=etcd2` to -v1.13.x, etcd v2 data MUST by migrated to the v3 storage backend, and -kube-apiserver invocations changed to use `--storage-backend=etcd3`. +v1.13.x, etcd v2 data must be migrated to the v3 storage backend and +kube-apiserver invocations must be changed to use `--storage-backend=etcd3`. The process for migrating from etcd2 to etcd3 is highly dependent on how the etcd cluster was deployed and configured, as well as how the Kubernetes diff --git a/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md b/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md index 597b1cf737ca3..352a7093865aa 100644 --- a/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md +++ b/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md @@ -258,14 +258,7 @@ Kubernetes installs do not configure the nodes' `resolv.conf` files to use the cluster DNS by default, because that process is inherently distribution-specific. This should probably be implemented eventually. -Linux's libc is impossibly stuck ([see this bug from -2005](https://bugzilla.redhat.com/show_bug.cgi?id=168253)) with limits of just -3 DNS `nameserver` records and 6 DNS `search` records. Kubernetes needs to -consume 1 `nameserver` record and 3 `search` records. This means that if a -local installation already uses 3 `nameserver`s or uses more than 3 `search`es, -some of those settings will be lost. As a partial workaround, the node can run -`dnsmasq` which will provide more `nameserver` entries, but not more `search` -entries. You can also use kubelet's `--resolv-conf` flag. +Linux's libc (a.k.a. glibc) has a limit for the DNS `nameserver` records to 3 by default. What's more, for the glibc versions which are older than glic-2.17-222 ([the new versions update see this issue](https://access.redhat.com/solutions/58028)), the DNS `search` records has been limited to 6 ([see this bug from 2005](https://bugzilla.redhat.com/show_bug.cgi?id=168253)). Kubernetes needs to consume 1 `nameserver` record and 3 `search` records. This means that if a local installation already uses 3 `nameserver`s or uses more than 3 `search`es while your glibc versions in the affected list, some of those settings will be lost. 
For the workaround of the DNS `nameserver` records limit, the node can run `dnsmasq` which will provide more `nameserver` entries, you can also use kubelet's `--resolv-conf` flag. For fixing the DNS `search` records limit, consider upgrading your linux distribution or glibc version. If you are using Alpine version 3.3 or earlier as your base image, DNS may not work properly owing to a known issue with Alpine. diff --git a/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md b/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md index 2cddad2d72889..5d5dc98ade845 100644 --- a/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md +++ b/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md @@ -97,7 +97,7 @@ kubectl apply -f dns-horizontal-autoscaler.yaml The output of a successful command is: - deployment.apps/kube-dns-autoscaler created + deployment.apps/dns-autoscaler created DNS horizontal autoscaling is now enabled. diff --git a/content/en/docs/tasks/administer-cluster/enabling-service-topology.md b/content/en/docs/tasks/administer-cluster/enabling-service-topology.md new file mode 100644 index 0000000000000..c39b9b366de81 --- /dev/null +++ b/content/en/docs/tasks/administer-cluster/enabling-service-topology.md @@ -0,0 +1,54 @@ +--- +reviewers: +- andrewsykim +- johnbelamaric +- imroc +title: Enabling Service Topology +content_template: templates/task +--- + +{{% capture overview %}} +This page provides an overview of enabling Service Topology in Kubernetes. +{{% /capture %}} + + +{{% capture prerequisites %}} + {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} +{{% /capture %}} + +{{% capture steps %}} + +## Introduction + +_Service Topology_ enables a service to route traffic based upon the Node +topology of the cluster. For example, a service can specify that traffic be +preferentially routed to endpoints that are on the same Node as the client, or +in the same availability zone. + +## Prerequisites + +The following prerequisites are needed in order to enable topology aware service +routing: + + * Kubernetes 1.17 or later + * {{< glossary_tooltip text="Kube-proxy" term_id="kube-proxy" >}} running in iptables mode or IPVS mode + * Enable [Endpoint Slices](/docs/concepts/services-networking/endpoint-slices/) + +## Enable Service Topology + +{{< feature-state for_k8s_version="v1.17" state="alpha" >}} + +To enable service topology, enable the `ServiceTopology` and `EndpointSlice` feature gate for all Kubernetes components: + +``` +--feature-gates="ServiceTopology=true,EndpointSlice=true" +``` + + +{{% capture whatsnext %}} + +* Read about the [Service Topology](/docs/concepts/services-networking/service-topology) concept +* Read about [Endpoint Slices](/docs/concepts/services-networking/endpoint-slices) +* Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/) + +{{% /capture %}} diff --git a/content/en/docs/tasks/administer-cluster/encrypt-data.md b/content/en/docs/tasks/administer-cluster/encrypt-data.md index 988c5b630e5ba..920fe19197930 100644 --- a/content/en/docs/tasks/administer-cluster/encrypt-data.md +++ b/content/en/docs/tasks/administer-cluster/encrypt-data.md @@ -3,6 +3,7 @@ reviewers: - smarterclayton title: Encrypting Secret Data at Rest content_template: templates/task +min-kubernetes-server-version: 1.13 --- {{% capture overview %}} @@ -13,9 +14,7 @@ This page shows how to enable and configure encryption of secret data at rest. 
* {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -* Kubernetes version 1.13.0 or later is required - -* etcd v3 or later is required +* etcd v3.0 or later is required {{% /capture %}} @@ -27,9 +26,6 @@ The `kube-apiserver` process accepts an argument `--encryption-provider-config` that controls how API data is encrypted in etcd. An example configuration is provided below. -Note: -The alpha version of the encryption feature prior to 1.13 used the `--experimental-encryption-provider-config` flag. - ## Understanding the encryption at rest configuration. ```yaml @@ -69,10 +65,6 @@ resources from storage each provider that matches the stored data attempts to de order. If no provider can read the stored data due to a mismatch in format or secret key, an error is returned which prevents clients from accessing that resource. -Note: -The alpha version of the encryption feature prior to 1.13 required to be configured with -`kind: EncryptionConfig` and `apiVersion: v1`. - {{< caution >}} **IMPORTANT:** If any resource is not readable via the encryption config (because keys were changed), the only recourse is to delete that key from the underlying etcd directly. Calls that attempt to @@ -81,11 +73,12 @@ read that resource will fail until it is deleted or a valid decryption key is pr ### Providers: +{{< table caption="Providers for Kubernetes encryption at rest" >}} Name | Encryption | Strength | Speed | Key Length | Other Considerations -----|------------|----------|-------|------------|--------------------- `identity` | None | N/A | N/A | N/A | Resources written as-is without encryption. When set as the first provider, the resource will be decrypted as new values are written. `aescbc` | AES-CBC with PKCS#7 padding | Strongest | Fast | 32-byte | The recommended choice for encryption at rest but may be slightly slower than `secretbox`. -`secretbox` | XSalsa20 and Poly1305 | Strong | Faster | 32-byte | A newer standard and may not be considered acceptable in environments that require high levels of review. +`secretbox` | XSalsa20 and Poly1305 | Strong | Faster | 32-byte | A newer standard and may not be considered acceptable in environments that require high levels of review. `aesgcm` | AES-GCM with random nonce | Must be rotated every 200k writes | Fastest | 16, 24, or 32-byte | Is not recommended for use except when an automated key rotation scheme is implemented. `kms` | Uses envelope encryption scheme: Data is encrypted by data encryption keys (DEKs) using AES-CBC with PKCS#7 padding, DEKs are encrypted by key encryption keys (KEKs) according to configuration in Key Management Service (KMS) | Strongest | Fast | 32-bytes | The recommended choice for using a third party tool for key management. Simplifies key rotation, with a new DEK generated for each encryption, and KEK rotation controlled by the user. [Configure the KMS provider](/docs/tasks/administer-cluster/kms-provider/) @@ -95,12 +88,12 @@ is the first provider, the first key is used for encryption. __Storing the raw encryption key in the EncryptionConfig only moderately improves your security posture, compared to no encryption. Please use `kms` provider for additional security.__ By default, the `identity` provider is used to protect secrets in etcd, which provides no encryption. `EncryptionConfiguration` was introduced to encrypt secrets locally, with a locally managed key. + Encrypting secrets with a locally managed key protects against an etcd compromise, but it fails to protect against a host compromise. 
Since the encryption keys are stored on the host in the EncryptionConfig YAML file, a skilled attacker can access that file and -extract the encryption keys. This was a stepping stone in development to the `kms` provider, introduced in 1.10, and beta since 1.12. Envelope encryption -creates dependence on a separate key, not stored in Kubernetes. In this case, an attacker would need to compromise etcd, the -kubeapi-server, and the third-party KMS provider to retrieve the plaintext values, providing a higher level of security than -locally-stored encryption keys. +extract the encryption keys. + +Envelope encryption creates dependence on a separate key, not stored in Kubernetes. In this case, an attacker would need to compromise etcd, the kubeapi-server, and the third-party KMS provider to retrieve the plaintext values, providing a higher level of security than locally-stored encryption keys. ## Encrypting your data @@ -137,7 +130,7 @@ Your config file contains keys that can decrypt content in etcd, so you must pro {{< /caution >}} -## Verifying that data is encrypted +## Verifying that data is encrypted Data is encrypted when written to etcd. After restarting your `kube-apiserver`, any newly created or updated secret should be encrypted when stored. To check, you can use the `etcdctl` command line @@ -217,5 +210,3 @@ and restart all `kube-apiserver` processes. Then run the command `kubectl get se to force all secrets to be decrypted. {{% /capture %}} - - diff --git a/content/en/docs/tasks/administer-cluster/reconfigure-kubelet.md b/content/en/docs/tasks/administer-cluster/reconfigure-kubelet.md index 9c81f3b488d7d..83bb8f337992e 100644 --- a/content/en/docs/tasks/administer-cluster/reconfigure-kubelet.md +++ b/content/en/docs/tasks/administer-cluster/reconfigure-kubelet.md @@ -4,17 +4,20 @@ reviewers: - dawnchen title: Reconfigure a Node's Kubelet in a Live Cluster content_template: templates/task +min-kubernetes-server-version: v1.11 --- {{% capture overview %}} {{< feature-state for_k8s_version="v1.11" state="beta" >}} [Dynamic Kubelet Configuration](https://github.com/kubernetes/enhancements/issues/281) -allows you to change the configuration of each Kubelet in a live Kubernetes -cluster by deploying a ConfigMap and configuring each Node to use it. +allows you to change the configuration of each +{{< glossary_tooltip text="kubelet" term_id="kubelet" >}} in a running Kubernetes cluster, +by deploying a {{< glossary_tooltip text="ConfigMap" term_id="configmap" >}} and configuring +each {{< glossary_tooltip term_id="node" >}} to use it. {{< warning >}} -All Kubelet configuration parameters can be changed dynamically, +All kubelet configuration parameters can be changed dynamically, but this is unsafe for some parameters. Before deciding to change a parameter dynamically, you need a strong understanding of how that change will affect your cluster's behavior. Always carefully test configuration changes on a small set @@ -25,38 +28,49 @@ fields is available in the inline `KubeletConfiguration` {{% /capture %}} {{% capture prerequisites %}} -- Kubernetes v1.11 or higher on both the Master and the Nodes -- kubectl v1.11 or higher, configured to communicate with the cluster -- The Kubelet's `--dynamic-config-dir` flag must be set to a writable - directory on the Node. +You need to have a Kubernetes cluster. +You also need kubectl v1.11 or higher, configured to communicate with your cluster. 
+{{< version-check >}} +Your cluster API server version (eg v1.12) must be no more than one minor +version away from the version of kubectl that you are using. For example, +if your cluster is running v1.16 then you can use kubectl v1.15, v1.16 +or v1.17; other combinations +[aren't supported](/docs/setup/release/version-skew-policy/#kubectl). + +Some of the examples use the commandline tool +[jq](https://stedolan.github.io/jq/). You do not need `jq` to complete the task, +because there are manual alternatives. + +For each node that you're reconfiguring, you must set the kubelet +`--dynamic-config-dir` flag to a writable directory. {{% /capture %}} {{% capture steps %}} -## Reconfiguring the Kubelet on a Live Node in your Cluster +## Reconfiguring the kubelet on a running node in your cluster -### Basic Workflow Overview +### Basic workflow overview -The basic workflow for configuring a Kubelet in a live cluster is as follows: +The basic workflow for configuring a kubelet in a live cluster is as follows: 1. Write a YAML or JSON configuration file containing the -Kubelet's configuration. +kubelet's configuration. 2. Wrap this file in a ConfigMap and save it to the Kubernetes control plane. -3. Update the Kubelet's corresponding Node object to use this ConfigMap. +3. Update the kubelet's corresponding Node object to use this ConfigMap. -Each Kubelet watches a configuration reference on its respective Node object. -When this reference changes, the Kubelet downloads the new configuration, +Each kubelet watches a configuration reference on its respective Node object. +When this reference changes, the kubelet downloads the new configuration, updates a local reference to refer to the file, and exits. For the feature to work correctly, you must be running an OS-level service -manager (such as systemd), which will restart the Kubelet if it exits. When the -Kubelet is restarted, it will begin using the new configuration. +manager (such as systemd), which will restart the kubelet if it exits. When the +kubelet is restarted, it will begin using the new configuration. The new configuration completely overrides configuration provided by `--config`, and is overridden by command-line flags. Unspecified values in the new configuration will receive default values appropriate to the configuration version (e.g. `kubelet.config.k8s.io/v1beta1`), unless overridden by flags. -The status of the Node's Kubelet configuration is reported via +The status of the Node's kubelet configuration is reported via `Node.Spec.Status.Config`. Once you have updated a Node to use the new ConfigMap, you can observe this status to confirm that the Node is using the intended configuration. @@ -70,7 +84,7 @@ mind that it is also valid for multiple Nodes to consume the same ConfigMap. {{< warning >}} While it is *possible* to change the configuration by -updating the ConfigMap in-place, this causes all Kubelets configured with +updating the ConfigMap in-place, this causes all kubelets configured with that ConfigMap to update simultaneously. It is much safer to treat ConfigMaps as immutable by convention, aided by `kubectl`'s `--append-hash` option, and incrementally roll out updates to `Node.Spec.ConfigSource`. @@ -91,23 +105,35 @@ and debug issues. The compromise, however, is that you must start with knowledge of the existing configuration to ensure that you only change the fields you intend to change. 
-Ideally, the Kubelet would be bootstrapped from a file on disk -and you could edit this file (which could also be version-controlled), -to create the first Kubelet ConfigMap -(see [Set Kubelet parameters via a config file](/docs/tasks/administer-cluster/kubelet-config-file)), -Currently, the Kubelet is bootstrapped with **a combination of this file and command-line flags** -that can override the configuration in the file. -As a workaround, you can generate a config file containing a Node's current -configuration by accessing the Kubelet server's `configz` endpoint via the -kubectl proxy. This endpoint, in its current implementation, is intended to be -used only as a debugging aid. Do not rely on the behavior of this endpoint for -production scenarios. The examples below use the `jq` command to streamline -working with JSON. To follow the tasks as written, you need to have `jq` -installed, but you can adapt the tasks if you prefer to extract the -`kubeletconfig` subobject manually. +The kubelet loads settings from its configuration file, but you can set command +line flags to override the configuration in the file. This means that if you +only know the contents of the configuration file, and you don't know the +command line overrides, then you do not know the running configuration either. + +Because you need to know the running configuration in order to override it, +you can fetch the running configuration from the kubelet. You can generate a +config file containing a Node's current configuration by accessing the kubelet's +`configz` endpoint, through `kubectl proxy`. The next section explains how to +do this. + +{{< caution >}} +The kubelet's `configz` endpoint is there to help with debugging, and is not +a stable part of kubelet behavior. +Do not rely on the behavior of this endpoint for production scenarios or for +use with automated tools. +{{< /caution >}} + +For more information on configuring the kubelet via a configuration file, see +[Set kubelet parameters via a config file](/docs/tasks/administer-cluster/kubelet-config-file)). #### Generate the configuration file +{{< note >}} +The steps below use the `jq` command to streamline working with JSON. +To follow the tasks as written, you need to have `jq` installed. You can +adapt the steps if you prefer to extract the `kubeletconfig` subobject manually. +{{< /note >}} + 1. Choose a Node to reconfigure. In this example, the name of this Node is referred to as `NODE_NAME`. 2. Start the kubectl proxy in the background using the following command: @@ -122,20 +148,22 @@ installed, but you can adapt the tasks if you prefer to extract the For example: `${NODE_NAME}` will be rewritten as `$\{NODE_NAME\}` during the paste. You must remove the backslashes before running the command, or the command will fail. + ```bash NODE_NAME="the-name-of-the-node-you-are-reconfiguring"; curl -sSL "http://localhost:8001/api/v1/nodes/${NODE_NAME}/proxy/configz" | jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"' > kubelet_configz_${NODE_NAME} ``` {{< note >}} You need to manually add the `kind` and `apiVersion` to the downloaded -object, because they are not reported by the `configz` endpoint. +object, because those fields are not reported by the `configz` endpoint. {{< /note >}} #### Edit the configuration file Using a text editor, change one of the parameters in the file generated by the previous procedure. For example, you -might edit the QPS parameter `eventRecordQPS`. 
+might edit the parameter `eventRecordQPS`, that controls +rate limiting for event recording. #### Push the configuration file to the control plane @@ -162,12 +190,12 @@ data: {...} ``` -The ConfigMap is created in the `kube-system` namespace because this -ConfigMap configures a Kubelet, which is Kubernetes system component. +You created that ConfigMap inside the `kube-system` namespace because the kubelet +is a Kubernetes system component. The `--append-hash` option appends a short checksum of the ConfigMap contents to the name. This is convenient for an edit-then-push workflow, because it -automatically, yet deterministically, generates new names for new ConfigMaps. +automatically, yet deterministically, generates new names for new resources. The name that includes this generated hash is referred to as `CONFIG_MAP_NAME` in the following examples. @@ -185,13 +213,13 @@ In your text editor, add the following YAML under `spec`: ```yaml configSource: configMap: - name: CONFIG_MAP_NAME + name: CONFIG_MAP_NAME # replace CONFIG_MAP_NAME with the name of the ConfigMap namespace: kube-system kubeletConfigKey: kubelet ``` You must specify all three of `name`, `namespace`, and `kubeletConfigKey`. -The `kubeletConfigKey` parameter shows the Kubelet which key of the ConfigMap +The `kubeletConfigKey` parameter shows the kubelet which key of the ConfigMap contains its config. #### Observe that the Node begins using the new configuration @@ -200,16 +228,16 @@ Retrieve the Node using the `kubectl get node ${NODE_NAME} -o yaml` command and `Node.Status.Config`. The config sources corresponding to the `active`, `assigned`, and `lastKnownGood` configurations are reported in the status. -- The `active` configuration is the version the Kubelet is currently running with. -- The `assigned` configuration is the latest version the Kubelet has resolved based on +- The `active` configuration is the version the kubelet is currently running with. +- The `assigned` configuration is the latest version the kubelet has resolved based on `Node.Spec.ConfigSource`. - The `lastKnownGood` configuration is the version the - Kubelet will fall back to if an invalid config is assigned in `Node.Spec.ConfigSource`. + kubelet will fall back to if an invalid config is assigned in `Node.Spec.ConfigSource`. The`lastKnownGood` configuration might not be present if it is set to its default value, the local config deployed with the node. The status will update `lastKnownGood` to -match a valid `assigned` config after the Kubelet becomes comfortable with the config. -The details of how the Kubelet determines a config should become the `lastKnownGood` are +match a valid `assigned` config after the kubelet becomes comfortable with the config. +The details of how the kubelet determines a config should become the `lastKnownGood` are not guaranteed by the API, but is currently implemented as a 10-minute grace period. You can use the following command (using `jq`) to filter down @@ -254,16 +282,19 @@ The following is an example response: ``` -If an error occurs, the Kubelet reports it in the `Node.Status.Config.Error` +(if you do not have `jq`, you can look at the whole response and find `Node.Status.Config` +by eye). + +If an error occurs, the kubelet reports it in the `Node.Status.Config.Error` structure. Possible errors are listed in [Understanding Node.Status.Config.Error messages](#understanding-node-status-config-error-messages). 
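If you have `jq` installed, one quick way to pull out just that reported error (assuming `NODE_NAME` is still set to the node you are inspecting) is:

```shell
# Prints the kubelet's reported config error, or null if there is none.
kubectl get node "${NODE_NAME}" -o json | jq '.status.config.error'
```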
-You can search for the identical text in the Kubelet log for additional details +You can search for the identical text in the kubelet log for additional details and context about the error. #### Make more changes Follow the workflow above to make more changes and push them again. Each time -you push a ConfigMap with new contents, the --append-hash kubectl option creates +you push a ConfigMap with new contents, the `--append-hash` kubectl option creates the ConfigMap with a new name. The safest rollout strategy is to first create a new ConfigMap, and then update the Node to use the new ConfigMap. @@ -283,7 +314,7 @@ error is reported. {{% /capture %}} {{% capture discussion %}} -## Kubectl Patch Example +## `kubectl patch` example You can change a Node's configSource using several different mechanisms. This example uses `kubectl patch`: @@ -292,25 +323,25 @@ This example uses `kubectl patch`: kubectl patch node ${NODE_NAME} -p "{\"spec\":{\"configSource\":{\"configMap\":{\"name\":\"${CONFIG_MAP_NAME}\",\"namespace\":\"kube-system\",\"kubeletConfigKey\":\"kubelet\"}}}}" ``` -## Understanding how the Kubelet checkpoints config +## Understanding how the kubelet checkpoints config -When a new config is assigned to the Node, the Kubelet downloads and unpacks the -config payload as a set of files on the local disk. The Kubelet also records metadata +When a new config is assigned to the Node, the kubelet downloads and unpacks the +config payload as a set of files on the local disk. The kubelet also records metadata that locally tracks the assigned and last-known-good config sources, so that the -Kubelet knows which config to use across restarts, even if the API server becomes -unavailable. After checkpointing a config and the relevant metadata, the Kubelet -exits if it detects that the assigned config has changed. When the Kubelet is +kubelet knows which config to use across restarts, even if the API server becomes +unavailable. After checkpointing a config and the relevant metadata, the kubelet +exits if it detects that the assigned config has changed. When the kubelet is restarted by the OS-level service manager (such as `systemd`), it reads the new metadata and uses the new config. The recorded metadata is fully resolved, meaning that it contains all necessary information to choose a specific config version - typically a `UID` and `ResourceVersion`. This is in contrast to `Node.Spec.ConfigSource`, where the intended config is declared -via the idempotent `namespace/name` that identifies the target ConfigMap; the Kubelet +via the idempotent `namespace/name` that identifies the target ConfigMap; the kubelet tries to use the latest version of this ConfigMap. -When you are debugging problems on a node, you can inspect the Kubelet's config -metadata and checkpoints. The structure of the Kubelet's checkpointing directory is: +When you are debugging problems on a node, you can inspect the kubelet's config +metadata and checkpoints. The structure of the kubelet's checkpointing directory is: ```none - --dynamic-config-dir (root for managing dynamic config) @@ -334,13 +365,18 @@ in the Kubelet log for additional details and context about the error. Error Message | Possible Causes :-------------| :-------------- -failed to load config, see Kubelet log for details | The Kubelet likely could not parse the downloaded config payload, or encountered a filesystem error attempting to load the payload from disk. 
-failed to validate config, see Kubelet log for details | The configuration in the payload, combined with any command-line flag overrides, and the sum of feature gates from flags, the config file, and the remote payload, was determined to be invalid by the Kubelet. -invalid NodeConfigSource, exactly one subfield must be non-nil, but all were nil | Since Node.Spec.ConfigSource is validated by the API server to contain at least one non-nil subfield, this likely means that the Kubelet is older than the API server and does not recognize a newer source type. -failed to sync: failed to download config, see Kubelet log for details | The Kubelet could not download the config. It is possible that Node.Spec.ConfigSource could not be resolved to a concrete API object, or that network errors disrupted the download attempt. The Kubelet will retry the download when in this error state. -failed to sync: internal failure, see Kubelet log for details | The Kubelet encountered some internal problem and failed to update its config as a result. Examples include filesystem errors and reading objects from the internal informer cache. -internal failure, see Kubelet log for details | The Kubelet encountered some internal problem while manipulating config, outside of the configuration sync loop. +failed to load config, see Kubelet log for details | The kubelet likely could not parse the downloaded config payload, or encountered a filesystem error attempting to load the payload from disk. +failed to validate config, see Kubelet log for details | The configuration in the payload, combined with any command-line flag overrides, and the sum of feature gates from flags, the config file, and the remote payload, was determined to be invalid by the kubelet. +invalid NodeConfigSource, exactly one subfield must be non-nil, but all were nil | Since Node.Spec.ConfigSource is validated by the API server to contain at least one non-nil subfield, this likely means that the kubelet is older than the API server and does not recognize a newer source type. +failed to sync: failed to download config, see Kubelet log for details | The kubelet could not download the config. It is possible that Node.Spec.ConfigSource could not be resolved to a concrete API object, or that network errors disrupted the download attempt. The kubelet will retry the download when in this error state. +failed to sync: internal failure, see Kubelet log for details | The kubelet encountered some internal problem and failed to update its config as a result. Examples include filesystem errors and reading objects from the internal informer cache. +internal failure, see Kubelet log for details | The kubelet encountered some internal problem while manipulating config, outside of the configuration sync loop. -{{< /table >}} +{{< /table >}} {{% /capture %}} +{{% capture whatsnext %}} + - For more information on configuring the kubelet via a configuration file, see +[Set kubelet parameters via a config file](/docs/tasks/administer-cluster/kubelet-config-file). 
+- See the reference documentation for [`NodeConfigSource`](https://kubernetes.io/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#nodeconfigsource-v1-core) +{{% /capture %}} diff --git a/content/en/docs/tasks/administer-cluster/running-cloud-controller.md b/content/en/docs/tasks/administer-cluster/running-cloud-controller.md index 83c24f639efee..6bdd9a1061e61 100644 --- a/content/en/docs/tasks/administer-cluster/running-cloud-controller.md +++ b/content/en/docs/tasks/administer-cluster/running-cloud-controller.md @@ -77,7 +77,7 @@ Cloud controller manager does not implement any of the volume controllers found ### Scalability -In the previous architecture for cloud providers, we relied on kubelets using a local metadata service to retrieve node information about itself. With this new architecture, we now fully rely on the cloud controller managers to retrieve information for all nodes. For very larger clusters, you should consider possible bottle necks such as resource requirements and API rate limiting. +In the previous architecture for cloud providers, we relied on kubelets using a local metadata service to retrieve node information about itself. With this new architecture, we now fully rely on the cloud controller managers to retrieve information for all nodes. For very large clusters, you should consider possible bottle necks such as resource requirements and API rate limiting. ### Chicken and Egg diff --git a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md index 12ab6ed22b121..dcba78d81a5d8 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md +++ b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md @@ -26,7 +26,7 @@ You can use either `kubectl create configmap` or a ConfigMap generator in `kusto ### Create a ConfigMap Using kubectl create configmap -Use the `kubectl create configmap` command to create configmaps from [directories](#create-configmaps-from-directories), [files](#create-configmaps-from-files), or [literal values](#create-configmaps-from-literal-values): +Use the `kubectl create configmap` command to create ConfigMaps from [directories](#create-configmaps-from-directories), [files](#create-configmaps-from-files), or [literal values](#create-configmaps-from-literal-values): ```shell kubectl create configmap @@ -34,10 +34,7 @@ kubectl create configmap where \ is the name you want to assign to the ConfigMap and \ is the directory, file, or literal value to draw the data from. -The data source corresponds to a key-value pair in the ConfigMap, where - -* key = the file name or the key you provided on the command line, and -* value = the file contents or the literal value you provided on the command line. +When you are creating a ConfigMap based on a file, the key in the \ defaults to the basename of the file, and the value defaults to the file content. You can use [`kubectl describe`](/docs/reference/generated/kubectl/kubectl-commands/#describe) or [`kubectl get`](/docs/reference/generated/kubectl/kubectl-commands/#get) to retrieve information @@ -45,7 +42,7 @@ about a ConfigMap. #### Create ConfigMaps from directories -You can use `kubectl create configmap` to create a ConfigMap from multiple files in the same directory. +You can use `kubectl create configmap` to create a ConfigMap from multiple files in the same directory. 
When you are creating a ConfigMap based on a directory, kubectl identifies files whose basename is a valid key in the directory and packages each of those files into the new ConfigMap. Any directory entries except regular files are ignored (e.g. subdirectories, symlinks, devices, pipes, etc). For example: @@ -61,30 +58,36 @@ wget https://kubernetes.io/examples/configmap/ui.properties -O configure-pod-con kubectl create configmap game-config --from-file=configure-pod-container/configmap/ ``` -combines the contents of the `configure-pod-container/configmap/` directory - -```shell -game.properties -ui.properties -``` - -into the following ConfigMap: +The above command packages each file, in this case, `game.properties` and `ui.properties` in the `configure-pod-container/configmap/` directory into the game-config ConfigMap. You can display details of the ConfigMap using the following command: ```shell kubectl describe configmaps game-config ``` -where the output is similar to this: +The output is similar to this: ``` -Name: game-config -Namespace: default -Labels: -Annotations: +Name: game-config +Namespace: default +Labels: +Annotations: Data ==== -game.properties: 158 bytes -ui.properties: 83 bytes +game.properties: +---- +enemies=aliens +lives=3 +enemies.cheat=true +enemies.cheat.level=noGoodRotten +secret.code.passphrase=UUDDLRLRBABAS +secret.code.allowed=true +secret.code.lives=30 +ui.properties: +---- +color.good=purple +color.bad=yellow +allow.textmode=true +how.nice.to.look=fairlyNice ``` The `game.properties` and `ui.properties` files in the `configure-pod-container/configmap/` directory are represented in the `data` section of the ConfigMap. @@ -138,14 +141,22 @@ kubectl describe configmaps game-config-2 where the output is similar to this: ``` -Name: game-config-2 -Namespace: default -Labels: -Annotations: +Name: game-config-2 +Namespace: default +Labels: +Annotations: Data ==== -game.properties: 158 bytes +game.properties: +---- +enemies=aliens +lives=3 +enemies.cheat=true +enemies.cheat.level=noGoodRotten +secret.code.passphrase=UUDDLRLRBABAS +secret.code.allowed=true +secret.code.lives=30 ``` You can pass in the `--from-file` argument multiple times to create a ConfigMap from multiple data sources. @@ -154,7 +165,7 @@ You can pass in the `--from-file` argument multiple times to create a ConfigMap kubectl create configmap game-config-2 --from-file=configure-pod-container/configmap/game.properties --from-file=configure-pod-container/configmap/ui.properties ``` -Describe the above `game-config-2` configmap created +You can display details of the `game-config-2` ConfigMap using the following command: ```shell kubectl describe configmaps game-config-2 @@ -163,15 +174,28 @@ kubectl describe configmaps game-config-2 The output is similar to this: ``` -Name: game-config-2 -Namespace: default -Labels: -Annotations: +Name: game-config-2 +Namespace: default +Labels: +Annotations: Data ==== -game.properties: 158 bytes -ui.properties: 83 bytes +game.properties: +---- +enemies=aliens +lives=3 +enemies.cheat=true +enemies.cheat.level=noGoodRotten +secret.code.passphrase=UUDDLRLRBABAS +secret.code.allowed=true +secret.code.lives=30 +ui.properties: +---- +color.good=purple +color.bad=yellow +allow.textmode=true +how.nice.to.look=fairlyNice ``` Use the option `--from-env-file` to create a ConfigMap from an env-file, for example: @@ -227,11 +251,11 @@ data: When passing `--from-env-file` multiple times to create a ConfigMap from multiple data sources, only the last env-file is used. 
{{< /caution >}} -The behavior of passing `--from-env-file` multiple times is demonstrated by: +The behavior of passing `--from-env-file` multiple times is demonstrated by: ```shell # Download the sample files into `configure-pod-container/configmap/` directory -wget https://k8s.io/examples/configmap/ui-env-file.properties -O configure-pod-container/configmap/ui-env-file.properties +wget https://kubernetes.io/examples/configmap/ui-env-file.properties -O configure-pod-container/configmap/ui-env-file.properties # Create the configmap kubectl create configmap config-multi-env-files \ @@ -656,4 +680,3 @@ data: * Follow a real world example of [Configuring Redis using a ConfigMap](/docs/tutorials/configuration/configure-redis-using-configmap/). {{% /capture %}} - diff --git a/content/en/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md b/content/en/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md index f1740d9ae7106..56ba566bc6d22 100644 --- a/content/en/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md +++ b/content/en/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md @@ -123,6 +123,8 @@ As an example, to look at the logs from a running Cassandra pod, you might run: kubectl exec cassandra -- cat /var/log/cassandra/system.log ``` +If your cluster enabled it, you can also try adding an [ephemeral container](/docs/concepts/workloads/pods/ephemeral-containers/) into the existing pod. You can use the new temporary container to run arbitrary commands, for example, to diagnose problems inside the Pod. See the page about [ephemeral container](/docs/concepts/workloads/pods/ephemeral-containers/) for more details, including feature availability. + If none of these approaches work, you can find the host machine that the pod is running on and SSH into that host. diff --git a/content/en/docs/tasks/inject-data-application/distribute-credentials-secure.md b/content/en/docs/tasks/inject-data-application/distribute-credentials-secure.md index 7ae9350944a2b..8886da28d2d3d 100644 --- a/content/en/docs/tasks/inject-data-application/distribute-credentials-secure.md +++ b/content/en/docs/tasks/inject-data-application/distribute-credentials-secure.md @@ -2,6 +2,7 @@ title: Distribute Credentials Securely Using Secrets content_template: templates/task weight: 50 +min-kubernetes-server-version: v1.6 --- {{% capture overview %}} @@ -11,7 +12,7 @@ encryption keys, into Pods. 
{{% capture prerequisites %}} -{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} +{{< include "task-tutorial-prereqs.md" >}} {{% /capture %}} diff --git a/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md b/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md index 9327afe2a5812..42c47a43b5fb4 100644 --- a/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md +++ b/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md @@ -52,7 +52,7 @@ cronjob.batch/hello created Alternatively, you can use `kubectl run` to create a cron job without writing a full config: ```shell -kubectl run --generator=run-pod/v1 hello --schedule="*/1 * * * *" --restart=OnFailure --image=busybox -- /bin/sh -c "date; echo Hello from the Kubernetes cluster" +kubectl run hello --schedule="*/1 * * * *" --restart=OnFailure --image=busybox -- /bin/sh -c "date; echo Hello from the Kubernetes cluster" ``` After creating the cron job, get its status using this command: diff --git a/content/en/docs/tasks/manage-kubernetes-objects/declarative-config.md b/content/en/docs/tasks/manage-kubernetes-objects/declarative-config.md index 6c396e7861a7f..0dee8eb60adc4 100644 --- a/content/en/docs/tasks/manage-kubernetes-objects/declarative-config.md +++ b/content/en/docs/tasks/manage-kubernetes-objects/declarative-config.md @@ -83,7 +83,14 @@ kubectl diff -f https://k8s.io/examples/application/simple_deployment.yaml ``` {{< note >}} -`diff` uses [server-side dry-run](/docs/reference/using-api/api-concepts/#dry-run), which needs to be enabled on `kube-apiserver`. +`diff` uses [server-side dry-run](/docs/reference/using-api/api-concepts/#dry-run), +which needs to be enabled on `kube-apiserver`. + +Since `diff` performs a server-side apply request in dry-run mode, +it requires granting `PATCH`, `CREATE`, and `UPDATE` permissions. +See [Dry-Run Authorization](/docs/reference/using-api/api-concepts#dry-run-authorization) +for details. + {{< /note >}} Create the object using `kubectl apply`: @@ -985,11 +992,11 @@ used only by the controller selector with no other semantic meaning. 
```yaml selector: matchLabels: - controller-selector: "extensions/v1beta1/deployment/nginx" + controller-selector: "apps/v1/deployment/nginx" template: metadata: labels: - controller-selector: "extensions/v1beta1/deployment/nginx" + controller-selector: "apps/v1/deployment/nginx" ``` {{% capture whatsnext %}} diff --git a/content/en/docs/tasks/manage-kubernetes-objects/imperative-config.md b/content/en/docs/tasks/manage-kubernetes-objects/imperative-config.md index 1c32e8f8349e5..ec6057cd68e25 100644 --- a/content/en/docs/tasks/manage-kubernetes-objects/imperative-config.md +++ b/content/en/docs/tasks/manage-kubernetes-objects/imperative-config.md @@ -135,11 +135,11 @@ Example label: ```yaml selector: matchLabels: - controller-selector: "extensions/v1beta1/deployment/nginx" + controller-selector: "apps/v1/deployment/nginx" template: metadata: labels: - controller-selector: "extensions/v1beta1/deployment/nginx" + controller-selector: "apps/v1/deployment/nginx" ``` {{% /capture %}} diff --git a/content/en/docs/tasks/network/validate-dual-stack.md b/content/en/docs/tasks/network/validate-dual-stack.md index 4d9e6c39692a8..0e6d586bea89f 100644 --- a/content/en/docs/tasks/network/validate-dual-stack.md +++ b/content/en/docs/tasks/network/validate-dual-stack.md @@ -14,7 +14,7 @@ This document shares how to validate IPv4/IPv6 dual-stack enabled Kubernetes clu {{% capture prerequisites %}} * Provider support for dual-stack networking (Cloud provider or otherwise must be able to provide Kubernetes nodes with routable IPv4/IPv6 network interfaces) -* Kubenet network plugin +* A [network plugin](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) that supports dual-stack (such as Kubenet or Calico) * Kube-proxy running in mode IPVS * [Dual-stack enabled](/docs/concepts/services-networking/dual-stack/) cluster @@ -39,7 +39,7 @@ a00:100::/24 ``` There should be one IPv4 block and one IPv6 block allocated. -Validate that the node has an IPv4 and IPv6 interface detected (replace node name with a valid node from the cluster. In this example the node name is k8s-linuxpool1-34450317-0): +Validate that the node has an IPv4 and IPv6 interface detected (replace node name with a valid node from the cluster. In this example the node name is k8s-linuxpool1-34450317-0): ```shell kubectl get nodes k8s-linuxpool1-34450317-0 -o go-template --template='{{range .status.addresses}}{{printf "%s: %s \n" .type .address}}{{end}}' ``` @@ -151,7 +151,7 @@ If the cloud provider supports the provisioning of IPv6 enabled external load ba {{< codenew file="service/networking/dual-stack-ipv6-lb-svc.yaml" >}} -Validate that the Service receives a `CLUSTER-IP` address from the IPv6 address block along with an `EXTERNAL-IP`. You may then validate access to the service via the IP and port. +Validate that the Service receives a `CLUSTER-IP` address from the IPv6 address block along with an `EXTERNAL-IP`. You may then validate access to the service via the IP and port. 
``` kubectl get svc -l app=MyApp NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE @@ -159,4 +159,3 @@ my-service ClusterIP fe80:20d::d06b 2001:db8:f100:4002::9d37:c0d7 80:318 ``` {{% /capture %}} - diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md index fdcc75d211341..6a9fe4de324d5 100644 --- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md +++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md @@ -62,14 +62,19 @@ It defines an index.php page which performs some CPU intensive computations: ?> ``` -First, we will start a deployment running the image and expose it as a service: +First, we will start a deployment running the image and expose it as a service +using the following configuration: +{{< codenew file="application/php-apache.yaml" >}} + + +Run the following command: ```shell -kubectl run php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=200m --limits=cpu=500m --expose --port=80 +kubectl apply -f https://k8s.io/examples/application/php-apache.yaml ``` ``` -service/php-apache created deployment.apps/php-apache created +service/php-apache created ``` ## Create Horizontal Pod Autoscaler diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md index 2f76954df9db9..871e5c3494296 100644 --- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md +++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md @@ -3,7 +3,6 @@ reviewers: - fgrzadkowski - jszczepkowski - directxman12 -- josephburnett title: Horizontal Pod Autoscaler feature: title: Horizontal scaling @@ -163,15 +162,11 @@ can be fetched, scaling is skipped. This means that the HPA is still capable of scaling up if one or more metrics give a `desiredReplicas` greater than the current value. -Finally, just before HPA scales the target, the scale recommendation is -recorded. The controller considers all recommendations within a configurable -window choosing the highest recommendation from within that window. This value -can be configured using the -`--horizontal-pod-autoscaler-downscale-stabilization` flag or the HPA object -behavior `behavior.scaleDown.stabilizationWindowSeconds` (see [Support for -configurable scaling behavior](#support-for-configurable-scaling-behavior)), -which defaults to 5 minutes. This means that scaledowns will occur gradually, -smoothing out the impact of rapidly fluctuating metric values. +Finally, just before HPA scales the target, the scale recommendation is recorded. The +controller considers all recommendations within a configurable window choosing the +highest recommendation from within that window. This value can be configured using the `--horizontal-pod-autoscaler-downscale-stabilization` flag, which defaults to 5 minutes. +This means that scaledowns will occur gradually, smoothing out the impact of rapidly +fluctuating metric values. ## API Object @@ -218,7 +213,10 @@ When managing the scale of a group of replicas using the Horizontal Pod Autoscal it is possible that the number of replicas keeps fluctuating frequently due to the dynamic nature of the metrics evaluated. This is sometimes referred to as *thrashing*. 
-Starting from v1.12, a new algorithmic update removes the need for an +Starting from v1.6, a cluster operator can mitigate this problem by tuning +the global HPA settings exposed as flags for the `kube-controller-manager` component: + +Starting from v1.12, a new algorithmic update removes the need for the upscale delay. - `--horizontal-pod-autoscaler-downscale-stabilization`: The value for this option is a @@ -234,11 +232,6 @@ the delay value is set too short, the scale of the replicas set may keep thrashi usual. {{< /note >}} -Starting from v1.17 the downscale stabilization window can be set on a per-HPA -basis by setting the `behavior.scaleDown.stabilizationWindowSeconds` field in -the v2beta2 API. See [Support for configurable scaling -behavior](#support-for-configurable-scaling-behavior). - ## Support for multiple metrics Kubernetes 1.6 adds support for scaling based on multiple metrics. You can use the `autoscaling/v2beta2` API diff --git a/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html b/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html index 2f1b59d83f7c8..2a6af0af4c953 100644 --- a/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html +++ b/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html @@ -84,7 +84,7 @@

Cluster Diagram

-

When you deploy applications on Kubernetes, you tell the master to start the application containers. The master schedules the containers to run on the cluster's nodes. The nodes communicate with the master using the Kubernetes API, which the master exposes. End users can also use the Kubernetes API directly to interact with the cluster.

+

When you deploy applications on Kubernetes, you tell the master to start the application containers. The master schedules the containers to run on the cluster's nodes. The nodes communicate with the master using the Kubernetes API, which the master exposes. End users can also use the Kubernetes API directly to interact with the cluster.

A Kubernetes cluster can be deployed on either physical or virtual machines. To get started with Kubernetes development, you can use Minikube. Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one node. Minikube is available for Linux, macOS, and Windows systems. The Minikube CLI provides basic bootstrapping operations for working with your cluster, including start, stop, status, and delete. For this tutorial, however, you'll use a provided online terminal with Minikube pre-installed.

diff --git a/content/en/docs/tutorials/kubernetes-basics/update/update-intro.html b/content/en/docs/tutorials/kubernetes-basics/update/update-intro.html index aa6f9b406345e..3531cfd3b5796 100644 --- a/content/en/docs/tutorials/kubernetes-basics/update/update-intro.html +++ b/content/en/docs/tutorials/kubernetes-basics/update/update-intro.html @@ -31,7 +31,7 @@

Updating an application

Users expect applications to be available all the time, and developers are expected to deploy new versions of them several times a day. In Kubernetes this is done with rolling updates. Rolling updates allow a Deployment's update to take place with zero downtime by incrementally replacing Pod instances with new ones. The new Pods will be scheduled on Nodes with available resources.

In the previous module we scaled our application to run multiple instances. This is a requirement for performing updates without affecting application availability. By default, the maximum number of Pods that can be unavailable during the update and the maximum number of new Pods that can be created is one. Both options can be configured as either numbers or percentages (of Pods). - In Kubernetes, updates are versioned and any Deployment update can be reverted to previous (stable) version.

+ In Kubernetes, updates are versioned and any Deployment update can be reverted to a previous (stable) version.
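The "maximum unavailable" and "maximum surge" options described above correspond to fields under a Deployment's update strategy. The following is a minimal sketch; the Deployment name, labels, and image are placeholders, and the values shown are simply the defaults the tutorial mentions.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app               # placeholder name
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1      # at most one Pod unavailable during the update
      maxSurge: 1            # at most one extra Pod created above the desired count
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:v2     # placeholder image for the new version
```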

diff --git a/content/en/docs/tutorials/services/source-ip.md b/content/en/docs/tutorials/services/source-ip.md index e59f9058a0e59..e1b4876a240d9 100644 --- a/content/en/docs/tutorials/services/source-ip.md +++ b/content/en/docs/tutorials/services/source-ip.md @@ -34,7 +34,7 @@ document. The examples use a small nginx webserver that echoes back the source IP of requests it receives through an HTTP header. You can create it as follows: ```console -kubectl run source-ip-app --image=k8s.gcr.io/echoserver:1.4 +kubectl create deployment source-ip-app --image=k8s.gcr.io/echoserver:1.4 ``` The output is: ``` diff --git a/content/en/docs/tutorials/stateless-application/expose-external-ip-address.md b/content/en/docs/tutorials/stateless-application/expose-external-ip-address.md index 62f89d35bb597..4f4dbda986caa 100644 --- a/content/en/docs/tutorials/stateless-application/expose-external-ip-address.md +++ b/content/en/docs/tutorials/stateless-application/expose-external-ip-address.md @@ -80,8 +80,17 @@ The preceding command creates a NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE my-service LoadBalancer 10.3.245.137 104.198.205.71 8080/TCP 54s - Note: If the external IP address is shown as \, wait for a minute - and enter the same command again. + {{< note >}} + + The `type=LoadBalancer` service is backed by external cloud providers, which is not covered in this example, please refer to [this page](/docs/concepts/services-networking/service/#loadbalancer) for the details. + + {{< /note >}} + + {{< note >}} + + If the external IP address is shown as \, wait for a minute and enter the same command again. + + {{< /note >}} 1. Display detailed information about the Service: diff --git a/content/en/docs/user-journeys/users/application-developer/advanced.md b/content/en/docs/user-journeys/users/application-developer/advanced.md deleted file mode 100644 index dde720f1b6425..0000000000000 --- a/content/en/docs/user-journeys/users/application-developer/advanced.md +++ /dev/null @@ -1,120 +0,0 @@ ---- -reviewers: -- chenopis -layout: docsportal -css: /css/style_user_journeys.css -js: https://use.fontawesome.com/4bcc658a89.js, https://cdnjs.cloudflare.com/ajax/libs/prefixfree/1.0.7/prefixfree.min.js -title: Advanced Topics -track: "USERS › APPLICATION DEVELOPER › ADVANCED" -content_template: templates/user-journey-content ---- - -{{% capture overview %}} - -{{< note >}} -This page assumes that you're familiar with core Kubernetes concepts, and are comfortable deploying your own apps. If not, you should review the {{< link text="Intermediate App Developer" url="/docs/user-journeys/users/application-developer/intermediate/" >}} topics first. -{{< /note >}} -After checking out the current page and its linked sections, you should have a better understanding of the following: -* Advanced features that you can leverage in your application -* The various ways of extending the Kubernetes API - -{{% /capture %}} - - -{{% capture body %}} - -## Deploy an application with advanced features - -Now you know the set of API objects that Kubernetes provides. Understanding the difference between a {{< glossary_tooltip term_id="daemonset" >}} and a {{< glossary_tooltip term_id="deployment" >}} is oftentimes sufficient for app deployment. That being said, it's also worth familiarizing yourself with Kubernetes's lesser known features. They can be quite powerful when applied to the right use cases. - -#### Container-level features - -As you may know, it's an antipattern to migrate an entire app (e.g. 
containerized Rails app, MySQL database, and all) into a single Pod. That being said, there are some very useful patterns that go beyond a 1:1 correspondence between a container and its Pod: - -* **Sidecar container**: Although your Pod should still have a single main container, you can add a secondary container that acts as a helper (see a {{< link text="logging example" url="/docs/concepts/cluster-administration/logging/#sidecar-container-with-a-logging-agent" >}}). Two containers within a single Pod can communicate {{< link text="via a shared volume" url="/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/" >}}. -* **Init containers**: *Init containers* run before any of a Pod's *app containers* (such as main and sidecar containers). {{< link text="Read more" url="/docs/concepts/workloads/pods/init-containers/" >}}, see an {{< link text="nginx server example" url="/docs/tasks/configure-pod-container/configure-pod-initialization/" >}}, and {{< link text="learn how to debug these containers" url="/docs/tasks/debug-application-cluster/debug-init-containers/" >}}. - -#### Pod configuration - -Usually, you use {{< glossary_tooltip text="labels" term_id="label" >}} and {{< glossary_tooltip text="annotations" term_id="annotation" >}} to attach metadata to your resources. To inject data into your resources, you'd likely create {{< glossary_tooltip text="ConfigMaps" term_id="configmap" >}} (for nonconfidential data) or {{< glossary_tooltip text="Secrets" term_id="secret" >}} (for confidential data). - -Below are some other, lesser-known ways of configuring your resources' Pods: - -* **Taints and Tolerations** - These provide a way for nodes to "attract" or "repel" your Pods. They are often used when an application needs to be deployed onto specific hardware, such as GPUs for scientific computing. {{< link text="Read more" url="/docs/concepts/configuration/taint-and-toleration/" >}}. -* **Downward API** - This allows your containers to consume information about themselves or the cluster, without being overly coupled to the Kubernetes API server. This can be achieved with {{< link text="environment variables" url="/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" >}} or {{< link text="DownwardAPIVolumeFiles" url="/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/" >}}. -* **Pod Presets** - Normally, to mount runtime requirements (such as environmental variables, ConfigMaps, and Secrets) into a resource, you specify them in the resource's configuration file. {{< link text="PodPresets" url="/docs/concepts/workloads/pods/podpreset/" >}} allow you to dynamically inject these requirements instead, when the resource is created. For instance, this allows team A to mount any number of new Secrets into the resources created by teams B and C, without requiring action from B and C. {{< link text="See an example" url="/docs/tasks/inject-data-application/podpreset/" >}}. - -#### Additional API Objects - -{{< note >}} -Before setting up the following resources, check to see if they are the responsibility of your organization's {{< glossary_tooltip text="cluster operators" term_id="cluster-operator" >}}. 
-{{< /note >}} -* **{{< glossary_tooltip text="Horizontal Pod Autoscaler (HPA)" term_id="horizontal-pod-autoscaler" >}}** - These resources are a great way to automate the process of scaling your application when CPU usage or other {{< link text="custom metrics" url="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/custom-metrics-api.md" >}} spike. {{< link text="See an example" url="/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" >}} to understand how HPAs are set up. - -* **Federated cluster objects** - If you are running an application on multiple Kubernetes clusters using *federation*, you need to deploy the federated version of the standard Kubernetes API objects. For reference, check out the guides for setting up {{< link text="Federated ConfigMaps" url="/docs/tasks/administer-federation/configmap/" >}} and {{< link text="Federated Deployments" url="/docs/tasks/administer-federation/deployment/" >}}. - -## Extend the Kubernetes API - -Kubernetes is designed with extensibility in mind. If the API resources and features mentioned above are not enough for your needs, there are ways to customize its behavior without having to modify core Kubernetes code. - -#### Understand Kubernetes's default behavior - -Before making any customizations, it's important that you understand the general abstraction behind Kubernetes API objects. Although Deployments and Secrets may seem quite different, the following concepts are true for *any* object: - -* **Kubernetes objects are a way of storing structured data about your cluster.** - In the case of Deployments, this data represents desired state (such as "How many replicas should be running?"), but it can also be general metadata (such as database credentials). -* **Kubernetes objects are modified via the {{< glossary_tooltip text="Kubernetes API" term_id="kubernetes-api" >}}**. - In other words, you can make `GET` and `POST` requests to a specific resource path (such as `/api/v1/namespaces/default/deployments`) to read and write the corresponding object type. -* **By leveraging the {{< link text="Controller pattern" url="/docs/concepts/api-extension/custom-resources/#custom-controllers" >}}, Kubernetes objects can be used to enforce desired state**. For simplicity, you can think of the Controller pattern as the following continuous loop: - -
- 1. Check current state (number of replicas, container image, etc) - 2. Compare current state to desired state - 3. Update if there's a mismatch -
- - These states are obtained from the Kubernetes API. - - {{< note >}} - Not all Kubernetes objects need to have a Controller. Though Deployments trigger the cluster to make state changes, ConfigMaps act purely as storage. - {{< /note >}} -#### Create Custom Resources - -Based on the ideas above, you can define a new {{< link text="Custom Resource" url="/docs/concepts/api-extension/custom-resources/#custom-resources" >}} that is just as legitimate as a Deployment. For example, you might want to define a `Backup` object for periodic backups, if `CronJobs` don't provide all the functionality you need. - -There are two main ways of setting up custom resources: -1. **Custom Resource Definitions (CRDs)** - This method requires the least amount of implementation work. See {{< link text="an example" url="/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/" >}}. -2. **API aggregation** - This method requires some {{< link text="pre-configuration" url="/docs/tasks/access-kubernetes-api/configure-aggregation-layer/" >}} before you actually {{< link text="set up a separate, extension API server" url="/docs/tasks/access-kubernetes-api/setup-extension-api-server/" >}}. - -Note that unlike standard Kubernetes objects, which rely on the built-in {{< link text="`kube-controller-manager`" url="/docs/reference/generated/kube-controller-manager/" >}}, you'll need to write and run your own {{< link text="custom controllers" url="https://github.com/kubernetes/sample-controller" >}}. - -You may also find the following info helpful: -* {{< link text="How to know if custom resources are right for your use case" url="/docs/concepts/api-extension/custom-resources/#should-i-use-a-configmap-or-a-custom-resource" >}} -* {{< link text="How to decide between CRDs and API aggregation" url="/docs/concepts/api-extension/custom-resources/#choosing-a-method-for-adding-custom-resources" >}} - -#### Service Catalog - -If you want to consume or provide complete services (rather than individual resources), **{{< glossary_tooltip text="Service Catalog" term_id="service-catalog" >}}** provides a {{< link text="specification" url="https://github.com/openservicebrokerapi/servicebroker" >}} for doing so. These services are registered using {{< glossary_tooltip text="Service Brokers" term_id="service-broker" >}} (see {{< link text="some examples" url="https://github.com/openservicebrokerapi/servicebroker/blob/master/gettingStarted.md#example-service-brokers" >}}). - -If you do not have a {{< glossary_tooltip text="cluster operator" term_id="cluster-operator" >}} to manage the installation of Service Catalog, you can do so using {{< link text="Helm" url="/docs/tasks/service-catalog/install-service-catalog-using-helm/" >}} or an {{< link text="installer binary" url="/docs/tasks/service-catalog/install-service-catalog-using-sc/" >}}. - - -## Explore additional resources - -#### References - -The following topics are also useful for building more complex applications: - -* {{< link text="Other points of extensibility within Kubernetes" url="/docs/concepts/overview/extending/" >}} - A conceptual overview of where you can hook into the Kubernetes architecture. -* {{< link text="Kubernetes Client Libraries" url="/docs/reference/using-api/client-libraries/" >}} - Useful for building apps that need to interact heavily with the Kubernetes API. - -#### What's next -Congrats on completing the Application Developer user journey! You've covered the majority of features that Kubernetes has to offer. What now? 
- -* If you'd like to suggest new features or keep up with the latest developments around Kubernetes app development, consider joining a {{< glossary_tooltip term_id="sig" >}} such as {{< link text="SIG Apps" url="https://github.com/kubernetes/community/tree/master/sig-apps" >}}. - -* If you are interested in learning more about the inner workings of Kubernetes (e.g. networking), consider checking out the {{< link text="Cluster Operator journey" url="/docs/user-journeys/users/cluster-operator/foundational/" >}}. - -{{% /capture %}} - - diff --git a/content/en/docs/user-journeys/users/application-developer/foundational.md b/content/en/docs/user-journeys/users/application-developer/foundational.md deleted file mode 100644 index 43226f1bad1dc..0000000000000 --- a/content/en/docs/user-journeys/users/application-developer/foundational.md +++ /dev/null @@ -1,260 +0,0 @@ ---- -reviewers: -- chenopis -layout: docsportal -css: /css/style_user_journeys.css -js: https://use.fontawesome.com/4bcc658a89.js, https://cdnjs.cloudflare.com/ajax/libs/prefixfree/1.0.7/prefixfree.min.js -title: Foundational -track: "USERS › APPLICATION DEVELOPER › FOUNDATIONAL" -content_template: templates/user-journey-content ---- - -{{% capture overview %}} -If you're a developer looking to run applications on Kubernetes, this page and its linked topics can help you get started with the fundamentals. Though this page primarily describes development workflows, {{< link text="the subsequent page in the series" url="/docs/home/?path=users&persona=app-developer&level=intermediate" >}} covers more advanced, production setups. - -{{< note >}} -**A quick note**
This app developer "user journey" is *not* a comprehensive overview of Kubernetes. It focuses more on *what* you develop, test, and deploy to Kubernetes, rather than *how* the underlying infrastructure works.

Though it's possible for a single person to manage both, in many organizations, it’s common to assign the latter to a dedicated {{< glossary_tooltip text="cluster operator" term_id="cluster-operator" >}}. -{{< /note >}} -{{% /capture %}} - - -{{% capture body %}} -## Get started with a cluster - -#### Web-based environment - -If you're brand new to Kubernetes and simply want to experiment without setting up a full development environment, *web-based environments* are a good place to start: - -* {{< link text="Kubernetes Basics" url="/docs/tutorials/kubernetes-basics/#basics-modules" >}} - Introduces you to six common Kubernetes workflows. Each section walks you through browser-based, interactive exercises complete with their own Kubernetes environment. - -* {{< link text="Katacoda" url="https://www.katacoda.com/courses/kubernetes/playground" >}} - The playground equivalent of the environment used in *Kubernetes Basics* above. Katacoda also provides {{< link text="more advanced tutorials" url="https://www.katacoda.com/courses/kubernetes/" >}}, such as "Liveness and Readiness Healthchecks". - - -* {{< link text="Play with Kubernetes" url="http://labs.play-with-k8s.com/" >}} - A less structured environment than the *Katacoda* playground, for those who are more comfortable with Kubernetes concepts and want to explore further. It supports the ability to spin up multiple nodes. - - -#### Minikube (recommended) - -Web-based environments are easy to access, but are not persistent. If you want to continue exploring Kubernetes in a workspace that you can come back to and change, *Minikube* is a good option. - -Minikube can be installed locally, and runs a simple, single-node Kubernetes cluster inside a virtual machine (VM). This cluster is fully functioning and contains all core Kubernetes components. Many developers have found this sufficient for local application development. - -* {{< link text="Install Minikube" url="/docs/tasks/tools/install-minikube/" >}}. - -* {{< link text="Install kubectl" url="/docs/tasks/tools/install-kubectl/" >}}. ({{< glossary_tooltip text="What is kubectl?" term_id="kubectl" >}}) - -* *(Optional)* {{< link text="Install Docker" url="/docs/setup/production-environment/container-runtimes/#docker" >}} if you plan to run your Minikube cluster as part of a local development environment. - - Minikube includes a Docker daemon, but if you're developing applications locally, you'll want an independent Docker instance to support your workflow. This allows you to create {{< glossary_tooltip text="containers" term_id="container" >}} and push them to a container registry. - - {{< note >}} - Version 1.12 is recommended for full compatibility with Kubernetes, but a few other versions are tested and known to work. - {{< /note >}} - -You can get basic information about your cluster with the commands `kubectl cluster-info` and `kubectl get nodes`. However, to get a good idea of what's really going on, you need to deploy an application to your cluster. This is covered in the next section. - -#### MicroK8s - -On Linux, *MicroK8s* is a good alternative to Minikube for a local -install of Kubernetes: - -* Runs on the native OS, so there is no overhead from running a virtual machine. -* Always provides the latest stable version of Kubernetes, using built-in auto-upgrade functionality. -* Installs in less than a minute. - -* {{< link text="Install microk8s" url="https://microk8s.io/" >}}. - -After you install MicroK8s, you can use its tab-completion -functionality. 
All MicroK8s commands start with `microk8s.`. Type -`microk8s.` (with the period) and then use the tab key to see a list -of available commands. - -It also includes commands to enable Kubernetes subsystems. For example: - -* the Kubernetes Dashboard -* the DNS service -* GPU passthrough (for NVIDIA) -* Ingress -* Istio -* Metrics server -* Registry -* Storage - -## Deploy an application - -#### Basic workloads - -The following examples demonstrate the fundamentals of deploying Kubernetes apps: - - * **Stateless apps**: {{< link text="Deploy a simple nginx server" url="/docs/tasks/run-application/run-stateless-application-deployment/" >}}. - - * **Stateful apps**: {{< link text="Deploy a MySQL database" url="/docs/tasks/run-application/run-single-instance-stateful-application/" >}}. - -Through these deployment tasks, you'll gain familiarity with the following: - -* General concepts - - * **Configuration files** - Written in YAML or JSON, these files describe the desired state of your application in terms of Kubernetes API objects. A file can include one or more API object descriptions (*manifests*). (See [the example YAML](/docs/tasks/run-application/run-stateless-application-deployment/#creating-and-exploring-an-nginx-deployment) from the stateless app). - - * **{{< glossary_tooltip text="Pods" term_id="pod" >}}** - This is the basic unit for all of the workloads you run on Kubernetes. These workloads, such as *Deployments* and *Jobs*, are composed of one or more Pods. To learn more, check out {{< link text="this explanation of Pods and Nodes" url="/docs/tutorials/kubernetes-basics/explore-intro/" >}}. - -* Common workload objects - * **{{< glossary_tooltip text="Deployment" term_id="deployment" >}}** - The most common way of running *X* copies (Pods) of your application. Supports rolling updates to your container images. - - * **{{< glossary_tooltip text="Service" term_id="service" >}}** - By itself, a Deployment can't receive traffic. Setting up a Service is one of the simplest ways to configure a Deployment to receive and loadbalance requests. Depending on the `type` of Service used, these requests can come from external client apps or be limited to apps within the same cluster. A Service is tied to a specific Deployment using {{< glossary_tooltip text="label" term_id="label" >}} selection. - -The subsequent topics are also useful to know for basic application deployment. - -#### Metadata - -You can also specify custom information about your Kubernetes API objects by attaching key/value fields. Kubernetes provides two ways of doing this: - -* **{{< glossary_tooltip text="Labels" term_id="label" >}}** - Identifying metadata that you can use to sort and select sets of API objects. Labels have many applications, including the following: - - * *To keep the right number of replicas (Pods) running in a Deployment.* The specified label (`app: nginx` in the {{< link text="stateless app example" url="/docs/tasks/run-application/run-stateless-application-deployment/#creating-and-exploring-an-nginx-deployment" >}}) is used to stamp the Deployment's newly created Pods (as the value of the `spec.template.labels` configuration field), and to query which Pods it already manages (as the value of `spec.selector.matchLabels`). - - * *To tie a Service to a Deployment* using the `selector` field, which is demonstrated in the {{< link text="stateful app example" url="/docs/tasks/run-application/run-single-instance-stateful-application/#deploy-mysql" >}}. 
- - * *To look for specific subset of Kubernetes objects, when you are using {{< glossary_tooltip text="kubectl" term_id="kubectl" >}}.* For instance, the command `kubectl get deployments --selector=app=nginx` only displays Deployments from the nginx app. - -* **{{< glossary_tooltip text="Annotations" term_id="annotation" >}}** - Nonidentifying metadata that you can attach to API objects, usually if you don't intend to use them for sorting purposes. These often serve as supplementary data about an app's deployment, such as Git SHAs, PR numbers, or URL pointers to observability dashboards. - - -#### Storage - -You'll also want to think about storage. Kubernetes provides different types of storage API objects for different storage needs: - -* **{{< glossary_tooltip text="Volumes" term_id="volume" >}}** - Let you define storage for your cluster that is tied to the lifecycle of a Pod. It is therefore more persistent than container storage. Learn {{< link text="how to configure volume storage" url="/docs/tasks/configure-pod-container/configure-volume-storage/" >}}, or {{< link text="read more about volume storage" url="/docs/concepts/storage/volumes/" >}}. - -* **{{< glossary_tooltip text="PersistentVolumes" term_id="persistent-volume" >}}** and **{{< glossary_tooltip text="PersistentVolumeClaims" term_id="persistent-volume-claim" >}}** - Let you define storage at the cluster level. Typically a cluster operator defines the PersistentVolume objects for the cluster, and cluster users (application developers, you) define the PersistentVolumeClaim objects that your application requires. Learn {{< link text="how to set up persistent storage for your cluster" url="/docs/tasks/configure-pod-container/configure-persistent-volume-storage/" >}} or {{< link text="read more about persistent volumes" url="/docs/concepts/storage/persistent-volumes/" >}}. - -#### Configuration - -To avoid having to unnecessarily rebuild your container images, you should decouple your application's *configuration data* from the code required to run it. There are a couple ways of doing this, which you should choose according to your use case: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Approach | Type of Data | How it's mounted | Example |
| --- | --- | --- | --- |
| Using a manifest's container definition | Non-confidential | Environment variable | Command-line flag |
| Using {{< glossary_tooltip text="ConfigMaps" term_id="configmap" >}} | Non-confidential | Environment variable OR local file | nginx configuration |
| Using {{< glossary_tooltip text="Secrets" term_id="secret" >}} | Confidential | Environment variable OR local file | Database credentials |
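To make the table above concrete, here is a sketch of a Pod that consumes a ConfigMap key as an environment variable and mounts a Secret as a local file. All object names, keys, and paths are hypothetical, chosen only for illustration.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: config-demo              # hypothetical Pod
spec:
  containers:
  - name: app
    image: nginx
    env:
    - name: LOG_LEVEL            # non-confidential data injected from a ConfigMap
      valueFrom:
        configMapKeyRef:
          name: app-config       # hypothetical ConfigMap
          key: log-level
    volumeMounts:
    - name: db-creds
      mountPath: /etc/secrets    # confidential data mounted as files
      readOnly: true
  volumes:
  - name: db-creds
    secret:
      secretName: db-credentials # hypothetical Secret
```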
- -{{< note >}} -If you have any data that you want to keep private, you should be using a Secret. Otherwise there is nothing stopping that data from being exposed to malicious users. -{{< /note >}} - -## Understand basic Kubernetes architecture - -As an app developer, you don't need to know everything about the inner workings of Kubernetes, but you may find it helpful to understand it at a high level. - -#### What Kubernetes offers - -Say that your team is deploying an ordinary Rails application. You've run some calculations and determined that you need five instances of your app running at any given time, in order to handle external traffic. - -If you're not running Kubernetes or a similar automated system, you might find the following scenario familiar: - -{{< note >}} -1. One instance of your app (a complete machine instance or just a container) goes down. - -1. Because your team has monitoring set up, this pages the person on call. - -1. The on-call person has to go in, investigate, and manually spin up a new instance. - -1. Depending how your team handles DNS/networking, the on-call person may also need to also update the service discovery mechanism to point at the IP of the new Rails instance rather than the old. -{{< /note >}} - -This process can be tedious and also inconvenient, especially if (2) happens in the early hours of the morning! - -**If you have Kubernetes set up, however, manual intervention is not as necessary.** The Kubernetes {{< link text="control plane" url="/docs/concepts/overview/components/#master-components" >}}, which runs on your cluster's master node, gracefully handles (3) and (4) on your behalf. As a result, Kubernetes is often referred to as a *self-healing* system. - -There are two key parts of the control plane that facilitate this behavior: the *Kubernetes API server* and the *Controllers*. - -#### Kubernetes API server - -For Kubernetes to be useful, it needs to know *what* sort of cluster state you want it to maintain. Your YAML or JSON *configuration files* declare this desired state in terms of one or more API objects, such as {{< glossary_tooltip text="Deployments" term_id="deployment" >}}. To make updates to your cluster's state, you submit these files to the {{< glossary_tooltip text="Kubernetes API" term_id="kubernetes-api" >}} server (`kube-apiserver`). - -Examples of state include but are not limited to the following: - -* The applications or other workloads to run -* The container images for your applications and workloads -* Allocation of network and disk resources - -Note that the API server is just the gateway, and that object data is actually stored in a highly available datastore called {{< link text="*etcd*" url="https://github.com/coreos/etcd" >}}. For most intents and purposes, though, you can focus on the API server. Most reads and writes to cluster state take place as API requests. - -For more information, see {{< link text="Understanding Kubernetes Objects" url="/docs/concepts/overview/working-with-objects/kubernetes-objects/" >}}. - -#### Controllers - -Once you’ve declared your desired state through the Kubernetes API, the *controllers* work to make the cluster’s current state match this desired state. - -The standard controller processes are {{< link text="`kube-controller-manager`" url="/docs/reference/generated/kube-controller-manager/" >}} and {{< link text="`cloud-controller-manager`" url="/docs/concepts/overview/components/#cloud-controller-manager" >}}, but you can also write your own controllers as well. 
- -All of these controllers implement a *control loop*. For simplicity, you can think of this as the following: - -{{< note >}} -1. What is the current state of the cluster (X)? - -1. What is the desired state of the cluster (Y)? - -1. X == Y ? - - * `true` - Do nothing. - * `false` - Perform tasks to get to Y, such as starting or restarting containers, - or scaling the number of replicas of a given application. Return to 1. -{{< /note >}} - -By continuously looping, these controllers ensure the cluster can pick up new updates and avoid drifting from the desired state. These ideas are covered in more detail {{< link text="here" url="/docs/concepts/" >}}. - -## Additional resources - -The Kubernetes documentation is rich in detail. Here's a curated list of resources to help you start digging deeper. - -### Basic concepts - -* {{< link text="More about the components that run Kubernetes" url="/docs/concepts/overview/components/" >}} - -* {{< link text="Understanding Kubernetes objects" url="/docs/concepts/overview/working-with-objects/kubernetes-objects/" >}} - -* {{< link text="More about Node objects" url="/docs/concepts/architecture/nodes/" >}} - -* {{< link text="More about Pod objects" url="/docs/concepts/workloads/pods/pod-overview/" >}} - -### Tutorials - -* {{< link text="Kubernetes Basics" url="/docs/tutorials/kubernetes-basics/" >}} - -* {{< link text="Hello Minikube" url="/docs/tutorials/stateless-application/hello-minikube/" >}} *(Runs on Mac only)* - -* {{< link text="Kubernetes object management" url="/docs/tutorials/object-management-kubectl/object-management/" >}} - -### What's next - -If you feel fairly comfortable with the topics on this page and want to learn more, check out the following user journeys: - -* {{< link text="Intermediate App Developer" url="/docs/user-journeys/users/application-developer/intermediate/" >}} - Dive deeper, with the next level of this journey. -* {{< link text="Foundational Cluster Operator" url="/docs/user-journeys/users/cluster-operator/foundational/" >}} - Build breadth, by exploring other journeys. - -{{% /capture %}} diff --git a/content/en/docs/user-journeys/users/application-developer/intermediate.md b/content/en/docs/user-journeys/users/application-developer/intermediate.md deleted file mode 100644 index 1a3915f2248ac..0000000000000 --- a/content/en/docs/user-journeys/users/application-developer/intermediate.md +++ /dev/null @@ -1,166 +0,0 @@ ---- -reviewers: -- chenopis -layout: docsportal -css: /css/style_user_journeys.css -js: https://use.fontawesome.com/4bcc658a89.js, https://cdnjs.cloudflare.com/ajax/libs/prefixfree/1.0.7/prefixfree.min.js -title: Intermediate -track: "USERS › APPLICATION DEVELOPER › INTERMEDIATE" -content_template: templates/user-journey-content ---- - - -{{% capture overview %}} - -{{< note >}} - This page assumes that you've experimented with Kubernetes before. At this point, you should have basic experience interacting with a Kubernetes cluster (locally with Minikube, or elsewhere), and using API objects like Deployments to run your applications.

If not, you should review the {{< link text="Beginner App Developer" url="/docs/user-journeys/users/application-developer/foundational/" >}} topics first. -{{< /note >}} -After checking out the current page and its linked sections, you should have a better understanding of the following: - -* Additional Kubernetes workload patterns, beyond Deployments -* What it takes to make a Kubernetes application production-ready -* Community tools that can improve your development workflow - -{{% /capture %}} - - -{{% capture body %}} - -## Learn additional workload patterns - -As your Kubernetes use cases become more complex, you may find it helpful to familiarize yourself with more of the toolkit that Kubernetes provides. {{< link text="Basic workload" url="/docs/user-journeys/users/application-developer/foundational/#section-2" >}} objects like {{< glossary_tooltip text="Deployments" term_id="deployment" >}} make it straightforward to run, update, and scale applications, but they are not ideal for every scenario. - -The following API objects provide functionality for additional workload types, whether they are *persistent* or *terminating*. - -#### Persistent workloads - -Like Deployments, these API objects run indefinitely on a cluster until they are manually terminated. They are best for long-running applications. - -* **{{< glossary_tooltip text="StatefulSets" term_id="statefulset" >}}** - Like Deployments, StatefulSets allow you to specify that a - certain number of replicas should be running for your application. - - {{< note >}} It's misleading to say that Deployments can't handle stateful workloads. Using {{< glossary_tooltip text="PersistentVolumes" term_id="persistent-volume" >}}, you can persist data beyond the lifecycle of any individual Pod in your Deployment. - {{< /note >}} - - However, StatefulSets can provide stronger guarantees about "recovery" behavior than Deployments. StatefulSets maintain a sticky, stable identity for their Pods. The following table provides some concrete examples of what this might look like: - - | | Deployment | StatefulSet | - |---|---|---| - | **Example Pod name** | `example-b1c4` | `example-0` | - | **When a Pod dies** | Reschedule on *any* node, with new name `example-a51z` | Reschedule on same node, as `example-0` | - | **When a node becomes unreachable** | Pod(s) are scheduled onto new node, with new names | Pod(s) are marked as "Unknown", and aren't rescheduled unless the Node object is forcefully deleted | - - In practice, this means that StatefulSets are best suited for scenarios where replicas (Pods) need to coordinate their workloads in a strongly consistent manner. Guaranteeing an identity for each Pod helps avoid {{< link text="split-brain" url="https://en.wikipedia.org/wiki/Split-brain_(computing)" >}} side effects in the case when a node becomes unreachable ({{< link text="network partition" url="https://en.wikipedia.org/wiki/Network_partition" >}}). This makes StatefulSets a great fit for distributed datastores like Cassandra or Elasticsearch. - - -* **{{< glossary_tooltip text="DaemonSets" term_id="daemonset" >}}** - DaemonSets run continuously on every node in your cluster, even as nodes are added or swapped in. 
This guarantee is particularly useful for setting up global behavior across your cluster, such as: - - * Logging and monitoring, from applications like `fluentd` - * Network proxy or {{< link text="service mesh" url="https://www.linux.com/news/whats-service-mesh-and-why-do-i-need-one" >}} - - -#### Terminating workloads - -In contrast to Deployments, these API objects are finite. They stop once the specified number of Pods have completed successfully. - -* **{{< glossary_tooltip text="Jobs" term_id="job" >}}** - You can use these for one-off tasks like running a script or setting up a work queue. These tasks can be executed sequentially or in parallel. These tasks should be relatively independent, as Jobs do not support closely communicating parallel processes. {{< link text="Read more about Job patterns" url="/docs/concepts/workloads/controllers/jobs-run-to-completion/#job-patterns" >}}. - -* **{{< glossary_tooltip text="CronJobs" term_id="cronjob" >}}** - These are similar to Jobs, but allow you to schedule their execution for a specific time or for periodic recurrence. You might use CronJobs to send reminder emails or to run backup jobs. They are set up with a similar syntax as *crontab*. - -#### Other resources - -For more info, you can check out {{< link text="a list of additional Kubernetes resource types" url="/docs/reference/kubectl/overview/#resource-types" >}} as well as the {{< link text="API reference docs" url="{{ reference_docs_url }}" >}}. - -There may be additional features not mentioned here that you may find useful, which are covered in the {{< link text="full Kubernetes documentation" url="/docs/home/?path=browse" >}}. - -## Deploy a production-ready workload - -The beginner tutorials on this site, such as the {{< link text="Guestbook app" url="/docs/tutorials/stateless-application/guestbook/" >}}, are geared towards getting workloads up and running on your cluster. This prototyping is great for building your intuition around Kubernetes! However, in order to reliably and securely promote your workloads to production, you need to follow some additional best practices. - -#### Declarative configuration - -You are likely interacting with your Kubernetes cluster via {{< glossary_tooltip text="kubectl" term_id="kubectl" >}}. kubectl can be used to debug the current state of your cluster (such as checking the number of nodes), or to modify live Kubernetes objects (such as updating a workload's replica count with `kubectl scale`). - -When using kubectl to update your Kubernetes objects, it's important to be aware that different commands correspond to different approaches: - -* {{< link text="Purely imperative" url="/docs/tutorials/object-management-kubectl/imperative-object-management-command/" >}} -* {{< link text="Imperative with local configuration files" url="/docs/tutorials/object-management-kubectl/imperative-object-management-configuration/" >}} (typically YAML) -* {{< link text="Declarative with local configuration files" url="/docs/tutorials/object-management-kubectl/declarative-object-management-configuration/" >}} (typically YAML) - -There are pros and cons to each approach, though the declarative approach (such as `kubectl apply -f`) may be most helpful in production. With this approach, you rely on local YAML files as the source of truth about your desired state. This enables you to version control your configuration, which is helpful for code reviews and audit tracking. 
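As a concrete tie-in between the terminating workloads above and the declarative workflow just described, here is a sketch of a CronJob manifest you might keep under version control and manage with `kubectl apply -f`. The name, schedule, and image are assumptions made for illustration, and `batch/v1beta1` is the API group current for the Kubernetes versions this documentation targets.

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: nightly-report            # hypothetical name
spec:
  schedule: "0 2 * * *"           # crontab syntax: every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: report
            image: busybox        # placeholder image
            args: ["sh", "-c", "echo generating nightly report"]
          restartPolicy: OnFailure
```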
- -For additional configuration best practices, familiarize yourself with {{< link text="this guide" url="/docs/concepts/configuration/overview/" >}}. - -#### Security - -You may be familiar with the *principle of least privilege*---if you are too generous with permissions when writing or using software, the negative effects of a compromise can escalate out of control. Would you be cautious handing out `sudo` privileges to software on your OS? If so, you should be just as careful when granting your workload permissions to the {{< glossary_tooltip text="Kubernetes API" term_id="kubernetes-api" >}} server! The API server is the gateway for your cluster's source of truth; it provides endpoints to read or modify cluster state. - -You (or your {{< glossary_tooltip text="cluster operator" term_id="cluster-operator" >}}) can lock down API access with the following: - -* **{{< glossary_tooltip text="ServiceAccounts" term_id="service-account" >}}** - An "identity" that your Pods can be tied to -* **{{< glossary_tooltip text="RBAC" term_id="rbac" >}}** - One way of granting your ServiceAccount explicit permissions - -For even more comprehensive reading about security best practices, consider checking out the following topics: - -* {{< link text="Authentication" url="/docs/reference/access-authn-authz/authentication/" >}} (Is the user who they say they are?) -* {{< link text="Authorization" url="/docs/admin/authorization/" >}} (Does the user actually have permissions to do what they're asking?) - -#### Resource isolation and management - -If your workloads are operating in a *multi-tenant* environment with multiple teams or projects, your container(s) are not necessarily running alone on their node(s). They are sharing node resources with other containers which you do not own. - -Even if your cluster operator is managing the cluster on your behalf, it is helpful to be aware of the following: - -* **{{< glossary_tooltip text="Namespaces" term_id="namespace" >}}**, used for isolation -* **{{< link text="Resource quotas" url="/docs/concepts/policy/resource-quotas/" >}}**, which affect what your team's workloads can use -* **{{< link text="Memory" url="/docs/tasks/configure-pod-container/assign-memory-resource/" >}} and {{< link text="CPU" url="/docs/tasks/configure-pod-container/assign-cpu-resource/" >}} requests**, for a given Pod or container -* **{{< link text="Monitoring" url="/docs/tasks/debug-application-cluster/resource-usage-monitoring/" >}}**, both on the cluster level and the app level - -This list may not be completely comprehensive, but many teams have existing processes that take care of all this. If this is not the case, you'll find the Kubernetes documentation fairly rich in detail. - -## Improve your dev workflow with tooling - -As an app developer, you'll likely encounter the following tools in your workflow. - -#### kubectl - -`kubectl` is a command-line tool that allows you to easily read or modify your Kubernetes cluster. It provides convenient, short commands for common operations like scaling app instances and getting node info. How does kubectl do this? It's basically just a user-friendly wrapper for making API requests. It's written using {{< link text="client-go" url="https://github.com/kubernetes/client-go/#client-go" >}}, the Go library for the Kubernetes API. - -To learn about the most commonly used kubectl commands, check out the {{< link text="kubectl cheatsheet" url="/docs/reference/kubectl/cheatsheet/" >}}. 
It explains topics such as the following: - -* {{< link text="kubeconfig files" url="/docs/tasks/access-application-cluster/configure-access-multiple-clusters/" >}} - Your kubeconfig file tells kubectl what cluster to talk to, and can reference multiple clusters (such as dev and prod). -* {{< link text="The various output formats available" url="/docs/reference/kubectl/cheatsheet/#formatting-output" >}} - This is useful to know when you are using `kubectl get` to list information about certain API objects. - -* {{< link text="The JSONPath output format" url="/docs/reference/kubectl/jsonpath/" >}} - This is related to the output formats above. JSONPath is especially useful for parsing specific subfields out of `kubectl get` output (such as the URL of a {{< glossary_tooltip text="Service" term_id="service" >}}). - -* {{< link text="`kubectl run` vs `kubectl apply`" url="/docs/reference/kubectl/conventions/" >}} - This ties into the [declarative configuration](#declarative-configuration) discussion in the previous section. - -For the full list of kubectl commands and their options, check out {{< link text="the reference guide" url="/docs/reference/generated/kubectl/kubectl-commands" >}}. - -#### Helm - -To leverage pre-packaged configurations from the community, you can use **{{< glossary_tooltip text="Helm charts" term_id="helm-chart" >}}**. - -Helm charts package up YAML configurations for specific apps like Jenkins and Postgres. You can then install and run these apps on your cluster with minimal extra configuration. This approach makes the most sense for "off-the-shelf" components which do not require much custom implementation logic. - -For writing your own Kubernetes app configurations, there is a {{< link text="thriving ecosystem of tools" url="https://docs.google.com/a/heptio.com/spreadsheets/d/1FCgqz1Ci7_VCz_wdh8vBitZ3giBtac_H8SBw4uxnrsE/edit?usp=drive_web" >}} that you may find useful. - -## Explore additional resources - -#### References -Now that you're fairly familiar with Kubernetes, you may find it useful to browse the following reference pages. Doing so provides a high level view of what other features may exist: - -* {{< link text="Commonly used `kubectl` commands" url="/docs/reference/kubectl/cheatsheet/" >}} -* {{< link text="Kubernetes API reference" url="{{ reference_docs_url }}" >}} -* {{< link text="Standardized Glossary" url="/docs/reference/glossary/" >}} - -In addition, {{< link text="the Kubernetes Blog" url="https://kubernetes.io/blog/" >}} often has helpful posts on Kubernetes design patterns and case studies. - -#### What's next -If you feel fairly comfortable with the topics on this page and want to learn more, check out the following user journeys: - -* {{< link text="Advanced App Developer" url="/docs/user-journeys/users/application-developer/advanced/" >}} - Dive deeper, with the next level of this journey. -* {{< link text="Foundational Cluster Operator" url="/docs/user-journeys/users/cluster-operator/foundational/" >}} - Build breadth, by exploring other journeys. 
-{{% /capture %}} - - diff --git a/content/en/docs/user-journeys/users/cluster-operator/foundational.md b/content/en/docs/user-journeys/users/cluster-operator/foundational.md deleted file mode 100644 index e74b7c5964e15..0000000000000 --- a/content/en/docs/user-journeys/users/cluster-operator/foundational.md +++ /dev/null @@ -1,96 +0,0 @@ ---- -reviewers: -- chenopis -layout: docsportal -css: /css/style_user_journeys.css -js: https://use.fontawesome.com/4bcc658a89.js, https://cdnjs.cloudflare.com/ajax/libs/prefixfree/1.0.7/prefixfree.min.js -title: Foundational -track: "USERS › CLUSTER OPERATOR › FOUNDATIONAL" -content_template: templates/user-journey-content ---- - -{{% capture overview %}} - -If you want to learn how to get started managing and operating a Kubernetes cluster, this page and the linked topics introduce you to the foundational concepts and tasks. -This page introduces you to a Kubernetes cluster and key concepts to understand and manage it. The content focuses primarily on the cluster itself rather than the software running within the cluster. - -{{% /capture %}} - - - -{{% capture body %}} - -## Get an overview of Kubernetes - -If you have not already done so, start your understanding by reading through [What is Kubernetes?](/docs/concepts/overview/what-is-kubernetes/), which introduces a number of basic concepts and terms. - -Kubernetes is quite flexible, and a cluster can be run in a wide variety of places. You can interact with Kubernetes entirely on your own laptop or local development machine with it running within a virtual machine. Kubernetes can also run on virtual machines hosted either locally or in a cloud provider, and you can run a Kubernetes cluster on bare metal. - -A cluster is made up of one or more [Nodes](/docs/concepts/architecture/nodes/); where a node is a physical or virtual machine. -If there is more than one node in your cluster then the nodes are connected with a [cluster network](/docs/concepts/cluster-administration/networking/). -Regardless of how many nodes, all Kubernetes clusters generally have the same components, which are described in [Kubernetes Components](/docs/concepts/overview/components). - - -## Learn about Kubernetes basics - -A good way to become familiar with how to manage and operate a Kubernetes cluster is by setting one up. -One of the most compact ways to experiment with a cluster is [Installing and using Minikube](/docs/tasks/tools/install-minikube/). -Minikube is a command line tool for setting up and running a single-node cluster within a virtual machine on your local laptop or development computer. Minikube is even available through your browser at the [Katacoda Kubernetes Playground](https://www.katacoda.com/courses/kubernetes/playground). -Katacoda provides a browser-based connection to a single-node cluster, using minikube behind the scenes, to support a number of tutorials to explore Kubernetes. You can also leverage the web-based [Play with Kubernetes](http://labs.play-with-k8s.com/) to the same ends - a temporary cluster to play with on the web. - -You interact with Kubernetes either through a dashboard, an API, or using a command-line tool (such as `kubectl`) that interacts with the Kubernetes API. -Be familiar with [Organizing Cluster Access](/docs/concepts/configuration/organize-cluster-access-kubeconfig/) by using configuration files. -The Kubernetes API exposes a number of resources that provide the building blocks and abstractions that are used to run software on Kubernetes. 
-Learn more about these resources at [Understanding Kubernetes Objects](/docs/concepts/overview/working-with-objects/kubernetes-objects). -These resources are covered in a number of articles within the Kubernetes documentation. - -* [Pod Overview](/docs/concepts/workloads/pods/pod-overview/) - * [Pods](/docs/concepts/workloads/pods/pod/) - * [ReplicaSets](/docs/concepts/workloads/controllers/replicaset/) - * [Deployments](/docs/concepts/workloads/controllers/deployment/) - * [Garbage Collection](/docs/concepts/workloads/controllers/garbage-collection/) - * [Container Images](/docs/concepts/containers/images/) - * [Container Environment Variables](/docs/concepts/containers/container-environment-variables/) -* [Labels and Selectors](/docs/concepts/overview/working-with-objects/labels/) -* [Namespaces](/docs/concepts/overview/working-with-objects/namespaces/) - * [Namespaces Walkthrough](/docs/tasks/administer-cluster/namespaces-walkthrough/) -* [Services](/docs/concepts/services-networking/service/) -* [Annotations](/docs/concepts/overview/working-with-objects/annotations/) -* [ConfigMaps](/docs/tasks/configure-pod-container/configure-pod-configmap/) -* [Secrets](/docs/concepts/configuration/secret/) - -As a cluster operator you may not need to use all these resources, although you should be familiar with them to understand how the cluster is being used. -There are a number of additional resources that you should be aware of, some listed under [Intermediate Resources](/docs/user-journeys/users/cluster-operator/intermediate#section-1). -You should also be familiar with [how to manage kubernetes resources](/docs/concepts/cluster-administration/manage-deployment/) -and [supported versions and version skew between cluster components](/docs/setup/release/version-skew-policy/). - -## Get information about your cluster - -You can [access clusters using the Kubernetes API](/docs/tasks/administer-cluster/access-cluster-api/). -If you are not already familiar with how to do this, you can review the [introductory tutorial](/docs/tutorials/kubernetes-basics/explore-intro/). -Using `kubectl`, you can retrieve information about your Kubernetes cluster very quickly. -To get basic information about the nodes in your cluster run the command `kubectl get nodes`. -You can get more detailed information for the same nodes with the command `kubectl describe nodes`. -You can see the status of the core of kubernetes with the command `kubectl get componentstatuses`. 
- -Some additional resources for getting information about your cluster and how it is operating include: - -* [Tools for Monitoring Compute, Storage, and Network Resources](/docs/tasks/debug-application-cluster/resource-usage-monitoring/) -* [Resource metrics pipeline](/docs/tasks/debug-application-cluster/resource-metrics-pipeline/) - * [Metrics](/docs/concepts/cluster-administration/controller-metrics/) - -## Explore additional resources - -### Tutorials - -* [Kubernetes Basics](/docs/tutorials/kubernetes-basics/) -* [Configuring Redis with a ConfigMap](/docs/tutorials/configuration/configure-redis-using-configmap/) -* Stateless Applications - * [Deploying PHP Guestbook with Redis](/docs/tutorials/stateless-application/guestbook/) - * [Expose an External IP address to access an application](/docs/tutorials/stateless-application/expose-external-ip-address/) - -{{% /capture %}} diff --git a/content/en/docs/user-journeys/users/cluster-operator/intermediate.md b/content/en/docs/user-journeys/users/cluster-operator/intermediate.md deleted file mode 100644 index b590e4862cef2..0000000000000 --- a/content/en/docs/user-journeys/users/cluster-operator/intermediate.md +++ /dev/null @@ -1,109 +0,0 @@ ---- -reviewers: -- chenopis -layout: docsportal -css: /css/style_user_journeys.css -js: https://use.fontawesome.com/4bcc658a89.js, https://cdnjs.cloudflare.com/ajax/libs/prefixfree/1.0.7/prefixfree.min.js -title: Intermediate -track: "USERS > CLUSTER OPERATOR > INTERMEDIATE" -content_template: templates/user-journey-content ---- - -{{% capture overview %}} - -If you are a cluster operator looking to expand your grasp of Kubernetes, this page and its linked topics extend the information provided on the [foundational cluster operator page](/docs/user-journeys/users/cluster-operator/foundational). From this page you can get information on key Kubernetes tasks needed to manage a complete production cluster. - -{{% /capture %}} - -{{% capture body %}} - -## Work with ingress, networking, storage, and workloads - -Introductions to Kubernetes typically discuss simple stateless applications. 
As you move into more complex development, testing, and production environments, you need to consider more complex cases: - -Communication: Ingress and Networking - -* [Ingress](/docs/concepts/services-networking/ingress/) - -Storage: Volumes and PersistentVolumes - -* [Volumes](/docs/concepts/storage/volumes/) -* [Persistent Volumes](/docs/concepts/storage/persistent-volumes/) - -Workloads - -* [DaemonSets](/docs/concepts/workloads/controllers/daemonset/) -* [Stateful Sets](/docs/concepts/workloads/controllers/statefulset/) -* [Jobs](/docs/concepts/workloads/controllers/jobs-run-to-completion/) -* [CronJobs](/docs/concepts/workloads/controllers/cron-jobs/) - -Pods - -* [Pod Lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/) - * [Init Containers](/docs/concepts/workloads/pods/init-containers/) - * [Pod Presets](/docs/concepts/workloads/pods/podpreset/) - * [Container Lifecycle Hooks](/docs/concepts/containers/container-lifecycle-hooks/) - -And how Pods work with scheduling, priority, disruptions: - -* [Taints and Tolerations](/docs/concepts/configuration/taint-and-toleration/) -* [Pods and Priority](/docs/concepts/configuration/pod-priority-preemption/) -* [Disruptions](/docs/concepts/workloads/pods/disruptions/) -* [Assigning Pods to Nodes](/docs/concepts/configuration/assign-pod-node/) -* [Managing Compute Resources for Containers](/docs/concepts/configuration/manage-compute-resources-container/) -* [Configuration Best Practices](/docs/concepts/configuration/overview/) - -## Implement security best practices - -Securing your cluster includes work beyond the scope of Kubernetes itself. - -In Kubernetes, you configure access control: - -* [Controlling Access to the Kubernetes API](/docs/reference/access-authn-authz/controlling-access/) -* [Authenticating](/docs/reference/access-authn-authz/authentication/) -* [Using Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/) - -You also configure authorization. That is, you determine not just how users and services authenticate to the API server, or whether they have access, but also what resources they have access to. Role-based access control (RBAC) is the recommended mechanism for controlling authorization to Kubernetes resources. Other authorization modes are available for more specific use cases. - -* [Authorization Overview](/docs/reference/access-authn-authz/authorization/) -* [Using RBAC Authorization](/docs/reference/access-authn-authz/rbac/) - -You should create Secrets to hold sensitive data such as passwords, tokens, or keys. Be aware, however, that there are limitations to the protections that a Secret can provide. See [the Risks section of the Secrets documentation](/docs/concepts/configuration/secret/#risks). - - - -## Implement custom logging and monitoring - -Monitoring the health and state of your cluster is important. Collecting metrics, logging, and providing access to that information are common needs. Kubernetes provides some basic logging structure, and you may want to use additional tools to help aggregate and analyze log data. - -Start with the [basics on Kubernetes logging](/docs/concepts/cluster-administration/logging/) to understand how containers do logging and common patterns. Cluster operators often want to add something to gather and aggregate those logs. 
See the following topics: - -* [Logging Using Elasticsearch and Kibana](/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/) -* [Logging Using Stackdriver](/docs/tasks/debug-application-cluster/logging-stackdriver/) - -Like log aggregation, many clusters utilize additional software to help capture metrics and display them. There is an overview of tools at [Tools for Monitoring Compute, Storage, and Network Resources](/docs/tasks/debug-application-cluster/resource-usage-monitoring/). -Kubernetes also supports a [resource metrics pipeline](/docs/tasks/debug-application-cluster/resource-metrics-pipeline/) which can be used by Horizontal Pod Autoscaler with custom metrics. - -[Prometheus](https://prometheus.io/), another {{< glossary_tooltip text="CNCF" term_id="cncf" >}} project, is a common choice to support capture and temporary collection of metrics. There are several options for installing Prometheus, including using the [stable/prometheus](https://github.com/kubernetes/charts/tree/master/stable/prometheus) [helm](https://helm.sh/) chart, and CoreOS provides a [prometheus operator](https://github.com/coreos/prometheus-operator) and [kube-prometheus](https://github.com/coreos/prometheus-operator/tree/master/contrib/kube-prometheus), which adds on Grafana dashboards and common configurations. - -A common configuration on [Minikube](https://github.com/kubernetes/minikube) and some Kubernetes clusters uses [Heapster](https://github.com/kubernetes/heapster) -[along with InfluxDB and Grafana](https://github.com/kubernetes/heapster/blob/master/docs/influxdb.md). -There is a [walkthrough of how to install this configuration in your cluster](https://blog.kublr.com/how-to-utilize-the-heapster-influxdb-grafana-stack-in-kubernetes-for-monitoring-pods-4a553f4d36c9). -As of Kubernetes 1.11, Heapster is deprecated, as per [sig-instrumentation](https://github.com/kubernetes/community/tree/master/sig-instrumentation). See [Prometheus vs. Heapster vs. Kubernetes Metrics APIs](https://brancz.com/2018/01/05/prometheus-vs-heapster-vs-kubernetes-metrics-apis/) for more information alternatives. - -Hosted monitoring, APM, or data analytics services such as [Datadog](https://docs.datadoghq.com/integrations/kubernetes/) or [Instana](https://www.instana.com/supported-integrations/kubernetes-monitoring/) also offer Kubernetes integration. 
- -## Additional resources - -Cluster Administration: - -* [Troubleshoot Clusters](/docs/tasks/debug-application-cluster/debug-cluster/) -* [Debug Pods and Replication Controllers](/docs/tasks/debug-application-cluster/debug-pod-replication-controller/) -* [Debug Init Containers](/docs/tasks/debug-application-cluster/debug-init-containers/) -* [Debug Stateful Sets](/docs/tasks/debug-application-cluster/debug-stateful-set/) -* [Debug Applications](/docs/tasks/debug-application-cluster/debug-application/) -* [Using explorer to investigate your cluster](https://github.com/kubernetes/examples/blob/master/staging/explorer/README.md) - -{{% /capture %}} - - diff --git a/content/en/examples/application/php-apache.yaml b/content/en/examples/application/php-apache.yaml new file mode 100644 index 0000000000000..5eb04cfb899ad --- /dev/null +++ b/content/en/examples/application/php-apache.yaml @@ -0,0 +1,39 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: php-apache +spec: + selector: + matchLabels: + run: php-apache + replicas: 1 + template: + metadata: + labels: + run: php-apache + spec: + containers: + - name: php-apache + image: k8s.gcr.io/hpa-example + ports: + - containerPort: 80 + resources: + limits: + cpu: 500m + requests: + cpu: 200m + +--- + +apiVersion: v1 +kind: Service +metadata: + name: php-apache + labels: + run: php-apache +spec: + ports: + - port: 80 + selector: + run: php-apache + diff --git a/content/en/examples/service/networking/network-policy-allow-all-egress.yaml b/content/en/examples/service/networking/network-policy-allow-all-egress.yaml new file mode 100644 index 0000000000000..42b2a2a296655 --- /dev/null +++ b/content/en/examples/service/networking/network-policy-allow-all-egress.yaml @@ -0,0 +1,11 @@ +--- +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: allow-all-egress +spec: + podSelector: {} + egress: + - {} + policyTypes: + - Egress diff --git a/content/en/examples/service/networking/network-policy-allow-all-ingress.yaml b/content/en/examples/service/networking/network-policy-allow-all-ingress.yaml new file mode 100644 index 0000000000000..462912dae4eb3 --- /dev/null +++ b/content/en/examples/service/networking/network-policy-allow-all-ingress.yaml @@ -0,0 +1,11 @@ +--- +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: allow-all-ingress +spec: + podSelector: {} + ingress: + - {} + policyTypes: + - Ingress diff --git a/content/en/examples/service/networking/network-policy-default-deny-all.yaml b/content/en/examples/service/networking/network-policy-default-deny-all.yaml new file mode 100644 index 0000000000000..589f15eb3e0c4 --- /dev/null +++ b/content/en/examples/service/networking/network-policy-default-deny-all.yaml @@ -0,0 +1,9 @@ +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: default-deny-all +spec: + podSelector: {} + policyTypes: + - Ingress + - Egress diff --git a/content/en/examples/service/networking/network-policy-default-deny-egress.yaml b/content/en/examples/service/networking/network-policy-default-deny-egress.yaml new file mode 100644 index 0000000000000..a4659e14174db --- /dev/null +++ b/content/en/examples/service/networking/network-policy-default-deny-egress.yaml @@ -0,0 +1,9 @@ +--- +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: default-deny-egress +spec: + podSelector: {} + policyTypes: + - Egress diff --git a/content/en/examples/service/networking/network-policy-default-deny-ingress.yaml 
b/content/en/examples/service/networking/network-policy-default-deny-ingress.yaml new file mode 100644 index 0000000000000..e8238024878f4 --- /dev/null +++ b/content/en/examples/service/networking/network-policy-default-deny-ingress.yaml @@ -0,0 +1,9 @@ +--- +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: default-deny-ingress +spec: + podSelector: {} + policyTypes: + - Ingress diff --git a/content/fr/docs/concepts/configuration/secret.md b/content/fr/docs/concepts/configuration/secret.md new file mode 100644 index 0000000000000..5c7ac8d1b200b --- /dev/null +++ b/content/fr/docs/concepts/configuration/secret.md @@ -0,0 +1,981 @@ +--- +title: Secrets +content_template: templates/concept +feature: + title: Gestion du secret et de la configuration + description: > + Déployez et mettez à jour les secrets et la configuration des applications sans reconstruire votre image et sans dévoiler les secrets de la configuration de vos applications. +weight: 50 +--- + + +{{% capture overview %}} + +Les objets `secret` de Kubernetes vous permettent de stocker et de gérer des informations sensibles, telles que les mots de passe, les jetons OAuth et les clés ssh. +Mettre ces informations dans un `secret` est plus sûr et plus flexible que de le mettre en dur dans la définition d'un {{< glossary_tooltip term_id="pod" >}} ou dans une {{< glossary_tooltip text="container image" term_id="image" >}}. +Voir [Document de conception des secrets](https://git.k8s.io/community/contributors/design-proposals/auth/secrets.md) pour plus d'informations. + +{{% /capture %}} + +{{% capture body %}} + +## Présentation des secrets + +Un secret est un objet qui contient une petite quantité de données sensibles telles qu'un mot de passe, un jeton ou une clé. +De telles informations pourraient autrement être placées dans une spécification de pod ou dans une image; le placer dans un objet secret permet de mieux contrôler la façon dont il est utilisé et réduit le risque d'exposition accidentelle. + +Les utilisateurs peuvent créer des secrets et le système crée également des secrets. + +Pour utiliser un secret, un pod doit référencer le secret. +Un secret peut être utilisé avec un pod de deux manières: sous forme de fichiers dans un {{< glossary_tooltip text="volume" term_id="volume" >}} monté sur un ou plusieurs de ses conteneurs, ou utilisé par kubelet lorsque vous récupérez des images pour le pod. + +### Secrets intégrés + +#### Les comptes de service créent et attachent automatiquement des secrets avec les informations d'identification de l'API + +Kubernetes crée automatiquement des secrets qui contiennent des informations d'identification pour accéder à l'API et il modifie automatiquement vos pods pour utiliser ce type de secret. + +La création et l'utilisation automatiques des informations d'identification de l'API peuvent être désactivées ou remplacées si vous le souhaitez. +Cependant, si tout ce que vous avez à faire est d'accéder en toute sécurité à l'apiserver, il s'agit de la méthode recommandée. + +Voir la documentation des [Compte de service](/docs/tasks/configure-pod-container/configure-service-account/) pour plus d'informations sur le fonctionnement des comptes de service. + +### Créer vos propres secrets + +#### Créer un secret avec kubectl create secret + +Supposons que certains pods doivent accéder à une base de données. +Le nom d'utilisateur et le mot de passe que les pods doivent utiliser se trouvent dans les fichiers `./username.txt` et `./password.txt` sur votre machine locale. 
+ +```shell +# Create files needed for rest of example. +echo -n 'admin' > ./username.txt +echo -n '1f2d1e2e67df' > ./password.txt +``` + +La commande `kubectl create secret` regroupe ces fichiers dans un secret et crée l'objet sur l'Apiserver. + +```shell +kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt +``` + +```text +secret "db-user-pass" created +``` + +{{< note >}} +Les caractères spéciaux tels que `$`, `\`, `*`, et `!` seront interprétés par votre [shell](https://en.wikipedia.org/wiki/Shell_\(computing\)) et nécessitent d'être échappés. +Dans les shells les plus courants, le moyen le plus simple d'échapper au mot de passe est de l'entourer de guillemets simples (`'`). +Par exemple, si votre mot de passe réel est `S!B\*d$zDsb`, vous devez exécuter la commande de cette façon: + +```text +kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password='S!B\*d$zDsb' +``` + +Vous n'avez pas besoin d'échapper les caractères spéciaux dans les mots de passe des fichiers (`--from-file`). +{{< /note >}} + +Vous pouvez vérifier que le secret a été créé comme ceci: + +```shell +kubectl get secrets +``` + +```text +NAME TYPE DATA AGE +db-user-pass Opaque 2 51s +``` + +```text +kubectl describe secrets/db-user-pass +``` + +```text +Name: db-user-pass +Namespace: default +Labels: +Annotations: + +Type: Opaque + +Data +==== +password.txt: 12 bytes +username.txt: 5 bytes +``` + +{{< note >}} +`kubectl get` et `kubectl describe` évitent d'afficher le contenu d'un secret par défaut. +Il s'agit de protéger le secret contre une exposition accidentelle à un spectateur de l'écran ou contre son stockage dans un journal de terminal. +{{< /note >}} + +Voir [décoder un secret](#decoding-a-secret) pour voir le contenu d'un secret. + +#### Création manuelle d'un secret + +Vous pouvez également créer un secret dans un fichier d'abord, au format json ou yaml, puis créer cet objet. +Le [secret](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core) contient deux table de hachage: `data` et `stringData`. +Le champ `data` est utilisé pour stocker des données arbitraires, encodées en base64. +Le champ `stringData` est fourni pour plus de commodité et vous permet de fournir des données secrètes sous forme de chaînes non codées. + +Par exemple, pour stocker deux chaînes dans un secret à l'aide du champ `data`, convertissez-les en base64 comme suit: + +```shell +echo -n 'admin' | base64 +YWRtaW4= +echo -n '1f2d1e2e67df' | base64 +MWYyZDFlMmU2N2Rm +``` + +Écrivez un secret qui ressemble à ceci: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: mysecret +type: Opaque +data: + username: YWRtaW4= + password: MWYyZDFlMmU2N2Rm +``` + +Maintenant, créez le secret en utilisant [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply): + +```text +kubectl apply -f ./secret.yaml +``` + +```text +secret "mysecret" created +``` + +Pour certains scénarios, vous pouvez utiliser le champ `stringData` à la place. +Ce champ vous permet de mettre une chaîne non codée en base64 directement dans le secret, et la chaîne sera codée pour vous lorsque le secret sera créé ou mis à jour. + +Un exemple pratique de cela pourrait être le suivant: vous déployez une application qui utilise un secret pour stocker un fichier de configuration. +Vous souhaitez remplir des parties de ce fichier de configuration pendant votre processus de déploiement. 
+ +Si votre application utilise le fichier de configuration suivant: + +```yaml +apiUrl: "https://my.api.com/api/v1" +username: "user" +password: "password" +``` + +Vous pouvez stocker cela dans un secret en utilisant ce qui suit: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: mysecret +type: Opaque +stringData: + config.yaml: |- + apiUrl: "https://my.api.com/api/v1" + username: {{username}} + password: {{password}} +``` + +Votre outil de déploiement pourrait alors remplacer les variables de modèle `{{username}}` et `{{password}}` avant d'exécuter `kubectl apply`. + +`stringData` est un champ de commodité en écriture seule. +Il n'est jamais affiché lors de la récupération des secrets. +Par exemple, si vous exécutez la commande suivante: + +```text +kubectl get secret mysecret -o yaml +``` + +La sortie sera similaire à: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + creationTimestamp: 2018-11-15T20:40:59Z + name: mysecret + namespace: default + resourceVersion: "7225" + uid: c280ad2e-e916-11e8-98f2-025000000001 +type: Opaque +data: + config.yaml: YXBpVXJsOiAiaHR0cHM6Ly9teS5hcGkuY29tL2FwaS92MSIKdXNlcm5hbWU6IHt7dXNlcm5hbWV9fQpwYXNzd29yZDoge3twYXNzd29yZH19 +``` + +Si un champ est spécifié à la fois dans `data` et `stringData`, la valeur de `stringData` est utilisée. +Par exemple, la définition de secret suivante: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: mysecret +type: Opaque +data: + username: YWRtaW4= +stringData: + username: administrator +``` + +Donnera le secret suivant: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + creationTimestamp: 2018-11-15T20:46:46Z + name: mysecret + namespace: default + resourceVersion: "7579" + uid: 91460ecb-e917-11e8-98f2-025000000001 +type: Opaque +data: + username: YWRtaW5pc3RyYXRvcg== +``` + +Où `YWRtaW5pc3RyYXRvcg ==` décode en `administrateur`. + +Les clés de `data` et `stringData` doivent être composées de caractères alphanumériques, '-', '_' ou '.'. + +**Encoding Note:** Les valeurs JSON et YAML sérialisées des données secrètes sont codées sous forme de chaînes base64. +Les sauts de ligne ne sont pas valides dans ces chaînes et doivent être omis. +Lors de l'utilisation de l'utilitaire `base64` sur Darwin / macOS, les utilisateurs doivent éviter d'utiliser l'option `-b` pour diviser les longues lignes. +Inversement, les utilisateurs Linux *devraient* ajouter l'option `-w 0` aux commandes `base64` ou le pipeline `base64 | tr -d '\ n'` si l'option `-w` n'est pas disponible. + +#### Création d'un secret à partir du générateur + +Kubectl prend en charge [la gestion des objets à l'aide de Kustomize](/docs/tasks/manage-kubernetes-objects/kustomization/) depuis 1.14. +Avec cette nouvelle fonctionnalité, vous pouvez également créer un secret à partir de générateurs, puis l'appliquer pour créer l'objet sur l'Apiserver. +Les générateurs doivent être spécifiés dans un `kustomization.yaml` à l'intérieur d'un répertoire. + +Par exemple, pour générer un secret à partir des fichiers `./username.txt` et `./password.txt` + +```shell +# Create a kustomization.yaml file with SecretGenerator +cat <./kustomization.yaml +secretGenerator: +- name: db-user-pass + files: + - username.txt + - password.txt +EOF +``` + +Appliquez le répertoire de personnalisation pour créer l'objet secret. + +```text +$ kubectl apply -k . 
+secret/db-user-pass-96mffmfh4k created +``` + +Vous pouvez vérifier que le secret a été créé comme ceci: + +```text +$ kubectl get secrets +NAME TYPE DATA AGE +db-user-pass-96mffmfh4k Opaque 2 51s + +$ kubectl describe secrets/db-user-pass-96mffmfh4k +Name: db-user-pass +Namespace: default +Labels: +Annotations: + +Type: Opaque + +Data +==== +password.txt: 12 bytes +username.txt: 5 bytes +``` + +Par exemple, pour générer un secret à partir des littéraux `username=admin` et `password=secret`, vous pouvez spécifier le générateur de secret dans `kustomization.yaml` comme: + +```shell +# Create a kustomization.yaml file with SecretGenerator +$ cat <./kustomization.yaml +secretGenerator: +- name: db-user-pass + literals: + - username=admin + - password=secret +EOF +``` + +Appliquer le repertoire kustomization pour créer l'objet secret. + +```shell +$ kubectl apply -k . +secret/db-user-pass-dddghtt9b5 created +``` + +{{< note >}} +Le nom des secrets généré a un suffixe ajouté en hachant le contenu. +Cela garantit qu'un nouveau secret est généré chaque fois que le contenu est modifié. +{{< /note >}} + +#### Décoder un secret + +Les secrets peuvent être récupérés via la command `kubectl get secret`. +Par exemple, pour récupérer le secret créé dans la section précédente: + +```shell +kubectl get secret mysecret -o yaml +``` + +```yaml +apiVersion: v1 +kind: Secret +metadata: + creationTimestamp: 2016-01-22T18:41:56Z + name: mysecret + namespace: default + resourceVersion: "164619" + uid: cfee02d6-c137-11e5-8d73-42010af00002 +type: Opaque +data: + username: YWRtaW4= + password: MWYyZDFlMmU2N2Rm +``` + +Décodez le champ du mot de passe: + +```shell +echo 'MWYyZDFlMmU2N2Rm' | base64 --decode +``` + +```text +1f2d1e2e67df +``` + +#### Modification d'un secret + +Un secret existant peut être modifié avec la commande suivante: + +```text +kubectl edit secrets mysecret +``` + +Cela ouvrira l'éditeur configuré par défaut et permettra de mettre à jour les valeurs secrètes codées en base64 dans le champ `data`: + +```yaml +# Please edit the object below. Lines beginning with a '#' will be ignored, +# and an empty file will abort the edit. If an error occurs while saving this file will be +# reopened with the relevant failures. +# +apiVersion: v1 +data: + username: YWRtaW4= + password: MWYyZDFlMmU2N2Rm +kind: Secret +metadata: + annotations: + kubectl.kubernetes.io/last-applied-configuration: { ... } + creationTimestamp: 2016-01-22T18:41:56Z + name: mysecret + namespace: default + resourceVersion: "164619" + uid: cfee02d6-c137-11e5-8d73-42010af00002 +type: Opaque +``` + +## Utiliser les secrets + +Les secrets peuvent être montés en tant que volumes de données ou être exposés en tant que {{< glossary_tooltip text="variables d'environnement" term_id="container-env-variables" >}} à utiliser par un conteneur dans un Pod. +Ils peuvent également être utilisés par d'autres parties du système, sans être directement exposés aux Pods. +Par exemple, ils peuvent détenir des informations d'identification que d'autres parties du système doivent utiliser pour interagir avec des systèmes externes en votre nom. + +### Utilisation de secrets comme fichiers d'un pod + +Pour consommer un secret dans un volume dans un pod: + +1. Créez un secret ou utilisez-en un déjà existant. + Plusieurs Pods peuvent référencer le même secret. +1. Modifiez la définition de votre Pod pour ajouter un volume sous `.spec.volumes[]`. + Nommez le volume et ayez un champ `.spec.volumes[].secret.secretName` égal au nom de l'objet secret. +1. 
Ajouter un `.spec.containers[].volumeMounts[]` à chaque conteneur qui a besoin du secret. + Spécifier `.spec.containers[].volumeMounts[].readOnly = true` et `.spec.containers[].volumeMounts[].mountPath` à un nom de répertoire inutilisé où vous souhaitez que les secrets apparaissent. +1. Modifiez votre image et/ou votre ligne de commande pour que le programme recherche les fichiers dans ce répertoire. + Chaque clé de la carte secrète `data` devient le nom de fichier sous `mountPath`. + +Voici un exemple de pod qui monte un secret dans un volume: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: mypod +spec: + containers: + - name: mypod + image: redis + volumeMounts: + - name: foo + mountPath: "/etc/foo" + readOnly: true + volumes: + - name: foo + secret: + secretName: mysecret +``` + +Chaque secret que vous souhaitez utiliser doit être mentionné dans `.spec.volumes`. + +S'il y a plusieurs conteneurs dans le pod, alors chaque conteneur a besoin de son propre bloc `volumeMounts`, mais un seul `.spec.volumes` est nécessaire par secret. + +Vous pouvez regrouper de nombreux fichiers en un seul secret ou utiliser de nombreux secrets, selon le cas. + +### Projection de clés secrètes vers des chemins spécifiques + +Nous pouvons également contrôler les chemins dans le volume où les clés secrètes sont projetées. +Vous pouvez utiliser le champ `.spec.volumes []. Secret.items` pour changer le chemin cible de chaque clé: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: mypod +spec: + containers: + - name: mypod + image: redis + volumeMounts: + - name: foo + mountPath: "/etc/foo" + readOnly: true + volumes: + - name: foo + secret: + secretName: mysecret + items: + - key: username + path: my-group/my-username +``` + +Que se passera-t-il: + +* `username` est stocké dans le fichier `/etc/foo/my-group/my-username` au lieu de `/etc/foo/username`. +* `password` n'est pas projeté + +Si `.spec.volumes[].secret.items` est utilisé, seules les clés spécifiées dans `items` sont projetées. +Pour consommer toutes les clés du secret, toutes doivent être répertoriées dans le champ `items`. +Toutes les clés répertoriées doivent exister dans le secret correspondant. +Sinon, le volume n'est pas créé. + +### Autorisations de fichiers secrets + +Vous pouvez également spécifier les bits de mode d'autorisation des fichiers contenant les parties d'un secret. +Si vous n'en spécifiez pas, `0644` est utilisé par défaut. +Vous pouvez spécifier un mode par défaut pour tout le volume secret et remplacer par clé si nécessaire. + +Par exemple, vous pouvez spécifier un mode par défaut comme celui-ci: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: mypod +spec: + containers: + - name: mypod + image: redis + volumeMounts: + - name: foo + mountPath: "/etc/foo" + volumes: + - name: foo + secret: + secretName: mysecret + defaultMode: 256 +``` + +Ensuite, le secret sera monté sur `/etc/foo` et tous les fichiers créés par le montage de volume secret auront la permission `0400`. + +Notez que la spécification JSON ne prend pas en charge la notation octale, utilisez donc la valeur 256 pour les autorisations 0400. +Si vous utilisez yaml au lieu de json pour le pod, vous pouvez utiliser la notation octale pour spécifier les autorisations de manière plus naturelle. 
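The decimal/octal equivalence above is easy to get wrong, so here is a minimal hedged sketch of the same volume written with YAML octal notation (0400 is parsed by YAML as the same value as decimal 256; the pod and secret names are the ones used in the surrounding examples):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: redis
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
  volumes:
  - name: foo
    secret:
      secretName: mysecret
      # YAML accepts octal literals, so 0400 here is the same value as decimal 256 above
      defaultMode: 0400
```

Either spelling produces files mounted with permission 0400; the octal form simply reads more naturally in a YAML manifest.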
+ +Vous pouvez aussi utiliser un mapping, comme dans l'exemple précédent, et spécifier des autorisations différentes pour différents fichiers comme celui-ci: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: mypod +spec: + containers: + - name: mypod + image: redis + volumeMounts: + - name: foo + mountPath: "/etc/foo" + volumes: + - name: foo + secret: + secretName: mysecret + items: + - key: username + path: my-group/my-username + mode: 511 +``` + +Dans ce cas, le fichier résultant `/etc/foo/my-group/my-username` aura la valeur d'autorisation de `0777`. +En raison des limitations JSON, vous devez spécifier le mode en notation décimale. + +Notez que cette valeur d'autorisation peut être affichée en notation décimale si vous la lisez plus tard. + +### Consommer des valeurs secrètes à partir de volumes + +À l'intérieur du conteneur qui monte un volume secret, les clés secrètes apparaissent sous forme de fichiers et les valeurs secrètes sont décodées en base 64 et stockées à l'intérieur de ces fichiers. +C'est le résultat des commandes exécutées à l'intérieur du conteneur de l'exemple ci-dessus: + +```shell +ls /etc/foo/ +``` + +```text +username +password +``` + +```shell +cat /etc/foo/username +``` + +```text +admin +``` + +```shell +cat /etc/foo/password +``` + +```text +1f2d1e2e67df +``` + +Le programme dans un conteneur est responsable de la lecture des secrets des fichiers. + +### Les secrets montés sont mis à jour automatiquement + +Lorsqu'un secret déjà consommé dans un volume est mis à jour, les clés projetées sont finalement mises à jour également. +Kubelet vérifie si le secret monté est récent à chaque synchronisation périodique. +Cependant, il utilise son cache local pour obtenir la valeur actuelle du Secret. +Le type de cache est configurable à l'aide de le champ `ConfigMapAndSecretChangeDetectionStrategy` dans la structure [KubeletConfiguration](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go). +Il peut être soit propagé via watch (par défaut), basé sur ttl, ou simplement redirigé toutes les requêtes vers directement kube-apiserver. +Par conséquent, le délai total entre le moment où le secret est mis à jour et le moment où de nouvelles clés sont projetées sur le pod peut être aussi long que la période de synchronisation du kubelet + le délai de propagation du cache, où le délai de propagation du cache dépend du type de cache choisi (cela équivaut au delai de propagation du watch, ttl du cache, ou bien zéro). + +{{< note >}} +Un conteneur utilisant un secret comme un volume [subPath](/docs/concepts/storage/volumes#using-subpath) monté ne recevra pas de mises à jour secrètes. +{{< /note >}} + +### Utilisation de secrets comme variables d'environnement + +Pour utiliser un secret dans une {{< glossary_tooltip text="variable d'environnement" term_id="container-env-variables" >}} dans un pod: + +1. Créez un secret ou utilisez-en un déjà existant. + Plusieurs pods peuvent référencer le même secret. +1. Modifiez la définition de votre pod dans chaque conteneur où vous souhaitez utiliser la valeur d'une clé secrète pour ajouter une variable d'environnement pour chaque clé secrète que vous souhaitez consommer. + La variable d'environnement qui consomme la clé secrète doit remplir le nom et la clé du secret dans `env[].valueFrom.secretKeyRef`. +1. 
Modifiez votre image et/ou votre ligne de commande pour que le programme recherche des valeurs dans les variables d'environnement spécifiées + +Voici un exemple de pod qui utilise des secrets de variables d'environnement: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: secret-env-pod +spec: + containers: + - name: mycontainer + image: redis + env: + - name: SECRET_USERNAME + valueFrom: + secretKeyRef: + name: mysecret + key: username + - name: SECRET_PASSWORD + valueFrom: + secretKeyRef: + name: mysecret + key: password + restartPolicy: Never +``` + +### Consommation de valeurs secrètes à partir de variables d'environnement + +À l'intérieur d'un conteneur qui consomme un secret dans des variables d'environnement, les clés secrètes apparaissent comme des variables d'environnement normales contenant les valeurs décodées en base64 des données secrètes. +C'est le résultat des commandes exécutées à l'intérieur du conteneur de l'exemple ci-dessus: + +```shell +echo $SECRET_USERNAME +``` + +```text +admin +``` + +```shell +echo $SECRET_PASSWORD +``` + +```text +1f2d1e2e67df +``` + +### Utilisation des imagePullSecrets + +Un `imagePullSecret` est un moyen de transmettre un secret qui contient un mot de passe de registre d'images Docker (ou autre) au Kubelet afin qu'il puisse extraire une image privée au nom de votre Pod. + +#### Spécification manuelle d'une imagePullSecret + +L'utilisation de `imagePullSecrets` est décrite dans la [documentation des images](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod) + +### Arranging for imagePullSecrets to be Automatically Attached + +Vous pouvez créer manuellement un `imagePullSecret` et le référencer à partir d'un `serviceAccount`. +Tous les pods créés avec ce `serviceAccount` ou cette valeur par défaut pour utiliser ce `serviceAccount`, verront leur champ `imagePullSecret` défini sur celui du compte de service. +Voir [Ajouter ImagePullSecrets à un compte de service](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account) pour une explication détaillée de ce processus. + +### Montage automatique de secrets créés manuellement + +Les secrets créés manuellement (par exemple, un contenant un jeton pour accéder à un compte github) peuvent être automatiquement associés aux pods en fonction de leur compte de service. +Voir [Injection d'informations dans des pods à l'aide d'un PodPreset](/docs/tasks/inject-data-application/podpreset/) pour une explication détaillée de ce processus. + +## Details + +### Restrictions + +Les sources de volume secrètes sont validées pour garantir que la référence d'objet spécifiée pointe réellement vers un objet de type Secret. +Par conséquent, un secret doit être créé avant tous les pods qui en dépendent. + +Les objets API secrets résident dans un {{< glossary_tooltip text="namespace" term_id="namespace" >}}. +Ils ne peuvent être référencés que par des pods dans le même espace de noms. + +Les secrets individuels sont limités à 1 Mo de taille. +C'est pour décourager la création de très grands secrets qui épuiseraient la mémoire de l'apiserver et du kubelet. +Cependant, la création de nombreux petits secrets pourrait également épuiser la mémoire. +Des limites plus complètes sur l'utilisation de la mémoire en raison de secrets sont une fonctionnalité prévue. + +Kubelet prend uniquement en charge l'utilisation des secrets pour les pods qu'il obtient du serveur API. 
+Cela inclut tous les pods créés à l'aide de kubectl, ou indirectement via un contrôleur de réplication. +Il n'inclut pas les pods créés via les drapeaux kubelet `--manifest-url`, ou `--config`, ou son API REST (ce ne sont pas des moyens courants de créer des Pods). + +Les secrets doivent être créés avant d'être consommés dans les pods en tant que variables d'environnement, sauf s'ils sont marqués comme facultatifs. +Les références à des secrets qui n'existent pas empêcheront le pod de démarrer. + +Les références via `secretKeyRef` à des clés qui n'existent pas dans un Secret nommé empêcheront le pod de démarrer. + +Les secrets utilisés pour remplir les variables d'environnement via `envFrom` qui ont des clés considérées comme des noms de variables d'environnement non valides verront ces clés ignorées. +Le pod sera autorisé à démarrer. +Il y aura un événement dont la raison est `InvalidVariableNames` et le message contiendra la liste des clés invalides qui ont été ignorées. +L'exemple montre un pod qui fait référence au / mysecret par défaut qui contient 2 clés invalides, 1badkey et 2alsobad. + +```shell +kubectl get events +``` + +```text +LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON +0s 0s 1 dapi-test-pod Pod Warning InvalidEnvironmentVariableNames kubelet, 127.0.0.1 Keys [1badkey, 2alsobad] from the EnvFrom secret default/mysecret were skipped since they are considered invalid environment variable names. +``` + +### Cycle de vie de l'intéraction Secret et Pod + +Lorsqu'un pod est créé via l'API, il n'est pas vérifié s'il existe un secret référencé. +Une fois qu'un pod est programmé, le kubelet tentera de récupérer la valeur secrète. +Si le secret ne peut pas être récupéré parce qu'il n'existe pas ou en raison d'un manque temporaire de connexion au serveur API, kubelet réessayera périodiquement. +Il rapportera un événement sur le pod expliquant la raison pour laquelle il n'a pas encore démarré. +Une fois le secret récupéré, le kubelet créera et montera un volume le contenant. +Aucun des conteneurs du pod ne démarre tant que tous les volumes du pod ne sont pas montés. + +## Cas d'utilisation + +### Cas d'utilisation: pod avec clés SSH + +Créez un kustomization.yaml avec un `SecretGenerator` contenant quelques clés SSH: + +```shell +kubectl create secret generic ssh-key-secret --from-file=ssh-privatekey=/path/to/.ssh/id_rsa --from-file=ssh-publickey=/path/to/.ssh/id_rsa.pub +``` + +```text +secret "ssh-key-secret" created +``` + +{{< caution >}} +Réfléchissez bien avant d'envoyer vos propres clés SSH: d'autres utilisateurs du cluster peuvent avoir accès au secret. +Utilisez un compte de service que vous souhaitez rendre accessible à tous les utilisateurs avec lesquels vous partagez le cluster Kubernetes et que vous pouvez révoquer s'ils sont compromis. 
+{{< /caution >}} + +Nous pouvons maintenant créer un pod qui référence le secret avec la clé SSH et le consomme dans un volume: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: secret-test-pod + labels: + name: secret-test +spec: + volumes: + - name: secret-volume + secret: + secretName: ssh-key-secret + containers: + - name: ssh-test-container + image: mySshImage + volumeMounts: + - name: secret-volume + readOnly: true + mountPath: "/etc/secret-volume" +``` + +Lorsque la commande du conteneur s'exécute, les morceaux de la clé seront disponibles dans: + +```shell +/etc/secret-volume/ssh-publickey +/etc/secret-volume/ssh-privatekey +``` + +Le conteneur est alors libre d'utiliser les données secrètes pour établir une connexion SSH. + +### Cas d'utilisation: pods avec informations d'identification de prod/test + +Faites un fichier kustomization.yaml avec un SecretGenerator. + +Cet exemple illustre un Pod qui consomme un secret contenant des informations d'identification de prod et un autre Pod qui consomme un secret avec des informations d'identification d'environnement de test. + +```shell +kubectl create secret generic prod-db-secret --from-literal=username=produser --from-literal=password=Y4nys7f11 +``` + +```text +secret "prod-db-secret" created +``` + +```shell +kubectl create secret generic test-db-secret --from-literal=username=testuser --from-literal=password=iluvtests +``` + +```text +secret "test-db-secret" created +``` + +{{< note >}} +Caractères spéciaux tels que `$`, `\`, `*`, et `!` seront interprétés par votre [shell](https://en.wikipedia.org/wiki/Shell_\(computing\)) et nécessitent d'être échappés. +Dans les shells les plus courants, le moyen le plus simple d'échapper au mot de passe est de l'entourer de guillemets simples (`'`). +Par exemple, si votre mot de passe réel est `S!B\*d$zDsb`, vous devez exécuter la commande de cette façon: + +```text +kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password='S!B\*d$zDsb' +``` + +Vous n'avez pas besoin d'échapper les caractères spéciaux dans les mots de passe des fichiers (`--from-file`). +{{< /note >}} + +Maintenant, faites les pods: + +```shell +$ cat < pod.yaml +apiVersion: v1 +kind: List +items: +- kind: Pod + apiVersion: v1 + metadata: + name: prod-db-client-pod + labels: + name: prod-db-client + spec: + volumes: + - name: secret-volume + secret: + secretName: prod-db-secret + containers: + - name: db-client-container + image: myClientImage + volumeMounts: + - name: secret-volume + readOnly: true + mountPath: "/etc/secret-volume" +- kind: Pod + apiVersion: v1 + metadata: + name: test-db-client-pod + labels: + name: test-db-client + spec: + volumes: + - name: secret-volume + secret: + secretName: test-db-secret + containers: + - name: db-client-container + image: myClientImage + volumeMounts: + - name: secret-volume + readOnly: true + mountPath: "/etc/secret-volume" +EOF +``` + +Ajoutez les pods à la même kustomization.yaml + +```shell +$ cat <> kustomization.yaml +resources: +- pod.yaml +EOF +``` + +Appliquez tous ces objets sur l'Apiserver avec + +```shell +kubectl apply -k . 
+``` + +Les deux conteneurs auront les fichiers suivants présents sur leurs systèmes de fichiers avec les valeurs pour l'environnement de chaque conteneur: + +```shell +/etc/secret-volume/username +/etc/secret-volume/password +``` + +Notez comment les spécifications pour les deux pods ne diffèrent que dans un champ; cela facilite la création de pods avec différentes capacités à partir d'un template de pod commun. + +Vous pouvez encore simplifier la spécification du pod de base en utilisant deux comptes de service: un appelé, disons, `prod-user` avec le secret `prod-db-secret`, et un appelé, `test-user` avec le secret `test-db-secret`. +Ensuite, la spécification du pod peut être raccourcie, par exemple: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: prod-db-client-pod + labels: + name: prod-db-client +spec: + serviceAccount: prod-db-client + containers: + - name: db-client-container + image: myClientImage +``` + +### Cas d'utilisation: Dotfiles dans un volume secret + +Afin de masquer des données (c'est-à-dire dans un fichier dont le nom commence par un point), il suffit de faire commencer cette clé par un point. +Par exemple, lorsque le secret suivant est monté dans un volume: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: dotfile-secret +data: + .secret-file: dmFsdWUtMg0KDQo= +--- +apiVersion: v1 +kind: Pod +metadata: + name: secret-dotfiles-pod +spec: + volumes: + - name: secret-volume + secret: + secretName: dotfile-secret + containers: + - name: dotfile-test-container + image: k8s.gcr.io/busybox + command: + - ls + - "-l" + - "/etc/secret-volume" + volumeMounts: + - name: secret-volume + readOnly: true + mountPath: "/etc/secret-volume" +``` + +Le `secret-volume` contiendra un seul fichier, appelé `.secret-file`, et le `dotfile-test-container` aura ce fichier présent au chemin `/etc/secret-volume/.secret-file`. + +{{< note >}} +Les fichiers commençant par des points sont masqués de la sortie de `ls -l`; vous devez utiliser `ls -la` pour les voir lors de la liste du contenu du répertoire. +{{< /note >}} + +### Cas d'utilisation: secret visible pour un conteneur dans un pod + +Envisagez un programme qui doit gérer les requêtes HTTP, effectuer une logique métier complexe, puis signer certains messages avec un HMAC. +Parce qu'il a une logique d'application complexe, il pourrait y avoir un exploit de lecture de fichier à distance inaperçu dans le serveur, qui pourrait exposer la clé privée à un attaquant. + +Cela pourrait être divisé en deux processus dans deux conteneurs: un conteneur frontal qui gère l'interaction utilisateur et la logique métier, mais qui ne peut pas voir la clé privée; et un conteneur de signataire qui peut voir la clé privée, et répond aux demandes de signature simples du frontend (par exemple sur le réseau localhost). + +Avec cette approche partitionnée, un attaquant doit maintenant inciter le serveur d'applications à faire quelque chose d'assez arbitraire, ce qui peut être plus difficile que de lui faire lire un fichier. + + + +## Les meilleures pratiques + +### Clients qui utilisent l'API secrets + +Lors du déploiement d'applications qui interagissent avec l'API secrets, l'accès doit être limité à l'aide de [politiques d'autorisation](/docs/reference/access-authn-authz/authorization/) telles que [RBAC](/docs/reference/access-authn-authz/rbac/). 
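To make that kind of restriction concrete, the following is a hedged sketch (the Role name is illustrative, and `db-user-pass` reuses the Secret created earlier on this page) of an RBAC Role that allows `get` on one named Secret instead of `list` or `watch` over the whole namespace:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: read-db-user-pass         # illustrative name
rules:
- apiGroups: [""]                 # "" is the core API group, where Secrets live
  resources: ["secrets"]
  resourceNames: ["db-user-pass"] # only this one Secret can be read
  verbs: ["get"]
```

A RoleBinding would then grant this Role to the service account of the application that needs the Secret; nothing in this sketch grants `list` or `watch`, so the rest of the namespace's Secrets stay out of reach.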
+ +Les secrets contiennent souvent des valeurs qui couvrent un spectre d'importance, dont beaucoup peuvent provoquer des escalades au sein de Kubernetes (par exemple, les jetons de compte de service) et vers les systèmes externes. +Même si une application individuelle peut raisonner sur la puissance des secrets avec lesquels elle s'attend à interagir, d'autres applications dans le même namespace peuvent rendre ces hypothèses invalides. + +Pour ces raisons, les requêtes `watch` et `list` pour les secrets dans un namespace sont des capacités extrêmement puissantes et doivent être évitées, puisque la liste des secrets permet aux clients d'inspecter les valeurs de tous les secrets qui se trouvent dans ce namespace. +La capacité à effectuer un `watch` ou `list` des secrets dans un cluster doit être réservé uniquement aux composants les plus privilégiés au niveau du système. + +Les applications qui ont besoin d'accéder à l'API secrets doivent effectuer des requêtes `get` sur les secrets dont elles ont besoin. +Cela permet aux administrateurs de restreindre l'accès à tous les secrets tout en donnant [accès en liste blanche aux instances individuelles](/docs/reference/access-authn-authz/rbac/#referring-to-resources) dont l'application a besoin. + +Pour des performances améliorées sur une boucle `get`, les clients peuvent concevoir des ressources qui font référence à un secret puis `watch` la ressource, demandant à nouveau le secret lorsque la ressource change. +De plus, un ["bulk watch" API](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/bulk_watch.md) laisse les clients `watch` des ressources individuelles ont également été proposées et seront probablement disponibles dans les prochaines versions de Kubernetes. + +## Propriétés de sécurité + +### Protections + +Étant donné que les objets secrets peuvent être créés indépendamment des Pods qui les utilisent, il y a moins de risques que le secret soit exposé pendant la création, la visualisation et la modification des Pods. +Le système peut également prendre des précautions supplémentaires avec les objets secrets, comme éviter de les écrire sur le disque lorsque cela est possible. + +Un secret n'est envoyé à un nœud que si un module sur ce nœud l'exige. +Kubelet stocke le secret dans un `tmpfs` afin que le secret ne soit pas écrit sur le stockage sur disque. +Une fois que le pod qui dépend du secret est supprimé, kubelet supprimera également sa copie locale des données secrètes. + +Il peut y avoir des secrets pour plusieurs pods sur le même nœud. +Cependant, seuls les secrets qu'un pod demande sont potentiellement visibles dans ses conteneurs. +Par conséquent, un pod n'a pas accès aux secrets d'un autre pod. + +Il peut y avoir plusieurs conteneurs dans un pod. +Cependant, chaque conteneur d'un pod doit demander le volume secret dans ses `volumesMounts` pour qu'il soit visible dans le conteneur. +Cela peut être utilisé pour construire des [partitions de sécurité au niveau du pod](#use-case-secret-visible-to-one-container-in-a-pod). + +Sur la plupart des distributions gérées par le projet Kubernetes, la communication entre l'utilisateur vers l'apiserver et entre l'apiserver et les kubelets est protégée par SSL/TLS. +Les secrets sont protégés lorsqu'ils sont transmis sur ces canaux. 
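As a concrete illustration of the per-container visibility described above, here is a hedged sketch of a two-container pod in which only the signer container mounts the secret volume (all names and images are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hmac-signer-pod          # hypothetical name
spec:
  volumes:
  - name: signing-key
    secret:
      secretName: hmac-key       # hypothetical Secret holding the HMAC key
  containers:
  - name: frontend
    image: my-frontend-image     # handles HTTP and business logic, never mounts the key
  - name: signer
    image: my-signer-image       # the only container that lists the secret volume
    volumeMounts:
    - name: signing-key
      readOnly: true
      mountPath: "/etc/signing-key"
```

Because the frontend container does not list the volume in its own `volumeMounts`, the key is never visible in its filesystem, which is exactly the pod-level security partition described earlier.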
+ +{{< feature-state for_k8s_version="v1.13" state="beta" >}} + +Vous pouvez activer le [chiffrement au repos](/docs/tasks/administer-cluster/encrypt-data/) pour les données secrètes, afin que les secrets ne soient pas stockés en clair dans {{< glossary_tooltip term_id="etcd" >}}. + +### Risques + +* Dans le serveur API, les données secrètes sont stockées dans {{< glossary_tooltip term_id="etcd" >}}; par conséquent: + * Les administrateurs doivent activer le chiffrement au repos pour les données du cluster (nécessite la version 1.13 ou ultérieure) + * Les administrateurs devraient limiter l'accès à etcd aux utilisateurs administrateurs + * Les administrateurs peuvent vouloir effacer/détruire les disques utilisés par etcd lorsqu'ils ne sont plus utilisés + * Si vous exécutez etcd dans un cluster, les administrateurs doivent s'assurer d'utiliser SSL/TLS pour la communication peer-to-peer etcd. +* Si vous configurez le secret via un fichier manifeste (JSON ou YAML) qui a les données secrètes codées en base64, partager ce fichier ou l'archiver dans un dépot de source signifie que le secret est compromis. + L'encodage Base64 _n'est pas_ une méthode de chiffrement, il est considéré comme identique au texte brut. +* Les applications doivent toujours protéger la valeur du secret après l'avoir lu dans le volume, comme ne pas le mettre accidentellement dans un journal ou le transmettre à une partie non fiable. +* Un utilisateur qui peut créer un pod qui utilise un secret peut également voir la valeur de ce secret. + Même si la stratégie apiserver ne permet pas à cet utilisateur de lire l'objet secret, l'utilisateur peut créer un pod qui expose le secret. +* Actuellement, toute personne disposant des droit root sur n'importe quel nœud peut lire _n'importe quel_ secret depuis l'apiserver, en usurpant l'identité du kubelet. + Il est prévu de n'envoyer des secrets qu'aux nœuds qui en ont réellement besoin, pour limiter l'impact d'un exploit root sur un seul nœud. + +{{% capture whatsnext %}} + +{{% /capture %}} diff --git a/content/fr/docs/concepts/storage/persistent-volumes.md b/content/fr/docs/concepts/storage/persistent-volumes.md new file mode 100644 index 0000000000000..e62c996ddb5e0 --- /dev/null +++ b/content/fr/docs/concepts/storage/persistent-volumes.md @@ -0,0 +1,756 @@ +--- +title: Volumes persistants +feature: + title: Orchestration du stockage + description: > + Montez automatiquement le système de stockage de votre choix, que ce soit à partir du stockage local, d'un fournisseur de cloud public tel que GCP ou AWS, ou un système de stockage réseau tel que NFS, iSCSI, Gluster, Ceph, Cinder ou Flocker. + +content_template: templates/concept +weight: 20 +--- + +{{% capture overview %}} + +Ce document décrit l'état actuel de `PersistentVolumes` dans Kubernetes. +Une connaissance des [volumes](/fr/docs/concepts/storage/volumes/) est suggérée. + +{{% /capture %}} + +{{% capture body %}} + +## Introduction + +La gestion du stockage est un problème distinct de la gestion des instances de calcul. +Le sous-système `PersistentVolume` fournit une API pour les utilisateurs et les administrateurs qui abstrait les détails de la façon dont le stockage est fourni et de la façon dont il est utilisé. +Pour ce faire, nous introduisons deux nouvelles ressources API: `PersistentVolume` et `PersistentVolumeClaim`. 
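Before the detailed definitions that follow, here is a minimal hedged sketch of the second of these resources, a PersistentVolumeClaim, with purely illustrative values; every field it uses is explained later on this page:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim              # illustrative name
spec:
  accessModes:
  - ReadWriteOnce             # request read-write access from a single node
  resources:
    requests:
      storage: 5Gi            # amount of storage being requested
  storageClassName: slow      # optional: ask for a particular StorageClass
```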
+ +Un `PersistentVolume` (PV) est un élément de stockage dans le cluster qui a été provisionné par un administrateur ou provisionné dynamiquement à l'aide de [Storage Classes](/docs/concepts/storage/storage-classes/). +Il s'agit d'une ressource dans le cluster, tout comme un nœud est une ressource de cluster. +Les PV sont des plugins de volume comme Volumes, mais ont un cycle de vie indépendant de tout pod individuel qui utilise le PV. +Cet objet API capture les détails de l'implémentation du stockage, que ce soit NFS, iSCSI ou un système de stockage spécifique au fournisseur de cloud. + +Un `PersistentVolumeClaim` (PVC) est une demande de stockage par un utilisateur. +Il est similaire à un Pod. +Les pods consomment des ressources de noeud et les PVC consomment des ressources PV. +Les pods peuvent demander des niveaux spécifiques de ressources (CPU et mémoire). +Les PVC peuvent demander une taille et des modes d'accès spécifiques (par exemple, ils peuvent être montés une fois en lecture/écriture ou plusieurs fois en lecture seule). + +Alors que les `PersistentVolumeClaims` permettent à un utilisateur de consommer des ressources de stockage abstraites, il est courant que les utilisateurs aient besoin de `PersistentVolumes` avec des propriétés et des performances variables pour différents problèmes. +Les administrateurs de cluster doivent être en mesure d'offrir une variété de `PersistentVolumes` qui diffèrent de bien des façons plus que la taille et les modes d'accès, sans exposer les utilisateurs aux détails de la façon dont ces volumes sont mis en œuvre. +Pour ces besoins, il existe la ressource `StorageClass`. + +Voir la [procédure détaillée avec des exemples](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/). + +## Cycle de vie d'un PV et d'un PVC + +Les PV sont des ressources du cluster. +Les PVC sont des demandes pour ces ressources et agissent également comme des contrôles de réclamation pour la ressource. +L'interaction entre les PV et les PVC suit ce cycle de vie: + +### Provisionnement + +Les PV peuvent être provisionnés de deux manières: statiquement ou dynamiquement. + +#### Provisionnement statique + +Un administrateur de cluster crée un certain nombre de PV. +Ils contiennent les détails du stockage réel, qui est disponible pour une utilisation par les utilisateurs du cluster. +Ils existent dans l'API Kubernetes et sont disponibles pour la consommation. + +#### Provisionnement dynamique + +Lorsqu'aucun des PV statiques créés par l'administrateur ne correspond au `PersistentVolumeClaim` d'un utilisateur, le cluster peut essayer de provisionner dynamiquement un volume spécialement pour le PVC. +Ce provisionnement est basé sur les `StorageClasses`: le PVC doit demander une [storage class](/docs/concepts/storage/storage-classes/) et l'administrateur doit avoir créé et configuré cette classe pour que l'approvisionnement dynamique se produise. +Les PVC qui demandent la classe `""` désactive le provisionnement dynamique pour eux-mêmes. + +Pour activer le provisionnement de stockage dynamique basé sur la classe de stockage, l'administrateur de cluster doit activer le `DefaultStorageClass` dans l'[contrôleur d'admission](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass) sur le serveur API. +Cela peut être fait, par exemple, en veillant à ce que `DefaultStorageClass` figure parmi la liste de valeurs séparées par des virgules pour l'option `--enable-admission-plugins` du composant serveur API. 
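Purely as an illustrative sketch (the actual list of plugins depends on how the cluster was configured), the flag might look like this on the API server command line:

```shell
# Illustrative only: DefaultStorageClass enabled alongside other commonly used admission plugins
kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
```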
+Pour plus d'informations sur les options de ligne de commande du serveur API, consultez la documentation [kube-apiserver](/docs/admin/kube-apiserver/). + +### Liaison + +Un utilisateur crée, ou dans le cas d'un provisionnement dynamique, a déjà créé, un `PersistentVolumeClaim` avec une quantité spécifique de stockage demandée et avec certains modes d'accès. +Une boucle de contrôle dans le maître surveille les nouveaux PVC, trouve un PV correspondant (si possible) et les lie ensemble. +Si un PV a été dynamiquement provisionné pour un nouveau PVC, la boucle liera toujours ce PV au PVC. +Sinon, l'utilisateur obtiendra toujours au moins ce qu'il a demandé, mais le volume peut être supérieur à ce qui a été demandé. +Une fois liées, les liaisons `PersistentVolumeClaim` sont exclusives, quelle que soit la façon dont elles ont été liées. +Une liaison PVC-PV est une relation 1-à-1. + +Les PVC resteront non liés indéfiniment s'il n'existe pas de volume correspondant. +Le PVC sera lié à mesure que les volumes correspondants deviendront disponibles. +Par exemple, un cluster provisionné avec de nombreux PV 50Gi ne correspondrait pas à un PVC demandant 100Gi. +Le PVC peut être lié lorsqu'un PV 100Gi est ajouté au cluster. + +### Utilisation + +Les Pods utilisent les PVC comme des volumes. +Le cluster inspecte le PVC pour trouver le volume lié et monte ce volume pour un Pod. +Pour les volumes qui prennent en charge plusieurs modes d'accès, l'utilisateur spécifie le mode souhaité lors de l'utilisation de leur PVC comme volume dans un Pod. + +Une fois qu'un utilisateur a un PVC et que ce PVC est lié, le PV lié appartient à l'utilisateur aussi longtemps qu'il en a besoin. +Les utilisateurs planifient des pods et accèdent à leurs PV revendiqués en incluant un `persistentVolumeClaim` dans le bloc de volumes de leur Pod [Voir ci-dessous pour les détails de la syntaxe](#claims-as-volumes). + +### Protection de l'objet de stockage en cours d'utilisation + +Le but de la fonction de protection des objets de stockage utilisés est de garantir que les revendications de volume persistantes (PVC) en cours d'utilisation par un Pod et les volumes persistants (PV) liés aux PVC ne sont pas supprimées du système, car cela peut entraîner des pertes de données. + +{{< note >}} +Le PVC est utilisé activement par un pod lorsqu'il existe un objet Pod qui utilise le PVC. +{{< /note >}} + +Si un utilisateur supprime un PVC en cours d'utilisation par un pod, le PVC n'est pas supprimé immédiatement. +L'élimination du PVC est différée jusqu'à ce que le PVC ne soit plus activement utilisé par les pods. +De plus, si un administrateur supprime un PV lié à un PVC, le PV n'est pas supprimé immédiatement. +L'élimination du PV est différée jusqu'à ce que le PV ne soit plus lié à un PVC. + +Vous pouvez voir qu'un PVC est protégé lorsque son état est `Terminating` et la liste `Finalizers` inclus `kubernetes.io/pvc-protection`: + +```text +kubectl describe pvc hostpath +Name: hostpath +Namespace: default +StorageClass: example-hostpath +Status: Terminating +Volume: +Labels: +Annotations: volume.beta.kubernetes.io/storage-class=example-hostpath + volume.beta.kubernetes.io/storage-provisioner=example.com/hostpath +Finalizers: [kubernetes.io/pvc-protection] +... 
+``` + +Vous pouvez voir qu'un PV est protégé lorsque son état est `Terminating` et la liste `Finalizers` inclus `kubernetes.io/pv-protection` aussi: + +```text +kubectl describe pv task-pv-volume +Name: task-pv-volume +Labels: type=local +Annotations: +Finalizers: [kubernetes.io/pv-protection] +StorageClass: standard +Status: Available +Claim: +Reclaim Policy: Delete +Access Modes: RWO +Capacity: 1Gi +Message: +Source: + Type: HostPath (bare host directory volume) + Path: /tmp/data + HostPathType: +Events: +``` + +### Récupération des volumes + +Lorsqu'un utilisateur a terminé avec son volume, il peut supprimer les objets PVC de l'API qui permet la récupération de la ressource. +La politique de récupération pour un `PersistentVolume` indique au cluster ce qu'il doit faire du volume une fois qu'il a été libéré de son PVC. +Actuellement, les volumes peuvent être conservés, recyclés ou supprimés. + +#### Volumes conservés + +La politique de récupération `Retain` permet la récupération manuelle de la ressource. +Lorsque le `PersistentVolumeClaim` est supprimé, le `PersistentVolume` existe toujours et le volume est considéré comme «libéré». +Mais il n'est pas encore disponible pour une autre demande car les données du demandeur précédent restent sur le volume. +Un administrateur peut récupérer manuellement le volume en procédant comme suit. + +1. Supprimer le `PersistentVolume`. + L'actif de stockage associé dans une infrastructure externe (comme un volume AWS EBS, GCE PD, Azure Disk ou Cinder) existe toujours après la suppression du PV. +1. Nettoyez manuellement les données sur l'actif de stockage associé en conséquence. +1. Supprimez manuellement l'actif de stockage associé ou, si vous souhaitez réutiliser le même actif de stockage, créez un nouveau `PersistentVolume` avec la définition de l'actif de stockage. + +#### Volumes supprimés + +Pour les plug-ins de volume qui prennent en charge la stratégie de récupération `Delete`, la suppression supprime à la fois l'objet `PersistentVolume` de Kubernetes, ainsi que l'actif de stockage associé dans l'infrastructure externe, tel qu'un volume AWS EBS, GCE PD, Azure Disk ou Cinder. +Les volumes qui ont été dynamiquement provisionnés héritent de la [politique de récupération de leur `StorageClass`](#politique-de-récupération), qui par défaut est `Delete`. +L'administrateur doit configurer la `StorageClass` selon les attentes des utilisateurs; sinon, le PV doit être édité ou corrigé après sa création. +Voir [Modifier la politique de récupération d'un PersistentVolume](/docs/tasks/administer-cluster/change-pv-reclaim-policy/). + +#### Volumes recyclés + +{{< warning >}} +La politique de récupération `Recycle` est obsolète. +Au lieu de cela, l'approche recommandée consiste à utiliser l'approvisionnement dynamique. +{{< /warning >}} + +Si elle est prise en charge par le plug-in de volume sous-jacent, la stratégie de récupération `Recycle` effectue un nettoyage de base (`rm -rf /thevolume/*`) sur le volume et le rend à nouveau disponible pour une nouvelle demande. + +Cependant, un administrateur peut configurer un modèle de module de recyclage personnalisé à l'aide des arguments de ligne de commande du gestionnaire de contrôleur Kubernetes, comme décrit [ici](/docs/admin/kube-controller-manager/). 
+Le modèle de pod de recycleur personnalisé doit contenir une définition de `volumes`, comme le montre l'exemple ci-dessous: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: pv-recycler + namespace: default +spec: + restartPolicy: Never + volumes: + - name: vol + hostPath: + path: /any/path/it/will/be/replaced + containers: + - name: pv-recycler + image: "k8s.gcr.io/busybox" + command: ["/bin/sh", "-c", "test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \"$(ls -A /scrub)\" || exit 1"] + volumeMounts: + - name: vol + mountPath: /scrub +``` + +Cependant, le chemin particulier spécifié dans la partie `volumes` du template personnalisé de Pod est remplacée par le chemin particulier du volume qui est recyclé. + +### Redimensionnement des PVC + +{{< feature-state for_k8s_version="v1.11" state="beta" >}} + +La prise en charge du redimensionnement des PersistentVolumeClaims (PVCs) est désormais activée par défaut. +Vous pouvez redimensionner les types de volumes suivants: + +* gcePersistentDisk +* awsElasticBlockStore +* Cinder +* glusterfs +* rbd +* Azure File +* Azure Disk +* Portworx +* FlexVolumes +* CSI + +Vous ne pouvez redimensionner un PVC que si le champ `allowVolumeExpansion` de sa classe de stockage est défini sur true. + +``` yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: gluster-vol-default +provisioner: kubernetes.io/glusterfs +parameters: + resturl: "http://192.168.10.100:8080" + restuser: "" + secretNamespace: "" + secretName: "" +allowVolumeExpansion: true +``` + +Pour demander un volume plus important pour un PVC, modifiez l'objet PVC et spécifiez une taille plus grande. +Cela déclenche l'expansion du volume qui soutient le `PersistentVolume` sous-jacent. +Un nouveau `PersistentVolume` n'est jamais créé pour satisfaire la demande. +Au lieu de cela, un volume existant est redimensionné. + +#### Redimensionnement de volume CSI + +{{< feature-state for_k8s_version="v1.16" state="beta" >}} + +La prise en charge du redimensionnement des volumes CSI est activée par défaut, mais elle nécessite également un pilote CSI spécifique pour prendre en charge le redimensionnement des volumes. +Reportez-vous à la documentation du pilote CSI spécifique pour plus d'informations. + +#### Redimensionner un volume contenant un système de fichiers + +Vous ne pouvez redimensionner des volumes contenant un système de fichiers que si le système de fichiers est XFS, Ext3 ou Ext4. + +Lorsqu'un volume contient un système de fichiers, le système de fichiers n'est redimensionné que lorsqu'un nouveau pod utilise le `PersistentVolumeClaim` en mode ReadWrite. +L'extension du système de fichiers est effectuée au démarrage d'un pod ou lorsqu'un pod est en cours d'exécution et que le système de fichiers sous-jacent prend en charge le redimensionnement en ligne. + +FlexVolumes autorise le redimensionnement si le pilote est défini avec la capacité `requiresFSResize` sur `true`. +Le FlexVolume peut être redimensionné au redémarrage du pod. + +#### Redimensionnement d'un PersistentVolumeClaim en cours d'utilisation + +{{< feature-state for_k8s_version="v1.15" state="beta" >}} + +{{< note >}} +Redimensionner un PVCs à chaud est disponible en version bêta depuis Kubernetes 1.15 et en version alpha depuis 1.11. +La fonctionnalité `ExpandInUsePersistentVolumes` doit être activée, ce qui est le cas automatiquement pour de nombreux clusters de fonctionnalités bêta. 
+Se référer à la documentation de la [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) pour plus d'informations. +{{< /note >}} + +Dans ce cas, vous n'avez pas besoin de supprimer et de recréer un pod ou un déploiement qui utilise un PVC existant. +Tout PVC en cours d'utilisation devient automatiquement disponible pour son pod dès que son système de fichiers a été étendu. +Cette fonctionnalité n'a aucun effet sur les PVC qui ne sont pas utilisés par un pod ou un déploiement. +Vous devez créer un pod qui utilise le PVC avant que l'extension puisse se terminer. + +Semblable à d'autres types de volume - les volumes FlexVolume peuvent également être étendus lorsqu'ils sont utilisés par un pod. + +{{< note >}} +Le redimensionnement de FlexVolume n'est possible que lorsque le pilote sous-jacent prend en charge le redimensionnement. +{{< /note >}} + +{{< note >}} +L'augmentation des volumes EBS est une opération longue. +En outre, il existe un quota par volume d'une modification toutes les 6 heures. +{{< /note >}} + +## Types de volumes persistants + +Les types `PersistentVolume` sont implémentés en tant que plugins. +Kubernetes prend actuellement en charge les plugins suivants: + +* GCEPersistentDisk +* AWSElasticBlockStore +* AzureFile +* AzureDisk +* CSI +* FC (Fibre Channel) +* FlexVolume +* Flocker +* NFS +* iSCSI +* RBD (Ceph Block Device) +* CephFS +* Cinder (OpenStack block storage) +* Glusterfs +* VsphereVolume +* Quobyte Volumes +* HostPath (Test de nœud unique uniquement -- le stockage local n'est en aucun cas pris en charge et NE FONCTIONNERA PAS dans un cluster à plusieurs nœuds) +* Portworx Volumes +* ScaleIO Volumes +* StorageOS + +## Volumes persistants + +Chaque PV contient une spécification et un état, qui sont les spécifications et l'état du volume. + +```yaml +apiVersion: v1 +kind: PersistentVolume +metadata: + name: pv0003 +spec: + capacity: + storage: 5Gi + volumeMode: Filesystem + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Recycle + storageClassName: slow + mountOptions: + - hard + - nfsvers=4.1 + nfs: + path: /tmp + server: 172.17.0.2 +``` + +### Capacité + +Généralement, un PV aura une capacité de stockage spécifique. +Ceci est réglé en utilisant l'attribut `capacity` des PV. +Voir le Kubernetes [modèle de ressource](https://git.k8s.io/community/contributors/design-proposals/scheduling/resources.md) pour comprendre les unités attendues par `capacity`. + +Actuellement, la taille du stockage est la seule ressource qui peut être définie ou demandée. +Les futurs attributs peuvent inclure les IOPS, le débit, etc. + +### Mode volume + +{{< feature-state for_k8s_version="v1.13" state="beta" >}} + +Avant Kubernetes 1.9, tous les plug-ins de volume créaient un système de fichiers sur le volume persistant. +Maintenant, vous pouvez définir la valeur de `volumeMode` sur `block` pour utiliser un périphérique de bloc brut, ou `filesystem` pour utiliser un système de fichiers. +`filesystem` est la valeur par défaut si la valeur est omise. +Il s'agit d'un paramètre API facultatif. + +### Modes d'accès + +Un `PersistentVolume` peut être monté sur un hôte de n'importe quelle manière prise en charge par le fournisseur de ressources. +Comme indiqué dans le tableau ci-dessous, les fournisseurs auront des capacités différentes et les modes d'accès de chaque PV sont définis sur les modes spécifiques pris en charge par ce volume particulier. 
+Par exemple, NFS peut prendre en charge plusieurs clients en lecture/écriture, mais un PV NFS spécifique peut être exporté sur le serveur en lecture seule.
+Chaque PV dispose de son propre ensemble de modes d'accès décrivant les capacités spécifiques de ce PV.
+
+Les modes d'accès sont:
+
+* ReadWriteOnce -- le volume peut être monté en lecture-écriture par un seul nœud
+* ReadOnlyMany -- le volume peut être monté en lecture seule par plusieurs nœuds
+* ReadWriteMany -- le volume peut être monté en lecture-écriture par de nombreux nœuds
+
+Dans la CLI, les modes d'accès sont abrégés comme suit:
+
+* RWO - ReadWriteOnce
+* ROX - ReadOnlyMany
+* RWX - ReadWriteMany
+
+> __Important!__ Un volume ne peut être monté qu'en utilisant un seul mode d'accès à la fois, même s'il prend en charge plusieurs.
+  Par exemple, un GCEPersistentDisk peut être monté en tant que ReadWriteOnce par un seul nœud ou ReadOnlyMany par plusieurs nœuds, mais pas en même temps.
+
+| Volume Plugin        | ReadWriteOnce    | ReadOnlyMany     | ReadWriteMany                                    |
+|----------------------|------------------|------------------|--------------------------------------------------|
+| AWSElasticBlockStore | ✓                | -                | -                                                |
+| AzureFile            | ✓                | ✓                | ✓                                                |
+| AzureDisk            | ✓                | -                | -                                                |
+| CephFS               | ✓                | ✓                | ✓                                                |
+| Cinder               | ✓                | -                | -                                                |
+| CSI                  | dépend du pilote | dépend du pilote | dépend du pilote                                 |
+| FC                   | ✓                | ✓                | -                                                |
+| FlexVolume           | ✓                | ✓                | dépend du pilote                                 |
+| Flocker              | ✓                | -                | -                                                |
+| GCEPersistentDisk    | ✓                | ✓                | -                                                |
+| Glusterfs            | ✓                | ✓                | ✓                                                |
+| HostPath             | ✓                | -                | -                                                |
+| iSCSI                | ✓                | ✓                | -                                                |
+| Quobyte              | ✓                | ✓                | ✓                                                |
+| NFS                  | ✓                | ✓                | ✓                                                |
+| RBD                  | ✓                | ✓                | -                                                |
+| VsphereVolume        | ✓                | -                | - (fonctionne lorsque les pods sont colocalisés) |
+| PortworxVolume       | ✓                | -                | ✓                                                |
+| ScaleIO              | ✓                | ✓                | -                                                |
+| StorageOS            | ✓                | -                | -                                                |
+
+### Classe
+
+Un PV peut avoir une classe, qui est spécifiée en définissant l'attribut `storageClassName` sur le nom d'une [StorageClass](/docs/concepts/storage/storage-classes/).
+Un PV d'une classe particulière ne peut être lié qu'à des PVC demandant cette classe.
+Un PV sans `storageClassName` n'a pas de classe et ne peut être lié qu'à des PVC qui ne demandent aucune classe particulière.
+
+Dans le passé, l'annotation `volume.beta.kubernetes.io/storage-class` était utilisée à la place de l'attribut `storageClassName`.
+Cette annotation fonctionne toujours; cependant, elle deviendra complètement obsolète dans une future version de Kubernetes.
+
+### Politique de récupération
+
+Les politiques de récupération actuelles sont:
+
+* Retain -- récupération manuelle
+* Recycle -- effacement de base (`rm -rf /thevolume/*`)
+* Delete -- l'élément de stockage associé tel qu'AWS EBS, GCE PD, Azure Disk ou le volume OpenStack Cinder est supprimé
+
+Actuellement, seuls NFS et HostPath prennent en charge le recyclage.
+Les volumes AWS EBS, GCE PD, Azure Disk et Cinder prennent en charge la suppression.
+
+### Options de montage
+
+Un administrateur Kubernetes peut spécifier des options de montage supplémentaires à appliquer lorsqu'un `PersistentVolume` est monté sur un nœud.
+
+{{< note >}}
+Tous les types de volumes persistants ne prennent pas en charge les options de montage.
+{{< /note >}} + +Les types de volume suivants prennent en charge les options de montage: + +* AWSElasticBlockStore +* AzureDisk +* AzureFile +* CephFS +* Cinder (OpenStack block storage) +* GCEPersistentDisk +* Glusterfs +* NFS +* Quobyte Volumes +* RBD (Ceph Block Device) +* StorageOS +* VsphereVolume +* iSCSI + +Les options de montage ne sont pas validées, donc le montage échouera simplement si l'une n'est pas valide. + +Dans le passé, l'annotation `volume.beta.kubernetes.io/mount-options` était utilisée à la place de l'attribut `mountOptions`. +Cette annotation fonctionne toujours; cependant, elle deviendra complètement obsolète dans une future version de Kubernetes. + +### Affinité des nœuds + +{{< note >}} +Pour la plupart des types de volume, vous n'avez pas besoin de définir ce champ. +Il est automatiquement rempli pour les volumes bloc de type [AWS EBS](/docs/concepts/storage/volumes/#awselasticblockstore), [GCE PD](/docs/concepts/storage/volumes/#gcepersistentdisk) et [Azure Disk](/docs/concepts/storage/volumes/#azuredisk). +Vous devez définir explicitement ceci pour les volumes [locaux](/docs/concepts/storage/volumes/#local). +{{< /note >}} + +Un PV peut spécifier une [affinité de nœud](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#volumenodeaffinity-v1-core) pour définir les contraintes qui limitent les nœuds à partir desquels ce volume est accessible. +Les pods qui utilisent un PV seront uniquement planifiés sur les nœuds sélectionnés par l'affinité de nœud. + +### Phase + +Un volume sera dans l'une des phases suivantes: + +* Available -- une ressource libre qui n'est pas encore liée à une demande +* Bound -- le volume est lié à une demande +* Released -- la demande a été supprimée, mais la ressource n'est pas encore récupérée par le cluster +* Failed -- le volume n'a pas réussi sa récupération automatique + +Le CLI affichera le nom du PVC lié au PV. + +## PersistentVolumeClaims + +Chaque PVC contient une spécification et un état, qui sont les spécifications et l'état de la réclamation. + +```yaml +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: myclaim +spec: + accessModes: + - ReadWriteOnce + volumeMode: Filesystem + resources: + requests: + storage: 8Gi + storageClassName: slow + selector: + matchLabels: + release: "stable" + matchExpressions: + - {key: environment, operator: In, values: [dev]} +``` + +### Modes d'accès + +Les PVC utilisent les mêmes conventions que les volumes lorsque vous demandez un stockage avec des modes d'accès spécifiques. + +### Modes de volume + +Les PVC utilisent la même convention que les volumes pour indiquer la consommation du volume en tant que système de fichiers ou périphérique de bloc. + +### Ressources + +Les PVC, comme les pods, peuvent demander des quantités spécifiques d'une ressource. +Dans ce cas, la demande concerne le stockage. +Le même [modèle de ressource](https://git.k8s.io/community/contributors/design-proposals/scheduling/resources.md) s'applique aux volumes et aux PVC. + +### Sélecteur + +Les PVC peuvent spécifier un [sélecteur de labels](/docs/concepts/overview/working-with-objects/labels/#label-selectors) pour filtrer davantage l'ensemble des volumes. +Seuls les volumes dont les étiquettes correspondent au sélecteur peuvent être liés au PVC. 
+Le sélecteur peut comprendre deux champs: + +* `matchLabels` - le volume doit avoir un label avec cette valeur +* `matchExpressions` - une liste des exigences définies en spécifiant la clé, la liste des valeurs et l'opérateur qui relie la clé et les valeurs. + Les opérateurs valides incluent In, NotIn, Exists et DoesNotExist. + +Toutes les exigences, à la fois de `matchLabels` et de `matchExpressions` doivent toutes être satisfaites pour correspondre (application d'un opérateur booléen ET). + +### Classe + +Un PVC peut demander une classe particulière en spécifiant le nom d'une [StorageClass](/docs/concepts/storage/storage-classes/) en utilisant l'attribut `storageClassName`. +Seuls les PV de la classe demandée, ceux ayant le même `storageClassName` que le PVC, peuvent être liés au PVC. + +Les PVC n'ont pas nécessairement à demander une classe. +Un PVC avec son attribut `storageClassName` égal à `""` est toujours interprété comme demandant un PV sans classe, il ne peut donc être lié qu'à des PV sans classe (pas d'annotation ou une annotation égal à `""`). +Un PVC sans `storageClassName` n'est pas tout à fait la même et est traité différemment par le cluster, selon que le [`DefaultStorageClass` admission plugin](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass) est activé. + +* Si le plug-in d'admission est activé, l'administrateur peut spécifier une valeur par défaut `StorageClass`. + Tous les PVC qui n'ont pas de `storageClassName` ne peuvent être liés qu'aux PV de cette valeur par défaut. + La spécification d'une `StorageClass` par défaut se fait en définissant l'annotation `storageclass.kubernetes.io/is-default-class` égal à `true` dans un objet `StorageClass`. + Si l'administrateur ne spécifie pas de valeur par défaut, le cluster répond à la création de PVC comme si le plug-in d'admission était désactivé. + Si plusieurs valeurs par défaut sont spécifiées, le plugin d'admission interdit la création de tous les PVC. +* Si le plugin d'admission est désactivé, il n'y a aucune notion de défaut `StorageClass`. + Tous les PVC qui n'ont pas `storageClassName` peut être lié uniquement aux PV qui n'ont pas de classe. + Dans ce cas, les PVC qui n'ont pas `storageClassName` sont traités de la même manière que les PVC qui ont leur `storageClassName` égal à `""`. + +Selon la méthode d'installation, une `StorageClass` par défaut peut être déployée sur un cluster Kubernetes par le gestionnaire d'extensions pendant l'installation. + +Lorsqu'un PVC spécifie un `selector` en plus de demander une `StorageClass`, les exigences sont ET ensemble: seul un PV de la classe demandée et avec les labels demandées peut être lié au PVC. + +{{< note >}} +Actuellement, un PVC avec un `selector` non vide ne peut pas avoir un PV provisionné dynamiquement pour cela. +{{< /note >}} + +Dans le passé, l'annotation `volume.beta.kubernetes.io/storage-class` a été utilisé au lieu de l'attribut `storageClassName`. +Cette annotation fonctionne toujours; cependant, elle ne sera pas pris en charge dans une future version de Kubernetes. + +## PVC sous forme de volumes + +Les pods accèdent au stockage en utilisant le PVC comme volume. +Les PVC et les pods qui les utilisent doivent exister dans le même namespace. +Le cluster trouve le PVC dans le namespace où se trouve le pod et l'utilise pour obtenir le `PersistentVolume` visé par le PVC. +Le volume est ensuite monté sur l'hôte et dans le pod. 
+ +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: mypod +spec: + containers: + - name: myfrontend + image: nginx + volumeMounts: + - mountPath: "/var/www/html" + name: mypd + volumes: + - name: mypd + persistentVolumeClaim: + claimName: myclaim +``` + +### Remarque au sujet des namespaces + +Les liaisons `PersistentVolumes` sont exclusives, et comme les objets `PersistentVolumeClaims` sont des objets vivant dans un namespace donné, le montage de PVC avec les modes "Many" (`ROX`, `RWX`) n'est possible qu'au sein d'un même namespace. + +## Prise en charge du volume de bloc brut + +{{< feature-state for_k8s_version="v1.13" state="beta" >}} + +Les plug-ins de volume suivants prennent en charge les volumes de blocs bruts, y compris l'approvisionnement dynamique, le cas échéant: + +* AWSElasticBlockStore +* AzureDisk +* FC (Fibre Channel) +* GCEPersistentDisk +* iSCSI +* Local volume +* RBD (Ceph Block Device) +* VsphereVolume (alpha) + +{{< note >}} +Seuls les volumes FC et iSCSI prennent en charge les volumes de blocs bruts dans Kubernetes 1.9. +La prise en charge des plugins supplémentaires a été ajoutée dans 1.10. +{{< /note >}} + +### Volumes persistants utilisant un volume de bloc brut + +```yaml +apiVersion: v1 +kind: PersistentVolume +metadata: + name: block-pv +spec: + capacity: + storage: 10Gi + accessModes: + - ReadWriteOnce + volumeMode: Block + persistentVolumeReclaimPolicy: Retain + fc: + targetWWNs: ["50060e801049cfd1"] + lun: 0 + readOnly: false +``` + +### Revendication de volume persistant demandant un volume de bloc brut + +```yaml +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: block-pvc +spec: + accessModes: + - ReadWriteOnce + volumeMode: Block + resources: + requests: + storage: 10Gi +``` + +### Spécification de pod ajoutant le chemin du périphérique de bloc brut dans le conteneur + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: pod-with-block-volume +spec: + containers: + - name: fc-container + image: fedora:26 + command: ["/bin/sh", "-c"] + args: [ "tail -f /dev/null" ] + volumeDevices: + - name: data + devicePath: /dev/xvda + volumes: + - name: data + persistentVolumeClaim: + claimName: block-pvc +``` + +{{< note >}} +Lorsque vous ajoutez un périphérique de bloc brut pour un pod, vous spécifiez le chemin de périphérique dans le conteneur au lieu d'un chemin de montage. +{{< /note >}} + +### Lier des volumes bloc bruts + +Si un utilisateur demande un volume de bloc brut en l'indiquant à l'aide du champ `volumeMode` dans la spécification `PersistentVolumeClaim`, les règles de liaison diffèrent légèrement des versions précédentes qui ne considéraient pas ce mode comme faisant partie de la spécification. +Voici un tableau des combinaisons possibles que l'utilisateur et l'administrateur peuvent spécifier pour demander un périphérique de bloc brut. +Le tableau indique si le volume sera lié ou non compte tenu des combinaisons: +Matrice de liaison de volume pour les volumes provisionnés statiquement: + +| PV volumeMode | PVC volumeMode | Result | +|---------------|-:-:------------|--:------| +| unspecified | unspecified | BIND | +| unspecified | Block | NO BIND | +| unspecified | Filesystem | BIND | +| Block | unspecified | NO BIND | +| Block | Block | BIND | +| Block | Filesystem | NO BIND | +| Filesystem | Filesystem | BIND | +| Filesystem | Block | NO BIND | +| Filesystem | unspecified | BIND | + +{{< note >}} +Seuls les volumes provisionnés statiquement sont pris en charge pour la version alpha. 
+Les administrateurs doivent prendre en compte ces valeurs lorsqu'ils travaillent avec des périphériques de bloc brut. +{{< /note >}} + +## Snapshot et restauration de volumes + +{{< feature-state for_k8s_version="v1.12" state="alpha" >}} + +La fonction de snapshot de volume a été ajoutée pour prendre en charge uniquement les plug-ins de volume CSI. +Pour plus de détails, voir [volume snapshots](/docs/concepts/storage/volume-snapshots/). + +Pour activer la prise en charge de la restauration d'un volume à partir d'un snapshot de volume, activez la fonctionnalité `VolumeSnapshotDataSource` sur l'apiserver et le controller-manager. + +### Créer du PVC à partir d'un snapshot de volume + +```yaml +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: restore-pvc +spec: + storageClassName: csi-hostpath-sc + dataSource: + name: new-snapshot-test + kind: VolumeSnapshot + apiGroup: snapshot.storage.k8s.io + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 10Gi +``` + +## Clonage de volume + +{{< feature-state for_k8s_version="v1.16" state="beta" >}} + +La fonctionnalité de clonage de volume a été ajoutée pour prendre en charge uniquement les plug-ins de volume CSI. +Pour plus de détails, voir [clonage de volume](/docs/concepts/storage/volume-pvc-datasource/). + +Pour activer la prise en charge du clonage d'un volume à partir d'une source de données PVC, activez la propriété `VolumePVCDataSource` sur l'apiserver et le controller-manager. + +### Créer un PVC à partir d'un PVC existant + +```yaml +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: cloned-pvc +spec: + storageClassName: my-csi-plugin + dataSource: + name: existing-src-pvc-name + kind: PersistentVolumeClaim + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 10Gi +``` + +## Écriture d'une configuration portable + +Si vous écrivez des templates de configuration ou des exemples qui s'exécutent sur une large gamme de clusters et nécessitent un stockage persistant, il est recommandé d'utiliser le modèle suivant: + +* Incluez des objets `PersistentVolumeClaim` dans votre ensemble de config (aux côtés de `Deployments`, `ConfigMaps`, etc.). +* N'incluez pas d'objets `PersistentVolume` dans la configuration, car l'utilisateur qui instancie la configuration peut ne pas être autorisé à créer des `PersistentVolumes`. +* Donnez à l'utilisateur la possibilité de fournir un nom de classe de stockage lors de l'instanciation du template. + * Si l'utilisateur fournit un nom de classe de stockage, mettez cette valeur dans le champ `persistentVolumeClaim.storageClassName`. + Cela entraînera le PVC pour utiliser la bonne classe de stockage si le cluster a cette `StorageClasses` activé par l'administrateur. + * Si l'utilisateur ne fournit pas de nom de classe de stockage, laissez le champ `persistentVolumeClaim.storageClassName` à zéro. + Cela entraînera un PV à être automatiquement provisionné pour l'utilisateur avec la `StorageClass` par défaut dans le cluster. + De nombreux environnements de cluster ont une `StorageClass` par défaut installée, où les administrateurs peuvent créer leur propre `StorageClass` par défaut. 
+* Dans votre outillage, surveillez les PVCs qui ne sont pas liés après un certain temps et signalez-le à l'utilisateur, car cela peut indiquer que le cluster n'a pas de support de stockage dynamique (auquel cas l'utilisateur doit créer un PV correspondant) ou que le cluster n'a aucun système de stockage (auquel cas l'utilisateur ne peut pas déployer de configuration nécessitant des PVCs).
+
+{{% /capture %}}
diff --git a/content/fr/docs/concepts/workloads/controllers/deployment.md b/content/fr/docs/concepts/workloads/controllers/deployment.md
new file mode 100644
index 0000000000000..4e6fb3bda5d5e
--- /dev/null
+++ b/content/fr/docs/concepts/workloads/controllers/deployment.md
@@ -0,0 +1,1225 @@
+---
+title: Déploiements
+feature:
+  title: Déploiements et restaurations automatisés
+  description: >
+    Kubernetes déploie progressivement les modifications apportées à votre application ou à sa configuration, tout en surveillant l'intégrité de l'application pour vous assurer qu'elle ne tue pas toutes vos instances en même temps.
+    En cas de problème, Kubernetes annulera le changement pour vous.
+    Profitez d'un écosystème croissant de solutions de déploiement.
+
+content_template: templates/concept
+weight: 30
+---
+
+{{% capture overview %}}
+
+Un _Deployment_ (déploiement en français) fournit des mises à jour déclaratives pour [Pods](/fr/docs/concepts/workloads/pods/pod/) et [ReplicaSets](/fr/docs/concepts/workloads/controllers/replicaset/).
+
+Vous décrivez un _état désiré_ dans un déploiement et le {{< glossary_tooltip term_id="controller" text="contrôleur">}} de déploiement fait passer l'état réel à l'état souhaité à un rythme contrôlé.
+Vous pouvez définir des Deployments pour créer de nouveaux ReplicaSets, ou pour supprimer des déploiements existants et adopter toutes leurs ressources avec de nouveaux déploiements.
+
+{{< note >}}
+Ne gérez pas les ReplicaSets appartenant à un Deployment.
+Pensez à ouvrir un ticket dans le dépôt Kubernetes principal si votre cas d'utilisation n'est pas traité ci-dessous.
+{{< /note >}}
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Cas d'utilisation
+
+Voici des cas d'utilisation typiques pour les déploiements:
+
+* [Créer un déploiement pour déployer un ReplicaSet](#création-dun-déploiement).
+  Le ReplicaSet crée des pods en arrière-plan.
+  Vérifiez l'état du déploiement pour voir s'il réussit ou non.
+* [Déclarez le nouvel état des Pods](#mise-à-jour-dun-déploiement) en mettant à jour le PodTemplateSpec du déploiement.
+  Un nouveau ReplicaSet est créé et le déploiement gère le déplacement des pods de l'ancien ReplicaSet vers le nouveau à un rythme contrôlé.
+  Chaque nouveau ReplicaSet met à jour la révision du déploiement.
+* [Revenir à une révision de déploiement antérieure](#annulation-dun-déploiement) si l'état actuel du déploiement n'est pas stable.
+  Chaque restauration met à jour la révision du déploiement.
+* [Augmentez le déploiement pour traiter plus de charge](#mise-à-léchelle-dun-déploiement).
+* [Suspendre le déploiement](#pause-et-reprise-dun-déploiement) pour appliquer plusieurs correctifs à son PodTemplateSpec, puis le reprendre pour démarrer un nouveau déploiement.
+* [Utiliser l'état du déploiement](#statut-de-déploiement) comme indicateur qu'un déploiement est bloqué.
+* [Nettoyer les anciens ReplicaSets](#politique-de-nettoyage) dont vous n'avez plus besoin.
+
+## Création d'un déploiement
+
+Voici un exemple de déploiement.
+Il crée un ReplicaSet pour faire apparaître trois pods `nginx`: + +{{< codenew file="controllers/nginx-deployment.yaml" >}} + +Dans cet exemple: + +* Un déploiement nommé `nginx-deployment` est créé, indiqué par le champ `.metadata.name`. +* Le déploiement crée trois pods répliqués, indiqués par le champ `replicas`. +* Le champ `selector` définit comment le déploiement trouve les pods à gérer. + Dans ce cas, vous sélectionnez simplement un label définie dans le template de pod (`app:nginx`). + Cependant, des règles de sélection plus sophistiquées sont possibles, tant que le modèle de pod satisfait lui-même la règle. + + {{< note >}} + Le champ `matchLabels` est une table de hash {clé, valeur}. + Une seule {clé, valeur} dans la table `matchLabels` est équivalente à un élément de `matchExpressions`, dont le champ clé est "clé", l'opérateur est "In" et le tableau de valeurs contient uniquement "valeur". + Toutes les exigences, à la fois de `matchLabels` et de `matchExpressions`, doivent être satisfaites pour correspondre. + {{< /note >}} + +* Le champ `template` contient les sous-champs suivants: + * Les Pods reçoivent le label `app:nginx` dans le champ `labels`. + * La spécification du template de pod dans le champ `.template.spec`, indique que les pods exécutent un conteneur, `nginx`, qui utilise l'image `nginx` [Docker Hub](https://hub.docker.com/) à la version 1.7.9. + * Créez un conteneur et nommez-le `nginx` en utilisant le champ `name`. + +Suivez les étapes ci-dessous pour créer le déploiement ci-dessus: + +Avant de commencer, assurez-vous que votre cluster Kubernetes est opérationnel. + +1. Créez le déploiement en exécutant la commande suivante: + + {{< note >}} + Vous pouvez spécifier l'indicateur `--record` pour écrire la commande exécutée dans l'annotation de ressource `kubernetes.io/change-cause`. + C'est utile pour une future introspection. + Par exemple, pour voir les commandes exécutées dans chaque révision de déploiement. + {{< /note >}} + + ```shell + kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml + ``` + +1. Exécutez `kubectl get deployments` pour vérifier si le déploiement a été créé. + Si le déploiement est toujours en cours de création, la sortie est similaire à: + + ```shell + NAME READY UP-TO-DATE AVAILABLE AGE + nginx-deployment 0/3 0 0 1s + ``` + + Lorsque vous inspectez les déploiements de votre cluster, les champs suivants s'affichent: + + * `NAME` répertorie les noms des déploiements dans le cluster. + * `DESIRED` affiche le nombre souhaité de _répliques_ de l'application, que vous définissez lorsque vous créez le déploiement. + C'est l'_état désiré_. + * `CURRENT` affiche le nombre de réplicas en cours d'exécution. + * `UP-TO-DATE` affiche le nombre de réplicas qui ont été mises à jour pour atteindre l'état souhaité. + * `AVAILABLE` affiche le nombre de réplicas de l'application disponibles pour vos utilisateurs. + * `AGE` affiche la durée d'exécution de l'application. + + Notez que le nombre de réplicas souhaitées est de 3 selon le champ `.spec.replicas`. + +1. Pour voir l'état du déploiement, exécutez: + + ```shell + kubectl rollout status deployment.v1.apps/nginx-deployment + ``` + + La sortie est similaire à ceci: + + ```shell + Waiting for rollout to finish: 2 out of 3 new replicas have been updated... + deployment.apps/nginx-deployment successfully rolled out + ``` + +1. Exécutez à nouveau `kubectl get deployments` quelques secondes plus tard. 
+ La sortie est similaire à ceci: + + ```text + NAME READY UP-TO-DATE AVAILABLE AGE + nginx-deployment 3/3 3 3 18s + ``` + + Notez que le déploiement a créé les trois répliques et que toutes les répliques sont à jour (elles contiennent le dernier modèle de pod) et disponibles. + +1. Pour voir le ReplicaSet (`rs`) créé par le déploiement, exécutez `kubectl get rs`. + La sortie est similaire à ceci: + + ```text + NAME DESIRED CURRENT READY AGE + nginx-deployment-75675f5897 3 3 3 18s + ``` + + Notez que le nom du ReplicaSet est toujours formaté comme: `[DEPLOYMENT-NAME]-[RANDOM-STRING]`. + La chaîne aléatoire est générée aléatoirement et utilise le pod-template-hash comme graine. + +1. Pour voir les labels générées automatiquement pour chaque Pod, exécutez `kubectl get pods --show-labels`. + La sortie est similaire à ceci: + + ```text + NAME READY STATUS RESTARTS AGE LABELS + nginx-deployment-75675f5897-7ci7o 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453 + nginx-deployment-75675f5897-kzszj 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453 + nginx-deployment-75675f5897-qqcnn 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453 + ``` + + Le ReplicaSet créé garantit qu'il y a trois pods `nginx`. + +{{< note >}} +Vous devez spécifier un sélecteur approprié et des labels de template de pod dans un déploiement (dans ce cas, `app: nginx`). +Ne superposez pas les étiquettes ou les sélecteurs avec d'autres contrôleurs (y compris d'autres déploiements et StatefulSets). +Kubernetes n'empêche pas les chevauchements de noms, et si plusieurs contrôleurs ont des sélecteurs qui se chevauchent, ces contrôleurs peuvent entrer en conflit et se comporter de façon inattendue. +{{< /note >}} + +### Étiquette pod-template-hash + +{{< note >}} +Ne modifiez pas ce label. +{{< /note >}} + +Le label `pod-template-hash` est ajoutée par le contrôleur de déploiement à chaque ReplicaSet créé ou adopté par un déploiement. + +Ce label garantit que les ReplicaSets enfants d'un déploiement ne se chevauchent pas. +Il est généré en hachant le `PodTemplate` du ReplicaSet et en utilisant le hachage résultant comme valeur de label qui est ajoutée au sélecteur ReplicaSet, aux labels de template de pod et dans tous les pods existants que le ReplicaSet peut avoir. + +## Mise à jour d'un déploiement + +{{< note >}} +Le re-déploiement d'un déploiement est déclenché si et seulement si le modèle de pod du déploiement (c'est-à-dire `.spec.template`) est modifié, par exemple si les labels ou les images de conteneur du template sont mis à jour. +D'autres mises à jour, telles que la mise à l'échelle du déploiement, ne déclenchent pas de rollout. +{{< /note >}} + +Suivez les étapes ci-dessous pour mettre à jour votre déploiement: + +1. Mettons à jour les pods nginx pour utiliser l'image `nginx: 1.9.1` au lieu de l'image `nginx: 1.7.9`. + + ```shell + kubectl --record deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 + ``` + + ou utilisez la commande suivante: + + ```shell + kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1 --record + ``` + + La sortie est similaire à ceci: + + ```text + deployment.apps/nginx-deployment image updated + ``` + + Alternativement, vous pouvez `éditer` le déploiement et changer `.spec.template.spec.containers[0].image` de `nginx: 1.7.9` à `nginx: 1.9.1`: + + ```shell + kubectl edit deployment.v1.apps/nginx-deployment + ``` + + La sortie est similaire à ceci: + + ```text + deployment.apps/nginx-deployment edited + ``` + +2. 
Pour voir l'état du déploiement, exécutez: + + ```shell + kubectl rollout status deployment.v1.apps/nginx-deployment + ``` + + La sortie est similaire à ceci: + + ```text + Waiting for rollout to finish: 2 out of 3 new replicas have been updated... + ``` + + ou + + ```text + deployment.apps/nginx-deployment successfully rolled out + ``` + +Obtenez plus de détails sur votre déploiement mis à jour: + +* Une fois le déploiement réussi, vous pouvez afficher le déploiement en exécutant `kubectl get deployments`. + La sortie est similaire à ceci: + + ```text + NAME READY UP-TO-DATE AVAILABLE AGE + nginx-deployment 3/3 3 3 36s + ``` + +* Exécutez `kubectl get rs` pour voir que le déploiement a mis à jour les pods en créant un nouveau ReplicaSet et en le redimensionnant jusqu'à 3 replicas, ainsi qu'en réduisant l'ancien ReplicaSet à 0 réplicas. + + ```shell + kubectl get rs + ``` + + La sortie est similaire à ceci: + + ```text + NAME DESIRED CURRENT READY AGE + nginx-deployment-1564180365 3 3 3 6s + nginx-deployment-2035384211 0 0 0 36s + ``` + +* L'exécution de `kubectl get pods` ne devrait désormais afficher que les nouveaux pods: + + ```shell + kubectl get pods + ``` + + La sortie est similaire à ceci: + + ```text + NAME READY STATUS RESTARTS AGE + nginx-deployment-1564180365-khku8 1/1 Running 0 14s + nginx-deployment-1564180365-nacti 1/1 Running 0 14s + nginx-deployment-1564180365-z9gth 1/1 Running 0 14s + ``` + + La prochaine fois que vous souhaitez mettre à jour ces pods, il vous suffit de mettre à jour le modèle de pod de déploiement à nouveau. + + Le déploiement garantit que seul un certain nombre de pods sont en panne pendant leur mise à jour. + Par défaut, il garantit qu'au moins 75% du nombre souhaité de pods sont en place (25% max indisponible). + + Le déploiement garantit également que seul un certain nombre de pods sont créés au-dessus du nombre souhaité de pods. + Par défaut, il garantit qu'au plus 125% du nombre de pods souhaité sont en hausse (surtension maximale de 25%). + + Par exemple, si vous regardez attentivement le déploiement ci-dessus, vous verrez qu'il a d'abord créé un nouveau pod, puis supprimé certains anciens pods et en a créé de nouveaux. + Il ne tue pas les anciens Pods tant qu'un nombre suffisant de nouveaux Pods n'est pas apparu, et ne crée pas de nouveaux Pods tant qu'un nombre suffisant de Pods anciens n'a pas été tué. + Il s'assure qu'au moins 2 pods sont disponibles et qu'au maximum 4 pods au total sont disponibles. 
+ +* Obtenez les détails de votre déploiement: + + ```shell + kubectl describe deployments + ``` + + La sortie est similaire à ceci: + + ```text + Name: nginx-deployment + Namespace: default + CreationTimestamp: Thu, 30 Nov 2017 10:56:25 +0000 + Labels: app=nginx + Annotations: deployment.kubernetes.io/revision=2 + Selector: app=nginx + Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable + StrategyType: RollingUpdate + MinReadySeconds: 0 + RollingUpdateStrategy: 25% max unavailable, 25% max surge + Pod Template: + Labels: app=nginx + Containers: + nginx: + Image: nginx:1.9.1 + Port: 80/TCP + Environment: + Mounts: + Volumes: + Conditions: + Type Status Reason + ---- ------ ------ + Available True MinimumReplicasAvailable + Progressing True NewReplicaSetAvailable + OldReplicaSets: + NewReplicaSet: nginx-deployment-1564180365 (3/3 replicas created) + Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal ScalingReplicaSet 2m deployment-controller Scaled up replica set nginx-deployment-2035384211 to 3 + Normal ScalingReplicaSet 24s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 1 + Normal ScalingReplicaSet 22s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 2 + Normal ScalingReplicaSet 22s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 2 + Normal ScalingReplicaSet 19s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 1 + Normal ScalingReplicaSet 19s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 3 + Normal ScalingReplicaSet 14s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 0 + ``` + + Ici, vous voyez que lorsque vous avez créé le déploiement pour la première fois, il a créé un ReplicaSet (nginx-deployment-2035384211) et l'a mis à l'échelle directement jusqu'à 3 réplicas. + Lorsque vous avez mis à jour le déploiement, il a créé un nouveau ReplicaSet (nginx-deployment-1564180365) et l'a mis à l'échelle jusqu'à 1, puis a réduit l'ancien ReplicaSet à 2, de sorte qu'au moins 2 pods étaient disponibles et au plus 4 pods ont été créés à chaque fois. + Il a ensuite poursuivi la montée en puissance du nouveau et de l'ancien ReplicaSet, avec la même stratégie de mise à jour continue. + Enfin, vous aurez 3 réplicas disponibles dans le nouveau ReplicaSet, et l'ancien ReplicaSet est réduit à 0. + +### Rollover (alias plusieurs mises à jour en vol) {#rollover} + +Chaque fois qu'un nouveau déploiement est observé par le contrôleur de déploiement, un ReplicaSet est créé pour afficher les pods souhaités. +Si le déploiement est mis à jour, le ReplicaSet existant qui contrôle les pods dont les étiquettes correspondent à `.spec.selector` mais dont le modèle ne correspond pas à `.spec.template` est réduit. +Finalement, le nouveau ReplicaSet est mis à l'échelle à `.spec.replicas` et tous les anciens ReplicaSets sont mis à l'échelle à 0. + +Si vous mettez à jour un déploiement alors qu'un déploiement existant est en cours, le déploiement crée un nouveau ReplicaSet conformément à la mise à jour et commence à le mettre à l'échelle, et arrête de mettre à jour le ReplicaSet qu'il augmentait précédemment - il l'ajoutera à sa liste de anciens ReplicaSets et commencera à le réduire. 
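+
+À titre d'illustration, une telle situation peut être reproduite en enchaînant deux mises à jour sans attendre la fin du déploiement en cours (esquisse reprenant l'exemple `nginx-deployment` de cette page):
+
+```shell
+# Crée le déploiement initial (pods nginx:1.7.9 dans l'exemple).
+kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml
+
+# Avant la fin du premier déploiement, déclenche une seconde mise à jour:
+# le ReplicaSet intermédiaire est immédiatement réduit au profit du nouveau.
+kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
+```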
+ +Par exemple, supposons que vous créez un déploiement pour créer 5 répliques de `nginx: 1.7.9`, puis mettez à jour le déploiement pour créer 5 répliques de `nginx: 1.9.1`, alors que seulement 3 répliques de `nginx:1.7.9` avait été créés. +Dans ce cas, le déploiement commence immédiatement à tuer les 3 pods `nginx: 1.7.9` qu'il avait créés et commence à créer des pods `nginx: 1.9.1`. +Il n'attend pas que les 5 répliques de `nginx: 1.7.9` soient créées avant de changer de cap. + +### Mises à jour du sélecteur de labels + +Il est généralement déconseillé de mettre à jour le sélecteur de labels et il est suggéré de planifier vos sélecteurs à l'avance. +Dans tous les cas, si vous devez effectuer une mise à jour du sélecteur de labels, soyez très prudent et assurez-vous d'avoir saisi toutes les implications. + +{{< note >}} +Dans la version d'API `apps/v1`, le sélecteur de label d'un déploiement est immuable après sa création. +{{< /note >}} + +* Les ajouts de sélecteur nécessitent que les labels de template de pod dans la spécification de déploiement soient également mises à jour avec les nouveaux labels, sinon une erreur de validation est renvoyée. + Cette modification ne se chevauche pas, ce qui signifie que le nouveau sélecteur ne sélectionne pas les ReplicaSets et les pods créés avec l'ancien sélecteur, ce qui entraîne la perte de tous les anciens ReplicaSets et la création d'un nouveau ReplicaSet. +* Les mises à jour du sélecteur modifient la valeur existante dans une clé de sélection - entraînent le même comportement que les ajouts. +* La suppression de sélecteur supprime une clé existante du sélecteur de déploiement - ne nécessite aucune modification dans les labels du template de pod. + Les ReplicaSets existants ne sont pas orphelins et aucun nouveau ReplicaSet n'est créé, mais notez que le label supprimé existe toujours dans tous les Pods et ReplicaSets existants. + +## Annulation d'un déploiement + +Parfois, vous souhaiterez peut-être annuler un déploiement; par exemple, lorsque le déploiement n'est pas stable, comme en cas d'échecs à répétition (CrashLoopBackOff). +Par défaut, tout l'historique des déploiements d'un déploiement est conservé dans le système afin que vous puissiez le restaurer à tout moment (vous pouvez le modifier en modifiant la limite de l'historique des révisions). + +{{< note >}} +La révision d'un déploiement est créée lorsque le déploiement d'un déploiement est déclenché. +Cela signifie qu'une nouvelle révision est créée si et seulement si le template de pod de déploiement (`.spec.template`) est modifié, par exemple si vous mettez à jour les labels ou les images de conteneur du template. +D'autres mises à jour, telles que la mise à l'échelle du déploiement, ne créent pas de révision de déploiement, de sorte que vous puissiez faciliter la mise à l'échelle manuelle ou automatique simultanée. +Cela signifie que lorsque vous revenez à une révision antérieure, seule la partie du template de pod de déploiement est annulée. +{{< /note >}} + +* Supposons que vous ayez fait une faute de frappe lors de la mise à jour du déploiement, en mettant le nom de l'image sous la forme `nginx:1.91` au lieu de `nginx: 1.9.1`: + + ```shell + kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.91 --record=true + ``` + + La sortie est similaire à ceci: + + ```text + deployment.apps/nginx-deployment image updated + ``` + +* Le déploiement est bloqué. 
+ Vous pouvez le vérifier en vérifiant l'état du déploiement: + + ```shell + kubectl rollout status deployment.v1.apps/nginx-deployment + ``` + + La sortie est similaire à ceci: + + ```text + Waiting for rollout to finish: 1 out of 3 new replicas have been updated... + ``` + +* Appuyez sur Ctrl-C pour arrêter la surveillance d'état de déploiement ci-dessus. + Pour plus d'informations sur les déploiements bloqués, [en savoir plus ici](#deployment-status). + +* Vous voyez que le nombre d'anciens réplicas (`nginx-deployment-1564180365` et `nginx-deployment-2035384211`) est 2, et les nouveaux réplicas (`nginx-deployment-3066724191`) est 1. + + ```shell + kubectl get rs + ``` + + La sortie est similaire à ceci: + + ```text + NAME DESIRED CURRENT READY AGE + nginx-deployment-1564180365 3 3 3 25s + nginx-deployment-2035384211 0 0 0 36s + nginx-deployment-3066724191 1 1 0 6s + ``` + +* En regardant les pods créés, vous voyez que 1 pod créé par le nouveau ReplicaSet est coincé dans une boucle pour récupérer son image: + + ```shell + kubectl get pods + ``` + + La sortie est similaire à ceci: + + ```text + NAME READY STATUS RESTARTS AGE + nginx-deployment-1564180365-70iae 1/1 Running 0 25s + nginx-deployment-1564180365-jbqqo 1/1 Running 0 25s + nginx-deployment-1564180365-hysrc 1/1 Running 0 25s + nginx-deployment-3066724191-08mng 0/1 ImagePullBackOff 0 6s + ``` + + {{< note >}} + Le contrôleur de déploiement arrête automatiquement le mauvais déploiement et arrête la mise à l'échelle du nouveau ReplicaSet. + Cela dépend des paramètres rollingUpdate (`maxUnavailable` spécifiquement) que vous avez spécifiés. + Kubernetes définit par défaut la valeur à 25%. + {{< /note >}} + +* Obtenez la description du déploiement: + + ```shell + kubectl describe deployment + ``` + + La sortie est similaire à ceci: + + ```text + Name: nginx-deployment + Namespace: default + CreationTimestamp: Tue, 15 Mar 2016 14:48:04 -0700 + Labels: app=nginx + Selector: app=nginx + Replicas: 3 desired | 1 updated | 4 total | 3 available | 1 unavailable + StrategyType: RollingUpdate + MinReadySeconds: 0 + RollingUpdateStrategy: 25% max unavailable, 25% max surge + Pod Template: + Labels: app=nginx + Containers: + nginx: + Image: nginx:1.91 + Port: 80/TCP + Host Port: 0/TCP + Environment: + Mounts: + Volumes: + Conditions: + Type Status Reason + ---- ------ ------ + Available True MinimumReplicasAvailable + Progressing True ReplicaSetUpdated + OldReplicaSets: nginx-deployment-1564180365 (3/3 replicas created) + NewReplicaSet: nginx-deployment-3066724191 (1/1 replicas created) + Events: + FirstSeen LastSeen Count From SubObjectPath Type Reason Message + --------- -------- ----- ---- ------------- -------- ------ ------- + 1m 1m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-2035384211 to 3 + 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 1 + 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 2 + 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 2 + 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 1 + 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 3 + 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 0 + 13s 13s 
1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-3066724191 to 1 + ``` + + Pour résoudre ce problème, vous devez revenir à une version précédente de Deployment qui est stable. + +### Vérification de l'historique de déploiement d'un déploiement + +Suivez les étapes ci-dessous pour vérifier l'historique de déploiement: + +1. Tout d'abord, vérifiez les révisions de ce déploiement: + + ```shell + kubectl rollout history deployment.v1.apps/nginx-deployment + ``` + + La sortie est similaire à ceci: + + ```text + deployments "nginx-deployment" + REVISION CHANGE-CAUSE + 1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml --record=true + 2 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true + 3 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.91 --record=true + ``` + + `CHANGE-CAUSE` est copié de l'annotation de déploiement `kubernetes.io/change-cause` dans ses révisions lors de la création. + Vous pouvez spécifier le message`CHANGE-CAUSE` en: + + * Annoter le déploiement avec `kubectl annotate deployment.v1.apps/nginx-deployment kubernetes.io/change-cause="image mis à jour en 1.9.1"` + * Ajoutez le drapeau `--record` pour enregistrer la commande `kubectl` qui apporte des modifications à la ressource. + * Modification manuelle du manifeste de la ressource. + +2. Pour voir les détails de chaque révision, exécutez: + + ```shell + kubectl rollout history deployment.v1.apps/nginx-deployment --revision=2 + ``` + + La sortie est similaire à ceci: + + ```text + deployments "nginx-deployment" revision 2 + Labels: app=nginx + pod-template-hash=1159050644 + Annotations: kubernetes.io/change-cause=kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true + Containers: + nginx: + Image: nginx:1.9.1 + Port: 80/TCP + QoS Tier: + cpu: BestEffort + memory: BestEffort + Environment Variables: + No volumes. + ``` + +### Revenir à une révision précédente + +Suivez les étapes ci-dessous pour restaurer le déploiement de la version actuelle à la version précédente, qui est la version 2. + +1. Vous avez maintenant décidé d'annuler le déploiement actuel et le retour à la révision précédente: + + ```shell + kubectl rollout undo deployment.v1.apps/nginx-deployment + ``` + + La sortie est similaire à ceci: + + ```text + deployment.apps/nginx-deployment + ``` + + Alternativement, vous pouvez revenir à une révision spécifique en la spécifiant avec `--to-revision`: + + ```shell + kubectl rollout undo deployment.v1.apps/nginx-deployment --to-revision=2 + ``` + + La sortie est similaire à ceci: + + ```text + deployment.apps/nginx-deployment + ``` + + Pour plus de détails sur les commandes liées au déploiement, lisez [`kubectl rollout`](/docs/reference/generated/kubectl/kubectl-commands#rollout). + + Le déploiement est maintenant rétabli à une précédente révision stable. + Comme vous pouvez le voir, un événement `DeploymentRollback` pour revenir à la révision 2 est généré à partir du contrôleur de déploiement. + +2. Vérifiez si la restauration a réussi et que le déploiement s'exécute comme prévu, exécutez: + + ```shell + kubectl get deployment nginx-deployment + ``` + + La sortie est similaire à ceci: + + ```text + NAME READY UP-TO-DATE AVAILABLE AGE + nginx-deployment 3/3 3 3 30m + ``` + +3. 
Obtenez la description du déploiement: + + ```shell + kubectl describe deployment nginx-deployment + ``` + + La sortie est similaire à ceci: + + ```text + Name: nginx-deployment + Namespace: default + CreationTimestamp: Sun, 02 Sep 2018 18:17:55 -0500 + Labels: app=nginx + Annotations: deployment.kubernetes.io/revision=4 + kubernetes.io/change-cause=kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true + Selector: app=nginx + Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable + StrategyType: RollingUpdate + MinReadySeconds: 0 + RollingUpdateStrategy: 25% max unavailable, 25% max surge + Pod Template: + Labels: app=nginx + Containers: + nginx: + Image: nginx:1.9.1 + Port: 80/TCP + Host Port: 0/TCP + Environment: + Mounts: + Volumes: + Conditions: + Type Status Reason + ---- ------ ------ + Available True MinimumReplicasAvailable + Progressing True NewReplicaSetAvailable + OldReplicaSets: + NewReplicaSet: nginx-deployment-c4747d96c (3/3 replicas created) + Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal ScalingReplicaSet 12m deployment-controller Scaled up replica set nginx-deployment-75675f5897 to 3 + Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 1 + Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 2 + Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 2 + Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 1 + Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 3 + Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 0 + Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-595696685f to 1 + Normal DeploymentRollback 15s deployment-controller Rolled back deployment "nginx-deployment" to revision 2 + Normal ScalingReplicaSet 15s deployment-controller Scaled down replica set nginx-deployment-595696685f to 0 + ``` + +## Mise à l'échelle d'un déploiement + +Vous pouvez mettre à l'échelle un déploiement à l'aide de la commande suivante: + +```shell +kubectl scale deployment.v1.apps/nginx-deployment --replicas=10 +``` + +La sortie est similaire à ceci: + +```text +deployment.apps/nginx-deployment scaled +``` + +En supposant que l'[horizontal Pod autoscaling](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/) est activé dans votre cluster, vous pouvez configurer une mise à l'échelle automatique pour votre déploiement et choisir le nombre minimum et maximum de pods que vous souhaitez exécuter en fonction de l'utilisation du processeur de vos pods existants. + +```shell +kubectl autoscale deployment.v1.apps/nginx-deployment --min=10 --max=15 --cpu-percent=80 +``` + +La sortie est similaire à ceci: + +```text +deployment.apps/nginx-deployment scaled +``` + +### Mise à l'échelle proportionnelle + +Les déploiements RollingUpdate prennent en charge l'exécution simultanée de plusieurs versions d'une application. +Lorsque vous ou un autoscaler mettez à l'échelle un déploiement RollingUpdate qui se trouve au milieu d'un déploiement (en cours ou en pause), le contrôleur de déploiement équilibre les réplicas supplémentaires dans les ReplicaSets actifs existants (ReplicaSets avec pods) afin d'atténuer le risque. 
+Ceci est appelé *mise à l'échelle proportionnelle*. + +Par exemple, vous exécutez un déploiement avec 10 réplicas, [maxSurge](#max-surge)=3, et [maxUnavailable](#max-unavailable)=2. + +* Assurez-vous que les 10 réplicas de votre déploiement sont en cours d'exécution. + + ```shell + kubectl get deploy + ``` + + La sortie est similaire à ceci: + + ```text + NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE + nginx-deployment 10 10 10 10 50s + ``` + +* Vous effectuez une mise à jour vers une nouvelle image qui s'avère impossible à résoudre depuis l'intérieur du cluster. + + ```shell + kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:sometag + ``` + + La sortie est similaire à ceci: + + ```text + deployment.apps/nginx-deployment image updated + ``` + +* La mise à jour de l'image démarre un nouveau déploiement avec ReplicaSet `nginx-deployment-1989198191`, mais elle est bloquée en raison de l'exigence `maxUnavailable` que vous avez mentionnée ci-dessus. + Découvrez l'état du déploiement: + + ```shell + kubectl get rs + ``` + + La sortie est similaire à ceci: + + ```text + NAME DESIRED CURRENT READY AGE + nginx-deployment-1989198191 5 5 0 9s + nginx-deployment-618515232 8 8 8 1m + ``` + +* Ensuite, une nouvelle demande de mise à l'échelle pour le déploiement arrive. + La mise à l'échelle automatique incrémente les réplicas de déploiement à 15. + Le contrôleur de déploiement doit décider où ajouter ces 5 nouvelles répliques. + Si vous n'utilisiez pas la mise à l'échelle proportionnelle, les 5 seraient ajoutés dans le nouveau ReplicaSet. + Avec une mise à l'échelle proportionnelle, vous répartissez les répliques supplémentaires sur tous les ReplicaSets. + Des proportions plus importantes vont aux ReplicaSets avec le plus de répliques et des proportions plus faibles vont aux ReplicaSets avec moins de replicas. + Tous les restes sont ajoutés au ReplicaSet avec le plus de répliques. + Les ReplicaSets avec zéro réplicas ne sont pas mis à l'échelle. + +Dans notre exemple ci-dessus, 3 répliques sont ajoutées à l'ancien ReplicaSet et 2 répliques sont ajoutées au nouveau ReplicaSet. +Le processus de déploiement devrait éventuellement déplacer toutes les répliques vers le nouveau ReplicaSet, en supposant que les nouvelles répliques deviennent saines. +Pour confirmer cela, exécutez: + +```shell +kubectl get deploy +``` + +La sortie est similaire à ceci: + +```text +NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE +nginx-deployment 15 18 7 8 7m +``` + +Le statut de déploiement confirme la façon dont les réplicas ont été ajoutés à chaque ReplicaSet. + +```shell +kubectl get rs +``` + +La sortie est similaire à ceci: + +```text +NAME DESIRED CURRENT READY AGE +nginx-deployment-1989198191 7 7 0 7m +nginx-deployment-618515232 11 11 11 7m +``` + +## Pause et reprise d'un déploiement + +Vous pouvez suspendre un déploiement avant de déclencher une ou plusieurs mises à jour, puis le reprendre. +Cela vous permet d'appliquer plusieurs correctifs entre la pause et la reprise sans déclencher de déploiements inutiles. 
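+
+La marche à suivre est détaillée pas à pas ci-dessous; en résumé, la séquence ressemble à ceci (esquisse reprenant les commandes utilisées dans cette page):
+
+```shell
+# Suspend le déploiement: les mises à jour suivantes ne déclenchent aucun rollout.
+kubectl rollout pause deployment.v1.apps/nginx-deployment
+
+# Applique un ou plusieurs correctifs au PodTemplateSpec.
+kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1
+kubectl set resources deployment.v1.apps/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi
+
+# Reprend le déploiement: un seul nouveau ReplicaSet est créé avec toutes les modifications.
+kubectl rollout resume deployment.v1.apps/nginx-deployment
+```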
+ +* Par exemple, avec un déploiement qui vient d'être créé: + Obtenez les détails du déploiement: + + ```shell + kubectl get deploy + ``` + + La sortie est similaire à ceci: + + ```text + NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE + nginx 3 3 3 3 1m + ``` + + Obtenez le statut de déploiement: + + ```shell + kubectl get rs + ``` + + La sortie est similaire à ceci: + + ```text + NAME DESIRED CURRENT READY AGE + nginx-2142116321 3 3 3 1m + ``` + +* Mettez le déploiement en pause en exécutant la commande suivante: + + ```shell + kubectl rollout pause deployment.v1.apps/nginx-deployment + ``` + + La sortie est similaire à ceci: + + ```text + deployment.apps/nginx-deployment paused + ``` + +* Mettez ensuite à jour l'image du déploiement: + + ```shell + kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 + ``` + + La sortie est similaire à ceci: + + ```text + deployment.apps/nginx-deployment image updated + ``` + +* Notez qu'aucun nouveau déploiement n'a commencé: + + ```shell + kubectl rollout history deployment.v1.apps/nginx-deployment + ``` + + La sortie est similaire à ceci: + + ```text + deployments "nginx" + REVISION CHANGE-CAUSE + 1 + ``` + +* Obtenez l'état de déploiement pour vous assurer que le déploiement est correctement mis à jour: + + ```shell + kubectl get rs + ``` + + La sortie est similaire à ceci: + + ```text + NAME DESIRED CURRENT READY AGE + nginx-2142116321 3 3 3 2m + ``` + +* Vous pouvez effectuer autant de mises à jour que vous le souhaitez, par exemple, mettre à jour les ressources qui seront utilisées: + + ```shell + kubectl set resources deployment.v1.apps/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi + ``` + + La sortie est similaire à ceci: + + ```text + deployment.apps/nginx-deployment resource requirements updated + ``` + + L'état initial du déploiement avant de le suspendre continuera de fonctionner, mais les nouvelles mises à jour du déploiement n'auront aucun effet tant que le déploiement sera suspendu. + +* Finalement, reprenez le déploiement et observez un nouveau ReplicaSet à venir avec toutes les nouvelles mises à jour: + + ```shell + kubectl rollout resume deployment.v1.apps/nginx-deployment + ``` + + La sortie est similaire à ceci: + + ```text + deployment.apps/nginx-deployment resumed + ``` + +* Regardez l'état du déploiement jusqu'à ce qu'il soit terminé. + + ```shell + kubectl get rs -w + ``` + + La sortie est similaire à ceci: + + ```text + NAME DESIRED CURRENT READY AGE + nginx-2142116321 2 2 2 2m + nginx-3926361531 2 2 0 6s + nginx-3926361531 2 2 1 18s + nginx-2142116321 1 2 2 2m + nginx-2142116321 1 2 2 2m + nginx-3926361531 3 2 1 18s + nginx-3926361531 3 2 1 18s + nginx-2142116321 1 1 1 2m + nginx-3926361531 3 3 1 18s + nginx-3926361531 3 3 2 19s + nginx-2142116321 0 1 1 2m + nginx-2142116321 0 1 1 2m + nginx-2142116321 0 0 0 2m + nginx-3926361531 3 3 3 20s + ``` + +* Obtenez le statut du dernier déploiement: + + ```shell + kubectl get rs + ``` + + La sortie est similaire à ceci: + + ```text + NAME DESIRED CURRENT READY AGE + nginx-2142116321 0 0 0 2m + nginx-3926361531 3 3 3 28s + ``` + +{{< note >}} +Vous ne pouvez pas annuler un déploiement suspendu avant de le reprendre. +{{< /note >}} + +## Statut de déploiement + +Un déploiement entre dans différents états au cours de son cycle de vie. +Il peut être [progressant](#progressing-deployment) lors du déploiement d'un nouveau ReplicaSet, il peut être [effectué](#complete-deployment), ou il peut [ne pas progresser](#failed-deployment). 
+ +### Progression du déploiement + +Kubernetes marque un déploiement comme _progressing_ lorsqu'une des tâches suivantes est effectuée: + +* Le déploiement crée un nouveau ReplicaSet. +* Le déploiement augmente son nouveau ReplicaSet. +* Le déploiement réduit ses anciens ReplicaSet. +* De nouveaux pods deviennent prêts ou disponibles (prêt pour au moins [MinReadySeconds](#min-ready-seconds)). + +Vous pouvez surveiller la progression d'un déploiement à l'aide de `kubectl rollout status`. + +### Déploiement effectué + +Kubernetes marque un déploiement comme _effectué_ lorsqu'il présente les caractéristiques suivantes: + +* Toutes les répliques associées au déploiement ont été mises à jour vers la dernière version que vous avez spécifiée, ce qui signifie que toutes les mises à jour que vous avez demandées ont été effectuées. +* Toutes les répliques associées au déploiement sont disponibles. +* Aucune ancienne réplique pour le déploiement n'est en cours d'exécution. + +Vous pouvez vérifier si un déploiement est terminé en utilisant `kubectl rollout status`. +Si le déploiement s'est terminé avec succès, `kubectl rollout status` renvoie un code de sortie de 0. + +```shell +kubectl rollout status deployment.v1.apps/nginx-deployment +``` + +La sortie est similaire à ceci: + +```text +Waiting for rollout to finish: 2 of 3 updated replicas are available... +deployment.apps/nginx-deployment successfully rolled out +$ echo $? +0 +``` + +### Déploiement échoué + +Votre déploiement peut rester bloqué en essayant de déployer son nouveau ReplicaSet sans jamais terminer. +Cela peut se produire en raison de certains des facteurs suivants: + +* Quota insuffisant +* Échecs de la sonde de préparation +* Erreurs d'extraction d'image +* Permissions insuffisantes +* Plages limites +* Mauvaise configuration de l'exécution de l'application + +Vous pouvez détecter cette condition en spécifiant un paramètre d'échéance dans votre spécification de déploiement: +([`.spec.progressDeadlineSeconds`](#progress-deadline-seconds)). +`.spec.progressDeadlineSeconds` indique le nombre de secondes pendant lesquelles le contrôleur de déploiement attend avant d'indiquer (dans l'état de déploiement) que la progression du déploiement est au point mort. + +La commande `kubectl` suivante définit la spécification avec `progressDeadlineSeconds` pour que le contrôleur signale l'absence de progression pour un déploiement après 10 minutes: + +```shell +kubectl patch deployment.v1.apps/nginx-deployment -p '{"spec":{"progressDeadlineSeconds":600}}' +``` + +La sortie est similaire à ceci: + +```text +deployment.apps/nginx-deployment patched +``` + +Une fois le délai dépassé, le contrôleur de déploiement ajoute un `DeploymentCondition` avec les attributs suivants aux `.status.conditions` du déploiement: + +* Type=Progressing +* Status=False +* Reason=ProgressDeadlineExceeded + +Voir les [conventions Kubernetes API](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties) pour plus d'informations sur les conditions d'état. + +{{< note >}} +Kubernetes ne prend aucune mesure sur un déploiement bloqué, sauf pour signaler une condition d'état avec `Reason=ProgressDeadlineExceeded`. +Les orchestrateurs de niveau supérieur peuvent en tirer parti et agir en conséquence, par exemple, restaurer le déploiement vers sa version précédente. +{{< /note >}} + +{{< note >}} +Si vous suspendez un déploiement, Kubernetes ne vérifie pas la progression par rapport à votre échéance spécifiée. 
+Vous pouvez suspendre un déploiement en toute sécurité au milieu d'un déploiement et reprendre sans déclencher la condition de dépassement du délai. +{{< /note >}} + +Vous pouvez rencontrer des erreurs transitoires avec vos déploiements, soit en raison d'un délai d'attente bas que vous avez défini, soit en raison de tout autre type d'erreur pouvant être traité comme transitoire. +Par exemple, supposons que votre quota soit insuffisant. +Si vous décrivez le déploiement, vous remarquerez la section suivante: + +```shell +kubectl describe deployment nginx-deployment +``` + +La sortie est similaire à ceci: + +```text +<...> +Conditions: + Type Status Reason + ---- ------ ------ + Available True MinimumReplicasAvailable + Progressing True ReplicaSetUpdated + ReplicaFailure True FailedCreate +<...> +``` + +Si vous exécutez `kubectl get deployment nginx-deployment -o yaml`, l'état de déploiement est similaire à ceci: + +```yaml +status: + availableReplicas: 2 + conditions: + - lastTransitionTime: 2016-10-04T12:25:39Z + lastUpdateTime: 2016-10-04T12:25:39Z + message: Replica set "nginx-deployment-4262182780" is progressing. + reason: ReplicaSetUpdated + status: "True" + type: Progressing + - lastTransitionTime: 2016-10-04T12:25:42Z + lastUpdateTime: 2016-10-04T12:25:42Z + message: Deployment has minimum availability. + reason: MinimumReplicasAvailable + status: "True" + type: Available + - lastTransitionTime: 2016-10-04T12:25:39Z + lastUpdateTime: 2016-10-04T12:25:39Z + message: 'Error creating: pods "nginx-deployment-4262182780-" is forbidden: exceeded quota: + object-counts, requested: pods=1, used: pods=3, limited: pods=2' + reason: FailedCreate + status: "True" + type: ReplicaFailure + observedGeneration: 3 + replicas: 2 + unavailableReplicas: 2 +``` + +Finalement, une fois la date limite de progression du déploiement dépassée, Kubernetes met à jour le statut et la raison de la condition de progression: + +```text +Conditions: + Type Status Reason + ---- ------ ------ + Available True MinimumReplicasAvailable + Progressing False ProgressDeadlineExceeded + ReplicaFailure True FailedCreate +``` + +Vous pouvez résoudre un problème de quota insuffisant en réduisant votre déploiement, en réduisant d'autres contrôleurs que vous exécutez ou en augmentant le quota de votre namespace. +Si vous remplissez les conditions de quota et que le contrôleur de déploiement termine ensuite le déploiement de déploiement, vous verrez la mise à jour de l'état du déploiement avec une condition réussie (`Status=True` et `Reason=NewReplicaSetAvailable`). + +```text +Conditions: + Type Status Reason + ---- ------ ------ + Available True MinimumReplicasAvailable + Progressing True NewReplicaSetAvailable +``` + +`Type=Available` avec `Status=True` signifie que votre déploiement a une disponibilité minimale. +La disponibilité minimale est dictée par les paramètres spécifiés dans la stratégie de déploiement. +`Type=Progressing` avec `Status=True` signifie que votre déploiement est soit au milieu d'un déploiement et qu'il progresse ou qu'il a terminé avec succès sa progression et que les nouvelles répliques minimales requises sont disponibles (voir la raison de la condition pour les détails - dans notre cas, `Reason=NewReplicaSetAvailable` signifie que le déploiement est terminé). + +Vous pouvez vérifier si un déploiement n'a pas pu progresser en utilisant `kubectl rollout status`. +`kubectl rollout status` renvoie un code de sortie différent de zéro si le déploiement a dépassé le délai de progression. 
+ +```shell +kubectl rollout status deployment.v1.apps/nginx-deployment +``` + +La sortie est similaire à ceci: + +```text +Waiting for rollout to finish: 2 out of 3 new replicas have been updated... +error: deployment "nginx" exceeded its progress deadline +$ echo $? +1 +``` + +### Agir sur un déploiement échoué + +Toutes les actions qui s'appliquent à un déploiement complet s'appliquent également à un déploiement ayant échoué. +Vous pouvez le mettre à l'échelle à la hausse/baisse, revenir à une révision précédente ou même la suspendre si vous devez appliquer plusieurs réglages dans le modèle de pod de déploiement. + +## Politique de nettoyage + +Vous pouvez définir le champ `.spec.revisionHistoryLimit` dans un déploiement pour spécifier le nombre d'anciens ReplicaSets pour ce déploiement que vous souhaitez conserver. +Le reste sera effacé en arrière-plan. +Par défaut, c'est 10. + +{{< note >}} +La définition explicite de ce champ sur 0 entraînera le nettoyage de tout l'historique de votre déploiement, de sorte que le déploiement ne pourra pas revenir en arrière. +{{< /note >}} + +## Déploiement des Canaries + +Si vous souhaitez déployer des versions sur un sous-ensemble d'utilisateurs ou de serveurs à l'aide du déploiement, vous pouvez créer plusieurs déploiements, un pour chaque version, en suivant le modèle canari décrit dans [gestion des ressources](/docs/concepts/cluster-administration/manage-deployment/#canary-deployments). + +## Écriture d'une spécification de déploiement + +Comme pour toutes les autres configurations Kubernetes, un déploiement a besoin des champs `apiVersion`, `kind` et `metadata`. +Pour des informations générales sur l'utilisation des fichiers de configuration, voir [déploiement d'applications](/docs/tutorials/stateless-application/run-stateless-application-deployment/), configuration des conteneurs, et [Utilisation de kubectl pour gérer les ressources](/docs/concepts/overview/working-with-objects/object-management/). + +Un déploiement nécessite également un [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status). + +### Pod Template + +Les `.spec.template` et `.spec.selector` sont les seuls champs obligatoires du `.spec`. + +Le `.spec.template` est un [Pod template](/fr/docs/concepts/workloads/pods/pod-overview/#pod-templates). +Il a exactement le même schéma qu'un [Pod](/fr/docs/concepts/workloads/pods/pod/), sauf qu'il est imbriqué et n'a pas de `apiVersion` ou de `kind`. + +En plus des champs obligatoires pour un pod, un Pod Template dans un déploiement doit spécifier des labels appropriées et une stratégie de redémarrage appropriée. +Pour les labels, assurez-vous de ne pas chevaucher l'action d'autres contrôleurs. +Voir [sélecteur](#selector)). + +Seulement un [`.spec.template.spec.restartPolicy`](/fr/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) égal à `Always` est autorisé, ce qui est la valeur par défaut s'il n'est pas spécifié. + +### Répliques + +`.spec.replicas` est un champ facultatif qui spécifie le nombre de pods souhaités. +Il vaut par défaut 1. + +### Sélecteur + +`.spec.selector` est un champ obligatoire qui spécifie un [sélecteur de labels](/docs/concepts/overview/working-with-objects/labels/) pour les pods ciblés par ce déploiement. + +`.spec.selector` doit correspondre `.spec.template.metadata.labels`, ou il sera rejeté par l'API. 
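+
+À titre d'illustration (commande donnée à titre indicatif, en reprenant le déploiement `nginx-deployment` des exemples précédents), vous pouvez vérifier que le sélecteur correspond bien aux labels du template de Pod:
+
+```shell
+# Affiche le sélecteur puis les labels du template de Pod; les deux doivent correspondre
+kubectl get deployment nginx-deployment -o jsonpath='{.spec.selector.matchLabels}{"\n"}{.spec.template.metadata.labels}{"\n"}'
+```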
+
+Dans la version d'API `apps/v1`, `.spec.selector` et `.metadata.labels` ne prennent pas par défaut la valeur de `.spec.template.metadata.labels` s'ils ne sont pas définis.
+Ils doivent donc être définis explicitement.
+Notez également que `.spec.selector` est immuable après la création du déploiement dans `apps/v1`.
+
+Un déploiement peut mettre fin aux pods dont les étiquettes correspondent au sélecteur si leur modèle est différent de `.spec.template` ou si le nombre total de ces pods dépasse `.spec.replicas`.
+Il fait apparaître de nouveaux pods avec `.spec.template` si le nombre de pods est inférieur au nombre souhaité.
+
+{{< note >}}
+Vous ne devez pas créer d'autres pods dont les labels correspondent à ce sélecteur, soit directement, soit en créant un autre déploiement, soit en créant un autre contrôleur tel qu'un ReplicaSet ou un ReplicationController.
+Si vous le faites, le premier déploiement pense qu'il a créé ces autres pods.
+Kubernetes ne vous empêche pas de le faire.
+{{< /note >}}
+
+Si vous avez plusieurs contrôleurs qui ont des sélecteurs qui se chevauchent, les contrôleurs se battront entre eux et ne se comporteront pas correctement.
+
+### Stratégie
+
+`.spec.strategy` spécifie la stratégie utilisée pour remplacer les anciens pods par de nouveaux.
+`.spec.strategy.type` peut être "Recreate" ou "RollingUpdate".
+"RollingUpdate" est la valeur par défaut.
+
+#### Déploiement Recreate
+
+Tous les pods existants sont tués avant que de nouveaux ne soient créés lorsque `.spec.strategy.type==Recreate`.
+
+#### Déploiement de mise à jour continue
+
+Le déploiement met à jour les pods dans une [mise à jour continue](/docs/tasks/run-application/rolling-update-replication-controller/) quand `.spec.strategy.type==RollingUpdate`.
+Vous pouvez spécifier `maxUnavailable` et `maxSurge` pour contrôler le processus de mise à jour continue.
+
+##### Max non disponible
+
+`.spec.strategy.rollingUpdate.maxUnavailable` est un champ facultatif qui spécifie le nombre maximal de pods qui peuvent être indisponibles pendant le processus de mise à jour.
+La valeur peut être un nombre absolu (par exemple, 5) ou un pourcentage des pods souhaités (par exemple, 10%).
+Le nombre absolu est calculé à partir du pourcentage en arrondissant vers le bas.
+La valeur ne peut pas être 0 si `.spec.strategy.rollingUpdate.maxSurge` est 0.
+La valeur par défaut est 25%.
+
+Par exemple, lorsque cette valeur est définie sur 30%, l'ancien ReplicaSet peut être réduit à 70% des pods souhaités immédiatement au démarrage de la mise à jour continue.
+Une fois que les nouveaux pods sont prêts, l'ancien ReplicaSet peut être réduit davantage, suivi d'une augmentation du nouveau ReplicaSet, garantissant que le nombre total de pods disponibles à tout moment pendant la mise à jour est d'au moins 70% des pods souhaités.
+
+##### Max Surge
+
+`.spec.strategy.rollingUpdate.maxSurge` est un champ facultatif qui spécifie le nombre maximal de pods pouvant être créés au-delà du nombre de pods souhaité.
+La valeur peut être un nombre absolu (par exemple, 5) ou un pourcentage des pods souhaités (par exemple, 10%).
+La valeur ne peut pas être 0 si `MaxUnavailable` est 0.
+Le nombre absolu est calculé à partir du pourcentage en arrondissant vers le haut.
+La valeur par défaut est 25%.
+
+Par exemple, lorsque cette valeur est définie sur 30%, le nouveau ReplicaSet peut être mis à l'échelle immédiatement au démarrage de la mise à jour continue, de sorte que le nombre total d'anciens et de nouveaux pods ne dépasse pas 130% des pods souhaités.
+Une fois que les anciens pods ont été détruits, le nouveau ReplicaSet peut être augmenté davantage, garantissant que le nombre total de pods en cours d'exécution à tout moment pendant la mise à jour est au maximum de 130% des pods souhaités.
+
+### Progress Deadline Seconds
+
+`.spec.progressDeadlineSeconds` est un champ facultatif qui spécifie le nombre de secondes pendant lesquelles vous souhaitez attendre que votre déploiement progresse avant que le système ne signale que le déploiement a [échoué](#failed-deployment). Cet échec est signalé comme une condition avec `Type=Progressing`, `Status=False` et `Reason=ProgressDeadlineExceeded` dans l'état de la ressource.
+Le contrôleur de déploiement continuera de réessayer le déploiement.
+À l'avenir, une fois la restauration automatique implémentée, le contrôleur de déploiement annulera un déploiement dès qu'il observera une telle condition.
+
+S'il est spécifié, ce champ doit être supérieur à `.spec.minReadySeconds`.
+
+### Min Ready Seconds
+
+`.spec.minReadySeconds` est un champ facultatif qui spécifie le nombre minimum de secondes pendant lequel un pod nouvellement créé doit être prêt sans qu'aucun de ses conteneurs ne plante, pour qu'il soit considéré comme disponible.
+La valeur par défaut est 0 (le pod sera considéré comme disponible dès qu'il sera prêt).
+Pour en savoir plus sur le moment où un pod est considéré comme prêt, consultez [Sondes de conteneur](/fr/docs/concepts/workloads/pods/pod-lifecycle/#container-probes).
+
+### Rollback To
+
+Le champ `.spec.rollbackTo` est obsolète dans les versions d'API `extensions/v1beta1` et `apps/v1beta1` et n'est plus pris en charge dans les versions d'API commençant par `apps/v1beta2`.
+Utilisez `kubectl rollout undo` pour [Revenir à une révision précédente](#revenir-à-une-révision-précédente).
+
+### Limite de l'historique des révisions
+
+L'historique de révision d'un déploiement est stocké dans les ReplicaSets qu'il contrôle.
+
+`.spec.revisionHistoryLimit` est un champ facultatif qui spécifie le nombre d'anciens ReplicaSets à conserver pour permettre la restauration.
+Ces anciens ReplicaSets consomment des ressources dans `etcd` et encombrent la sortie de `kubectl get rs`.
+La configuration de chaque révision de déploiement est stockée dans ses ReplicaSets; par conséquent, une fois un ancien ReplicaSet supprimé, vous perdez la possibilité de revenir à cette révision du déploiement.
+Par défaut, 10 anciens ReplicaSets seront conservés, mais la valeur idéale dépend de la fréquence et de la stabilité des nouveaux déploiements.
+
+Plus précisément, la définition de ce champ à zéro signifie que tous les anciens ReplicaSets avec 0 réplicas seront nettoyés.
+Dans ce cas, un nouveau déploiement (rollout) du Déploiement ne peut pas être annulé, car son historique de révision est nettoyé.
+
+### Paused
+
+`.spec.paused` est un champ booléen facultatif pour suspendre et reprendre un déploiement.
+La seule différence entre un déploiement suspendu et un autre qui n'est pas suspendu, c'est que toute modification apportée au `PodTemplateSpec` du déploiement suspendu ne déclenchera pas de nouveaux déploiements tant qu'il sera suspendu.
+Un déploiement n'est pas suspendu par défaut lors de sa création.
+
+## Alternative aux déploiements
+
+### kubectl rolling-update
+
+[`kubectl rolling-update`](/docs/reference/generated/kubectl/kubectl-commands#rolling-update) met à jour les pods et les ReplicationControllers de la même manière.
+Mais les déploiements sont recommandés, car ils sont déclaratifs, côté serveur et ont des fonctionnalités supplémentaires, telles que la restauration de toute révision précédente même après la mise à jour progressive.. + +{{% /capture %}} diff --git a/content/fr/docs/concepts/workloads/pods/pod-lifecycle.md b/content/fr/docs/concepts/workloads/pods/pod-lifecycle.md index fa36f6a9c0b2b..d570b13bbad27 100644 --- a/content/fr/docs/concepts/workloads/pods/pod-lifecycle.md +++ b/content/fr/docs/concepts/workloads/pods/pod-lifecycle.md @@ -113,7 +113,7 @@ en cours d'exécution : `Failure`. Si le Conteneur ne fournit pas de readiness probe, l'état par défaut est `Success`. -### Quand devez-vous uiliser une liveness ou une readiness probe ? +### Quand devez-vous utiliser une liveness ou une readiness probe ? Si le process de votre Conteneur est capable de crasher de lui-même lorsqu'il rencontre un problème ou devient inopérant, vous n'avez pas forcément besoin diff --git a/content/fr/docs/contribute/advanced.md b/content/fr/docs/contribute/advanced.md new file mode 100644 index 0000000000000..7cf7ad7cbad29 --- /dev/null +++ b/content/fr/docs/contribute/advanced.md @@ -0,0 +1,94 @@ +--- +title: Contributions avancées +slug: advanced +content_template: templates/concept +weight: 30 +--- + +{{% capture overview %}} + +Cette page suppose que vous avez lu et maîtrisé les sujets suivants : [Commencez à contribuer](/docs/contribute/start/) et [Contribution Intermédiaire](/docs/contribute/intermediate/) et êtes prêts à apprendre plus de façons de contribuer. +Vous devez utiliser Git et d'autres outils pour certaines de ces tâches. + +{{% /capture %}} + +{{% capture body %}} + +## Soyez le trieur de PR pendant une semaine + +Les [approbateurs SIG Docs](/docs/contribute/participating/#approvers) peuvent être trieurs de Pull Request (PR). + +Les approbateurs SIG Docs sont ajoutés au [PR Wrangler rotation scheduler](https://github.com/kubernetes/website/wiki/PR-Wranglers) pour les rotations hebdomadaires. +Les fonctions de trieur de PR incluent: + +- Faire une revue quotidienne des nouvelles pull requests. + - Aidez les nouveaux contributeurs à signer le CLA et fermez toutes les PR où le CLA n'a pas été signé depuis deux semaines. + Les auteurs de PR peuvent rouvrir la PR après avoir signé le CLA, c’est donc un moyen à faible risque de s’assurer que rien n’est merged sans un CLA signé. + - Fournir des informations sur les modifications proposées, notamment en facilitant les examens techniques des membres d'autres SIGs. + - Faire un merge des PRs quand elles sont prêtes, ou fermer celles qui ne devraient pas être acceptées. +- Triez et étiquetez les tickets entrants (Github Issues) chaque jour. + Consultez [Contributions Intermédiaires](/docs/contribute/intermediate/) pour obtenir des instructions sur la manière dont SIG Docs utilise les métadonnées. + +### Requêtes Github utiles pour les trieurs + +Les requêtes suivantes sont utiles lors des opérations de triage. +Après avoir utilisé ces trois requêtes, la liste restante de PRs devant être examinées est généralement petite. +Ces requêtes excluent spécifiquement les PRs de localisation, et n'incluent que la branche `master` (sauf la derniere). + +- [Pas de CLA, non éligible au merge](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+label%3A%22cncf-cla%3A+no%22+-label%3Ado-not-merge+label%3Alanguage%2Fen): + Rappelez au contributeur de signer le CLA. 
S’ils ont déjà été rappelés à la fois par le bot et par un humain, fermez la PR et rappelez-leur qu'ils peuvent l'ouvrir après avoir signé le CLA. + **Nous ne pouvons même pas passer en revue les PR dont les auteurs n'ont pas signé le CLA !** +- [A besoin de LGTM](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Aopen+is%3Apr+-label%3Ado-not-merge+label%3Alanguage%2Fen+-label%3Algtm+): + Si cela nécessite une révision technique, contactez l'un des réviseurs proposés par le bot. + Si cela nécessite une révision de la documentation ou une édition, vous pouvez soit suggérer des modifications, soit ajouter un commit d'édition à la PR pour la faire avancer. +- [A des LGTM, a besoin de docs approval](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+-label%3Ado-not-merge+label%3Alanguage%2Fen+label%3Algtm): + Voyez si vous pouvez comprendre ce qui doit se passer pour que la PR soit mergée. +- [Not against master](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Aopen+is%3Apr+-label%3Ado-not-merge+label%3Alanguage%2Fen+-base%3Amaster): Si c'est basé sur une branche `dev-`, c'est pour une release prochaine. + Assurez vous que le [release meister](https://github.com/kubernetes/sig-release/tree/master/release-team) est au courant. + Si elle se base sur une branche obsolète, aidez l'auteur de la PR à comprendre comment choisir la meilleure branche. + +## Proposer des améliorations + +Les [membres](/docs/contribute/participating/#members) SIG Docs peuvent proposer des améliorations. + +Après avoir contribué à la documentation de Kubernetes pendant un certain temps, vous pouvez avoir des idées pour améliorer le guide de style, les outils utilisés pour construire la documentation, le style du site, les processus de révision et faire un merge de pull requests, ou d'autres aspects de la documentation. +Pour une transparence maximale, ces types de propositions doivent être discutées lors d’une réunion SIG Docs ou sur la [liste de diffusion kubernetes-sig-docs](https://groups.google.com/forum/#!forum/kubernetes-sig-docs). +En outre, il peut être vraiment utile de situer le fonctionnement actuel et de déterminer les raisons pour lesquelles des décisions antérieures ont été prises avant de proposer des changements radicaux. +Le moyen le plus rapide d’obtenir des réponses aux questions sur le fonctionnement actuel de la documentation est de le demander dans le canal `#sig-docs` sur le Slack officiel [kubernetes.slack.com](https://kubernetes.slack.com) + +Une fois que la discussion a eu lieu et que le SIG est d'accord sur le résultat souhaité, vous pouvez travailler sur les modifications proposées de la manière la plus appropriée. +Par exemple, une mise à jour du guide de style ou du fonctionnement du site Web peut impliquer l’ouverture d’une pull request, une modification liée aux tests de documentation peut impliquer de travailler avec sig-testing. + +## Coordonner la documentation pour une version de Kubernetes + +[Les approbateurs](/docs/contribute/participating/#approvers) SIG Docs peuvent coordonner les tâches liées à la documentation pour une release de Kubernetes. + +Chaque release de Kubernetes est coordonnée par une équipe de personnes participant au sig-release Special Interest Group (SIG). +Les autres membres de l'équipe de publication pour une release donnée incluent un responsable général de la publication, ainsi que des représentants de sig-pm, de sig-testing et d'autres. 
+Pour en savoir plus sur les processus de release de Kubernetes, reportez-vous à la section [https://github.com/kubernetes/sig-release](https://github.com/kubernetes/sig-release). + +Le représentant de SIG Docs pour une release donnée coordonne les tâches suivantes: + +- Surveillez le feature-tracking spreadsheet pour les fonctionnalités nouvelles ou modifiées ayant un impact sur la documentation. + Si la documentation pour une fonctionnalité donnée ne sera pas prête pour la release, la fonctionnalité peut ne pas être autorisée à entrer dans la release. +- Assistez régulièrement aux réunions de sig-release et donnez des mises à jour sur l'état de la documentation pour la release. +- Consultez et copiez la documentation de la fonctionnalité rédigée par le SIG responsable de la mise en œuvre de la fonctionnalité. +- Mergez les pull requests liées à la release et maintenir la branche de fonctionnalité Git pour la version. +- Encadrez d'autres contributeurs SIG Docs qui souhaitent apprendre à jouer ce rôle à l'avenir. +  Ceci est connu comme "l'observation" (shadowing en anglais). +- Publiez les modifications de la documentation relatives à la version lorsque les artefacts de la version sont publiés. + +La coordination d'une publication est généralement un engagement de 3 à 4 mois et les tâches sont alternées entre les approbateurs SIG Docs. + +## Parrainez un nouveau contributeur + +Les [relecteurs](/docs/contribute/participating/#reviewers) SIG Docs peuvent parrainer de nouveaux contributeurs. + +Après que les nouveaux contributeurs aient soumis avec succès 5 pull requests significatives vers un ou plusieurs dépôts Kubernetes, ils/elles sont éligibles pour postuler à l'[adhésion](/docs/contribute/participating#members) dans l'organisation Kubernetes. +L'adhésion des contributeurs doit être soutenue par deux sponsors qui sont déjà des réviseurs. + +Les nouveaux contributeurs docs peuvent demander des sponsors dans le canal #sig-docs sur le [Slack Kubernetes](https://kubernetes.slack.com) ou sur la [mailing list SIG Docs](https://groups.google.com/forum/#!forum/kubernetes-sig-docs). +Si vous vous sentez confiant dans le travail des candidats, vous vous portez volontaire pour les parrainer. +Lorsqu’ils soumettent leur demande d’adhésion, répondez-y avec un "+1" et indiquez les raisons pour lesquelles vous estimez que les demandeurs sont des candidat(e)s valables pour devenir membre de l’organisation Kubernetes. + +{{% /capture %}} diff --git a/content/fr/docs/setup/learning-environment/minikube.md b/content/fr/docs/setup/learning-environment/minikube.md index 4d9ef1a3fcbe2..d1a521c69d264 100644 --- a/content/fr/docs/setup/learning-environment/minikube.md +++ b/content/fr/docs/setup/learning-environment/minikube.md @@ -462,13 +462,13 @@ Celles-ci ne sont pas configurables pour le moment et diffèrent selon le pilote Le partage de dossier hôte n'est pas encore implémenté dans le pilote KVM. 
{{< /note >}} -| Pilote | OS | HostFolder | VM | -|---------------|---------|------------|-----------| -| VirtualBox | Linux | /home | /hosthome | -| VirtualBox | macOS | /Users | /Users | -| VirtualBox | Windows | C://Users | /c/Users | -| VMware Fusion | macOS | /Users | /Users | -| Xhyve | macOS | /Users | /Users | +| Pilote | OS | HostFolder | VM | +|---------------|---------|-------------|-------------| +| VirtualBox | Linux | ``/home`` |``/hosthome``| +| VirtualBox | macOS | ``/Users`` |``/Users`` | +| VirtualBox | Windows | ``C:/Users``|``/c/Users`` | +| VMware Fusion | macOS | ``/Users`` |``/Users`` | +| Xhyve | macOS | ``/Users`` |``/Users`` | ## Registres de conteneurs privés diff --git a/content/fr/docs/tasks/access-application-cluster/web-ui-dashboard.md b/content/fr/docs/tasks/access-application-cluster/web-ui-dashboard.md new file mode 100644 index 0000000000000..f40e6c4fa3e6b --- /dev/null +++ b/content/fr/docs/tasks/access-application-cluster/web-ui-dashboard.md @@ -0,0 +1,221 @@ +--- +title: Tableau de bord (Dashboard) +content_template: templates/concept +weight: 10 +card: + name: tasks + weight: 30 + title: Utiliser le tableau de bord (Dashboard) +--- + +{{% capture overview %}} + +Le tableau de bord (Dashboard) est une interface web pour Kubernetes. +Vous pouvez utiliser ce tableau de bord pour déployer des applications conteneurisées dans un cluster Kubernetes, dépanner votre application conteneurisée et gérer les ressources du cluster. +Vous pouvez utiliser le tableau de bord pour obtenir une vue d'ensemble des applications en cours d'exécution dans votre cluster, ainsi que pour créer ou modifier des ressources Kubernetes individuelles. (comme des Deployments, Jobs, DaemonSets, etc). +Par exemple, vous pouvez redimensionner un Deployment, lancer une mise à jour progressive, recréer un pod ou déployez de nouvelles applications à l'aide d'un assistant de déploiement. + +Le tableau de bord fournit également des informations sur l'état des ressources Kubernetes de votre cluster et sur les erreurs éventuelles. + +![Tableau de bord Kubernetes](/images/docs/ui-dashboard.png) + +{{% /capture %}} + +{{% capture body %}} + +## Déploiement du tableau de bord + +L'interface utilisateur du tableau de bord n'est pas déployée par défaut. +Pour le déployer, exécutez la commande suivante: + +```text +kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended.yaml +``` + +## Accès à l'interface utilisateur du tableau de bord + +Pour protéger vos données dans le cluster, le tableau de bord se déploie avec une configuration RBAC minimale par défaut. +Actuellement, le tableau de bord prend uniquement en charge la connexion avec un jeton de support. +Pour créer un jeton pour cette démo, vous pouvez suivre notre guide sur [créer un exemple d'utilisateur](https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md). + +{{< warning >}} +L’exemple d’utilisateur créé dans le didacticiel disposera de privilèges d’administrateur et servira uniquement à des fins pédagogiques. +{{< /warning >}} + +### Proxy en ligne de commande + +Vous pouvez accéder au tableau de bord à l'aide de l'outil en ligne de commande kubectl en exécutant la commande suivante: + +```text +kubectl proxy +``` + +Kubectl mettra le tableau de bord à disposition à l'adresse suivante: . + +Vous ne pouvez accéder à l'interface utilisateur _que_ depuis la machine sur laquelle la commande est exécutée. 
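+
+Si nécessaire, vous pouvez ajuster le comportement du proxy (exemples donnés à titre indicatif ; exposer le tableau de bord au-delà de la machine locale n'est généralement pas recommandé):
+
+```shell
+# Utiliser un autre port local
+kubectl proxy --port=8080
+
+# Écouter sur toutes les interfaces et accepter des hôtes distants
+# (à réserver aux environnements de test: le tableau de bord devient accessible depuis d'autres machines)
+kubectl proxy --address='0.0.0.0' --accept-hosts='^.*$'
+```
+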
+Voir `kubectl proxy --help` pour plus d'options. + +{{< note >}} +La méthode d'authentification Kubeconfig ne prend pas en charge les fournisseurs d'identité externes ni l'authentification basée sur un certificat x509. +{{< /note >}} + +## Page de bienvenue + +Lorsque vous accédez au tableau de bord sur un cluster vide, la page d'accueil s'affiche. +Cette page contient un lien vers ce document ainsi qu'un bouton pour déployer votre première application. +De plus, vous pouvez voir quelles applications système sont exécutées par défaut dans le [namespace](/docs/tasks/administer-cluster/namespaces/) `kubernetes-dashboard` de votre cluster, par exemple le tableau de bord lui-même. + +![Page d'accueil du tableau de bord Kubernetes](/images/docs/ui-dashboard-zerostate.png) + +## Déploiement d'applications conteneurisées + +Le tableau de bord vous permet de créer et de déployer une application conteneurisée en tant que Deployment et optionnellement un Service avec un simple assistant. +Vous pouvez spécifier manuellement les détails de l'application ou charger un fichier YAML ou JSON contenant la configuration de l'application. + +Cliquez sur le bouton **CREATE** dans le coin supérieur droit de n’importe quelle page pour commencer. + +### Spécifier les détails de l'application + +L'assistant de déploiement s'attend à ce que vous fournissiez les informations suivantes: + +- **App name** (obligatoire): Nom de votre application. + Un [label](/docs/concepts/overview/working-with-objects/labels/) avec le nom sera ajouté au Deployment et Service, le cas échéant, qui sera déployé. + + Le nom de l'application doit être unique dans son [namespace](/docs/tasks/administer-cluster/namespaces/) Kubernetes. + Il doit commencer par une lettre minuscule et se terminer par une lettre minuscule ou un chiffre et ne contenir que des lettres minuscules, des chiffres et des tirets (-). + Il est limité à 24 caractères. + Les espaces de début et de fin sont ignorés. + +- **Container image** (obligatoire): L'URL d'une [image de conteneur](/docs/concepts/containers/images/) sur n'importe quel registre, ou une image privée (généralement hébergée sur le registre de conteneurs Google ou le hub Docker). + La spécification d'image de conteneur doit se terminer par un deux-points. + +- **Number of pods** (obligatoire): Nombre cible de pods dans lesquels vous souhaitez déployer votre application. + La valeur doit être un entier positif. + + Un objet [Deployment](/docs/concepts/workloads/controllers/deployment/) sera créé pour maintenir le nombre souhaité de pods dans votre cluster. + +- **Service** (optionnel): Pour certaines parties de votre application (par exemple les serveurs frontaux), vous souhaiterez peut-être exposer un [Service](/docs/concepts/services-networking/service/) sur une adresse IP externe, peut-être publique, en dehors de votre cluster (Service externe). + Pour les Services externes, vous devrez peut-être ouvrir un ou plusieurs ports pour le faire. + Trouvez plus de détails [ici](/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/). + + Les autres services visibles uniquement de l'intérieur du cluster sont appelés Services internes. + + Quel que soit le type de service, si vous choisissez de créer un service et que votre conteneur écoute sur un port (entrant), vous devez spécifier deux ports. + Le Service sera créé en mappant le port (entrant) sur le port cible vu par le conteneur. + Ce Service acheminera le trafic vers vos pods déployés. + Les protocoles pris en charge sont TCP et UDP. 
+ Le nom DNS interne de ce service sera la valeur que vous avez spécifiée comme nom d'application ci-dessus. + +Si nécessaire, vous pouvez développer la section **Options avancées** dans laquelle vous pouvez spécifier davantage de paramètres: + +- **Description**: Le texte que vous entrez ici sera ajouté en tant qu'[annotation](/docs/concepts/overview/working-with-objects/annotations/) au Deployment et affiché dans les détails de l'application. + +- **Labels**: Les [labels](/docs/concepts/overview/working-with-objects/labels/) par défaut à utiliser pour votre application sont le nom et la version de l’application. + Vous pouvez spécifier des labels supplémentaires à appliquer au Deployment, Service (le cas échéant), et Pods, tels que la release, l'environnement, le niveau, la partition et la piste d'édition. + + Exemple: + + ```conf + release=1.0 + tier=frontend + environment=pod + track=stable + ``` + +- **Namespace**: Kubernetes prend en charge plusieurs clusters virtuels s'exécutant sur le même cluster physique. + Ces clusters virtuels sont appelés [namespaces](/docs/tasks/administer-cluster/namespaces/). + Ils vous permettent de partitionner les ressources en groupes nommés de manière logique. + + Le tableau de bord propose tous les namespaces disponibles dans une liste déroulante et vous permet de créer un nouveau namespace. + Le nom du namespace peut contenir au maximum 63 caractères alphanumériques et des tirets (-), mais ne peut pas contenir de lettres majuscules. + Les noms de Namespace ne devraient pas être composés uniquement de chiffres. + Si le nom est défini sous la forme d’un nombre, tel que 10, le pod sera placé dans le namespace par défaut. + + Si la création du namespace réussit, celle-ci est sélectionnée par défaut. + Si la création échoue, le premier namespace est sélectionné. + +- **Image Pull Secret**: Si l'image de conteneur spécifiée est privée, il peut être nécessaire de configurer des identifiants de [pull secret](/docs/concepts/configuration/secret/). + + Le tableau de bord propose tous les secrets disponibles dans une liste déroulante et vous permet de créer un nouveau secret. + Le nom de secret doit respecter la syntaxe du nom de domaine DNS, par exemple. `new.image-pull.secret`. + Le contenu d'un secret doit être codé en base64 et spécifié dans un fichier [`.dockercfg`](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod). + Le nom du secret peut contenir 253 caractères maximum. + + Si la création du secret d’extraction d’image est réussie, celle-ci est sélectionnée par défaut. + Si la création échoue, aucun secret n'est appliqué. + +- **CPU requirement (cores)** et **Memory requirement (MiB)**: Vous pouvez spécifier les [limites de ressource](/docs/tasks/configure-pod-container/limit-range/) minimales pour le conteneur. + Par défaut, les pods fonctionnent avec des limites de CPU et de mémoire illimitées. + +- **Run command** et **Run command arguments**: Par défaut, vos conteneurs exécutent les valeurs par défaut de la [commande d'entrée](/docs/user-guide/containers/#containers-and-commands) de l'image spécifiée. + Vous pouvez utiliser les options de commande et les arguments pour remplacer la valeur par défaut. + +- **Run as privileged**: Ce paramètre détermine si les processus dans [conteneurs privilégiés](/docs/user-guide/pods/#privileged-mode-for-pod-containers) sont équivalents aux processus s'exécutant en tant que root sur l'hôte. 
+ Les conteneurs privilégiés peuvent utiliser des fonctionnalités telles que la manipulation de la pile réseau et l'accès aux périphériques. + +- **Environment variables**: Kubernetes expose ses Services via des [variables d'environnement](/docs/tasks/inject-data-application/environment-variable-expose-pod-information/). + Vous pouvez composer une variable d'environnement ou transmettre des arguments à vos commandes en utilisant les valeurs des variables d'environnement. + Ils peuvent être utilisés dans les applications pour trouver un Service. + Les valeurs peuvent référencer d'autres variables à l'aide de la syntaxe `$(VAR_NAME)`. + +### Téléchargement d'un fichier YAML ou JSON + +Kubernetes supporte la configuration déclarative. +Dans ce style, toute la configuration est stockée dans des fichiers de configuration YAML ou JSON à l'aide des schémas de ressources de l'[API](/docs/concepts/overview/kubernetes-api/) de Kubernetes. + +Au lieu de spécifier les détails de l'application dans l'assistant de déploiement, vous pouvez définir votre application dans des fichiers YAML ou JSON et télécharger les fichiers à l'aide du tableau de bord. + +## Utilisation du tableau de bord + +Les sections suivantes décrivent des vues du tableau de bord de Kubernetes; ce qu'elles fournissent et comment peuvent-elles être utilisées. + +### Navigation + +Lorsque des objets Kubernetes sont définis dans le cluster, le tableau de bord les affiche dans la vue initiale. +Par défaut, seuls les objets du namespace _default_ sont affichés, ce qui peut être modifié à l'aide du sélecteur d'espace de nom situé dans le menu de navigation. + +Le tableau de bord montre la plupart des types d'objets Kubernetes et les regroupe dans quelques catégories de menus. + +#### Vue d'ensemble de l'administrateur + +Pour les administrateurs de cluster et de namespace, le tableau de bord répertorie les noeuds, les namespaces et les volumes persistants et propose des vues de détail pour ceux-ci. +La vue Liste de nœuds contient les mesures d'utilisation de CPU et de la mémoire agrégées sur tous les nœuds. +La vue détaillée affiche les métriques d'un nœud, ses spécifications, son statut, les ressources allouées, les événements et les pods s'exécutant sur le nœud. + +#### Charges de travail + +Affiche toutes les applications en cours d'exécution dans le namespace selectionné. +La vue répertorie les applications par type de charge de travail. (e.g., Deployments, Replica Sets, Stateful Sets, etc.) et chaque type de charge de travail peut être visualisé séparément. +Les listes récapitulent les informations exploitables sur les charges de travail, telles que le nombre de Pods prêts pour un Replica Set ou l'utilisation actuelle de la mémoire pour un Pod. + +Les vues détaillées des charges de travail affichent des informations sur l'état et les spécifications, ainsi que les relations de surface entre les objets. +Par exemple, les Pods qu'un Replica Set controle ou bien les nouveaux Replica Sets et Horizontal Pod Autoscalers pour les Deployments. + +#### Services + +Affiche les ressources Kubernetes permettant d’exposer les services au monde externe et de les découvrir au sein d’un cluster. +Pour cette raison, les vues Service et Ingress montrent les Pods ciblés par eux, les points de terminaison internes pour les connexions au cluster et les points de terminaison externes pour les utilisateurs externes. 
+ +#### Stockage + +La vue de stockage montre les ressources Persistent Volume Claim qui sont utilisées par les applications pour stocker des données. + +#### Config Maps et Secrets + +Affiche toutes les ressources Kubernetes utilisées pour la configuration en temps réel d'applications s'exécutant dans des clusters. +La vue permet d’éditer et de gérer des objets de configuration et d’afficher les secrets cachés par défaut. + +#### Visualisation de journaux + +Les listes de Pod et les pages de détail renvoient à une visionneuse de journaux intégrée au tableau de bord. +Le visualiseur permet d’exploiter les logs des conteneurs appartenant à un seul Pod. + +![Visualisation de journaux](/images/docs/ui-dashboard-logs-view.png) + +{{% /capture %}} + +{{% capture whatsnext %}} + +Pour plus d'informations, voir la page du projet [Kubernetes Dashboard](https://github.com/kubernetes/dashboard). + +{{% /capture %}} diff --git a/content/fr/examples/controllers/nginx-deployment.yaml b/content/fr/examples/controllers/nginx-deployment.yaml new file mode 100644 index 0000000000000..f7f95deebbb23 --- /dev/null +++ b/content/fr/examples/controllers/nginx-deployment.yaml @@ -0,0 +1,21 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nginx-deployment + labels: + app: nginx +spec: + replicas: 3 + selector: + matchLabels: + app: nginx + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx:1.7.9 + ports: + - containerPort: 80 diff --git a/content/id/community/_index.html b/content/id/community/_index.html new file mode 100644 index 0000000000000..c37f67c000c71 --- /dev/null +++ b/content/id/community/_index.html @@ -0,0 +1,236 @@ +--- +title: Komunitas +layout: basic +cid: community +--- + +
+
+ Kubernetes Conference Gallery + Kubernetes Conference Gallery +
+ +
+
+

Komunitas Kubernetes -- pengguna, kontributor, dan budaya yang telah kami bangun bersama -- merupakan salah satu alasan terbesar yang menjadikan proyek open source ini melejit. Nilai dan budaya kami terus tumbuh dan berkembang seiring dengan pertumbuhan dan perkembangan proyek ini sendiri. Kami bekerja bersama untuk terus menyempurnakan proyek dan proses di dalamnya. +

Kami adalah orang-orang yang membantu menemukan masalah dan pull request, mengikuti pertemuan SIG, Kubernetes meetups, dan KubeCon, serta menyerukan untuk adopsi dan inovasinya, menjalankan kubectl get pods, serta berkontribusi pada ribuan area penting lainnya. Baca tentang bagaimana cara agar kamu dapat terlibat dan menjadi bagian dari komunitas hebat ini.

+
+
+ + +

+
+
+
+ Kubernetes Conference Gallery +
+ +
+ Kubernetes Conference Gallery +
+ +
+ Kubernetes Conference Gallery +
+ Kubernetes Conference Gallery + + +
+ + + +
+
+

+

+

Kode Etik Komunitas

+Komunitas Kubernetes menghargai penghormatan dan inklusivitas, dan menerapkan Kode Etik pada semua interaksi. Jika kamu menemukan pelanggaran Kode Etik pada suatu acara atau pertemuan, di Slack, atau pada mekanisme komunikasi lainnya, silakan hubungi conduct@kubernetes.io. Semua laporan dijamin kerahasiaannya. Kamu dapat membaca tentang komite di sini. +
+ +

+ + +BACA LEBIH LANJUT + +
+
+
+
+ + + +
+

+

+

Video

+ +
Kami hadir di YouTube, sering malah. Pastikan kamu berlangganan untuk mendapatkan beragam topik menarik.
+ + +
+ + +
+

+

+

Diskusi

+ +
Kami sangat sering berdiskusi. Temukan kami dan bergabung dalam obrolan dan diskusi pada beragam platform.
+ +
+ +
+Forum" + +forum ▶ + +
+Diskusi berdasarkan topik teknis yang menjembatani dokumentasi, StackOverflow, dan banyak hal lainnya +
+
+ +
+Twitter + +twitter ▶ + +
Pengumuman langsung dari postingan blog, acara, berita, ide-ide menarik +
+
+ +
+GitHub + +github ▶ + +
+Semua hal tentang project tracking dan isu, dan tentu saja kode +
+
+ +
+Stack Overflow + +stack overflow ▶ + +
+ Pemecahan masalah teknis untuk masalah  apa saja + +
+
+ + + +
+
+
+

+

+
+

Acara Mendatang

+ {{< upcoming-events >}} +
+
+ +
+
+
+

Komunitas Global

+Terdapat lebih dari 150 meetups di seluruh dunia dan terus bertumbuh, temukan kube people terdekat. Jika belum ada di dekat kamu, ambil inisiatif dan buat komunitasmu. +
+ +
+TEMUKAN MEETUP +
+
+ +
+
+ + + + +
+

+

+

Berita Terkini

+ +
+ + +
+



+
+ +
diff --git a/content/id/community/code-of-conduct.md b/content/id/community/code-of-conduct.md new file mode 100644 index 0000000000000..05e778acc4f6c --- /dev/null +++ b/content/id/community/code-of-conduct.md @@ -0,0 +1,24 @@ +--- +title: Komunitas +layout: basic +cid: community +css: /css/community.css +--- + +
+

Kode Etik Komunitas Kubernetes

+ +Kubernetes mengikuti +Kode Etik CNCF. +Teks dari CoC CNCF yang direplikasi di bawah ini berdasarkan commit 214585e. +Jika kamu menemukan halaman ini kedaluarsa, mohon +laporkan masalah ini. + +Jika kamu menemukan pelanggaran terhadap Kode Etik pada suatu acara atau pertemuan, di Slack, atau mekanisme komunikasi lainnya, silakan hubungi Komite Kode Etik Kubernetes. +Kamu dapat menghubungi kami melalui email di conduct@kubernetes.io. +Anonimitas kamu akan dilindungi. + +
+{{< include "/static/cncf-code-of-conduct.md" >}} +
+
diff --git a/content/id/community/static/cncf-code-of-conduct.md b/content/id/community/static/cncf-code-of-conduct.md new file mode 100644 index 0000000000000..7ee127a6b3a43 --- /dev/null +++ b/content/id/community/static/cncf-code-of-conduct.md @@ -0,0 +1,31 @@ +Pedoman Perilaku Komunitas CNCF V1.0 +------------------------------------ + +### Kode Etik Kontributor + +Sebagai kontributor dan pengelola proyek ini, dan untuk kepentingan pembinaan sebuah komunitas yang terbuka dan ramah, kami berjanji untuk menghormati semua orang yang berkontribusi melalui masalah pelaporan, memposting permintaan fitur, memperbarui dokumentasi, mengajukan permintaan atau tambalan, dan kegiatan lainnya. + +Kami berkomitmen untuk menjadikan partisipasi dalam proyek ini pengalaman yang bebas dari pelecehan semua orang, terlepas dari tingkat pengalaman, jenis kelamin, identitas dan ekspresi gender, orientasi seksual, disabilitas, penampilan pribadi, ukuran tubuh, ras, etnis, usia, agama, atau kebangsaan. + +Contoh perilaku yang tidak dapat diterima oleh peserta termasuk di antaranya: + +- Penggunaan bahasa atau citra seksual +- Penyerangan pribadi +- Trolling atau komentar yang menghina/merendahkan +- Pelecehan secara publik atau pribadi +- Mempublikasikan informasi pribadi orang lain, seperti alamat fisik atau elektronik, tanpa izin tegas +- Perilaku tidak etis atau tidak profesional lainnya. + +Pengelola proyek memiliki hak dan tanggung jawab untuk menghapus, mengedit, atau menolak komentar, komit, kode, suntingan wiki, isu, dan kontribusi lain yang tidak selaras dengan Kode Etik ini. Dengan mengadopsi Kode Etik ini, pengelola proyek berkomitmen untuk menerapkan prinsip-prinsip ini secara adil dan konsisten pada setiap aspek pengelolaan proyek ini. Pengelola proyek yang tidak mengikuti atau menegakkan Kode Etik dapat dihapus secara permanen dari tim proyek. + +Kode etik ini berlaku baik di dalam ruang proyek maupun di ruang publik ketika seorang individu mewakili proyek atau komunitasnya. + +Contoh perilaku kasar, melecehkan, atau tidak dapat diterima di Kubernetes dapat dilaporkan dengan menghubungi [Komite Kode Etik Kubernetes](https://git.k8s.io/community/committee-code-of-conduct) melalui . Untuk proyek lain, silakan hubungi pengelola proyek CNCF atau mediator kami, Mishi Choudhary . + +Kode Etik ini diadaptasi dari Covenant Contributor +, versi 1.2.0, tersedia di + + +### Pedoman Perilaku Acara CNCF + +Acara CNCF ini diatur oleh [Kode Etik](https://events.linuxfoundation.org/code-of-conduct/) Linux Foundation yang tersedia di halaman acara. Ini dirancang agar kompatibel dengan kebijakan di atas dan juga mencakup rincian lebih lanjut tentang hal menanggapi insiden. \ No newline at end of file diff --git a/content/id/docs/concepts/configuration/manage-compute-resources-container.md b/content/id/docs/concepts/configuration/manage-compute-resources-container.md new file mode 100644 index 0000000000000..61212e571d0cb --- /dev/null +++ b/content/id/docs/concepts/configuration/manage-compute-resources-container.md @@ -0,0 +1,631 @@ +--- +title: Mengatur Sumber Daya Komputasi untuk Container +content_template: templates/concept +weight: 20 +feature: + title: Bin Packing Otomatis + description: > + Menaruh kontainer-kontainer secara otomatis berdasarkan kebutuhan sumber daya mereka dan batasan-batasan lainnya, tanpa mengorbankan ketersediaan. Membaurkan beban-beban kerja kritis dan _best-effort_ untuk meningkatkan penggunaan sumber daya dan menghemat lebih banyak sumber daya. 
+---
+
+{{% capture overview %}}
+
+Saat kamu membuat spesifikasi sebuah [Pod](/docs/concepts/workloads/pods/pod/), kamu
+dapat secara opsional menentukan seberapa banyak CPU dan memori (RAM) yang dibutuhkan
+oleh setiap Container. Saat Container-Container menentukan _request_ (permintaan) sumber daya,
+scheduler dapat membuat keputusan yang lebih baik mengenai Node mana yang akan dipilih
+untuk menaruh Pod-Pod. Dan saat limit (batas) sumber daya Container-Container telah ditentukan,
+maka kemungkinan rebutan sumber daya pada sebuah Node dapat dihindari.
+Untuk informasi lebih lanjut mengenai perbedaan `request` dan `limit`, lihat [QoS Sumber Daya](https://git.k8s.io/community/contributors/design-proposals/node/resource-qos.md).
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Jenis-jenis sumber daya
+
+_CPU_ dan _memori_ masing-masing merupakan _jenis sumber daya_ (_resource type_).
+Sebuah jenis sumber daya memiliki satuan dasar. CPU ditentukan dalam satuan jumlah _core_,
+dan memori ditentukan dalam satuan _bytes_. Jika kamu menggunakan Kubernetes v1.14 ke atas,
+kamu dapat menentukan sumber daya _huge page_. _Huge page_ adalah fitur khusus Linux
+di mana kernel Node mengalokasikan blok-blok memori yang jauh lebih besar daripada ukuran
+_page_ bawaannya.
+
+Sebagai contoh, pada sebuah sistem di mana ukuran _page_ bawaannya adalah 4KiB, kamu
+dapat menentukan sebuah limit, `hugepages-2Mi: 80Mi`. Jika kontainer mencoba mengalokasikan
+lebih dari 40 _huge page_ berukuran 2MiB (total 80MiB), maka alokasi tersebut akan gagal.
+
+{{< note >}}
+Kamu tidak dapat melakukan _overcommit_ terhadap sumber daya `hugepages-*`.
+Hal ini berbeda dari sumber daya `memory` dan `cpu` (yang dapat di-_overcommit_).
+{{< /note >}}
+
+CPU dan memori secara kolektif disebut sebagai _sumber daya komputasi_, atau cukup
+_sumber daya_ saja. Sumber daya komputasi adalah jumlah yang dapat diminta, dialokasikan,
+dan dikonsumsi. Mereka berbeda dengan [sumber daya API](/docs/concepts/overview/kubernetes-api/).
+Sumber daya API, seperti Pod dan [Service](/docs/concepts/services-networking/service/) adalah
+objek-objek yang dapat dibaca dan diubah melalui Kubernetes API Server.
+
+## Request dan Limit Sumber daya dari Pod dan Container
+
+Setiap Container dari sebuah Pod dapat menentukan satu atau lebih dari hal-hal berikut:
+
+* `spec.containers[].resources.limits.cpu`
+* `spec.containers[].resources.limits.memory`
+* `spec.containers[].resources.limits.hugepages-<size>`
+* `spec.containers[].resources.requests.cpu`
+* `spec.containers[].resources.requests.memory`
+* `spec.containers[].resources.requests.hugepages-<size>`
+
+Walaupun `requests` dan `limits` hanya dapat ditentukan pada Container individual, akan
+lebih mudah untuk membahas tentang request dan limit sumber daya dari Pod. Sebuah
+_request/limit sumber daya Pod_ untuk jenis sumber daya tertentu adalah jumlah dari
+request/limit sumber daya pada jenis tersebut untuk semua Container di dalam Pod tersebut.
+
+## Arti dari CPU
+
+Limit dan request untuk sumber daya CPU diukur dalam satuan _cpu_.
+Satu cpu, dalam Kubernetes, adalah sama dengan:
+
+- 1 vCPU AWS
+- 1 Core GCP
+- 1 vCore Azure
+- 1 vCPU IBM
+- 1 *Hyperthread* pada sebuah prosesor Intel _bare-metal_ dengan Hyperthreading
+
+Request dalam bentuk pecahan diizinkan. Sebuah Container dengan
+`spec.containers[].resources.requests.cpu` bernilai `0.5` dijamin mendapat
+setengah CPU dibandingkan dengan yang meminta 1 CPU. Ekspresi nilai `0.1` ekuivalen
+dengan ekspresi nilai `100m`, yang dapat dibaca sebagai "seratus milicpu". Beberapa
+orang juga membacanya dengan "seratus milicore", dan keduanya ini dimengerti sebagai
+hal yang sama. Sebuah request dengan angka di belakang koma, seperti `0.1` dikonversi
+menjadi `100m` oleh API, dan presisi yang lebih kecil lagi dari `1m` tidak dibolehkan.
+Untuk alasan ini, bentuk `100m` mungkin lebih disukai.
+
+CPU juga selalu diminta dalam jumlah yang mutlak, tidak sebagai jumlah yang relatif;
+0.1 adalah jumlah CPU yang sama pada sebuah mesin _single-core_, _dual-core_, atau
+_48-core_.
+
+## Arti dari Memori
+
+Limit dan request untuk `memory` diukur dalam satuan _bytes_. Kamu dapat mengekspresikan
+memori sebagai _plain integer_ atau sebagai sebuah _fixed-point integer_ menggunakan
+satu dari sufiks-sufiks berikut: E, P, T, G, M, K. Kamu juga dapat menggunakan bentuk
+pangkat dua ekuivalennya: Ei, Pi, Ti, Gi, Mi, Ki.
+Sebagai contoh, nilai-nilai berikut kurang lebih sama:
+
+```shell
+128974848, 129e6, 129M, 123Mi
+```
+
+Berikut sebuah contoh.
+Pod berikut memiliki dua Container. Setiap Container memiliki request 0.25 cpu dan
+64MiB (2<sup>26</sup> bytes) memori. Setiap Container memiliki limit 0.5 cpu dan
+128MiB memori. Kamu dapat berkata bahwa Pod tersebut memiliki request 0.5 cpu dan
+128MiB memori, dan memiliki limit 1 cpu dan 256MiB memori.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: frontend
+spec:
+  containers:
+  - name: db
+    image: mysql
+    env:
+    - name: MYSQL_ROOT_PASSWORD
+      value: "password"
+    resources:
+      requests:
+        memory: "64Mi"
+        cpu: "250m"
+      limits:
+        memory: "128Mi"
+        cpu: "500m"
+  - name: wp
+    image: wordpress
+    resources:
+      requests:
+        memory: "64Mi"
+        cpu: "250m"
+      limits:
+        memory: "128Mi"
+        cpu: "500m"
+```
+
+## Bagaimana Pod-Pod dengan request sumber daya dijadwalkan
+
+Saat kamu membuat sebuah Pod, Kubernetes scheduler akan memilih sebuah Node
+untuk Pod tersebut untuk dijalankan. Setiap Node memiliki kapasitas maksimum
+untuk setiap jenis sumber daya: jumlah CPU dan memori yang dapat disediakan
+oleh Node tersebut untuk Pod-Pod. Scheduler memastikan bahwa, untuk setiap
+jenis sumber daya, jumlah semua request sumber daya dari Container-Container
+yang dijadwalkan lebih kecil dari kapasitas Node tersebut. Perlu dicatat
+bahwa walaupun penggunaan sumber daya memori atau CPU aktual/sesungguhnya pada
+Node-Node sangat rendah, scheduler tetap akan menolak untuk menaruh sebuah
+Pod pada sebuah Node jika pemeriksaan kapasitasnya gagal. Hal ini adalah untuk
+menjaga dari kekurangan sumber daya pada sebuah Node saat penggunaan sumber daya
+meningkat suatu waktu, misalnya pada saat titik puncak _traffic_ harian.
+
+## Bagaimana Pod-Pod dengan limit sumber daya dijalankan
+
+Saat Kubelet menjalankan sebuah Container dari sebuah Pod, Kubelet tersebut
+mengoper limit CPU dan memori ke _runtime_ kontainer.
+
+Ketika menggunakan Docker:
+
+- `spec.containers[].resources.requests.cpu` diubah menjadi nilai _core_-nya,
+  yang mungkin berbentuk angka pecahan, dan dikalikan dengan 1024. Nilai yang
+  lebih besar antara angka ini atau 2 digunakan sebagai nilai dari _flag_
+  [`--cpu-shares`](https://docs.docker.com/engine/reference/run/#cpu-share-constraint)
+  pada perintah `docker run`.
+
+- `spec.containers[].resources.limits.cpu` diubah menjadi nilai _millicore_-nya dan
+  dikalikan dengan 100. Nilai hasilnya adalah jumlah waktu CPU yang dapat digunakan oleh
+  sebuah kontainer setiap 100 milidetik.
Sebuah kontainer tidak dapat menggunakan lebih + dari jatah waktu CPU-nya selama selang waktu ini. + + {{< note >}} + Periode kuota bawaan adalah 100ms. Resolusi minimum dari kuota CPU adalah 1 milidetik. + {{}} + +- `spec.containers[].resources.limits.memory` diubah menjadi sebuah bilangan bulat, dan + digunakan sebagai nilai dari _flag_ [`--memory`](https://docs.docker.com/engine/reference/run/#/user-memory-constraints) + dari perintah `docker run`. + +Jika sebuah Container melebihi batas memorinya, Container tersebut mungkin akan diterminasi. +Jika Container tersebut dapat diulang kembali, Kubelet akan mengulangnya kembali, sama +seperti jenis kegagalan lainnya. + +Jika sebuah Container melebihi request memorinya, kemungkinan Pod-nya akan dipindahkan +kapanpun Node tersebut kehabisan memori. + +Sebuah Container mungkin atau mungkin tidak diizinkan untuk melebihi limit CPU-nya +untuk periode waktu yang lama. Tetapi, Container tersebut tidak akan diterminasi karena +penggunaan CPU yang berlebihan. + +Untuk menentukan apabila sebuah Container tidak dapat dijadwalkan atau sedang diterminasi +karena limit sumber dayanya, lihat bagian [Penyelesaian Masalah](#penyelesaian-masalah). + +## Memantau penggunaan sumber daya komputasi + +Penggunaan sumber daya dari sebuah Pod dilaporkan sebagai bagian dari kondisi Pod. + +Jika [_monitoring_ opsional](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/cluster-monitoring/README.md) diaktifkan pada klaster kamu, maka penggunaan sumber daya Pod dapat diambil +dari sistem _monitoring_ kamu. + +## Penyelesaian Masalah + +### Pod-Pod saya berkondisi Pending (tertunda) dengan _event message_ failedScheduling + +Jika scheduler tidak dapat menemukan Node manapun yang muat untuk sebuah Pod, +Pod tersebut tidak akan dijadwalkan hingga ditemukannya sebuah tempat yang +muat. Sebuah _event_ akan muncul setiap kali scheduler gagal menemukan tempat +untuk Pod tersebut, seperti berikut: + +```shell +kubectl describe pod frontend | grep -A 3 Events +``` +``` +Events: + FirstSeen LastSeen Count From Subobject PathReason Message + 36s 5s 6 {scheduler } FailedScheduling Failed for reason PodExceedsFreeCPU and possibly others +``` + +Pada contoh di atas, Pod bernama "frontend" gagal dijadwalkan karena kekurangan +sumber daya CPU pada Node tersebut. Pesan kesalahan yang serupa dapat juga menunjukkan +kegagalan karena kekurangan memori (PodExceedsFreeMemroy). Secara umum, jika sebuah +Pod berkondisi Pending (tertunda) dengan sebuah pesan seperti ini, ada beberapa hal yang +dapat dicoba: + +- Tambah lebih banyak Node pada klaster. +- Terminasi Pod-Pod yang tidak dibutuhkan untuk memberikan ruangan untuk Pod-Pod yang + tertunda. +- Periksa jika nilai request Pod tersebut tidak lebih besar dari Node-node yang ada. + Contohnya, jika semua Node memiliki kapasitas `cpu: 1`, maka Pod dengan request + `cpu: 1.1` tidak akan pernah dijadwalkan. + +Kamu dapat memeriksa kapasitas Node-Node dan jumlah-jumlah yang telah dialokasikan +dengan perintah `kubectl describe nodes`. Contohnya: + +```shell +kubectl describe nodes e2e-test-node-pool-4lw4 +``` +``` +Name: e2e-test-node-pool-4lw4 +[ ... lines removed for clarity ...] +Capacity: + cpu: 2 + memory: 7679792Ki + pods: 110 +Allocatable: + cpu: 1800m + memory: 7474992Ki + pods: 110 +[ ... beberapa baris dihapus untuk kejelasan ...] 
+Non-terminated Pods: (5 in total) + Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits + --------- ---- ------------ ---------- --------------- ------------- + kube-system fluentd-gcp-v1.38-28bv1 100m (5%) 0 (0%) 200Mi (2%) 200Mi (2%) + kube-system kube-dns-3297075139-61lj3 260m (13%) 0 (0%) 100Mi (1%) 170Mi (2%) + kube-system kube-proxy-e2e-test-... 100m (5%) 0 (0%) 0 (0%) 0 (0%) + kube-system monitoring-influxdb-grafana-v4-z1m12 200m (10%) 200m (10%) 600Mi (8%) 600Mi (8%) + kube-system node-problem-detector-v0.1-fj7m3 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%) +Allocated resources: + (Total limit mungkin melebihi 100 persen, misalnya, karena _overcommit_.) + CPU Requests CPU Limits Memory Requests Memory Limits + ------------ ---------- --------------- ------------- + 680m (34%) 400m (20%) 920Mi (12%) 1070Mi (14%) +``` + +Pada keluaran di atas, kamu dapat melihat bahwa jika sebuah Pod meminta lebih dari +1120m CPU atau 6.23Gi memori, Pod tersebut tidak akan muat pada Node tersebut. + +Dengan melihat pada bagian `Pods`, kamu dapat melihat Pod-Pod mana saja yang memakan +sumber daya pada Node tersebut. +Jumlah sumber daya yang tersedia untuk Pod-Pod kurang dari kapasitas Node, karena +_daemon_ sistem menggunakan sebagian dari sumber daya yang ada. Kolom `allocatable` pada +[NodeStatus](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#nodestatus-v1-core) +memberikan jumlah sumber daya yang tersedia untuk Pod-Pod. Untuk lebih lanjut, lihat +[Sumber daya Node yang dapat dialokasikan](https://git.k8s.io/community/contributors/design-proposals/node/node-allocatable.md). + +Fitur [kuota sumber daya](/docs/concepts/policy/resource-quotas/) dapat disetel untuk +membatasi jumlah sumber daya yang dapat digunakan. Jika dipakai bersama dengan Namespace, +kuota sumber daya dapat mencegah suatu tim menghabiskan semua sumber daya. + +### Container saya diterminasi + +Container kamu mungkin diterminasi karena Container tersebut melebihi batasnya. 
Untuk +memeriksa jika sebuah Container diterminasi karena ia melebihi batas sumber dayanya, +gunakan perintah `kubectl describe pod` pada Pod yang bersangkutan: + +```shell +kubectl describe pod simmemleak-hra99 +``` +``` +Name: simmemleak-hra99 +Namespace: default +Image(s): saadali/simmemleak +Node: kubernetes-node-tf0f/10.240.216.66 +Labels: name=simmemleak +Status: Running +Reason: +Message: +IP: 10.244.2.75 +Replication Controllers: simmemleak (1/1 replicas created) +Containers: + simmemleak: + Image: saadali/simmemleak + Limits: + cpu: 100m + memory: 50Mi + State: Running + Started: Tue, 07 Jul 2015 12:54:41 -0700 + Last Termination State: Terminated + Exit Code: 1 + Started: Fri, 07 Jul 2015 12:54:30 -0700 + Finished: Fri, 07 Jul 2015 12:54:33 -0700 + Ready: False + Restart Count: 5 +Conditions: + Type Status + Ready False +Events: + FirstSeen LastSeen Count From SubobjectPath Reason Message + Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {scheduler } scheduled Successfully assigned simmemleak-hra99 to kubernetes-node-tf0f + Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD pulled Pod container image "k8s.gcr.io/pause:0.8.0" already present on machine + Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD created Created with docker id 6a41280f516d + Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD started Started with docker id 6a41280f516d + Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} spec.containers{simmemleak} created Created with docker id 87348f12526a +``` + +Pada contoh di atas, `Restart Count: 5` menunjukkan bahwa Container `simmemleak` +pada Pod tersebut diterminasi dan diulang kembali sebanyak lima kali. + +Kamu dapat menggunakan perintah `kubectl get pod` dengan opsi `-o go-template=...` untuk +mengambil kondisi dari Container-Container yang sebelumnya diterminasi: + +```shell +kubectl get pod -o go-template='{{range.status.containerStatuses}}{{"Container Name: "}}{{.name}}{{"\r\nLastState: "}}{{.lastState}}{{end}}' simmemleak-hra99 +``` +``` +Container Name: simmemleak +LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-07T20:58:43Z finishedAt:2015-07-07T20:58:43Z containerID:docker://0e4095bba1feccdfe7ef9fb6ebffe972b4b14285d5acdec6f0d3ae8a22fad8b2]] +``` + +Kamu dapat lihat bahwa Container tersebut diterminasi karena `reason:OOM Killed`, di mana +`OOM` merupakan singkatan dari _Out Of Memory_, atau kehabisan memori. + + +## Penyimpanan lokal sementara +{{< feature-state state="beta" >}} + +Kubernetes versi 1.8 memperkenalkan sebuah sumber daya baru, _ephemeral-storage_ untuk mengatur penyimpanan lokal yang bersifat sementara. Pada setiap Node Kubernetes, direktori _root_ dari Kubelet (secara bawaan /var/lib/kubelet) dan direktori log (/var/log) ditaruh pada partisi _root_ dari Node tersebut. Partisi ini juga digunakan bersama oleh Pod-Pod melalui volume emptyDir, log kontainer, lapisan _image_, dan lapisan kontainer yang dapat ditulis. + +Partisi ini bersifat "sementara" dan aplikasi-aplikasi tidak dapat mengharapkan SLA kinerja (misalnya _Disk IOPS_) dari partisi ini. 
Pengelolaan penyimpanan lokal sementara hanya berlaku untuk partisi _root_; partisi opsional untuk lapisan _image_ dan lapisan yang dapat ditulis berada di luar ruang lingkup. + +{{< note >}} +Jika sebuah partisi _runtime_ opsional digunakan, partisi _root_ tidak akan menyimpan lapisan _image_ ataupun lapisan yang dapat ditulis manapun. +{{< /note >}} + +### Menyetel request dan limit dari penyimpanan lokal sementara + +Setiap Container dari sebuah Pod dapat menentukan satu atau lebih dari hal-hal berikut: + +* `spec.containers[].resources.limits.ephemeral-storage` +* `spec.containers[].resources.requests.ephemeral-storage` + +Limit dan request untuk `ephemeral-storage` diukur dalam satuan _bytes_. Kamu dapat menyatakan +penyimpanan dalam bilangan bulat biasa, atau sebagai _fixed-point integer_ menggunakan satu dari +sufiks-sufiks ini: E, P, T, G, M, K. Kamu juga dapat menggunakan bentuk pangkat dua ekuivalennya: +Ei, Pi, Ti, Gi, Mi, Ki. Contohnya, nilai-nilai berikut kurang lebih sama: + +```shell +128974848, 129e6, 129M, 123Mi +``` + +Contohnya, Pod berikut memiliki dua Container. Setiap Container memiliki request 2GiB untuk penyimpanan lokal sementara. Setiap Container memiliki limit 4GiB untuk penyimpanan lokal sementara. Maka, Pod tersebut memiliki jumlah request 4GiB penyimpanan lokal sementara, dan limit 8GiB. + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: frontend +spec: + containers: + - name: db + image: mysql + env: + - name: MYSQL_ROOT_PASSWORD + value: "password" + resources: + requests: + ephemeral-storage: "2Gi" + limits: + ephemeral-storage: "4Gi" + - name: wp + image: wordpress + resources: + requests: + ephemeral-storage: "2Gi" + limits: + ephemeral-storage: "4Gi" +``` + +### Bagaimana Pod-Pod dengan request ephemeral-storage dijadwalkan + +Saat kamu membuat sebuah Pod, Kubernetes scheduler memilih sebuah Node di mana Pod +tersebut akan dijalankan. Setiap Node memiliki jumlah maksimum penyimpanan lokal sementara yang dapat disediakan. +Untuk lebih lanjut, lihat ["Hal-hal yang dapat dialokasikan Node"](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable). + +Scheduler memastikan bahwa jumlah dari request-request sumber daya dari Container-Container yang dijadwalkan lebih kecil dari kapasitas Node. + +### Bagaimana Pod-Pod dengan limit ephemeral-storage dijalankan + +Untuk isolasi pada tingkat kontainer, jika lapisan yang dapat ditulis dari sebuah Container dan penggunaan log melebihi limit penyimpanannya, maka Pod tersebut akan dipindahkan. Untuk isolasi pada tingkat Pod, jika jumlah dari penyimpanan lokal sementara dari semua Container dan juga volume emptyDir milik Pod melebihi limit, maka Pod tersebut akan dipindahkan. + +### Memantau penggunaan ephemeral-storage + +Saat penyimpanan lokal sementara digunakan, ia dipantau terus-menerus +oleh Kubelet. Pemantauan dilakukan dengan cara memindai setiap volume +emptyDir, direktori log, dan lapisan yang dapat ditulis secara periodik. +Dimulai dari Kubernetes 1.15, volume emptyDir (tetapi tidak direktori log +atau lapisan yang dapat ditulis) dapat, sebagai pilihan dari operator +klaster, dikelola dengan menggunakan [_project quotas_](http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html). +_Project quotas_ aslinya diimplementasikan dalam XFS, dan baru-baru ini +telah di-porting ke ext4fs. _Project quotas_ dapat digunakan baik untuk +_monitoring_ dan pemaksaan; sejak Kubernetes 1.16, mereka tersedia sebagai +fitur _alpha_ untuk _monitoring_ saja.
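+
+Sebagai gambaran singkat (hanya sketsa, terlepas dari mekanisme _project quotas_ yang dijelaskan di sini), kamu dapat memeriksa kapasitas dan jumlah `ephemeral-storage` yang dapat dialokasikan pada sebuah Node dengan `kubectl`:
+
+```shell
+# Nama Node berikut hanyalah contoh; sesuaikan dengan Node pada klaster kamu.
+kubectl describe node e2e-test-node-pool-4lw4 | grep -i ephemeral-storage
+```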
+ +_Quota_ lebih cepat dan akurat dibandingkan pemindaian direktori. Saat +sebuah direktori ditentukan untuk sebuah proyek, semua berkas yang dibuat +pada direktori tersebut dibuat untuk proyek tersebut, dan kernel hanya +perlu melacak berapa banyak blok yang digunakan oleh berkas-berkas pada +proyek tersebut. Jika sebuah berkas dibuat dan dihapus, tetapi tetap dengan +sebuah _file descriptor_ yang terbuka, maka berkas tersebut tetap akan +memakan ruangan penyimpanan. Ruangan ini akan dilacak oleh _quota_ tersebut, +tetapi tidak akan terlihat oleh sebuah pemindaian direktori. + +Kubernetes menggunakan ID proyek yang dimulai dari 1048576. ID-ID yang +digunakan akan didaftarkan di dalam `/etc/projects` dan `/etc/projid`. +Jika ID-ID proyek pada kisaran ini digunakan untuk tujuan lain pada sistem, +ID-ID proyek tersebut harus terdaftar di dalam `/etc/projects` dan `/etc/projid` +untuk mencegah Kubernetes menggunakan ID-ID tersebut. + +Untuk mengaktifkan penggunaan _project quotas_, operator klaster +harus melakukan hal-hal berikut: + +* Aktifkan _feature gate_ `LocalStorageCapacityIsolationFSQuotaMonitoring=true` + pada konfigurasi Kubelet. Nilainya secara bawaan `false` pada + Kubernetes 1.16, jadi harus secara eksplisit disetel menjadi `true`. + +* Pastikan bahwa partisi _root_ (atau partisi opsional _runtime_) + telah dibangun (_build_) dengan mengaktifkan _project quotas_. Semua sistem berkas (_filesystem_) + XFS mendukung _project quotas_, tetapi sistem berkas ext4 harus dibangun + secara khusus untuk mendukungnya + +* Pastikan bahwa partisi _root_ (atau partisi opsional _runtime_) ditambatkan (_mount_) + dengan _project quotas_ yang telah diaktifkan. + +#### Membangun dan menambatkan sistem berkas dengan _project quotas_ yang telah diaktifkan + +Sistem berkas XFS tidak membutuhkan tindakan khusus saat dibangun; +mereka secara otomatis telah dibangun dengan _project quotas_ yang +telah diaktifkan. + +Sistem berkas _ext4fs_ harus dibangun dengan mengaktifkan _quotas_, +kemudian mereka harus diaktifkan pada sistem berkas tersebut. + +``` +% sudo mkfs.ext4 other_ext4fs_args... -E quotatype=prjquota /dev/block_device +% sudo tune2fs -O project -Q prjquota /dev/block_device +``` + +Untuk menambatkan sistem berkasnya, baik ext4fs dan XFS membutuhkan opsi +`prjquota` disetel di dalam `/etc/fstab`: + +``` +/dev/block_device /var/kubernetes_data defaults,prjquota 0 0 +``` + + +## Sumber daya yang diperluas + +Sumber daya yang diperluas (_Extended Resource_) adalah nama sumber daya di luar domain `kubernetes.io`. +Mereka memungkinkan operator klaster untuk menyatakan dan pengguna untuk menggunakan +sumber daya di luar sumber daya bawaan Kubernetes. + +Ada dua langkah untuk menggunakan sumber daya yang diperluas. Pertama, operator +klaster harus menyatakan sebuah Extended Resource. Kedua, pengguna harus meminta +sumber daya yang diperluas tersebut di dalam Pod. + +### Mengelola sumber daya yang diperluas + +#### Sumber daya yang diperluas pada tingkat Node + +Sumber daya yang diperluas pada tingkat Node terikat pada Node. + +##### Sumber daya Device Plugin yang dikelola + +Lihat [Device +Plugin](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/) untuk +cara menyatakan sumber daya _device plugin_ yang dikelola pada setiap node. 
+ +##### Sumber daya lainnya + +Untuk menyatakan sebuah sumber daya yang diperluas tingkat Node, operator klaster +dapat mengirimkan permintaan HTTP `PATCH` ke API server untuk menentukan kuantitas +sumber daya yang tersedia pada kolom `status.capacity` untuk Node pada klaster. +Setelah itu, `status.capacity` pada Node akan memiliki sumber daya baru tersebut. +Kolom `status.allocatable` diperbarui secara otomatis dengan sumber daya baru +tersebut secara _asynchronous_ oleh Kubelet. Perlu dicatat bahwa karena scheduler +menggunakan nilai `status.allocatable` milik Node saat mengevaluasi muat atau tidaknya +Pod, mungkin ada waktu jeda pendek antara melakukan `PATCH` terhadap kapasitas Node +dengan sumber daya baru dengan Pod pertama yang meminta sumber daya tersebut untuk +dapat dijadwalkan pada Node tersebut. + +**Contoh:** + +Berikut sebuah contoh yang menunjukkan bagaimana cara menggunakan `curl` untuk +mengirim permintaan HTTP yang menyatakan lima sumber daya "example.com/foo" pada +Node `k8s-node-1` yang memiliki master `k8s-master`. + +```shell +curl --header "Content-Type: application/json-patch+json" \ +--request PATCH \ +--data '[{"op": "add", "path": "/status/capacity/example.com~1foo", "value": "5"}]' \ +http://k8s-master:8080/api/v1/nodes/k8s-node-1/status +``` + +{{< note >}} +Pada permintaan HTTP di atas, `~1` adalah _encoding_ untuk karakter `/` pada jalur (_path_) _patch_. +Nilai jalur operasi tersebut di dalam JSON-Patch diinterpretasikan sebagai sebuah JSON-Pointer. +Untuk lebih lanjut, lihat [IETF RFC 6901, bagian 3](https://tools.ietf.org/html/rfc6901#section-3). +{{< /note >}} + +#### Sumber daya yang diperluas pada tingkat klaster + +Sumber daya yang diperluas pada tingkat klaster tidak terikat pada Node. Mereka +biasanya dikelola oleh _scheduler extender_, yang menangani penggunaan sumber daya +dan kuota sumber daya. + +Kamu dapat menentukan sumber daya yang diperluas yang ditangani oleh _scheduler extender_ +pada [konfigurasi kebijakan scheduler](https://github.com/kubernetes/kubernetes/blob/release-1.10/pkg/scheduler/api/v1/types.go#L31). + +**Contoh:** + +Konfigurasi untuk sebuah kebijakan scheduler berikut menunjukkan bahwa +sumber daya yang diperluas pada tingkat klaster "example.com/foo" ditangani +oleh _scheduler extender_. + +- Scheduler mengirim sebuah Pod ke _scheduler extender_ hanya jika Pod tersebut + meminta "example.com/foo". +- Kolom `ignoredByScheduler` menentukan bahwa scheduler tidak memeriksa sumber daya + "example.com/foo" pada predikat `PodFitsResources` miliknya. + +```json +{ + "kind": "Policy", + "apiVersion": "v1", + "extenders": [ + { + "urlPrefix":"", + "bindVerb": "bind", + "managedResources": [ + { + "name": "example.com/foo", + "ignoredByScheduler": true + } + ] + } + ] +} +``` + +### Menggunakan sumber daya yang diperluas + +Pengguna dapat menggunakan sumber daya yang diperluas di dalam spesifikasi Pod +seperti CPU dan memori. Scheduler menangani akuntansi sumber daya tersebut agar +jumlah yang dialokasikan tidak melebihi jumlah yang tersedia. + +API server membatasi jumlah sumber daya yang diperluas dalam bentuk +bilangan bulat. Contoh jumlah yang _valid_ adalah `3`, `3000m`, dan +`3Ki`. Contoh jumlah yang _tidak valid_ adalah `0.5` dan `1500m`. + +{{< note >}} +Sumber daya yang diperluas menggantikan Opaque Integer Resource. +Pengguna dapat menggunakan prefiks nama domain apapun selain `kubernetes.io` yang memang sudah direservasi.
+{{< /note >}} + +Untuk menggunakan sebuah sumber daya yang diperluas di sebuah Pod, masukkan nama +sumber daya tersebut sebagai nilai _key_ dari map `spec.containers[].resources.limit` +pada spesifikasi Container. + +{{< note >}} +Sumber daya yang diperluas tidak dapat di-_overcommit_, sehingga +request dan limit nilainya harus sama jika keduanya ada di spesifikasi +sebuah Container. +{{< /note >}} + +Sebuah Pod hanya dijadwalkan jika semua request sumber dayanya terpenuhi, termasuk +CPU, memori, dan sumber daya yang diperluas manapun. Pod tersebut akan tetap +berada pada kondisi `PENDING` selama request sumber daya tersebut tidak terpenuhi. + +**Contoh:** + +Pod di bawah meminta 2 CPU dan 1 "example.com/foo" (sebuah sumber daya yang diperluas). + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: my-pod +spec: + containers: + - name: my-container + image: myimage + resources: + requests: + cpu: 2 + example.com/foo: 1 + limits: + example.com/foo: 1 +``` + + + +{{% /capture %}} + + +{{% capture whatsnext %}} + +* Dapatkan pengalaman langsung [menentukan sumber daya memori untuk Container dan Pod](/docs/tasks/configure-pod-container/assign-memory-resource/). + +* Dapatkan pengalaman langsung [menentukan sumber daya CPU untuk Container dan Pod](/docs/tasks/configure-pod-container/assign-cpu-resource/). + +* [Container API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core) + +* [ResourceRequirements](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#resourcerequirements-v1-core) + +{{% /capture %}} diff --git a/content/id/docs/concepts/extend-kubernetes/compute-storage-net/_index.md b/content/id/docs/concepts/extend-kubernetes/compute-storage-net/_index.md new file mode 100644 index 0000000000000..3df1ff1bdcbe4 --- /dev/null +++ b/content/id/docs/concepts/extend-kubernetes/compute-storage-net/_index.md @@ -0,0 +1,4 @@ +--- +title: Ekstensi Komputasi, Penyimpanan, dan Jaringan +weight: 30 +--- diff --git a/content/id/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md b/content/id/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md new file mode 100644 index 0000000000000..beb972e9bf740 --- /dev/null +++ b/content/id/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md @@ -0,0 +1,234 @@ +--- +reviewers: +title: Plugin Perangkat +description: Gunakan kerangka kerja _plugin_ perangkat Kubernetes untuk mengimplementasikan plugin untuk GPU, NIC, FPGA, InfiniBand, dan sumber daya sejenis yang membutuhkan setelan spesifik vendor. +content_template: templates/concept +weight: 20 +--- + +{{% capture overview %}} +{{< feature-state for_k8s_version="v1.10" state="beta" >}} + +Kubernetes menyediakan [kerangka kerja _plugin_ perangkat](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/resource-management/device-plugin.md) +sehingga kamu dapat memakainya untuk memperlihatkan sumber daya perangkat keras sistem ke dalam {{< glossary_tooltip term_id="kubelet" >}}. + +Daripada menkustomisasi kode Kubernetes itu sendiri, vendor dapat mengimplementasikan +_plugin_ perangkat yang di-_deploy_ secara manual atau sebagai {{< glossary_tooltip term_id="daemonset" >}}. +Perangkat yang dituju termasuk GPU, NIC berkinerja tinggi, FPGA, adaptor InfiniBand, +dan sumber daya komputasi sejenis lainnya yang perlu inisialisasi dan setelan spesifik vendor. 
+ +{{% /capture %}} + +{{% capture body %}} + +## Pendaftaran _plugin_ perangkat + +Kubelet mengekspor servis gRPC `Registration`: + +```gRPC +service Registration { + rpc Register(RegisterRequest) returns (Empty) {} +} +``` + +Plugin perangkat bisa mendaftarkan dirinya sendiri dengan kubelet melalui servis gRPC. +Dalam pendaftaran, _plugin_ perangkat perlu mengirim: + + * Nama Unix socket-nya. + * Versi API Plugin Perangkat yang dipakai. + * `ResourceName` yang ingin ditunjukkan. `ResourceName` ini harus mengikuti + [skema penamaan sumber daya ekstensi](/docs/concepts/configuration/manage-compute-resources-container/#extended-resources) + sebagai `vendor-domain/tipe-sumber-daya`. + (Contohnya, NVIDIA GPU akan dinamai `nvidia.com/gpu`.) + +Setelah registrasi sukses, _plugin_ perangkat mengirim daftar perangkat yang diatur +ke kubelet, lalu kubelet kemudian bertanggung jawab untuk mengumumkan sumber daya tersebut +ke peladen API sebagai bagian pembaruan status node kubelet. +Contohnya, setelah _plugin_ perangkat mendaftarkan `hardware-vendor.example/foo` dengan kubelet +dan melaporkan kedua perangkat dalam node dalam kondisi sehat, status node diperbarui +untuk menunjukkan bahwa node punya 2 perangkat “Foo” terpasang dan tersedia. + +Kemudian, pengguna dapat meminta perangkat dalam spesifikasi +[Kontainer](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core) +seperti meminta tipe sumber daya lain, dengan batasan berikut: + +* Sumber daya ekstensi hanya didukung sebagai sumber daya integer dan tidak bisa _overcommitted_. +* Perangkat tidak bisa dibagikan antar Kontainer. + +Semisal klaster Kubernetes menjalankan _plugin_ perangkat yang menunjukkan sumber daya `hardware-vendor.example/foo` +pada node tertentu. Berikut contoh Pod yang meminta sumber daya itu untuk menjalankan demo beban kerja: + +```yaml +--- +apiVersion: v1 +kind: Pod +metadata: + name: demo-pod +spec: + containers: + - name: demo-container-1 + image: k8s.gcr.io/pause:2.0 + resources: + limits: + hardware-vendor.example/foo: 2 +# +# Pod ini perlu 2 perangkat perangkat-vendor.example/foo +# dan hanya dapat menjadwalkan ke Node yang bisa memenuhi +# kebutuhannya. +# +# Jika Node punya lebih dari 2 perangkat tersedia, +# maka kelebihan akan dapat digunakan Pod lainnya. +``` + +## Implementasi _plugin_ perangkat + +Alur kerja umum dari _plugin_ perangkat adalah sebagai berikut: + +* Inisiasi. Selama fase ini, _plugin_ perangkat melakukan inisiasi spesifik vendor + dan pengaturan untuk memastikan perangkat pada status siap. + +* Plugin memulai servis gRPC, dengan Unix socket pada lokasi + `/var/lib/kubelet/device-plugins/`, yang mengimplementasi antarmuka berikut: + + ```gRPC + service DevicePlugin { + // ListAndWatch mengembalikan aliran dari List of Devices + // Kapanpun Device menyatakan perubahan atau kehilangan Device, ListAndWatch + // mengembalikan daftar baru + rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} + + // Allocate dipanggil saat pembuatan kontainer sehingga Device + // Plugin dapat menjalankan operasi spesifik perangkat dan menyuruh Kubelet + // dari operasi untuk membuat Device tersedia di kontainer + rpc Allocate(AllocateRequest) returns (AllocateResponse) {} + } + ``` + +* Plugin mendaftarkan dirinya sendiri dengan kubelet melalui Unix socket pada lokasi host + `/var/lib/kubelet/device-plugins/kubelet.sock`. 
+ +* Setelah sukses mendaftarkan dirinya sendiri, _plugin_ perangkat berjalan dalam mode peladen, dan selama itu +dia tetap mengawasi kesehatan perangkat dan melaporkan balik ke kubelet terhadap perubahan status perangkat. +Dia juga bertanggung jawab untuk melayani _request_ gRPC `Allocate`. Selama `Allocate`, _plugin_ perangkat dapat +membuat persiapan spesifik-perangkat; contohnya, pembersihan GPU atau inisiasi QRNG. +Jika operasi berhasil, _plugin_ perangkat mengembalikan `AllocateResponse` yang memuat konfigurasi +runtime kontainer untuk mengakses perangkat teralokasi. Kubelet memberikan informasi ini ke runtime kontainer. + +### Menangani kubelet yang _restart_ + +Plugin perangkat diharapkan dapat mendeteksi kubelet yang _restart_ dan mendaftarkan dirinya sendiri kembali dengan +_instance_ kubelet baru. Pada implementasi sekarang, sebuah _instance_ kubelet baru akan menghapus semua socket Unix yang ada +di dalam `/var/lib/kubelet/device-plugins` ketika dijalankan. Plugin perangkat dapat mengawasi penghapusan +socket Unix miliknya dan mendaftarkan dirinya sendiri kembali ketika hal tersebut terjadi. + +## Deployment _plugin_ perangkat + +Kamu dapat melakukan _deploy_ sebuah _plugin_ perangkat sebagai DaemonSet, sebagai sebuah paket untuk sistem operasi node-mu, +atau secara manual. + +Direktori _canonical_ `/var/lib/kubelet/device-plugins` membutuhkan akses berprivilese, +sehingga _plugin_ perangkat harus berjalan dalam konteks keamanan dengan privilese. +Jika kamu melakukan _deploy_ _plugin_ perangkat sebagai DaemonSet, `/var/lib/kubelet/device-plugins` +harus dimuat sebagai {{< glossary_tooltip term_id="volume" >}} pada +[PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core) +plugin. + +Jika kamu memilih pendekatan DaemonSet, kamu dapat bergantung pada Kubernetes untuk meletakkan Pod +_plugin_ perangkat ke Node, memulai-ulang Pod daemon setelah kegagalan, dan membantu otomasi pembaruan. + +## Kecocokan API + +Dukungan pada _plugin_ perangkat Kubernetes sedang dalam beta. API masih dapat berubah dengan cara yang tidak kompatibel +hingga mencapai status stabil. Sebagai proyek, Kubernetes merekomendasikan para developer _plugin_ perangkat: + +* Mengamati perubahan pada rilis mendatang. +* Mendukung versi API _plugin_ perangkat berbeda untuk kompatibilitas-maju/mundur. + +Jika kamu menyalakan fitur DevicePlugins dan menjalankan _plugin_ perangkat pada node yang perlu diperbarui +ke rilis Kubernetes dengan versi API plugin yang lebih baru, perbarui _plugin_ perangkatmu +agar mendukung kedua versi sebelum memperbarui para node ini. Memilih pendekatan demikian akan +menjamin fungsi berkelanjutan dari alokasi perangkat selama pembaruan. + +## Mengawasi Sumber Daya Plugin Perangkat + +{{< feature-state for_k8s_version="v1.15" state="beta" >}} + +Dalam rangka mengawasi sumber daya yang disediakan _plugin_ perangkat, agen monitoring perlu bisa +menemukan kumpulan perangkat yang terpakai dalam node dan mengambil metadata untuk mendeskripsikan +pada kontainer mana metrik harus diasosiasikan. Metrik [prometheus](https://prometheus.io/) +yang diekspos oleh agen pengawas perangkat harus mengikuti +[Petunjuk Instrumentasi Kubernetes](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/instrumentation.md), +mengidentifikasi kontainer dengan label prometheus `pod`, `namespace`, dan `container`.
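+
+Sebagai ilustrasi, baris metrik yang mengikuti konvensi label tersebut akan terlihat kurang lebih seperti berikut (nama metrik `example_device_utilization` hanyalah contoh hipotetis, bukan metrik standar; nama Pod dan kontainer mengikuti contoh `demo-pod` di atas):
+
+```
+example_device_utilization{pod="demo-pod", namespace="default", container="demo-container-1"} 0.82
+```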
+ +Kubelet menyediakan servis gRPC untuk menyalakan pencarian perangkat yang terpakai, dan untuk menyediakan metadata +untuk perangkat berikut: + +```gRPC +// PodResourcesLister adalah layanan yang disediakan kubelet untuk menyediakan informasi tentang +// sumber daya node yang dikonsumsi Pod dan kontainer pada node +service PodResourcesLister { + rpc List(ListPodResourcesRequest) returns (ListPodResourcesResponse) {} +} +``` + +Servis gRPC dilayani lewat socket unix pada `/var/lib/kubelet/pod-resources/kubelet.sock`. +Agen pengawas untuk sumber daya _plugin_ perangkat dapat di-_deploy_ sebagai daemon, atau sebagai DaemonSet. +Direktori _canonical_ `/var/lib/kubelet/pod-resources` perlu akses berprivilese, +sehingga agen pengawas harus berjalan dalam konteks keamanan dengan privilese. Jika agen pengawas perangkat berjalan +sebagai DaemonSet, `/var/lib/kubelet/pod-resources` harus dimuat sebagai +{{< glossary_tooltip term_id="volume" >}} pada plugin +[PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core). + +Dukungan untuk "servis PodResources" butuh [gerbang fitur](/docs/reference/command-line-tools-reference/feature-gates/) +`KubeletPodResources` untuk dinyalakan. Mulai dari Kubernetes 1.15 nilai bawaannya telah dinyalakan. + +## Integrasi Plugin Perangkat dengan Topology Manager + +{{< feature-state for_k8s_version="v1.17" state="alpha" >}} + +Topology Manager adalah komponen Kubelet yang membolehkan sumber daya untuk dikoordinasi secara selaras dengan Topology. Untuk melakukannya, API Plugin Perangkat telah dikembangkan untuk memasukkan struct `TopologyInfo`. + + +```gRPC +message TopologyInfo { + repeated NUMANode nodes = 1; +} + +message NUMANode { + int64 ID = 1; +} +``` +Plugin Perangkat yang ingin memanfaatkan Topology Manager dapat mengembalikan beberapa _struct_ TopologyInfo sebagai bagian dari pendaftaran perangkat, bersama dengan ID perangkat dan status kesehatan perangkat. Manajer perangkat akan memakai informasi ini untuk konsultasi dengan Topology Manager dan membuat keputusan alokasi sumber daya. + +`TopologyInfo` mendukung kolom `nodes` yang bisa `nil` (sebagai bawaan) atau daftar node NUMA. Ini memungkinkan Plugin Perangkat mengumumkan perangkat yang melintasi beberapa node NUMA. + +Contoh _struct_ `TopologyInfo` untuk perangkat yang diisi oleh Plugin Perangkat: + +``` +pluginapi.Device{ID: "25102017", Health: pluginapi.Healthy, Topology:&pluginapi.TopologyInfo{Nodes: []*pluginapi.NUMANode{&pluginapi.NUMANode{ID: 0,},}}} +``` + +## Contoh _plugin_ perangkat {#contoh} + +Berikut beberapa contoh implementasi _plugin_ perangkat: + +* [Plugin perangkat AMD GPU](https://github.com/RadeonOpenCompute/k8s-device-plugin) +* [Plugin perangkat Intel](https://github.com/intel/intel-device-plugins-for-kubernetes) untuk perangkat GPU, FPGA, dan QuickAssist Intel +* [Plugin perangkat KubeVirt](https://github.com/kubevirt/kubernetes-device-plugins) untuk virtualisasi hardware-assisted +* [Plugin perangkat NVIDIA GPU](https://github.com/NVIDIA/k8s-device-plugin) + * Perlu [nvidia-docker](https://github.com/NVIDIA/nvidia-docker) versi 2.0 yang memungkinkan untuk menjalankan kontainer Docker yang memuat GPU.
+* [Plugin perangkat NVIDIA GPU untuk Container-Optimized OS](https://github.com/GoogleCloudPlatform/container-engine-accelerators/tree/master/cmd/nvidia_gpu) +* [Plugin perangkat RDMA](https://github.com/hustcat/k8s-rdma-device-plugin) +* [Plugin perangkat Solarflare](https://github.com/vikaschoudhary16/sfc-device-plugin) +* [Plugin perangkat SR-IOV Network](https://github.com/intel/sriov-network-device-plugin) +* [Plugin perangkat Xilinx FPGA](https://github.com/Xilinx/FPGA_as_a_Service/tree/master/k8s-fpga-device-plugin/trunk) untuk perangkat Xilinx FPGA + +{{% /capture %}} +{{% capture whatsnext %}} + +* Pelajari bagaimana [menjadwalkan sumber daya GPU](/docs/tasks/manage-gpus/scheduling-gpus/) dengan _plugin_ perangkat +* Pelajari bagaimana [mengumumkan sumber daya ekstensi](/docs/tasks/administer-cluster/extended-resource-node/) pada node +* Baca tentang penggunaan [akselerasi perangkat keras untuk ingress TLS](https://kubernetes.io/blog/2019/04/24/hardware-accelerated-ssl/tls-termination-in-ingress-controllers-using-kubernetes-device-plugins-and-runtimeclass/) dengan Kubernetes +* Pelajari tentang [Topology Manager](/docs/tasks/administer-cluster/topology-manager/) + +{{% /capture %}} diff --git a/content/id/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md b/content/id/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md new file mode 100644 index 0000000000000..7bf34d22d425b --- /dev/null +++ b/content/id/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md @@ -0,0 +1,158 @@ +--- +title: Plugin Jaringan +content_template: templates/concept +weight: 10 +--- + + +{{% capture overview %}} + +{{< feature-state state="alpha" >}} +{{< warning >}}Fitur-fitur Alpha berubah dengan cepat. {{< /warning >}} + +_Plugin_ jaringan di Kubernetes hadir dalam beberapa varian: + +* _Plugin_ CNI : mengikuti spesifikasi appc / CNI, yang dirancang untuk interoperabilitas. +* _Plugin_ Kubenet : mengimplementasi `cbr0` sederhana menggunakan _plugin_ `bridge` dan `host-local` CNI + +{{% /capture %}} + +{{% capture body %}} + +## Instalasi + +Kubelet memiliki _plugin_ jaringan bawaan tunggal, dan jaringan bawaan umum untuk seluruh klaster. _Plugin_ ini memeriksa _plugin-plugin_ ketika dijalankan, mengingat apa yang ditemukannya, dan mengeksekusi _plugin_ yang dipilih pada waktu yang tepat dalam siklus pod (ini hanya berlaku untuk Docker, karena rkt mengelola _plugin_ CNI sendiri). Ada dua parameter perintah Kubelet yang perlu diingat saat menggunakan _plugin_: + +* `cni-bin-dir`: Kubelet memeriksa direktori ini untuk _plugin-plugin_ saat _startup_ +* `network-plugin`: _Plugin_ jaringan untuk digunakan dari `cni-bin-dir`. Ini harus cocok dengan nama yang dilaporkan oleh _plugin_ yang diperiksa dari direktori _plugin_. Untuk _plugin_ CNI, ini (nilainya) hanyalah "cni". + +## Persyaratan _Plugin_ Jaringan + +Selain menyediakan [antarmuka `NetworkPlugin`](https://github.com/kubernetes/kubernetes/tree/{{< param "fullversion" >}}/pkg/kubelet/dockershim/network/plugins.go) untuk mengonfigurasi dan membersihkan jaringan Pod, _plugin_ ini mungkin juga memerlukan dukungan khusus untuk kube-proxy. Proksi _iptables_ jelas tergantung pada _iptables_, dan _plugin_ ini mungkin perlu memastikan bahwa lalu lintas kontainer tersedia untuk _iptables_. Misalnya, jika plugin menghubungkan kontainer ke _bridge_ Linux, _plugin_ harus mengatur nilai sysctl `net/bridge/bridge-nf-call-iptables` menjadi `1` untuk memastikan bahwa proksi _iptables_ berfungsi dengan benar.
Jika _plugin_ ini tidak menggunakan _bridge_ Linux (melainkan sesuatu seperti Open vSwitch atau mekanisme lainnya), _plugin_ ini harus memastikan lalu lintas kontainer dialihkan secara tepat untuk proksi. + +Secara bawaan jika tidak ada _plugin_ jaringan Kubelet yang ditentukan, _plugin_ `noop` digunakan, yang menetapkan `net/bridge/bridge-nf-call-iptables=1` untuk memastikan konfigurasi sederhana (seperti Docker dengan sebuah _bridge_) bekerja dengan benar dengan proksi _iptables_. + +### CNI + +_Plugin_ CNI dipilih dengan memberikan opsi _command-line_ `--network-plugin=cni` pada Kubelet. Kubelet membaca berkas dari `--cni-conf-dir` (bawaan `/etc/cni/net.d`) dan menggunakan konfigurasi CNI dari berkas tersebut untuk mengatur setiap jaringan Pod. Berkas konfigurasi CNI harus sesuai dengan [spesifikasi CNI](https://github.com/containernetworking/cni/blob/master/SPEC.md#network-configuration), dan setiap _plugin_ CNI yang diperlukan oleh konfigurasi harus ada di `--cni-bin-dir` (nilai bawaannya adalah `/opt/cni/bin`). + +Jika ada beberapa berkas konfigurasi CNI dalam direktori, Kubelet menggunakan berkas yang pertama dalam urutan abjad. + +Selain plugin CNI yang ditentukan oleh berkas konfigurasi, Kubernetes memerlukan _plugin_ CNI standar [`lo`](https://github.com/containernetworking/plugins/blob/master/plugins/main/loopback/loopback.go) _plugin_ , minimal pada versi 0.2.0. + +#### Dukungan hostPort + +_Plugin_ jaringan CNI mendukung `hostPort`. Kamu dapat menggunakan _plugin_ [portmap](https://github.com/containernetworking/plugins/tree/master/plugins/meta/portmap) resmi yang ditawarkan oleh tim _plugin_ CNI atau menggunakan _plugin_ kamu sendiri dengan fungsionalitas _portMapping_. + +Jika kamu ingin mengaktifkan dukungan `hostPort`, kamu harus menentukan `portMappings capability` di `cni-conf-dir` kamu. +Contoh: + +```json +{ + "name": "k8s-pod-network", + "cniVersion": "0.3.0", + "plugins": [ + { + "type": "calico", + "log_level": "info", + "datastore_type": "kubernetes", + "nodename": "127.0.0.1", + "ipam": { + "type": "host-local", + "subnet": "usePodCidr" + }, + "policy": { + "type": "k8s" + }, + "kubernetes": { + "kubeconfig": "/etc/cni/net.d/calico-kubeconfig" + } + }, + { + "type": "portmap", + "capabilities": {"portMappings": true} + } + ] +} +``` + +#### Dukungan pembentukan lalu-lintas + +_Plugin_ jaringan CNI juga mendukung pembentukan lalu-lintas yang masuk dan keluar dari Pod. Kamu dapat menggunakan _plugin_ resmi [_bandwidth_](https://github.com/containernetworking/plugins/tree/master/plugins/meta/bandwidth) yang ditawarkan oleh tim _plugin_ CNI atau menggunakan _plugin_ kamu sendiri dengan fungsionalitas kontrol _bandwidth_. + +Jika kamu ingin mengaktifkan pembentukan lalu-lintas, kamu harus menambahkan _plugin_ `bandwidth` ke berkas konfigurasi CNI kamu (nilai bawaannya adalah `/etc/cni/ net.d`). + +```json +{ + "name": "k8s-pod-network", + "cniVersion": "0.3.0", + "plugins": [ + { + "type": "calico", + "log_level": "info", + "datastore_type": "kubernetes", + "nodename": "127.0.0.1", + "ipam": { + "type": "host-local", + "subnet": "usePodCidr" + }, + "policy": { + "type": "k8s" + }, + "kubernetes": { + "kubeconfig": "/etc/cni/net.d/calico-kubeconfig" + } + }, + { + "type": "bandwidth", + "capabilities": {"bandwidth": true} + } + ] +} +``` + +Sekarang kamu dapat menambahkan anotasi `kubernetes.io/ingress-bandwidth` dan `kubernetes.io/egress-bandwidth` ke Pod kamu. 
+Contoh: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + annotations: + kubernetes.io/ingress-bandwidth: 1M + kubernetes.io/egress-bandwidth: 1M +... +``` + +### Kubenet + +Kubenet adalah _plugin_ jaringan yang sangat mendasar dan sederhana, hanya untuk Linux. Ia, tidak dengan sendirinya, mengimplementasi fitur-fitur yang lebih canggih seperti jaringan _cross-node_ atau kebijakan jaringan. Ia biasanya digunakan bersamaan dengan penyedia layanan cloud yang menetapkan aturan _routing_ untuk komunikasi antar Node, atau dalam lingkungan Node tunggal. + +Kubenet membuat _bridge_ Linux bernama `cbr0` dan membuat pasangan _veth_ untuk setiap Pod dengan ujung _host_ dari setiap pasangan yang terhubung ke `cbr0`. Ujung Pod dari pasangan diberi alamat IP yang dialokasikan dari rentang yang ditetapkan untuk Node baik melalui konfigurasi atau oleh controller-manager. `cbr0` memiliki MTU yang cocok dengan MTU terkecil dari antarmuka normal yang diaktifkan pada _host_. + +_Plugin_ ini memerlukan beberapa hal: + +* _Plugin_ CNI `bridge`, `lo` dan `host-local` standar diperlukan, minimal pada versi 0.2.0. Kubenet pertama-tama akan mencari mereka di `/opt/cni/bin`. Tentukan `cni-bin-dir` untuk menyediakan lokasi pencarian tambahan. Hasil pencarian pertama akan digunakan. +* Kubelet harus dijalankan dengan argumen `--network-plugin=kubenet` untuk mengaktifkan _plugin_ +* Kubelet juga harus dijalankan dengan argumen `--non-masquerade-cidr=` untuk memastikan lalu-lintas ke IP-IP di luar rentang ini akan menggunakan _masquerade_ IP. +* Node harus diberi subnet IP melalui perintah kubelet `--pod-cidr` atau perintah controller-manager `--allocate-node-cidrs=true --cluster-cidr=`. + +### Menyesuaikan MTU (dengan kubenet) + +MTU harus selalu dikonfigurasi dengan benar untuk mendapatkan kinerja jaringan terbaik. _Plugin_ jaringan biasanya akan mencoba membuatkan MTU yang masuk akal, tetapi terkadang logika tidak akan menghasilkan MTU yang optimal. Misalnya, jika _bridge_ Docker atau antarmuka lain memiliki MTU kecil, kubenet saat ini akan memilih MTU tersebut. Atau jika kamu menggunakan enkapsulasi IPSEC, MTU harus dikurangi, dan perhitungan ini di luar cakupan untuk sebagian besar _plugin_ jaringan. + +Jika diperlukan, kamu dapat menentukan MTU secara eksplisit dengan opsi `network-plugin-mtu` kubelet. Sebagai contoh, pada AWS `eth0` MTU biasanya adalah 9001, jadi kamu dapat menentukan `--network-plugin-mtu=9001`. Jika kamu menggunakan IPSEC, kamu dapat menguranginya untuk memungkinkan/mendukung _overhead_ enkapsulasi pada IPSEC, contoh: `--network-plugin-mtu=8873`. + +Opsi ini disediakan untuk _plugin_ jaringan; Saat ini **hanya kubenet yang mendukung `network-plugin-mtu`**. + +## Ringkasan Penggunaan + +* `--network-plugin=cni` menetapkan bahwa kita menggunakan _plugin_ jaringan `cni` dengan _binary-binary plugin_ CNI aktual yang terletak di `--cni-bin-dir` (nilai bawaannya `/opt/cni/bin`) dan konfigurasi _plugin_ CNI yang terletak di `--cni-conf-dir` (nilai bawaannya `/etc/cni/net.d`). +* `--network-plugin=kubenet` menentukan bahwa kita menggunakan _plugin_ jaringan` kubenet` dengan `bridge` CNI dan _plugin-plugin_ `host-local` yang terletak di `/opt/cni/bin` atau `cni-bin-dir`. +* `--network-plugin-mtu=9001` menentukan MTU yang akan digunakan, saat ini hanya digunakan oleh _plugin_ jaringan `kubenet`. 
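+
+Sebagai sketsa singkat yang merangkum opsi-opsi di atas (dengan asumsi _binary_ dan konfigurasi CNI sudah berada pada lokasi bawaannya, dan tanpa menampilkan _flag_ kubelet lain yang biasanya ikut diperlukan):
+
+```shell
+# Menjalankan kubelet dengan plugin jaringan CNI pada lokasi bawaan
+kubelet --network-plugin=cni \
+  --cni-conf-dir=/etc/cni/net.d \
+  --cni-bin-dir=/opt/cni/bin
+
+# Atau dengan kubenet, sambil menentukan MTU secara eksplisit
+kubelet --network-plugin=kubenet --network-plugin-mtu=9001
+```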
+ +{{% /capture %}} + +{{% capture whatsnext %}} + +{{% /capture %}} diff --git a/content/id/docs/concepts/workloads/controllers/daemonset.md b/content/id/docs/concepts/workloads/controllers/daemonset.md new file mode 100644 index 0000000000000..c68a207edfe93 --- /dev/null +++ b/content/id/docs/concepts/workloads/controllers/daemonset.md @@ -0,0 +1,236 @@ +--- +title: DaemonSet +content_template: templates/concept +weight: 50 +--- + +{{% capture overview %}} + +DaemonSet memastikan semua atau sebagian Node memiliki salinan sebuah Pod. +Ketika Node baru ditambahkan ke klaster, Pod ditambahkan ke Node tersebut. +Ketika Node dihapus dari klaster, Pod akan dibersihkan oleh _garbage collector_. +Menghapus DaemonSet akan menghapus semua Pod yang ia buat. + +Beberapa penggunaan umum DaemonSet, yaitu: + +- menjalankan _daemon_ penyimpanan di klaster, seperti `glusterd`, `ceph`, di + setiap Node. +- menjalankan _daemon_ pengumpulan log di semua Node, seperti `fluentd` atau + `logstash`. +- menjalankan _daemon_ pemantauan Node di setiap Node, seperti [Prometheus Node Exporter](https://github.com/prometheus/node_exporter), [Flowmill](https://github.com/Flowmill/flowmill-k8s/), [Sysdig Agent](https://docs.sysdig.com), `collectd`, [Dynatrace OneAgent](https://www.dynatrace.com/technologies/kubernetes-monitoring/), [AppDynamics Agent](https://docs.appdynamics.com/display/CLOUD/Container+Visibility+with+Kubernetes), [Datadog agent](https://docs.datadoghq.com/agent/kubernetes/daemonset_setup/), [New Relic agent](https://docs.newrelic.com/docs/integrations/kubernetes-integration/installation/kubernetes-installation-configuration), Ganglia `gmond` atau [Instana Agent](https://www.instana.com/supported-integrations/kubernetes-monitoring/). + +Dalam kasus sederhana, satu DaemonSet, mencakup semua Node, akan digunakan untuk +setiap jenis _daemon_. Pengaturan yang lebih rumit bisa saja menggunakan lebih +dari satu DaemonSet untuk satu jenis _daemon_, tapi dengan _flag_ dan/atau +permintaan cpu/memori yang berbeda untuk jenis _hardware_ yang berbeda. + +{{% /capture %}} + + +{{% capture body %}} + +## Menulis Spek DaemonSet + +### Buat DaemonSet + +Kamu bisa definisikan DaemonSet dalam berkas YAML. Contohnya, berkas +`daemonset.yaml` di bawah mendefinisikan DaemonSet yang menjalankan _image_ Docker +fluentd-elasticsearch: + +{{< codenew file="controllers/daemonset.yaml" >}} + +* Buat DaemonSet berdasarkan berkas YAML: +``` +kubectl apply -f https://k8s.io/examples/controllers/daemonset.yaml +``` + +### _Field_ Wajib + +Seperti semua konfigurasi Kubernetes lainnya, DaemonSet membutuhkan _field_ +`apiVersion`, `kind`, dan `metadata`. Untuk informasi umum tentang berkas konfigurasi, lihat dokumen [men-_deploy_ aplikasi](/docs/user-guide/deploying-applications/), +[pengaturan kontainer](/docs/tasks/), dan [pengelolaan objek dengan kubectl](/docs/concepts/overview/working-with-objects/object-management/). + +DaemonSet juga membutuhkan bagian [`.spec`](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status). + +### Templat Pod + +`.spec.template` adalah salah satu _field_ wajib di dalam `.spec`. + +`.spec.template` adalah sebuah [templat Pod](/id/docs/concepts/workloads/pods/pod-overview/#templat-pod). Skemanya benar-benar sama dengan [Pod](/id/docs/concepts/workloads/pods/pod/), kecuali bagian bahwa ia bersarang/_nested_ dan tidak memiliki `apiVersion` atau `kind`. 
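+
+Sebagai ilustrasi, berikut potongan (bukan berkas lengkap) dari contoh `daemonset.yaml` di atas yang memperlihatkan bagian `.spec.template` yang bersarang tersebut, tanpa `apiVersion` maupun `kind` di dalamnya:
+
+```yaml
+spec:
+  template:
+    metadata:
+      labels:
+        name: fluentd-elasticsearch
+    spec:
+      containers:
+      - name: fluentd-elasticsearch
+        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
+```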
+ +Selain _field_ wajib untuk Pod, templat Pod di DaemonSet harus +menspesifikasikan label yang sesuai (lihat [selektor Pod](#selektor-pod)). + +Templat Pod di DaemonSet harus memiliki [`RestartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) +yang bernilai `Always`, atau tidak dispesifikasikan, sehingga _default_ menjadi `Always`. +DaemonSet dengan nilai `Always` membuat Pod akan selalu di-_restart_ saat kontainer +keluar/berhenti atau terjadi _crash_. + +### Selektor Pod + +_Field_ `.spec.selector` adalah selektor Pod. Cara kerjanya sama dengan `.spec.selector` pada [Job](/docs/concepts/jobs/run-to-completion-finite-workloads/). + +Pada Kubernetes 1.8, kamu harus menspesifikasikan selektor Pod yang cocok dengan label pada `.spec.template`. +Selektor Pod tidak akan lagi diberi nilai _default_ ketika dibiarkan kosong. Nilai _default_ selektor tidak +cocok dengan `kubectl apply`. Juga, sesudah DaemonSet dibuat, `.spec.selector` tidak dapat diubah. +Mengubah selektor Pod dapat menyebabkan Pod _orphan_ yang tidak disengaja, dan membingungkan pengguna. + +Objek `.spec.selector` memiliki dua _field_: + +* `matchLabels` - bekerja seperti `.spec.selector` pada [ReplicationController](/docs/concepts/workloads/controllers/replicationcontroller/). +* `matchExpressions` - bisa digunakan untuk membuat selektor yang lebih canggih + dengan mendefinisikan _key_, daftar _value_ dan operator yang menyatakan + hubungan antara _key_ dan _value_. + +Ketika keduanya dispesifikasikan hasilnya diperoleh dari operasi AND. + +Jika `.spec.selector` dispesifikasikan, nilainya harus cocok dengan `.spec.template.metadata.labels`. Konfigurasi yang tidak cocok akan ditolak oleh API. + +Selain itu kamu tidak seharusnya membuat Pod apapun yang labelnya cocok dengan +selektor tersebut, entah secara langsung, via DaemonSet lain, atau via _workload resource_ lain seperti ReplicaSet. +Jika kamu coba buat, {{< glossary_tooltip term_id="controller" >}} DaemonSet akan +berpikir bahwa Pod tersebut dibuat olehnya. Kubernetes tidak akan menghentikan +kamu melakukannya. Contoh kasus di mana kamu mungkin melakukan ini dengan +membuat Pod dengan nilai yang berbeda di sebuah Node untuk _testing_. + +### Menjalankan Pod di Sebagian Node + +Jika kamu menspesifikasikan `.spec.template.spec.nodeSelector`, maka _controller_ DaemonSet akan +membuat Pod pada Node yang cocok dengan [selektor +Node](/docs/concepts/configuration/assign-pod-node/). Demikian juga, jika kamu menspesifikasikan `.spec.template.spec.affinity`, +maka _controller_ DaemonSet akan membuat Pod pada Node yang cocok dengan [Node affinity](/docs/concepts/configuration/assign-pod-node/). +Jika kamu tidak menspesifikasikan sama sekali, maka _controller_ DaemonSet akan +membuat Pod pada semua Node. + +## Bagaimana Pod Daemon Dijadwalkan + +### Dijadwalkan oleh _default scheduler_ + +{{< feature-state state="stable" for-kubernetes-version="1.17" >}} + +DaemonSet memastikan bahwa semua Node yang memenuhi syarat menjalankan salinan +Pod. Normalnya, Node yang menjalankan Pod dipilih oleh _scheduler_ Kubernetes. +Namun, Pod DaemonSet dibuat dan dijadwalkan oleh _controller_ DaemonSet. Hal ini +mendatangkan masalah-masalah berikut: + + * Inkonsistensi perilaku Pod: Pod normal yang menunggu dijadwalkan akan dibuat + dalam keadaan `Pending`, tapi Pod DaemonSet tidak seperti itu. Ini + membingungkan untuk pengguna. + * [Pod preemption](/docs/concepts/configuration/pod-priority-preemption/) + ditangani oleh _default scheduler_. 
Ketika _preemption_ dinyalakan, + _controller_ DaemonSet akan membuat keputusan penjadwalan tanpa + memperhitungkan prioritas Pod dan _preemption_. + +`ScheduleDaemonSetPods` mengizinkan kamu untuk menjadwalkan DaemonSet +menggunakan _default scheduler_ daripada _controller_ DaemonSet, dengan +menambahkan syarat `NodeAffinity` pada Pod DaemonSet daripada syarat +`.spec.nodeName`. Kemudian, _default scheduler_ digunakan untuk mengikat Pod ke +host target. Jika afinitas Node dari Pod DaemonSet sudah ada, maka ini +akan diganti. _Controller DaemonSet_ hanya akan melakukan operasi-operasi ini +ketika membuat atau mengubah Pod DaemonSet, dan tidak ada perubahan yang terjadi +pada `spec.template` DaemonSet. + +```yaml +nodeAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchFields: + - key: metadata.name + operator: In + values: + - target-host-name +``` + +Sebagai tambahan, _toleration_ `node.kubernetes.io/unschedulable:NoSchedule` +ditambahkan secara otomatis pada Pod DaemonSet. _Default scheduler_ akan +mengabaikan Node `unschedulable` ketika menjadwalkan Pod DaemonSet. + +### _Taint_ dan _Toleration_ + +Meskipun Pod Daemon menghormati +[taint dan toleration](/docs/concepts/configuration/taint-and-toleration), +_toleration_ berikut ini akan otomatis ditambahkan ke Pod DaemonSet sesuai +dengan fitur yang bersangkutan. + +| _Toleration Key_ | _Effect_ | Versi | Deskripsi | +| ---------------------------------------- | ---------- | ------- | ------------------------------------------------------------------------------------------------------------ | +| `node.kubernetes.io/not-ready` | NoExecute | 1.13+ | Pod DaemonSet tidak akan menjadi _evicted_ ketika ada masalah Node seperti partisi jaringan. | +| `node.kubernetes.io/unreachable` | NoExecute | 1.13+ | Pod DaemonSet tidak akan menjadi _evicted_ ketika ada masalah Node seperti partisi jaringan. | +| `node.kubernetes.io/disk-pressure` | NoSchedule | 1.8+ | | +| `node.kubernetes.io/memory-pressure` | NoSchedule | 1.8+ | | +| `node.kubernetes.io/unschedulable` | NoSchedule | 1.12+ | Pod DaemonSet mentoleransi atribut `unschedulable` _default scheduler_. | +| `node.kubernetes.io/network-unavailable` | NoSchedule | 1.12+ | Pod DaemonSet yang menggunakan jaringan host mentoleransi atribut `network-unavailable` _default scheduler_. | + + + +## Berkomunikasi dengan Pod Daemon + +Beberapa pola yang mungkin digunakan untuk berkomunikasi dengan Pod dalam DaemonSet, yaitu: + +- **Push**: Pod dalam DaemonSet diatur untuk mengirim pembaruan status ke servis lain, + contohnya _stats database_. Pod ini tidak memiliki klien. +- **IP Node dan Konvensi Port**: Pod dalam DaemonSet dapat menggunakan `hostPort`, sehingga Pod dapat diakses menggunakan IP Node. Klien tahu daftar IP Node dengan suatu cara, dan tahu port berdasarkan konvensi. +- **DNS**: Buat [headless service](/docs/concepts/services-networking/service/#headless-services) dengan Pod selektor yang sama, + dan temukan DaemonSet menggunakan _resource_ `endpoints` atau mengambil beberapa A _record_ dari DNS. +- **Service**: Buat Servis dengan Pod selektor yang sama, dan gunakan Servis untuk mengakses _daemon_ pada + Node random. (Tidak ada cara mengakses spesifik Node) + +## Melakukan Pembaruan DaemonSet + +Jika label Node berubah, DaemonSet akan menambahkan Pod ke Node cocok yang baru dan menghapus Pod dari +Node tidak cocok yang baru. + +Kamu bisa mengubah Pod yang dibuat DaemonSet. Namun, Pod tidak membolehkan perubahan semua _field_. 
+Perlu diingat, _controller_ DaemonSet akan menggunakan templat yang asli di waktu selanjutnya +Node baru (bahkan dengan nama yang sama) dibuat. + +Kamu bisa menghapus DaemonSet. Jika kamu spesifikasikan `--cascade=false` dengan `kubectl`, maka +Pod akan dibiarkan pada Node. Jika kamu pada waktu kemudian membuat DaemonSet baru dengan selektor +yang sama, DaemonSet yang baru akan mengadopsi Pod yang sudah ada. Jika ada Pod yang perlu diganti, +DaemonSet akan mengganti sesuai dengan `updateStrategy`. + +Kamu bisa [melakukan rolling update](/docs/tasks/manage-daemon/update-daemon-set/) pada DaemonSet. + +## Alternatif DaemonSet + +### _Init Scripts_ + +Kamu mungkin menjalankan proses _daemon_ dengan cara menjalankan mereka langsung pada Node (e.g. +menggunakan `init`, `upstartd`, atau `systemd`). Tidak ada salahnya seperti itu. Namun, ada beberapa +keuntungan menjalankan proses _daemon_ via DaemonSet. + +- Kemampuan memantau dan mengatur log _daemon_ dengan cara yang sama dengan aplikasi. +- Bahasa dan alat Konfigurasi yang sama (e.g. Templat Pod, `kubectl`) untuk _daemon_ dan aplikasi. +- Menjalankan _daemon_ dalam kontainer dengan batasan _resource_ meningkatkan isolasi antar _daemon_ dari + kontainer aplikasi. Namun, hal ini juga bisa didapat dengan menjalankan _daemon_ dalam kontainer tapi + tanpa Pod (e.g. dijalankan langsung via Docker). + +### Pod Polosan + +Dimungkinkan untuk membuat Pod langsung dengan menspesifikasikan Node mana untuk dijalankan. Namun, +DaemonSet akan menggantikan Pod yang untuk suatu alasan dihapus atau dihentikan, seperti pada saat +kerusakan Node atau pemeliharaan Node yang mengganggu seperti pembaruan _kernel_. Oleh karena itu, kamu +perlu menggunakan DaemonSet daripada membuat Pod satu per satu. + +### Pod Statis + +Dimungkinkan untuk membuat Pod dengan menulis sebuah berkas ke direktori tertentu yang di-_watch_ oleh Kubelet. +Pod ini disebut dengan istilah [Pod statis](/docs/concepts/cluster-administration/static-pod/). +Berbeda dengan DaemonSet, Pod statis tidak dapat dikelola menggunakan kubectl atau klien API Kubernetes +yang lain. Pod statis tidak bergantung kepada apiserver, membuat Pod statis berguna pada kasus-kasus +_bootstrapping_ klaster. + + +### Deployment + +DaemonSet mirip dengan [Deployment](/docs/concepts/workloads/controllers/deployment/) sebab mereka +sama-sama membuat Pod, dan Pod yang mereka buat punya proses yang seharusnya tidak berhenti (e.g. peladen web, +peladen penyimpanan) + +Gunakan Deployment untuk layanan _stateless_, seperti _frontend_, di mana proses _scaling_ naik +dan turun jumlah replika dan _rolling update_ lebih penting daripada mengatur secara tepat di +host mana Pod berjalan. Gunakan DaemonSet ketika penting untuk satu salinan Pod +selalu berjalan di semua atau sebagian host, dan ketika Pod perlu berjalan +sebelum Pod lainnya. + +{{% /capture %}} diff --git a/content/id/docs/reference/glossary/etcd.md b/content/id/docs/reference/glossary/etcd.md index dc09267a21a56..957c9a006278e 100644 --- a/content/id/docs/reference/glossary/etcd.md +++ b/content/id/docs/reference/glossary/etcd.md @@ -15,4 +15,4 @@ tags: -Selalu perhatikan mekanisme untuk mem-backup data etcd pada klaster Kubernetes kamu. Untuk informasi lebih lanjut tentang etcd, lihat [dokumentasi etcd](https://github.com/coreos/etcd/blob/master/Documentation/docs.md). +Selalu perhatikan mekanisme untuk mem-backup data etcd pada klaster Kubernetes kamu. Untuk informasi lebih lanjut tentang etcd, lihat [dokumentasi etcd](https://etcd.io/docs). 
diff --git a/content/id/examples/controllers/daemonset.yaml b/content/id/examples/controllers/daemonset.yaml new file mode 100644 index 0000000000000..1bfa082833c72 --- /dev/null +++ b/content/id/examples/controllers/daemonset.yaml @@ -0,0 +1,42 @@ +apiVersion: apps/v1 +kind: DaemonSet +metadata: + name: fluentd-elasticsearch + namespace: kube-system + labels: + k8s-app: fluentd-logging +spec: + selector: + matchLabels: + name: fluentd-elasticsearch + template: + metadata: + labels: + name: fluentd-elasticsearch + spec: + tolerations: + - key: node-role.kubernetes.io/master + effect: NoSchedule + containers: + - name: fluentd-elasticsearch + image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2 + resources: + limits: + memory: 200Mi + requests: + cpu: 100m + memory: 200Mi + volumeMounts: + - name: varlog + mountPath: /var/log + - name: varlibdockercontainers + mountPath: /var/lib/docker/containers + readOnly: true + terminationGracePeriodSeconds: 30 + volumes: + - name: varlog + hostPath: + path: /var/log + - name: varlibdockercontainers + hostPath: + path: /var/lib/docker/containers diff --git a/content/ja/_index.html b/content/ja/_index.html index f8f2bdf54dd80..e458e117547e1 100644 --- a/content/ja/_index.html +++ b/content/ja/_index.html @@ -1,8 +1,9 @@ --- -title: "Production-Grade Container Orchestration" +title: "プロダクショングレードのコンテナ管理基盤" abstract: "自動化されたコンテナのデプロイ・スケール・管理" cid: home --- +{{< announcement >}} {{< deprecationwarning >}} @@ -44,12 +45,12 @@

150以上のマイクロサービスアプリケーションをKubernetes上


- 2019年5月のKubeCon バルセロナに参加する + 2020年4月のKubeCon アムステルダムに参加する



- 2019年6月のKubeCon 上海に参加する + 2020年7月のKubeCon 上海に参加する

diff --git a/content/ja/case-studies/appdirect/index.html b/content/ja/case-studies/appdirect/index.html index 687560aee7c9e..276a174752e95 100644 --- a/content/ja/case-studies/appdirect/index.html +++ b/content/ja/case-studies/appdirect/index.html @@ -30,7 +30,7 @@

課題

AppDirect はクラウ そのため、提供までのパイプラインにボトルネックがあったのです。」 これと同時に、エンジニアリングチームが大きくなっていき、その成長を後押しし加速する上でも、より良いインフラが必要であることに同社は気づいたのです。

ソリューション

Lacerteは言います。「私のアイデアは、チームがサービスをもっと高速にデプロイできる環境を作ろうぜ、というものです。そうすれば彼らも『そうだね、モノリスはもう建てたくないしサービスを構築したいよ』と言うでしょう。」 -彼らは、2016年初めKubernetes の採用を決定するにあたり、他のいくつかのテクノロジーを調査・検討し、プロトタイプを作りました。 Lacerteのチームはこのプラットフォームへの監視のためにPrometheus +彼らは、2016年初めKubernetes の採用を決定するにあたり、他のいくつかの技術を調査・検討し、プロトタイプを作りました。 Lacerteのチームはこのプラットフォームへの監視のためにPrometheus モニタリングツールを統合しました。この次にあるのはトレーシングです。今やAppDirectは本番環境で50以上のマイクロサービス、15のKubernetesクラスターをAWS 上や世界中のオンプレミス環境で展開しています。

インパクト

Kubernetesプラットフォームは、エンジニアリングチームのここ数年の10倍成長を後押ししてきました。 彼らが継続的に機能追加しているという事実と相まって「この新たなインフラがなければ、我々は大幅なスローダウンを強いられていたと思います」と、Lacerte氏は述べています。Kubernetesとサービス化へ移行していくことは、SCPコマンドを用いた、カスタムメイドで不安定なシェルスクリプトへの依存性を弱め、非常に高速になったことを意味していました。 新しいバージョンをデプロイする時間は4時間から数分間に短縮されました。 @@ -51,7 +51,7 @@

AppDirect は2009年以来、クラ
「正しいタイミングで正しい判断ができました。Kubernetesとクラウドネイティブ技術は、いまやデファクトのエコシステムとみなされています。スケールアウトしていく中で直面する新たな難題に取り組むにはどこに注力すべきか、私たちはわかっています。このコミュニティーはとても活発で、当社の優秀なチームをすばらしく補完してくれています。」

- AppDirect ソフトウェア開発者 Alexandre Gervais

-
Lacerteは当初から言っていました。「私のアイデアは、チームがサービスをもっと高速にデプロイできる環境を作ろう、というものです。そうすれば彼らもこう言うでしょう『そうだよ、モノリスを建てるなんてもうしたくないしサービスを構築したいんだ』と」(Lacerteは2019年に同社を退社)。

Lacerteのグループは運用チームと連携することで同社の AWSのインフラにより多くアクセスし、コントロールするようになりました。そして、いくつかのオーケストレーションテクノロジーのプロトタイプを作り始めたのです。「当時を振り返ると、Kubernetesはちょっとアンダーグラウンドというか、それほど知られていなかったように思います。」と彼は言います。「しかし、コミュニティーやPull requestの数、GitHub上でのスピードなどをよく見てみると勢いが増してきていることがわかりました。他のテクノロジーよりも管理がはるかに簡単であることもわかりました。」彼らは、Kubernetes上で ChefTerraform によるプロビジョニングを用いながら最初のいくつかのサービスを開発しました。その後さらにサービスも、自動化されるところも増えました。「韓国、オーストラリア、ドイツ、そしてアメリカ、私たちのクラスターは世界中にあります。」とLacerteは言います。「自動化は私たちにとって極めて重要です。」今彼らは大部分でKopsを使っていて、いくつかのクラウドプロバイダーから提供されるマネージドKubernetesサービスも視野に入れていれています。

今もモノリスは存在してはいますが、コミットや機能はどんどん少なくなってきています。あらゆるチームがこの新たなインフラ上でデプロイしていて、それらはサービスとして提供されるのが一般的です。今やAppDirectは本番環境で50以上のマイクロサービス、15のKubernetesクラスターをAWS上や世界中のオンプレミス環境で展開しています。

Kubernetesプラットフォームがデプロイ時間に非常に大きなインパクトを与えたことから、Lacerteの戦略が究極的に機能しました。カスタムメイドで不安定だった、SCPコマンドを用いたシェルスクリプトに対する依存性を弱めることで、新しいバージョンをデプロイする時間は4時間から数分にまで短縮されるようになったのです。こういったことに加え同社は、開発者たちが自らのサービスとして仕立て上げるよう、数多くの努力をしてきました。「新しいサービスを始めるのに、 Jiraのチケットや他のチームとのミーティングはもはや必要ないのです」とLacerteは言います。以前、週あたり1〜30だった同社のデプロイ数は、いまや週1,600デプロイにまでなっています。 +
Lacerteは当初から言っていました。「私のアイデアは、チームがサービスをもっと高速にデプロイできる環境を作ろう、というものです。そうすれば彼らもこう言うでしょう『そうだよ、モノリスを建てるなんてもうしたくないしサービスを構築したいんだ』と」(Lacerteは2019年に同社を退社)。

Lacerteのグループは運用チームと連携することで同社の AWSのインフラにより多くアクセスし、コントロールするようになりました。そして、いくつかのオーケストレーション技術のプロトタイプを作り始めたのです。「当時を振り返ると、Kubernetesはちょっとアンダーグラウンドというか、それほど知られていなかったように思います。」と彼は言います。「しかし、コミュニティーやPull requestの数、GitHub上でのスピードなどをよく見てみると勢いが増してきていることがわかりました。他の技術よりも管理がはるかに簡単であることもわかりました。」彼らは、Kubernetes上で ChefTerraform によるプロビジョニングを用いながら最初のいくつかのサービスを開発しました。その後さらにサービスも、自動化されるところも増えました。「韓国、オーストラリア、ドイツ、そしてアメリカ、私たちのクラスターは世界中にあります。」とLacerteは言います。「自動化は私たちにとって極めて重要です。」今彼らは大部分でKopsを使っていて、いくつかのクラウドプロバイダーから提供されるマネージドKubernetesサービスも視野に入れていれています。

今もモノリスは存在してはいますが、コミットや機能はどんどん少なくなってきています。あらゆるチームがこの新たなインフラ上でデプロイしていて、それらはサービスとして提供されるのが一般的です。今やAppDirectは本番環境で50以上のマイクロサービス、15のKubernetesクラスターをAWS上や世界中のオンプレミス環境で展開しています。

Kubernetesプラットフォームがデプロイ時間に非常に大きなインパクトを与えたことから、Lacerteの戦略が究極的に機能しました。カスタムメイドで不安定だった、SCPコマンドを用いたシェルスクリプトに対する依存性を弱めることで、新しいバージョンをデプロイする時間は4時間から数分にまで短縮されるようになったのです。こういったことに加え同社は、開発者たちが自らのサービスとして仕立て上げるよう、数多くの努力をしてきました。「新しいサービスを始めるのに、 Jiraのチケットや他のチームとのミーティングはもはや必要ないのです」とLacerteは言います。以前、週あたり1〜30だった同社のデプロイ数は、いまや週1,600デプロイにまでなっています。
diff --git a/content/ja/case-studies/chinaunicom/index.html b/content/ja/case-studies/chinaunicom/index.html index 561c47e0d37fb..4c288aa97ab1f 100644 --- a/content/ja/case-studies/chinaunicom/index.html +++ b/content/ja/case-studies/chinaunicom/index.html @@ -9,7 +9,7 @@ featured: true weight: 1 quote: > - Kubernetesが私たちのクラウドインフラの経験値を上げてくれました。今のところこれに代わるテクノロジーはありません。 + Kubernetesが私たちのクラウドインフラの経験値を上げてくれました。今のところ、これに代わる技術はありません。 ---
@@ -32,7 +32,7 @@

課題



ソリューション

- 急成長し、オープンソースコミュニティも成熟しているKubernetesはChina Unicomにとって自然な選択となりました。同社のKubernetes対応クラウドプラットフォームは、現状の50のマイクロサービスに加え、これから新たに開発されるすべてをここでホストしていくそうです。「Kubernetesが私たちのクラウドインフラの経験値を上げてくれました」とZhangはいいます。「今のところこれに代わるテクノロジーはありません。」また、China Unicomはそのマイクロサービスフレームワークのために、IstioEnvoyCoreDNS、そしてFluentdも活用しています。 + 急成長し、オープンソースコミュニティも成熟しているKubernetesはChina Unicomにとって自然な選択となりました。同社のKubernetes対応クラウドプラットフォームでは、現状の50のマイクロサービスに加え、これから新たに開発されるすべてをここでホストしていくそうです。「Kubernetesが私たちのクラウドインフラの経験値を上げてくれました」とZhangはいいます。「今のところ、これに代わる技術はありません。」また、China Unicomはそのマイクロサービスフレームワークのために、IstioEnvoyCoreDNS、そしてFluentdも活用しています。

インパクト

KubernetesはChina Unicomの運用と開発、両方について効率を高めてくれました。 @@ -44,7 +44,7 @@

インパクト

- 「Kubernetesが私達のクラウドインフラの経験値を上げてくれました。今のところこれに代わるテクノロジーはありません。」 + 「Kubernetesが私達のクラウドインフラの経験値を上げてくれました。今のところ、これに代わる技術はありません。」
- Chengyu Zhang、 China Unicom プラットフォーム技術R&D グループリーダー
@@ -54,7 +54,7 @@

China Unicomは、3億人を超えるユーザーを抱える、中国国内 その舞台裏で、同社は2016年以来、Dockerコンテナ、VMware、OpenStackインフラなどを用いて、数千のサーバーを持つデータセンターを複数運用しています。残念ながら、「リソース利用率は相対的に低かった」と、プラットフォーム技術のR&D部門のグループリーダーであるChengyu Zhangは語っています。「そして、私たちには何百ものアプリケーションを収容できるクラウドプラットフォームがありませんでした。」

- そこで新しいテクノロジー、研究開発(R&D)、およびプラットフォームの責務を担うZhangのチームは、IT管理におけるソリューションの探索を始めました。以前は完全な国営企業だったChina Unicomは、近年BAT(Baidu、Alibaba、Tencent)およびJD.comからの民間投資を受け、今は商用製品ではなくオープンソース技術を活用した社内開発に注力するようになりました。こういったこともあり、Zhangのチームはクラウドインフラのオープンソースオーケストレーションツールを探し始めたのです。 + そこで新しい技術、研究開発(R&D)、およびプラットフォームの責務を担うZhangのチームは、IT管理におけるソリューションの探索を始めました。以前は完全な国営企業だったChina Unicomは、近年BAT(Baidu、Alibaba、Tencent)およびJD.comからの民間投資を受け、今は商用製品ではなくオープンソース技術を活用した社内開発に注力するようになりました。こういったこともあり、Zhangのチームはクラウドインフラのオープンソースオーケストレーションツールを探し始めたのです。

@@ -67,9 +67,9 @@

China Unicomは、3億人を超えるユーザーを抱える、中国国内
China Unicomはすでにコアとなる事業運用システムにMesosを活用していましたが、チームにとっては新しいクラウドプラットフォームにはKubernetesの選択が自然だろうと感じられたのです。「大きな理由は、Kubernetesには成熟したコミュニティがある、ということでした」とZhangは言います。「さらにKubernetesは非常に早いペースで成長していることもあり、さまざまな人のベストプラクティスから多くを学ぶことができるのです。」 またChina UnicomはマイクロサービスフレームワークのためにIstio、Envoy、CoreDNS、およびFluentdも使用しています。

- 同社のKubernetes対応クラウドプラットフォームは、現状の50のマイクロサービスに加え、これから新たに開発されるすべてをここでホストしていくそうです。China Unicomの開発者たちは自身の手による開発を省き、APIを介すことで簡単にテクノロジーが利用できるようになりました。このクラウドプラットフォームは、同社データセンタのPaaSプラットフォームに繋がった20〜30のサービスを提供することに加え、中国国内の31省にわたる拠点の社内ユーザーたちが行うビッグデータ分析などもサポートしています。

+ 同社のKubernetes対応クラウドプラットフォームでは、現状の50のマイクロサービスに加え、これから新たに開発されるすべてをここでホストしていくそうです。China Unicomの開発者たちは自身の手による開発を省き、APIを介すことで簡単に技術が利用できるようになりました。このクラウドプラットフォームは、同社データセンタのPaaSプラットフォームに繋がった20〜30のサービスを提供することに加え、中国国内の31省にわたる拠点の社内ユーザーたちが行うビッグデータ分析などもサポートしています。

- 「Kubernetesが私達のクラウドインフラの経験値を上げてくれました。」とZhangはいいます。「今のところこれに代わるテクノロジーはありません。」 + 「Kubernetesが私達のクラウドインフラの経験値を上げてくれました。」とZhangはいいます。「今のところ、これに代わる技術はありません。」
@@ -87,12 +87,12 @@

China Unicomは、3億人を超えるユーザーを抱える、中国国内
-「企業はRancherのような事業者が提供するマネージドサービスを活用することができます。こういったテクノロジーはすでにカスタマイズされて提供されるので、簡単に利用することができるでしょう。」

- Jie Jia、China Unicom プラットフォーム技術 R&D
+「企業はRancherのような事業者が提供するマネージドサービスを活用することができます。こうした技術はすでにカスタマイズされて提供されるので、簡単に利用することができるでしょう。」

- Jie Jia、China Unicom プラットフォーム技術 R&D

- プラットフォーム技術 R&D チームの一員であるJie Jiaは、「この技術は比較的複雑ですが、開発者が慣れれば、恩恵をすべて享受できるのではないかと思います」と付け加えています。一方でZhangは、仮想マシンベースのクラウドでの経験から見ると、「Kubernetesとこれらのクラウドネイティブテクノロジーは比較的シンプルなのではないか」と指摘しています。

- 「企業は Rancher のような事業者が提供するマネージドサービスを活用することができます。こういったテクノロジーはカスタマイズてされて提供されるので、簡単に利用することができるでしょう。」

+ プラットフォーム技術 R&D チームの一員であるJie Jiaは、「この技術は比較的複雑ですが、開発者が慣れれば、恩恵をすべて享受できるのではないかと思います」と付け加えています。一方でZhangは、仮想マシンベースのクラウドでの経験から見ると、「Kubernetesとこれらのクラウドネイティブ技術は比較的シンプルなのではないか」と指摘しています。

+ 「企業は Rancher のような事業者が提供するマネージドサービスを活用することができます。こうした技術はカスタマイズされて提供されるので、簡単に利用することができるでしょう。」

今後China Unicomはビッグデータと機械学習に重点を置いて、Kubernetes上でより多くのアプリケーションを開発することを計画しています。彼らのチームは築き上げたクラウドプラットフォームを継続的に最適化しており、CNCFの認定Kubernetesコンフォーマンスプログラム(Certified Kubernetes Conformance Program)に参加するべく、そのための適合テスト(Conformance test)への合格を目指しています。また彼らは、どこかのタイミングでコミュニティにコードをコントリビューションすることも目指しています。

diff --git a/content/ja/case-studies/nav/index.html b/content/ja/case-studies/nav/index.html new file mode 100644 index 0000000000000..c9ad5ab65b327 --- /dev/null +++ b/content/ja/case-studies/nav/index.html @@ -0,0 +1,93 @@ +--- +title: Navケーススタディ +linkTitle: Nav +case_study_styles: true +cid: caseStudies +css: /css/style_case_studies.css +logo: nav_featured_logo.png +featured: true +weight: 3 +quote: > + コミュニティは非常に活発です。アイデアを出し合い、皆が直面する多くの類似課題について話すことができ、そして支援を得ることができます。私たちはさまざまな理由から同じ問題に取り組み、そこでお互いに助け合うことができる、そういう点が気に入っています。 +--- + +
+

ケーススタディ:
スタートアップはどのようにしてKubernetesでインフラコストを50%も削減したのか

+ +
+ +
+ 企業名  Nav     所在地  ユタ州ソルトレイクシティ、カリフォルニア州サンマテオ     業界  事業者向け金融サービス +
+ +
+
+
+
+

+

課題

+2012年に設立された Navは、小規模事業者たちに、民間信用調査企業主要3社 —Equifax、Experian、Dun&Bradstreet— におけるビジネス信用スコアと、彼らのニーズに最適な資金調達オプションを提供しています。このスタートアップは5年で急速に成長したことで、「クラウド環境が非常に大きくなっていったのですが、これらの環境の使用率は極端に低く、1%を下回っていました」とエンジニアリングディレクターのTravis Jeppsonは述べています。「クラウド環境の使用率と実際私たちに必要なものとを連動させたかったので、同じリソースプールを共有しながら複数のワークロードそれぞれを分離して実行できるコンテナ化やオーケストレーションを検討しました。」 +

+

ソリューション

+数多くのオーケストレーション ソリューションを評価した結果、Navチームは AWS上で稼働する Kubernetesを採用することを決めました。Kubernetesを取り巻くコミュニティの強みは人を引きつける点にあり、それがGoogleから生まれたものであることもその一つです。加えて、「他のソリューションは、かなり手間がかかり、本当に複雑で大きなものでした。そしてすぐに管理できるかという点においても厳しいものになりがちでした」とJeppsonは言います。「Kubernetesはその当時の私たちのニーズに合ったオーケストレーションソリューションに踏み出せる、とてもシンプルなやり方を提供してくれました。さらにその拡張性は、私たちがKubernetesと共に成長し、その後の追加機能を組み入れることを可能にしてくれました。」 + +

インパクト

+4人編成のチームは、6か月でKubernetesを稼働させ、その後の6ヶ月でNavの25あったマイクロサービスすべてのマイグレーションを完了させました。その結果は目覚しいものでした。導入のきっかけとなったリソース使用率については、1%から40%まで増加しました。かつて新しいサービスを立ち上げるのに2人の開発者が2週間かけていましたが、いまや開発者はたった一人で10分もかかりません。デプロイ数は5倍増えました。そして同社はインフラコストを50%削減しています。 + +
+
+
+ +
+
+ +

+「Kubernetesはその当時の私たちのニーズに合ったオーケストレーションソリューションに踏み出せる、とてもシンプルなやり方を提供してくれました。さらにその拡張性は、私たちがKubernetesと共に成長し、その後の追加機能を組み入れることを可能にしてくれました。」 +

- Travis Jeppson、Nav エンジニアリング ディレクター
+
+
+
+

2012年に設立された Navは、小規模事業者たちに、民間信用調査企業主要3社 —Equifax、Experian、Dun&Bradstreet— におけるビジネス信用スコアと、彼らのニーズに最適な資金調達オプションを提供しています。「スモールビジネスの成功率を上げていくこと。」そのミッションはここに凝縮される、とエンジニアリングディレクターのTravis Jeppsonは言います。

+数年前、Navは自分たちの成功への道筋に、障害があることを認識しました。ビジネスが急速に成長し、「クラウド環境が非常に大きくなっていったのですが、これらの環境の使用率は極端に低く、1%を下回っていました」と、Jeppsonは言います。「問題の大部分はスケールに関するものでした。私たちはそこにただお金を投入しようとしていました。『もっと多くのサーバーを稼働させよう。増えた負荷をさばくためにより多く作業しよう』といった具合に。私たちはスタートアップなので、そんなことをしていては終焉の一途をたどりかねませんし、そんなことに使えるほどお金の余裕は我々にはないのです。」 +

+ こういったことに加えてすべての新サービスは違う10人を経由してリリースされなければならず、サービス立ち上げに2週間という受け入れがたいほどの時間をかけていたのです。「パッチ管理とサーバ管理のすべてが手動で行われていたので、皆がそれらを見守り、うまく維持していく必要があったのです」とJeppsonは付け加えます。「非常にやっかいなシステムでした。」 +
+
+
+
「コミュニティは非常に活発です。アイデアを出し合い、皆が直面する多くの類似課題について話すことができ、そして支援を得ることができます。私たちはさまざまな理由から同じ問題に取り組み、そこでお互いに助け合うことができる、そういう点が気に入っています。」

- Travis Jeppson、Nav エンジニアリングディレクター
+ + +
+
+
Jeppsonは前職でコンテナを取り扱っていたため、Navの経営陣にこれらの問題の解決策としてこの技術を売り込みました。そして2017年初め彼の提案にゴーサインがでました。「クラウド環境の使用率と実際私たちに必要なものとを連動させたかったので、類似したリソースプールを共有しながら複数のワークロードそれぞれを分離して実行できるコンテナ化やオーケストレーションを検討しました」と、彼は言います。

+ 数多くのオーケストレーションソリューションを評価した結果、Navチームは AWSでのKubernetes 採用を決断しました。Kubernetesを取り巻くコミュニティの強みは人を引きつける点にあり、それがGoogleから生まれたものであることもその一つです。加えて、「他のソリューションは、かなり手間がかかり、本当に複雑で大きなものでした。そしてすぐに管理できるかという点においても厳しいものになりがちでした」とJeppsonは言います。「Kubernetesはその当時の私たちのニーズに合ったオーケストレーションソリューションに踏み出せる、とてもシンプルなやり方を提供してくれました。一方でその拡張性は、私たちがKubernetesと共に成長し、その後の追加機能を組み入れることを可能にしてくれました。」

+ Jeppsonの4人編成のエンジニアリングサービスチームは、Kubernetesを立ち上げ、稼働させるのに6ヶ月かけました(クラスターを動かすために Kubespray を使いました)。そして、その後6ヶ月かけNavの25のマイクロサービスと一つのモノリシックな主要サービスのフルマイグレーションを完了させました。「すべて書き換えたり、止めることはできませんでした」と彼は言います。「稼働し、利用可能であり続けなければいけなかったですし、ダウンタイムがあってもそれを最小にしなければなりませんでした。そのためパイプライン作成、メトリクスやロギングといったことについてよくわかるようになりました。さらにKubernetes自身についても習熟し、起動、アップグレード、サービス提供の仕方についてもわかるようになりました。そうして移行を少しずつ進めていきました。」 +
+
+
+
+「Kubernetesは、これまで経験したことのない新たな自由とたくさんの価値をNavにもたらしてくれました。」

- Travis Jeppson、Nav エンジニアリングディレクター
+
+
+ +
+ +
+この過程で重要だったのは、Navの50人のエンジニアを教育すること、そしてマイグレーションに当たり新たなワークフローやロードマップについて透明性を確保することでした。 +そこでJeppsonはエンジニアリングスタッフ全員に対し定期的なプレゼンテーションや、一週間にわたる1日4時間の実習の場を設けました。そして彼はすべての情報を置いておくために GitLabにリポジトリを作成しました。 「フロントエンドとバックエンドの開発者たち全員に、kubectlを用い、独力でnamespaceを作成し、取り扱う方法を見せていきました」と彼は言います。「いまや、彼らはやってきて『これは準備OKだ』というだけで済むことが多くなりました。GitLabの小さなボタンをクリックすれば本番環境にリリースできるようになっているので彼らはすぐに次の行動に移ることができます。」

+ マイグレーションが2018年初めに完了したあとの結果は目覚しいものでした。導入のきっかけとなったリソース使用率については、1%から40%まで増加しました。かつて新しいサービスを立ち上げるのに2人の開発者が2週間かけていましたが、いまや開発者はたった一人で10分もかかりません。デプロイ数は1日あたり10だったものから50となり5倍増えました。そして同社はインフラコストを50%削減しています。「次はデータベース側に取り組みたいです。それができればかなりのコスト削減を継続できるでしょう」とJeppsonは言います。

+ また、KubernetesはNavのコンプライアンスのニーズにも力を貸しました。以前は、「1つのアプリケーションを1つのサーバーにマッピングする必要がありました。これは主にデータ周辺でコンプライアンスの異なるレギュレーションがあったためです」とJeppsonは言います。「KubernetesのAPIを用いれば、ネットワークポリシーを追加し、必要に応じてそれらのデータを分離し制限をかけることができるようになります。」同社は、クラスターを規制のないゾーンと、独自ノードセットを持ったデータ保護を行うべき規制ゾーンに分離しています。また、Twistlockツールを使用することでセキュリティを確保しています。「夜、よく眠れることもね」と彼は付け加えます。 +
+ +
+
「今私たちが扱っているトラフィック量の4〜10倍が流れたとしても、『ああ、大丈夫だよ、Kubernetesがやってくれるから』と話しています。」

- Travis Jeppson、Nav エンジニアリング ディレクター
+
+ +
+ Kubernetesが導入された中、Navチームは Prometheusを採用してシステムのメトリクスやロギングの改良も始めました。「Prometheusは開発者にとって、とても採用しやすいメトリクスの標準を作ってくれました」とJeppsonは言います。「彼らには、何をしたいかを示し、したいことを実践し、そして彼らのコードベースをクリーンな状態に保つ自由があります。そして私たちにとってそれはまちがいなく必須事項でした。」

+ これから先を見据え、次にNavが意欲的に視野に入れているのは、トレーシング(Tracing)、ストレージ、そしてサービスメッシュです。そしてKubeConで多くの時間をいろんな企業との対話に費やしたその後で、現在彼らはEnvoy、OpenTracing、そして Jaegerを検証しています。「コミュニティは非常に活発です。アイデアを出し合い、皆が直面する多くの類似課題について話すことができ、そして支援を得ることができます。私たちはさまざまな理由から同じ問題に取り組み、そこでお互いに助け合うことができる、そういう点が気に入っています」とJeppsonは言います。「クラウドネイティブなソリューションをフルに採用できるようになるには、スケーラビリティ面でやるべきことがまだたくさんあります。」

+ もちろん、すべてはKubernetesから始まります。Jeppsonのチームは、この技術でNavをスケール可能にするプラットフォームを構築しました。そして「これまで経験したことのない新たな自由、たくさんの価値をNavにもたらしてくれたのです。」と彼は言います。新製品を検討しようにも、隔離された環境を用意するのに6か月待たなければならず、その後もトラフィックが急上昇するのに対応するやりかたも考え出さなければならないという事実があり、身動きが取れなくなってしまっていました。「しかし、もうそういった話もなくなりました。」とJeppsonは言います。「今私たちが扱っているトラフィック量の4〜10倍が流れたとしても、『ああ、大丈夫だよ、Kubernetesがやってくれるから』と話しています。」 + +
+
diff --git a/content/ja/case-studies/nav/nav_featured_logo.png b/content/ja/case-studies/nav/nav_featured_logo.png new file mode 100644 index 0000000000000000000000000000000000000000..22d96017c432a4434bdb3a7e6d4123533957f041 GIT binary patch literal 4218
影響
- 私たちは常にテクノロジーを通じて最適化してより大きな価値を提供する方法を探しています。Kubernetesを用いて私たちは開発効率と運用効率という2つの効率を示します。これは双方にとって好都合です。 + 私たちは常に、技術の最適化を通じてより大きな価値を提供する方法を探しています。Kubernetesを用いて私たちは開発効率と運用効率という2つの効率を示します。これは双方にとって好都合です。

-Nordstrom社シニアエンジニア Dhawal Patel
diff --git a/content/ja/case-studies/sos/index.html b/content/ja/case-studies/sos/index.html index bce98342753d0..3fe901610ba9a 100644 --- a/content/ja/case-studies/sos/index.html +++ b/content/ja/case-studies/sos/index.html @@ -26,22 +26,22 @@

ケーススタディ:

課題

SOS Internationalは60年にわたり、北欧諸国の顧客に信頼性の高い緊急医療および旅行支援を提供してきました。近年、同社のビジネス戦略では、デジタル分野での開発をさらに強化する必要がありましたが、ITシステムに関しては -3つの従来のモノリス(Java, .NET, およびIBMのAS/400)とウォーターフォールアプローチにおいて「SOSには非常に断片化された遺産があります。」とエンタープライズアーキテクチャー責任者のMartin Ahrentsen氏は言います。「新しいテクノロジーと新しい働き方の両方を導入することを余儀なくされているので、市場投入までの時間を短縮して効率を高めることができました。それははるかに機敏なアプローチであり、私たちにはそれをビジネスに提供するのに役立つプラットフォームが必要でした。」 +3つの従来のモノリス(Java, .NET, およびIBMのAS/400)とウォーターフォールアプローチにおいて「SOSには非常に断片化された遺産があります。」とエンタープライズアーキテクチャー責任者のMartin Ahrentsen氏は言います。「新しい技術と新しい働き方の両方を導入することを余儀なくされているので、市場投入までの時間を短縮して効率を高めることができました。それははるかに機敏なアプローチであり、私たちにはそれをビジネスに提供するのに役立つプラットフォームが必要でした。」

ソリューション

- 標準システムの模索に失敗した後、同社はプラットフォームアプローチを採用し、Kubernetesとコンテナテクノロジーを包含するソリューションを探すことにしました。RedHat OpenShiftはSOSの断片化されたシステムに最適であることが証明されました。「私たちはコード言語とその他の両方を使用する多くの異なる技術を持っていますが、それらはすべて新しいプラットフォーム上のリソースを使用できます。」とAhrentsen氏は言います。同社の3つのモノリスのうち、「この最先端のテクノロジーを2つ(.NETとJava)に提供できます。」このプラットフォームは2018年春に公開されました。現在、マイクロサービスアーキテクチャーに基づく6つの未開発プロジェクトが進行中であり、さらに、同社のJavaアプリケーションはすべて「リフト&シフト」移行を行っています。 + 標準システムの模索に失敗した後、同社はプラットフォームアプローチを採用し、Kubernetesとコンテナ技術を包含するソリューションを探すことにしました。RedHat OpenShiftはSOSの断片化されたシステムに最適であることが証明されました。「私たちはコード言語とその他の両方を使用する多くの異なる技術を持っていますが、それらはすべて新しいプラットフォーム上のリソースを使用できます。」とAhrentsen氏は言います。同社にある3つのモノリスの中で、「2つ(.NETとJava)に対してこの最先端の技術を提供できます。」このプラットフォームは2018年春に公開されました。現在、マイクロサービスアーキテクチャーに基づく6つの未開発プロジェクトが進行中であり、さらに、同社のJavaアプリケーションはすべて「リフト&シフト」移行を行っています。

影響

- Kubernetesによって「市場投入までの時間、アジリティ、および変更と新しいテクノロジーに適応する能力の向上を実現しました。」とAhrentsen氏は語ります。「ソフトウェアのリリース準備ができてからリリースできるまでの時間が大幅に改善されました。」SOS Internationalの考え方も劇的に変わりました。「自動化、CI/CDパイプラインの作成を容易にするKubernetesとスクリプトへの簡単なアクセスがあるので、この完全自動化の方法に至る所で多くの内部的な関心が生まれています。旅を始めるために非常に良い気候を作り出しています。」と彼は言います。さらに、クラウドネイティブのコミュニティの一員であることは、同社が人材を引き付けるのに役立ちました。「彼らはクールで新しいテクノロジーを使いたいと思っています」とAhrentsen氏は言います。「新しいテクノロジーを提供したという理由でITプロフェッショナルが我が社を選んでいたことが新人研修の時にわかりました。」 + Kubernetesによって「市場投入までの時間、アジリティ、および変更と新しい技術に適応する能力の向上を実現しました。」とAhrentsen氏は語ります。「ソフトウェアのリリース準備ができてからリリースできるまでの時間が大幅に改善されました。」SOS Internationalの考え方も劇的に変わりました。「自動化、CI/CDパイプラインの作成を容易にするKubernetesとスクリプトへの簡単なアクセスがあるので、この完全自動化の方法に至る所で多くの内部的な関心が生まれています。旅を始めるために非常に良い気候を作り出しています。」と彼は言います。さらに、クラウドネイティブのコミュニティの一員であることは、同社が人材を引き付けるのに役立ちました。「彼らはクールで新しい技術を使いたいと思っています」とAhrentsen氏は言います。「ITプロフェッショナルが新しい技術を提供したという理由で我が社を選んでいたことが新人研修の時にわかりました。」
- 「クラウドネイティブソフトウェアとテクノロジーが現在推進している変化の速度は驚くべきものであり、それをフォローして採用することは私たちにとって非常に重要です。Kubernetesとクラウドネイティブが提供する驚くべき技術はデジタルの未来に向けてSOSに変化をもたらしました。 + 「クラウドネイティブなソフトウェアや技術が現在推進している変化の速度は驚くべきものであり、それに追従して導入することは私たちにとって非常に重要です。Kubernetesとクラウドネイティブが提供する驚くべき技術はデジタルの未来に向けてSOSに変化をもたらしました。

- SOS International エンタープライズアーキテクチャー責任者 Martin Ahrentsen
@@ -49,16 +49,16 @@

影響

SOS Internationalは60年にわたり、北欧諸国の顧客に信頼性の高い緊急医療および旅行支援を提供してきました。

SOSのオペレータは年間100万件の案件を扱い、100万件以上の電話を処理しています。しかし、過去4年間で同社のビジネス戦略にデジタル空間でのますます激しい開発が必要になりました。

- ITシステムに関していえば、会社のデータセンターで稼働する3つの伝統的なモノリスとウォーターフォールアプローチにおいて「SOSは非常に断片化された資産があります。」とエンタープライズアーキテクチャー責任者のMartin Ahrentsen氏は言います。「市場投入までの時間を短縮し、効率を高めるために新しいテクノロジーと新しい働き方の両方を導入する必要がありました。それははるかに機敏なアプローチであり、それをビジネスに提供するために役立つプラットフォームが必要でした。」 + ITシステムに関していえば、会社のデータセンターで稼働する3つの伝統的なモノリスとウォーターフォールアプローチにおいて「SOSは非常に断片化された資産があります。」とエンタープライズアーキテクチャー責任者のMartin Ahrentsen氏は言います。「市場投入までの時間を短縮し、効率を高めるために新しい技術と新しい働き方の両方を導入する必要がありました。それははるかに機敏なアプローチであり、それをビジネスに提供するために役立つプラットフォームが必要でした。」

- Ahrentsen氏と彼のチームは長い間SOSで機能する標準のソリューションを探していました。「私たちのような支援会社はそれほど多くないので、それにふさわしい標準システムを入手することはできません。完全に一致するものがないのです。」と彼は言います。「標準システムを採用したとしても、あまりにもひねりすぎて、もはや標準ではないものになるでしょう。そのため、新しいデジタルシステムとコアシステムを構築するために使用できるいくつかの共通コンポーネントを備えたテクノロジープラットフォームを見つけることにしました。」 + Ahrentsen氏と彼のチームは長い間SOSで機能する標準のソリューションを探していました。「私たちのような支援会社はそれほど多くないので、それにふさわしい標準システムを入手することはできません。完全に一致するものがないのです。」と彼は言います。「標準システムを採用したとしても、あまりにもひねりすぎて、もはや標準ではないものになるでしょう。そのため、新しいデジタルシステムとコアシステムを構築するために使用できるいくつかの共通コンポーネントを備えた技術プラットフォームを見つけることにしました。」
- 「私たちは新しいデジタルサービスを提供しなければなりませんが、古いものも移行する必要があります。そして、コアシステムをこのプラットフォーム上に構築された新しいシステムに変換する必要があります。このテクノロジーを選んだ理由の1つは古いデジタルサービスを変更しながら新しいサービスを構築できるからです。」 + 「私たちは新しいデジタルサービスを提供しなければなりませんが、古いものも移行する必要があります。そして、コアシステムをこのプラットフォーム上に構築された新しいシステムに変換する必要があります。この技術を選んだ理由の1つは古いデジタルサービスを変更しながら新しいサービスを構築できるからです。」

- SOS International エンタープライズアーキテクチャー責任者 Martin Ahrentsen
@@ -68,14 +68,14 @@

SOS Internationalは60年にわたり、北欧諸国の顧客に信頼性の
Kubernetesができることを理解すると、Ahrentsen氏はすぐにビジネスニーズを満たすことができるプラットフォームに目を向けました。同社はDockerコンテナとKubernetesを組み込んだRed HatのOpenShift Container Platformを採用しました。また、RedHat Hyperconverged Infrastructureや一部のミッドウェアコンポーネントなど、すべてオープンソースコミュニティで提供されている技術スタックも利用することを決めました。

- テクノロジーやアジリティの適合性、法的要件、およびコンピテンシーという同社の基準に基づくと、OpenShiftソリューションはSOSの断片化されたシステムに完全に適合するように思われました。「私たちはコード言語とそれ以外の両方を使用する多くの異なる技術を持っています。それらはすべて新しいプラットフォーム上のリソースを使用できます。」とAhrentsen氏は言います。同社の3つのモノリスのうち、「この最先端のテクノロジーを2つ(.NETとJava)に提供できます。」

+ 技術やアジリティの適合性、法的要件、およびコンピテンシーという同社の基準に基づくと、OpenShiftソリューションはSOSの断片化されたシステムに完全に適合するように思われました。「私たちはコード言語とそれ以外の両方を使用する多くの異なる技術を持っています。それらはすべて新しいプラットフォーム上のリソースを使用できます。」とAhrentsen氏は言います。同社にある3つのモノリスの中で、「2つ(.NETとJava)に対してこの最先端の技術を提供できます。」

プラットフォームは2018年春に公開されました。マイクロサービスアーキテクチャーに基づく6つの未開発のプロジェクトが最初に開始されました。さらに、同社のJavaアプリケーションはすべて「リフト&シフト」移行を行っています。最初に稼働しているKubernetesベースのプロジェクトの一つがRemote Medical Treatmentです。これは顧客が音声、チャット、ビデオを介してSOSアラームセンターに連絡できるソリューションです。「完全なCI/CDパイプラインと最新のマイクロサービスアーキテクチャーをすべて2つのOpenShiftクラスターセットアップで実行することに焦点を当てて、非常に短時間で開発できました。」とAhrentsen氏は言います。北欧諸国へのレスキュートラックの派遣に使用されるOnsite、および、レッカー車の追跡を可能にするFollow Your Truckも展開されています。
- 「新しいテクノロジーを提供したという理由でITプロフェッショナルが我が社を選んでいたことが新人研修の時にわかりました。」 + 「ITプロフェッショナルが新しい技術を提供したという理由で我が社を選んでいたことが新人研修の時にわかりました。」

- SOS International エンタープライズアーキテクチャー責任者 Martin Ahrentsen
@@ -84,11 +84,11 @@

SOS Internationalは60年にわたり、北欧諸国の顧客に信頼性の
プラットフォームがまだオンプレミスで稼働しているのは、保険業界のSOSの顧客の一部は同社がデータを処理しているためまだクラウド戦略を持っていないためです。KubernetesはSOSがデータセンターで開始し、ビジネスの準備ができたらクラウドに移行できるようにします。「今後3~5年にわたって、彼らすべてが戦略を持ち、そして、データを取り出してクラウドに移行できるでしょう。」とAhrentsen氏は言います。機密データと非機密データのハイブリッドクラウド設定に移行する可能性もあります。

- SOSの技術は確かに過渡期にあります。「新しいデジタルサービスを提供する必要がありますが、古いものも移行する必要があり、コアシステムをこのプラットフォーム上に構築された新しいシステムに変換しなければなりません。」とAhrentsen氏は言います。「このテクノロジーを選んだ理由の1つは古いデジタルサービスを変更しながら新しいサービスを構築できるからです。」

+ SOSの技術は確かに過渡期にあります。「新しいデジタルサービスを提供する必要がありますが、古いものも移行する必要があり、コアシステムをこのプラットフォーム上に構築された新しいシステムに変換しなければなりません。」とAhrentsen氏は言います。「この技術を選んだ理由の1つは古いデジタルサービスを変更しながら新しいサービスを構築できるからです。」

しかし、Kubernetesはすでに市場投入までの時間を短縮しており、そのことは、新興プロジェクトがいかに迅速に開発され、リリースされたかにも表れています。「ソフトウェアのリリース準備ができてからリリース可能になるまでの時間は劇的に改善されました。」とAhrentsen氏は言います。

- さらに、クラウドネイティブのコミュニティの一員であることは、エンジニア、オペレーター、アーキテクトの数を今年60から100に増やすという目標を追求するうえで、同社が人材を引き付けるのに役立ちました。「彼らはクールで新しいテクノロジーを使いたいと思っています。」とAhrentsenは言います。「新しいテクノロジーを提供したという理由でITプロフェッショナルが我が社を選んでいたことが新人研修の時にわかりました。」 + さらに、クラウドネイティブのコミュニティの一員であることは、エンジニア、オペレーター、アーキテクトの数を今年60から100に増やすという目標を追求するうえで、同社が人材を引き付けるのに役立ちました。「彼らはクールで新しい技術を使いたいと思っています。」とAhrentsenは言います。「ITプロフェッショナルが新しい技術を提供したという理由で我が社を選んでいたことが新人研修の時にわかりました。」
@@ -105,6 +105,6 @@

SOS Internationalは60年にわたり、北欧諸国の顧客に信頼性の 代表例:自動車へのIoTの導入。欧州委員会は現在、すべての新車にeCallを装備することを義務づけています。eCallは重大な交通事故が発生した場合に位置やその他データを送信します。SOSはこのサービスをスマート自動支援として提供しています。「電話を受けて、緊急対応チームを派遣する必要があるかどうか、またはそれほど大きな影響がないどうかを確認します。」とAhrentsen氏は言います。「すべてが接続され、データを送信する未来の世界は、新しい市場機会という点で私たちにとって大きな可能性を生み出します。しかし、それはまたITプラットフォームと私たちが提供すべきものに大きな需要をもたらすでしょう。」

- Ahrentsen氏はSOSが技術の選択を行ってきたことを考えると、この課題に十分対応できると感じています。「クラウドネイティブソフトウェアとテクノロジーが現在推進している変化の速度は驚くべきものであり、それに追従して採用することは私たちにとって非常に重要です。」と彼は言います。「Kubernetesとクラウドネイティブが提供する驚くべきテクノロジーは、デジタルの未来に向けてSOSに変化をもたらし始めました。」 + Ahrentsen氏はSOSが技術の選択を行ってきたことを考えると、この課題に十分対応できると感じています。「クラウドネイティブなソフトウェアや技術が現在推進している変化の速度は驚くべきものであり、それに追従して採用することは私たちにとって非常に重要です。」と彼は言います。「Kubernetesとクラウドネイティブが提供する驚くべき技術は、デジタルの未来に向けてSOSに変化をもたらし始めました。」

diff --git a/content/ja/case-studies/spotify/index.html b/content/ja/case-studies/spotify/index.html new file mode 100644 index 0000000000000..0725723b68351 --- /dev/null +++ b/content/ja/case-studies/spotify/index.html @@ -0,0 +1,120 @@ +--- +title: Spotifyケーススタディ +linkTitle: Spotify +case_study_styles: true +cid: caseStudies +css: /css/style_case_studies.css +logo: spotify_featured_logo.png +featured: true +weight: 2 +quote: > + Kubernetesを中心に成長した素晴らしいコミュニティを見て、その一部になりたかったのです。スピードの向上とコスト削減のメリットを享受し、ベストプラクティスとツールについて業界の他の企業と連携したいとも思いました。 +--- + +
+

ケーススタディ:Spotify
Spotify:コンテナ技術のアーリーアダプターであるSpotifyは自社製オーケストレーションツールからKubernetesに移行しています + +

+ +
+ +
+ 企業名  Spotify     所在地  グローバル     業界  エンターテイメント +
+ +
+
+
+
+

課題

+ 2008年から始まったオーディオストリーミングプラットフォームは、アクティブユーザーが世界中で毎月2億人を超えるまでに成長しました。「私たちの目標は、クリエイターたちに力を与え、私たちが現在抱えるすべての消費者、そして願わくば将来抱える消費者が真に没入できる音楽体験を実現することです」、エンジニアリング、インフラおよびオペレーション担当ディレクターのJai Chakrabartiは、こう言います。マイクロサービスとDockerのアーリーアダプターであるSpotifyは、Heliosという自社開発のコンテナオーケストレーションシステムを使い、自社のVM全体にわたり実行されるマイクロサービスをコンテナ化していました。2017年末までには、「こういった機能開発に自社の小さなチームで取り組むことは効率的ではなく、大きなコミュニティで支持されているものを採用したほうがよい」ことがはっきりしてきました。 + +

+

ソリューション

+ 「Kubernetesを中心に成長した素晴らしいコミュニティを見て、その一部になりたかったのです。」とChakrabartiは言います。KubernetesはHeliosよりも豊富な機能を有していました。さらに、「スピードの向上とコスト削減のメリットを享受し、ベストプラクティスとツールについて業界の他の企業と連携したいとも思いました。」また彼のチームは、活発なKubernetesコミュニティにその知見でコントリビュートし、影響を与えることも望みました。Heliosの稼働と並行して行われたマイグレーションは、スムーズにすすめることができました。それは「KubernetesがHeliosを補完するものとして、そして今はHeliosを代替するものとして非常にフィットしたものだったからです」とChakrabartiは言います。 + +

インパクト

+ 2018年の後半に始まり、2019年に向けて大きな注力点となる本マイグレーションにおいて必要となる主要な技術の問題に対応するため、チームは2018年の大半を費やしました。「ほんの一部をKubernetesに移行したのですが、社内チームから聞こえてきたのは、手作業でのキャパシティプロビジョニングを意識する必要性が少なくなり、Spotifyとしての機能の提供に集中できる時間がより多くなってきたということです」とChakrabartiは言います。Kubernetesで現在実行されている最も大きなサービスはアグリゲーションサービスで、1秒あたり約1000万リクエストを受け取り、オートスケールによる大きな恩恵を受けている、とサイト・リライアビリティ・エンジニアのJames Wenは言います。さらに、「以前はチームが新しいサービスを作り、運用ホストを本番環境で稼働させるために1時間待たなければなりませんでしたが、Kubernetesでは秒・分のオーダーでそれを実現できます」と付け加えます。さらに、Kubernetesのビンパッキング(組み合わせ最適化)機能やマルチテナント機能により、CPU使用率が平均して2〜3倍向上しました。 + +
+
+
+
+
+ 「Kubernetesを中心に成長した素晴らしいコミュニティを見て、その一部になりたかったのです。スピードの向上とコスト削減のメリットを享受し、ベストプラクティスとツールについて業界の他の企業と連携したいとも思いました。」

- Spotify エンジニアリング、インフラおよびオペレーション担当ディレクター、Jai Chakrabarti
+
+
+
+
+

「私たちのゴールは、クリエイターたちに力を与え、今・これからの消費者が真に没入できる音楽体験を実現することです。」Spotifyのエンジニアリング、インフラストラクチャおよびオペレーション担当ディレクター、Jai Chakrabartiは、このように述べています。 +2008年から始まったオーディオストリーミングプラットフォームは、アクティブユーザーが世界中で毎月2億人を超えるまでに成長しました。Chakrabartiのチームにとってのゴールは、将来のすべての消費者もサポートするべくSpotifyのインフラを強固なものにしていくことです。

+ +

+ マイクロサービスとDockerのアーリーアダプターであるSpotifyは、自社のVM全体にわたり実行されるマイクロサービスをコンテナ化していました。同社は「Helios」というオープンソースの自社製コンテナオーケストレーションシステムを使用し、2016年から17年にかけてオンプレミスのデータセンターからGoogle Cloudへの移行を完了しました。こういった意思決定の「我々にはさまざまなピースに取り組む、すばやく繰り返す作業を必要とする自律的なエンジニアリングチームが200以上あり、彼らを中心とした文化があります」とChakrabartiは言います。「したがって、チームがすばやく動けるようになる開発者のベロシティツールを持つことが非常に大事です。」

しかし、2017年の終わりまでには、「小さなチームがHeliosの機能に取り組むのは、それよりもはるかに大きなコミュニティで支持されているものと比べると効率的ではない」ことが明らかになった、とChakrabartiは言います。「Kubernetesを取り巻き成長した驚くべきコミュニティを見ました。その一員になりたいと思いました。スピードの向上とコストの削減による恩恵を受けたかったですし、ベストプラクティスとツールをもつ他の業界と連携したいとも思いました。」同時にこのチームは、活発なKubernetesコミュニティにその知見でコントリビュートし、影響を与えることも望みました。 + + +
+
+
+
+ 「このコミュニティは、あらゆる技術への取り組みをより速く、より容易にしてくれることを強力に助けてくれました。そして、私たちの取り組みのすべてを検証することも助けてくれました。」

- Spotify ソフトウェアエンジニア、インフラおよびオペレーション担当、Dave Zolotusky
+ +
+
+
+
+ もう1つのプラス:「KubernetesがHeliosを補完するものとして、そして今はHeliosを代替するものとして非常にフィットしたものだったので、リスク軽減のためにHeliosと同時に稼働させることができました」とChakrabartiは言います。「マイグレーションの最中はサービスが両方の環境で実行されるので、さまざまな負荷・ストレス環境下でKubernetesの有効性を確認できるようになるまではすべての卵を1つのバスケットに入れる必要がありません。」 + +

+チームは、本マイグレーションにおいて必要となる主要な技術の問題に対応するため、2018年の大半を費やしました。「レガシーのインフラをサポートしたり連携するKubernetes APIやKubernetesの拡張性機能を多く使うことができたので、インテグレーションはシンプルで簡単なものでした」とサイト・リライアビリティ・エンジニアのJames Wenは言います。 +

+マイグレーションはその年の後半に始まり、2019年に加速しました。「私たちはステートレスなサービスに注力しています。最後に残る技術的課題を突破したら、それが上昇をもたらしてくれると期待しています」とChakrabartiは言います。「ステートフルサービスについては、より多くのやるべきことがあります。」 +

+今のところ、Spotifyの150を超えるサービスのごく一部がKubernetesに移行されています。 + +「社内のチームから聞こえてきたのは、手作業でのキャパシティプロビジョニングを意識する必要性が少なくなり、Spotifyとしての機能の提供に集中できる時間がより多くなってきたということです」とChakrabartiは言います。 + +Kubernetesで現在実行されている最も大きなサービスはアグリゲーションサービスで、1秒あたり約1000万リクエストを受け取り、オートスケールによる大きな恩恵を受けている、とWenは言います。さらに、「以前はチームが新しいサービスを作り、運用ホストを本番環境で稼働させるために1時間待たなければなりませんでしたが、Kubernetesでは秒・分のオーダーでそれを実現できます」と付け加えます。さらに、Kubernetesのビンパッキング(組み合わせ最適化)機能やマルチテナント機能により、CPU使用率が平均して2〜3倍向上しました。 + + +
+
+
+
+ 「レガシーのインフラをサポートしたり連携するKubernetes APIやKubernetesの拡張性機能をたくさん使うことができたので、インテグレーションはシンプルで簡単なものでした」

- Spotify、Spotifyエンジニア、James Wen
+
+
+ +
+
+ Chakrabartiは、Spotifyが見ている4つのトップレベルのメトリック - リードタイム、デプロイ頻度、修復時間、そして運用負荷 - のすべてについて「Kubernetesがインパクトを与えている」と指摘します。 +

+Kubernetesが初期の頃に出てきたサクセスストーリーの1つに、SpotifyチームがKubernetesの上に構築したSlingshotというツールがあります。「プルリクエストを出すと、24時間後に自己消滅する一時的なステージング環境を生成します」とChakrabartiは言います。「これはすべてKubernetesがやってくれています。新しいテクノロジーが出てきて使えるようになったときに、自分のイメージを超えるようなソリューションをいかにしてこの環境上で作っていくか、そのやり方を示す刺激的な例だと思います。」 +

+またSpotifyはgRPCとEnvoyを使い、Kubernetesと同じように、既存の自社製ソリューションを置き換え始めました。「私たちはその時の自分たちの規模を理由にした開発をしていて、実際に他のソリューションはありませんでした」とインフラおよび運用担当のソフトウェアエンジニアであるDave Zolotuskyは言います。「しかし、そういった規模感のツールですらコミュニティは私たちに追いつき、追い越して行きました。」 +
+ +
+
+ 「私たちが取り組んでいることに関する専門知識を得るために、コンタクトしたい人と連絡を取るのは驚くほど簡単でした。そして、私たちが行っていたすべての検証で役立ちました」

- Spotify、サイト・リライアビリティ・エンジニア、James Wen
+
+ + +
+ どちらの技術も採用するには初期段階ではありますが、「gRPCはスキーマ管理、API設計、下位互換の問題など、初期の開発段階における多くの問題に対してより劇的な影響を与えると確信しています」とZolotuskyは言います。「そのため、そういった領域でgRPCに傾倒しています。」 + +

+チームはSpotifyのクラウドネイティブなスタックを拡大し続けており - この次にあるのはスタックトレーシングです - CNCFランドスケープを有用なガイドとして活用しています。「解決する必要があるものを見たときに、もし多数のプロジェクトがあればそれらを同じように評価しますが、そのプロジェクトがCNCFプロジェクトであることには間違いなく価値があります」とZolotuskyは言います。 + +

+SpotifyがKubernetesでこれまでに経験してきたことはそれを裏付けています。「あらゆる技術により速くより簡単に取り組めるようになる点で、このコミュニティは極めて有益です」とZolotuskyは言います。「私たちが取り組んでいることに関する専門知識を得るために、コンタクトしたい人と連絡を取るのは驚くほど簡単でした。そして、私たちが行っていたすべての検証で役立ちました。」 + + +
+
+ + diff --git a/content/ja/case-studies/spotify/spotify-featured.svg b/content/ja/case-studies/spotify/spotify-featured.svg new file mode 100644 index 0000000000000..fb7d8e750de98 --- /dev/null +++ b/content/ja/case-studies/spotify/spotify-featured.svg @@ -0,0 +1 @@ +kubernetes.io-logos \ No newline at end of file diff --git a/content/ja/case-studies/spotify/spotify_featured_logo.png b/content/ja/case-studies/spotify/spotify_featured_logo.png new file mode 100644 index 0000000000000000000000000000000000000000..def15c51bfc14be96be75f9064f8d6d7822aa601 GIT binary patch literal 6383 zcmcIpcQl;swimrb2_jk;C6XvJgV9?U^$SAuZj8~#U@!zBq6I;S9z7vy^xkU_y#>)l z^xj+U_|Es8bJtzx+&}JGch-96-S1xex1as&z1O>b&wk%%s4J0^GLqup;E<~*%Rk57 zC$N_P6X3~rBb1y~@gZ5^f9cj}th0k)P>?0Q0KU^OQw z!p2tlg$qLKg}OHUg*{x%l3hj`An75Fb$~>;!T=sf2S>EHhZOste#No!jT*!b_|wGI zUW)x6LFub$0H7!r1V9KV%m?R(2mwUIfc(OO5HT@cfB=|Z6a*Fp@eA_t3yVX*;sOGI zzkcl4)m$vC#GlJ6{5zXyhdtnI(BqW>#a9O{CAxuRUOQ7DJMtEXXuaz&wSP)-0S^rl)OfJbUDxUJ)j^6^cA z|B4skV(X5uRCGZh0e@yl-1gr@Aq-Uj3xlBw5Pmtx%?`xmh2)@OqJr{JSrI`|s0jOC zSj&I2`@gV?{}Btq1_Qb&?|&=z-zsbm-H88eUhKm^+aKYG?Rgh$bI%t3qQk+tYoa1A ztL-thX+q*bt5c~yY#=c+Iim6~#peM@lntF6EqVq5^Ju}>cia^85M;^v)maf{;9w7BSAVQ+HUG!W>dHG+Xl_{1lH>Mrm_I(dHIEk~ zwME98jLjX1k@sm+?=jiA06pCi)Hv$Yn^huyuZZmpkZnZ%op&lZ`2Z{aO4ZuddfG36!c=Hd68j}_Y3+Uc4yUq&Tf?3;2~|5 zCPcekPTgi}&AVy7xjz)mwZKZ#U6g@R`gZWEZp7~n8To=f^f+vy(UIl-@1M9lWV7g& zlOo~-3r8#UHbq{`tej^x!oI)bO0}SaKU5qSHk4!*9n>-)B~iCLzgV2b77_;p7j!hb z7foBlU+m4;?1lu2;QX%JT(s7k<_@kN0*?D{mGp81%tBr^QQp3uBq^9L*3jsT2DT*)6iZM#UBIh$?5D2E#MTbCzO1pcMPb9e`ra28N1wxs<<>!^{ODM$MSI z)#1%!w|9R1CRP)6aZH~p_+9Z~=7KtHDN!pniSYdVBCT)gW%_Zbok)*tDaZC>qNfrm z)^V;72)E^`>O-DJnYo!bYathdgc;q6{C(!Gm@`SHlESR~iouqi@uBEQ4mqCbS!qO0 zsgmvZ#Q4bM`1IuHjOoPKRK{jwtk>6fIzA?zFH4>+&MO;pqY5;Yr@iW$;TU_EUxTuZ z?S+K-LS%*BC@p?bg^nY)^!XpFyencUJ=^38!5H1_ieuO7P7?I2s`KIbP-M6U9o8gs zhk-xHsr>Z-y%Oss6E79Pu0n*aJ2NN@_-LbA^F_j(X;}4zojn1uczTeJwzp*lC+x`z zSWoxLbFv#wE;f}d_0<&8A?4DDG5*|qKylo`r!X~}sL&F)#%lXe{KZAAXKB(MJ2JAV z4T>?7F=>_^=EH#BALZlCTg=snPsKZj?j=vjE?4Sh^{NZ6q#}W!1v<**d%J0*7xZF)sc*dsW02|^}|aoMeVx^dS;QGMWsp?AA&je zz{eVlc_!t)i97g5IWs;DW;&@2ZJ9LsYd%8p<&gsi5rTXK2E|q7iF}P@Oa(LJrK2XX zwxwouv4P*l_bohq()sHG6OuRx#)1yoGTX@NX#mqZypfxTjZ3#n!JU<@V8`PUw8>pj zzHXuSk7@YZ5`Tf-heKuB25BoNDvn$uNxDN z%YX97GW0d}{$6TUqa z!po^F;o48(S{p2@JMB(k*sspQHrWo|M@N%BI zf<8W?l$B=VEy0bdisI7B>aF}?eRavNvvB$cL=#C#7Xw9g!2I#EcV9bUm8YGe2<5Pr z+imN8kD6OWe>HQ)M|E%CsV;3C=@DntdOkN6F;bS@2sfV`4a#iWAfwF1z;KTeNuBm$ zqL;k#LE8*fhAHsZ(9=>$9UqF;+}Kwq&4E|?$u;@=KNM`=*x5M0^nW^dl?u1^kT)`* zf5(GtNSV_?Fe9tuCJi%&oy_rgX~bbg6EjODQ2k|nJtwM>EYqRR4*^b~5Q?!nVY)b` zF6mGiU4*07;Zfe$@zmEq@H*<}OS7@b=QCrt@Rr-gfW3-jkufQk3{F|IrK$Qo!;yD#_U zp7Mw3i;_<(UU2*le{l+Y=<6f@8?!dR6h=`tN?>C$I~jm{b`&^75iNA*U~Cd^-UAv@ zDTd>dNRFq?Yq>)9YNFLD>!CB3{?}T$*K(b;{)WhblZcT-T)b{!+N*X|h}^WFe-tD^ zfy(@yTz7vDI`WK*NyqF1wZnsb!Whtl1^?dRQjww^70I)4d$Bs$C4`tQAEvn*&-v7= z0Tx{JZpWsl!<@3#Q!VtEV2w$i))IGUxb;vic)he-MlQU9b7{nPnHBt@q-jd`Z2pW~ zf;)&WA@I$u{6ovI{RGk~to-G-qL z7flS+_$sWs8pgI*rL(5>gTuAM|DF&exl5lI?sTcF3DAW?9Gbq7u3=Ydd*}^ z^c-KLf0IXgz_1|%x2Q5iQ`Wm&x~`_qlj({!eg7Iid0lw^ifxQ?)XHj2;4T$m5bJgcVI;xfwbm;w znMao@67>y__&S2V6DT@o8hw}CspgE=0J`G4Nz<3%|Do)&99>RpaR)cvR)}rLDs1QZ zgLQC<{+kBMF5nwCQgiO7-^deaxMJDVk2wpjtw-56S&Eu!FZXqia?gT)f5Q z9Y$U2F|(?$(ItuNV|)5-I3g%EaXPhkD&ycRffN-qFt+C0tWFm^Y!nL9HkZ+l*!;qF z(n7ahFDhqPV#V>cpHvI>&XPGsifZv2iQgT`K7F@0K%Xf7k_-yg!_dId)CGpVlE#1- zB|1aK+=#Q%%R|@5M>?4luj7ZFzo;n0RqL7yk#lvS7^-XUf6zY5AwSQ+y&LYle=-<* zzg;MeG@-Wrp)Gvq9UtQ1+u`^y(pxJbJE;qIuF`71B|rOoKyFMI)0ks+Hq_T1>~0rx zrWzhtvsfn*&uqOS6#A1qN=sE8{_)rPvo}TtyDMH@C|>bz;9#isz=j=P!Wu&##A8l7 
zJD(<0X531^%OTH;Ghf}JIWN3dMKKHk){K5u_=fgHR#twUVYM+0F7TOdTLEBysV=)( zL6t_~EtB44E>slU@X6@q`S^Heyc@*FAT?CYatSBqI%L4(P4;XOPlQm9TnWES{>Tq?=SYnQ z8~~=~ngJ};vdJcJ*E1Wm%4h~GWyGWpCJs^maIUh7O&VJ3i|$CN(xs3DJeg4>Za`b= z*yZn~)$)duwd(|x2egm}W0`x^smX^Q?on1rNMeCcX>rWYRDA9wh1-7Ukzjh8uX4>d zhUqlV1G*nf)^VQ4kl<88_vF66b@7Ai<82Eg zrL9miE5_3FhN35?$s@XNB}UIm^_(<+W{Y(;4Rum0rp12n0Qt(WS63-Y**u9cu;~|9 zGEu?t_+E&}9k7wM*Igp(6K+GZai4%Ul!74wyg#xtvxvXDncm9T%B^jdj9wQ1D5R(H z>Y%~#eNSFwLHBg;`~5e-7L5aFyWH3nlNDQL9v?=EKfQoWTQfX{Rk$p9mh$;-ynp2aBA`H-LYIC9+jch6p3k>U3bxtYXBLCWk9jE4W#+RNS{i?oG`uhFUU8@eATVAla z=X1+^h@8>37H+DS%F*W0sD<^R7?ZZ!Y2X2gZ91BXkYm=k@aelJ-}XpA(0AW&*LOPl zwjpVOdj!N#Wo;*|119KWUsFEoKH3v$dur z?e2VlFy*~L` z8>wek`|)h6A|%JI)-b&V(O^Pf+~4RJyWYBDf!rCuP|}6S;indSghaHNw2r)DmEi7o zG3zgUA)igDXT(#HRPMnli7Do36b!&jdk)@ZE z!*TB}_3yEo8Nd4?|ti(RCRR|y1#JAAb3(NX6^d;zr8 z)%wxZtF2weMZ)7l6T6u-8nsgPAUzgNNsX8>IMloPc6i=D^;=*~> zY%EvGecy!KEhRJw)w=e3WaEyu1%IUe9v6L1vB|1c-xly~ciHkd{+x?JifD0DdH88MM^_$$ zXd`78ua1g%3WLzN-7VykqwNrrj6z*Aij*@MSXqrEykm1Jrr2=$w1G+hZeu}vIQE>? zeNItKS$j+h+Sf|;s;i+>{EF_DW<}Aw`F?j*hJ*Iq_q+Qw%(^yV$dr{3u|kShV#Z80 zyB_=bkr!%^;!K=ILeo=?VKPCfEOO2aX_mUpv0?-9GIe#`%X?63ZiB`hVRBVSVNmUQl;0rsHfK{f6lWzy1 zuYVDkO|0BE*JYLAWgrrCX#K+E6pjZ%f0njiuCFB9c2H~gegTWOz2!#H)Mfm!D_lzt zUzNY)kU1dB8?>15Zo~}BbmqQ#(Wnvy%oe>xdSD@8= zz;;4|QVsjU-M=1ltu@2(TVmNX#&-!&7V#yZ-@k};R?mpT}Map&U zVNbJdgzn`|J}Ax|0!EWztLy)(=Kg)@(>V!Qb8Nq7FT6NPD66>IKGk^zHG*fZZXnG+ zW)s9!VpcEgMRC`g^708VTXslFkoSctn_N>Zl>QbQ*A*_qLWFe&!xbY3jI&sS;2dGt zx?46F{_;~Ym|Fj{Ttvb8(juJ5UQn&IgZ^RDnCytw55*ykJ7h%Ra-K7KcWp21fI{Rw zO4EAf(WGE8sU@2fASIyHpiPRP7XwlNukEpj-&V2e6qWN**E+@Xl zg85*EHf=BJvRwT?4)tgnBZF=xi-b7bA?*J@^Tq!;#SHhpY{J3Ak)@f_T46WVyZHp6 MqM$BcBxmmbUj@3=i~s-t literal 0 HcmV?d00001 diff --git a/content/ja/docs/concepts/_index.md b/content/ja/docs/concepts/_index.md index 62cc04cb7a48f..51442fd391b81 100644 --- a/content/ja/docs/concepts/_index.md +++ b/content/ja/docs/concepts/_index.md @@ -7,7 +7,7 @@ weight: 40 {{% capture overview %}} -本セクションは、Kubernetesシステムの各パートと、クラスターを表現するためにKubernetesが使用する抽象概念について学習し、Kubernetesの仕組みをより深く理解するのに役立ちます。 +本セクションは、Kubernetesシステムの各パートと、{{< glossary_tooltip text="クラスター" term_id="cluster" length="all" >}}を表現するためにKubernetesが使用する抽象概念について学習し、Kubernetesの仕組みをより深く理解するのに役立ちます。 {{% /capture %}} @@ -17,7 +17,7 @@ weight: 40 Kubernetesを機能させるには、*Kubernetes API オブジェクト* を使用して、実行したいアプリケーションやその他のワークロード、使用するコンテナイメージ、レプリカ(複製)の数、どんなネットワークやディスクリソースを利用可能にするかなど、クラスターの *desired state* (望ましい状態)を記述します。desired sate (望ましい状態)をセットするには、Kubernetes APIを使用してオブジェクトを作成します。通常はコマンドラインインターフェイス `kubectl` を用いてKubernetes APIを操作しますが、Kubernetes APIを直接使用してクラスターと対話し、desired state (望ましい状態)を設定、または変更することもできます。 -一旦desired state (望ましい状態)を設定すると、*Kubernetes コントロールプレーン* が働き、クラスターの現在の状態をdesired state (望ましい状態)に一致させます。そのためにKubernetesはさまざまなタスク(たとえば、コンテナの起動または再起動、特定アプリケーションのレプリカ数のスケーリング等)を自動的に実行します。Kubernetesコントロールプレーンは、クラスターで実行されている以下のプロセスで構成されています。 +一旦desired state (望ましい状態)を設定すると、Pod Lifecycle Event Generator([PLEG](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/pod-lifecycle-event-generator.md))を使用した*Kubernetes コントロールプレーン*が機能し、クラスターの現在の状態をdesired state (望ましい状態)に一致させます。そのためにKubernetesはさまざまなタスク(たとえば、コンテナの起動または再起動、特定アプリケーションのレプリカ数のスケーリング等)を自動的に実行します。Kubernetesコントロールプレーンは、クラスターで実行されている以下のプロセスで構成されています。 * **Kubernetes Master** :[kube-apiserver](/docs/admin/kube-apiserver/)、[kube-controller-manager](/docs/admin/kube-controller-manager/)、[kube-scheduler](/docs/admin/kube-scheduler/) の3プロセスの集合です。これらのプロセスはクラスター内の一つのノード上で実行されます。実行ノードはマスターノードとして指定します。 * 
クラスター内の個々の非マスターノードは、それぞれ2つのプロセスを実行します。 @@ -26,7 +26,7 @@ Kubernetesを機能させるには、*Kubernetes API オブジェクト* を使 ## Kubernetesオブジェクト -Kubernetesには、デプロイ済みのコンテナ化されたアプリケーションやワークロード、関連するネットワークとディスクリソース、クラスターが何をしているかに関するその他の情報といった、システムの状態を表現する抽象が含まれています。これらの抽象は、Kubernetes APIのオブジェクトによって表現されます。詳細については、[Kubernetesオブジェクト概要](/docs/concepts/abstractions/overview/) をご覧ください。 +Kubernetesには、デプロイ済みのコンテナ化されたアプリケーションやワークロード、関連するネットワークとディスクリソース、クラスターが何をしているかに関するその他の情報といった、システムの状態を表現する抽象が含まれています。これらの抽象は、Kubernetes APIのオブジェクトによって表現されます。詳細については、[Kubernetesオブジェクトについて知る](/docs/concepts/overview/working-with-objects/kubernetes-objects/)をご覧ください。 基本的なKubernetesのオブジェクトは次のとおりです。 @@ -35,19 +35,19 @@ Kubernetesには、デプロイ済みのコンテナ化されたアプリケー * [Volume](/docs/concepts/storage/volumes/) * [Namespace](/ja/docs/concepts/overview/working-with-objects/namespaces/) -上記に加え、Kubernetesにはコントローラーと呼ばれる多くの高レベルの抽象概念が含まれています。コントローラーは基本オブジェクトに基づいて構築され、以下のような追加の機能と便利な機能を提供します。 +Kubernetesには、[コントローラー](/docs/concepts/architecture/controller/)に依存して基本オブジェクトを構築し、追加の機能と便利な機能を提供する高レベルの抽象化も含まれています。これらには以下のものを含みます: -* [ReplicaSet](/ja/docs/concepts/workloads/controllers/replicaset/) -* [Deployment](/docs/concepts/workloads/controllers/deployment/) -* [StatefulSet](/ja/docs/concepts/workloads/controllers/statefulset/) +* [Deployment](/ja/docs/concepts/workloads/controllers/deployment/) * [DaemonSet](/ja/docs/concepts/workloads/controllers/daemonset/) +* [StatefulSet](/ja/docs/concepts/workloads/controllers/statefulset/) +* [ReplicaSet](/ja/docs/concepts/workloads/controllers/replicaset/) * [Job](/docs/concepts/workloads/controllers/jobs-run-to-completion/) ## Kubernetesコントロールプレーン Kubernetesマスターや kubeletプロセスといったKubernetesコントロールプレーンのさまざまなパーツは、Kubernetesがクラスターとどのように通信するかを統制します。コントロールプレーンはシステム内のすべてのKubernetesオブジェクトの記録を保持し、それらのオブジェクトの状態を管理するために継続的制御ループを実行します。コントロールプレーンの制御ループは常にクラスターの変更に反応し、システム内のすべてのオブジェクトの実際の状態が、指定した状態に一致するように動作します。 -たとえば、Kubernetes APIを使用してDeploymentオブジェクトを作成する場合、システムには新しいdesired state (望ましい状態)が提供されます。Kubernetesコントロールプレーンは、そのオブジェクトの作成を記録します。そして、要求されたアプリケーションの開始、およびクラスターノードへのスケジューリングにより指示を完遂します。このようにしてクラスターの実際の状態を望ましい状態に一致させます。 +たとえば、Kubernetes APIを使用してDeploymentを作成する場合、システムには新しいdesired state (望ましい状態)が提供されます。Kubernetesコントロールプレーンは、そのオブジェクトの作成を記録します。そして、要求されたアプリケーションの開始、およびクラスターノードへのスケジューリングにより指示を完遂します。このようにしてクラスターの実際の状態を望ましい状態に一致させます。 ### Kubernetesマスター @@ -59,11 +59,6 @@ Kubernetesのマスターは、クラスターの望ましい状態を維持す クラスターのノードは、アプリケーションとクラウドワークフローを実行するマシン(VM、物理サーバーなど)です。Kubernetesのマスターは各ノードを制御します。運用者自身がノードと直接対話することはほとんどありません。 -#### オブジェクトメタデータ - - -* [Annotations](/ja/docs/concepts/overview/working-with-objects/annotations/) - {{% /capture %}} {{% capture whatsnext %}} diff --git a/content/ja/docs/concepts/architecture/_index.md b/content/ja/docs/concepts/architecture/_index.md index 9a275dbb908bd..69fda32def077 100644 --- a/content/ja/docs/concepts/architecture/_index.md +++ b/content/ja/docs/concepts/architecture/_index.md @@ -1,4 +1,4 @@ --- -title: "Kubernetes アーキテクチャー" +title: "Kubernetesのアーキテクチャー" weight: 30 --- diff --git a/content/ja/docs/concepts/architecture/nodes.md b/content/ja/docs/concepts/architecture/nodes.md index fb8894ba0c6be..a35674840f49b 100644 --- a/content/ja/docs/concepts/architecture/nodes.md +++ b/content/ja/docs/concepts/architecture/nodes.md @@ -47,7 +47,7 @@ kubectl describe node <ノード名> | `Ready` | ノードの状態がHealthyでPodを配置可能な場合に`True`になります。ノードの状態に問題があり、Podが配置できない場合に`False`になります。ノードコントローラーが、`node-monitor-grace-period`で設定された時間内(デフォルトでは40秒)に該当ノードと疎通できない場合、`Unknown`になります。 | | `MemoryPressure` | 
ノードのメモリが圧迫されているときに`True`になります。圧迫とは、メモリの空き容量が少ないことを指します。それ以外のときは`False`です。 | | `PIDPressure` | プロセスが圧迫されているときに`True`になります。圧迫とは、プロセス数が多すぎることを指します。それ以外のときは`False`です。 | -| `DiskPressure` | ノードのディスク容量がが圧迫されているときに`True`になります。圧迫とは、ディスクの空き容量が少ないことを指します。それ以外のときは`False`です。 | +| `DiskPressure` | ノードのディスク容量が圧迫されているときに`True`になります。圧迫とは、ディスクの空き容量が少ないことを指します。それ以外のときは`False`です。 | | `NetworkUnavailable` | ノードのネットワークが適切に設定されていない場合に`True`になります。それ以外のときは`False`です。 | ノードのConditionはJSONオブジェクトで表現されます。例えば、正常なノードの場合は以下のようなレスポンスが表示されます。 diff --git a/content/ja/docs/concepts/cluster-administration/_index.md b/content/ja/docs/concepts/cluster-administration/_index.md new file mode 100755 index 0000000000000..39996efb33b67 --- /dev/null +++ b/content/ja/docs/concepts/cluster-administration/_index.md @@ -0,0 +1,5 @@ +--- +title: "クラスターの管理" +weight: 100 +--- + diff --git a/content/ja/docs/concepts/configuration/_index.md b/content/ja/docs/concepts/configuration/_index.md new file mode 100755 index 0000000000000..32113b0ea027c --- /dev/null +++ b/content/ja/docs/concepts/configuration/_index.md @@ -0,0 +1,5 @@ +--- +title: "設定" +weight: 80 +--- + diff --git a/content/ja/docs/concepts/containers/_index.md b/content/ja/docs/concepts/containers/_index.md index ad442f3ab36e4..3e1c30f9b8e6c 100755 --- a/content/ja/docs/concepts/containers/_index.md +++ b/content/ja/docs/concepts/containers/_index.md @@ -1,5 +1,4 @@ --- -title: "Containers" +title: "コンテナ" weight: 40 --- - diff --git a/content/ja/docs/concepts/extend-kubernetes/api-extension/custom-resources.md b/content/ja/docs/concepts/extend-kubernetes/api-extension/custom-resources.md new file mode 100644 index 0000000000000..f83bc9ebc57c5 --- /dev/null +++ b/content/ja/docs/concepts/extend-kubernetes/api-extension/custom-resources.md @@ -0,0 +1,223 @@ +--- +title: カスタムリソース +content_template: templates/concept +weight: 20 +--- + +{{% capture overview %}} + +*カスタムリソース* はKubernetes APIの拡張です。このページでは、いつKubernetesのクラスターにカスタムリソースを追加するべきなのか、そしていつスタンドアローンのサービスを利用するべきなのかを議論します。カスタムリソースを追加する2つの方法と、それらの選択方法について説明します。 + +{{% /capture %}} + +{{% capture body %}} + +## カスタムリソース + +*リソース* は、[Kubernetes API](/docs/reference/using-api/api-overview/)のエンドポイントで、特定の[APIオブジェクト](/ja/docs/concepts/overview/working-with-objects/kubernetes-objects/)のコレクションを保持します。例えば、ビルトインの *Pods* リソースは、Podオブジェクトのコレクションを包含しています。 + +*カスタムリソース* は、Kubernetes APIの拡張で、デフォルトのKubernetesインストールでは、必ずしも利用できるとは限りません。つまりそれは、特定のKubernetesインストールのカスタマイズを表します。しかし、今現在、多数のKubernetesのコア機能は、カスタムリソースを用いて作られており、Kubernetesをモジュール化しています。 + +カスタムリソースは、稼働しているクラスターに動的に登録され、現れたり、消えたりし、クラスター管理者はクラスター自体とは無関係にカスタムリソースを更新できます。一度、カスタムリソースがインストールされると、ユーザーは[kubectl](/docs/user-guide/kubectl-overview/)を使い、ビルトインのリソースである *Pods* と同じように、オブジェクトを作成、アクセスすることが可能です。 + +## カスタムコントローラー + +カスタムリソースそれ自身は、単純に構造化データを格納、取り出す機能を提供します。カスタムリソースを *カスタムコントローラー* と組み合わせることで、カスタムリソースは真の _宣言的API_ を提供します。 + +[宣言的API](/ja/docs/concepts/overview/working-with-objects/kubernetes-objects/#kubernetesオブジェクトを理解する)は、リソースのあるべき状態を _宣言_ または指定することを可能にし、Kubernetesオブジェクトの現在の状態を、あるべき状態に同期し続けるように動きます。 +コントローラーは、構造化データをユーザーが指定したあるべき状態と解釈し、その状態を管理し続けます。 + +稼働しているクラスターのライフサイクルとは無関係に、カスタムコントローラーをデプロイ、更新することが可能です。カスタムコントローラーはあらゆるリソースと連携できますが、カスタムリソースと組み合わせると特に効果を発揮します。[オペレーターパターン](https://coreos.com/blog/introducing-operators.html)は、カスタムリソースとカスタムコントローラーの組み合わせです。カスタムコントローラーにより、特定アプリケーションのドメイン知識を、Kubernetes APIの拡張に変換することができます。 + +## カスタムリソースをクラスターに追加するべきか? 
+ +新しいAPIを作る場合、[APIをKubernetesクラスターAPIにアグリゲート(集約)する](/ja/docs/concepts/api-extension/apiserver-aggregation/)か、もしくはAPIをスタンドアローンで動かすかを検討します。 + +| APIアグリゲーションを使う場合: | スタンドアローンAPIを使う場合: | +| ------------------------------ | ---------------------------- | +| APIが[宣言的](#宣言的API) | APIが[宣言的](#宣言的API)モデルに適さない | +| 新しいリソースを`kubectl`を使い読み込み、書き込みしたい| `kubectl`のサポートは必要ない | +| 新しいリソースをダッシュボードのような、Kubernetes UIで他のビルトインリソースと同じように管理したい | Kubernetes UIのサポートは必要ない | +| 新しいAPIを開発している | APIを提供し、適切に機能するプログラムが既に存在している | +| APIグループ、名前空間というような、RESTリソースパスに割り当てられた、Kubernetesのフォーマット仕様の制限を許容できる([API概要](/ja/docs/concepts/overview/kubernetes-api/)を参照) | 既に定義済みのREST APIと互換性を持っていなければならない | +| リソースはクラスターごとか、クラスター内の名前空間に自然に分けることができる | クラスター、または名前空間による分割がリソース管理に適さない。特定のリソースパスに基づいて管理したい | +| [Kubernetes APIサポート機能](#一般的な機能)を再利用したい | これらの機能は必要ない | + +### 宣言的API + +宣言的APIは、通常、下記に該当します: + + - APIは、比較的少数の、比較的小さなオブジェクト(リソース)で構成されている + - オブジェクトは、アプリケーションの設定、インフラストラクチャーを定義する + - オブジェクトは、比較的更新頻度が低い + - 人は、オブジェクトの情報をよく読み書きする + - オブジェクトに対する主要な手続きは、CRUD(作成、読み込み、更新、削除)になる + - 複数オブジェクトをまたいだトランザクションは必要ない: APIは今現在の状態ではなく、あるべき状態を表現する + +命令的APIは、宣言的ではありません。 +APIが宣言的ではない兆候として、次のものがあります: + + - クライアントから"これを実行"と命令がきて、完了の返答を同期的に受け取る + - クライアントから"これを実行"と命令がきて、処理IDを取得する。そして処理が完了したかどうかを、処理IDを利用して別途問い合わせる + - リモートプロシージャコール(RPC)という言葉が飛び交っている + - 直接、大量のデータを格納している(例、1オブジェクトあたり数kBより大きい、または数千オブジェクトより多い) + - 高帯域アクセス(持続的に毎秒数十リクエスト)が必要 + - エンドユーザーのデータ(画像、PII、その他)を格納している、またはアプリケーションが処理する大量のデータを格納している + - オブジェクトに対する処理が、CRUDではない + - APIをオブジェクトとして簡単に表現できない + - 停止している処理を処理ID、もしくは処理オブジェクトで表現することを選択している + +## ConfigMapとカスタムリソースのどちらを使うべきか? + +下記のいずれかに該当する場合は、ConfigMapを使ってください: + +* `mysql.cnf`、`pom.xml`のような、十分に文書化された設定ファイルフォーマットが既に存在している +* 単一キーのConfigMapに、設定ファイルの内容の全てを格納している +* 設定ファイルの主な用途は、クラスター上のPodで実行されているプログラムがファイルを読み込み、それ自体を構成することである +* ファイルの利用者は、Kubernetes APIよりも、Pod内のファイルまたはPod内の環境変数を介して利用することを好む +* ファイルが更新されたときに、Deploymentなどを介してローリングアップデートを行いたい + +{{< note >}} +センシティブなデータには、ConfigMapに類似していますがよりセキュアな[secret](/docs/concepts/configuration/secret/)を使ってください +{{< /note >}} + +下記のほとんどに該当する場合、カスタムリソース(CRD、またはアグリゲートAPI)を使ってください: + +* 新しいリソースを作成、更新するために、Kubernetesのクライアントライブラリー、CLIを使いたい +* kubectlのトップレベルサポートが欲しい(例、`kubectl get my-object object-name`) +* 新しい自動化の仕組みを作り、新しいオブジェクトの更新をウォッチしたい、その更新を契機に他のオブジェクトのCRUDを実行したい、またはその逆を行いたい +* オブジェクトの更新を取り扱う、自動化の仕組みを書きたい +* `.spec`、`.status`、`.metadata`というような、Kubernetes APIの慣習を使いたい +* オブジェクトは、制御されたリソースコレクションの抽象化、または他のリソースのサマリーとしたい + +## カスタムリソースを追加する + +Kubernetesは、クラスターへカスタムリソースを追加する2つの方法を提供しています: + +- CRDはシンプルで、プログラミングなしに作成可能 +- [APIアグリゲーション](/ja/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)は、プログラミングが必要だが、データがどのように格納され、APIバージョン間でどのように変換されるかというような、より詳細なAPIの振る舞いを制御できる + +Kubernetesは、さまざまなユーザーのニーズを満たすためにこれら2つのオプションを提供しており、使いやすさや柔軟性が損なわれることはありません。 + +アグリゲートAPIは、プロキシーとして機能するプライマリAPIサーバーの背後にある、下位のAPIServerです。このような配置は[APIアグリゲーション](/ja/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) (AA)と呼ばれています。ユーザーにとっては、単にAPIサーバーが拡張されているように見えます。 + +CRDでは、APIサーバーの追加なしに、ユーザーが新しい種類のリソースを作成できます。CRDを使うには、APIアグリゲーションを理解する必要はありません。 + +どのようにインストールされたかに関わらず、新しいリソースはカスタムリソースとして参照され、ビルトインのKubernetesリソース(Podなど)とは区別されます。 + +## CustomResourceDefinition + +[CustomResourceDefinition](/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/)APIリソースは、カスタムリソースを定義します。CRDオブジェクトを定義することで、指定した名前、スキーマで新しいカスタムリソースが作成されます。Kubernetes APIは、作成したカスタムリソースのストレージを提供、および処理します。 + +これはカスタムリソースを処理するために、独自のAPIサーバーを書くことから解放してくれますが、一般的な性質として[APIサーバーアグリゲーション](#APIサーバーアグリゲーション)と比べると、柔軟性に欠けます。 + 
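一例として、CRDオブジェクトの定義はおおよそ次のようなYAMLになります(ここで使っている`CronTab`というリソースや`stable.example.com`というグループ名は、あくまで説明用の架空の例です)。これを`kubectl apply -f`で登録すると、`kubectl get crontabs`のように、ビルトインリソースと同じ感覚で新しいリソースを操作できるようになります。

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # 名前は「<複数形名>.<グループ名>」の形式にする必要があります
  name: crontabs.stable.example.com
spec:
  # REST APIのパス(/apis/<group>/<version>)で使われるグループ名
  group: stable.example.com
  # Namespaced(名前空間スコープ)またはCluster(クラスタースコープ)
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
    shortNames:
      - ct
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
                image:
                  type: string
```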
+新しいカスタムリソースをどのように登録するか、新しいリソースタイプとの連携、そしてコントローラーを使いイベントを処理する方法例について、[カスタムコントローラー例](https://github.com/kubernetes/sample-controller)を参照してください。 + +## APIサーバーアグリゲーション + +通常、Kubernetes APIの各リソースは、RESTリクエストとオブジェクトの永続的なストレージを管理するためのコードが必要です。メインのKubernetes APIサーバーは *Pod* や *Service* のようなビルトインのリソースを処理し、また[CRD](#customresourcedefinition)を通じて、同じ方法でカスタムリソースも管理できます。 + +[アグリゲーションレイヤー](/ja/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)は、独自のスタンドアローンAPIサーバーを書き、デプロイすることで、カスタムリソースに特化した実装の提供を可能にします。メインのAPIサーバーが、処理したいカスタムリソースへのリクエストを委譲することで、他のクライアントからも利用できるようにします。 + +## カスタムリソースの追加方法を選択する + +CRDは簡単に使えます。アグリゲートAPIはより柔軟です。ニーズに最も合う方法を選択してください。 + +通常、CRDは下記の場合に適しています: + +* 少数のフィールドしか必要ない +* そのリソースは社内のみで利用している、または小さいオープンソースプロジェクトの一部で利用している(商用プロダクトではない) + +### 使いやすさの比較 + +CRDは、アグリゲートAPIと比べ、簡単に作れます。 + +| CRD | アグリゲートAPI | +| -------------------------- | --------------- | +| プログラミングが不要で、ユーザーはCRDコントローラーとしてどの言語でも選択可能 | Go言語でプログラミングし、バイナリとイメージの作成が必要。ユーザーはCRDコントローラーとしてどの言語でも選択可能 | +| 追加のサービスは不要。カスタムリソースはAPIサーバーで処理される | 追加のサービス作成が必要で、障害が発生する可能性がある | +| CRDが作成されると、継続的なサポートは無い。バグ修正は通常のKubernetesマスターのアップグレードで行われる | 定期的にアップストリームからバグ修正の取り込み、リビルド、そしてアグリゲートAPIサーバーの更新が必要かもしれない | +| 複数バージョンのAPI管理は不要。例えば、あるリソースを操作するクライアントを管理していた場合、APIのアップグレードと一緒に更新される | 複数バージョンのAPIを管理しなければならない。例えば、世界中に共有されている拡張機能を開発している場合 | + +### 高度な機能、柔軟性 + +アグリゲートAPIは、例えばストレージレイヤーのカスタマイズのような、より高度なAPI機能と他の機能のカスタマイズを可能にします。 + +| 機能 | 詳細 | CRD | アグリゲートAPI | +| ---- | ---- | --- | --------------- | +| バリデーション | エラーを予防し、クライアントと無関係にAPIを発達させることができるようになる。これらの機能は多数のクライアントがおり、同時に全てを更新できないときに最も効果を発揮する | はい、ほとんどのバリデーションは[OpenAPI v3.0 validation](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#validation)で、CRDに指定できる。その他のバリデーションは[Webhookのバリデーション](/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook-alpha-in-1-8-beta-in-1-9)によりサポートされている | はい、任意のバリデーションが可能 | +| デフォルト設定 | 上記を参照 | はい、[OpenAPI v3.0 validation](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#defaulting)の`default`キーワード(1.16でベータ)、または[Mutating Webhook](/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook-beta-in-1-9)を通じて可能 | はい | +| 複数バージョニング | 同じオブジェクトを、違うAPIバージョンで利用可能にする。フィールドの名前を変更するなどのAPIの変更を簡単に行うのに役立つ。クライアントのバージョンを管理する場合、重要性は下がる | [はい](/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning) | はい | +| カスタムストレージ | 異なる性能のストレージが必要な場合(例えば、キーバリューストアの代わりに時系列データベース)または、セキュリティの分離(例えば、機密情報の暗号化、その他)| いいえ | はい | +| カスタムビジネスロジック | オブジェクトが作成、読み込み、更新、また削除されるときに任意のチェック、アクションを実行する| はい、[Webhooks](/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks)を利用 | はい | +| サブリソースのスケール | HorizontalPodAutoscalerやPodDisruptionBudgetなどのシステムが、新しいリソースと連携できるようにする | [はい](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#scale-subresource) | はい | +| サブリソースの状態 |
  • より詳細なアクセスコントロール: ユーザーがspecセクションに書き込み、コントローラーがstatusセクションに書き込む
  • カスタムリソースのデータ変換時にオブジェクトの世代を上げられるようにする(リソースがspecと、statusでセクションが分離している必要がある)
| [はい](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#status-subresource) | はい | +| その他のサブリソース | "logs"や"exec"のような、CRUD以外の処理の追加 | いいえ | はい | +| strategic-merge-patch |`Content-Type: application/strategic-merge-patch+json`で、PATCHをサポートする新しいエンドポイント。ローカル、サーバー、どちらでも更新されうるオブジェクトに有用。さらなる情報は["APIオブジェクトをkubectl patchで決まった場所で更新"](/docs/tasks/run-application/update-api-object-kubectl-patch/)を参照 | いいえ | はい | +| プロトコルバッファ | プロトコルバッファを使用するクライアントをサポートする新しいリソース | いいえ | はい | +| OpenAPIスキーマ | サーバーから動的に取得できる型のOpenAPI(スワッガー)スキーマはあるか、許可されたフィールドのみが設定されるようにすることで、ユーザーはフィールド名のスペルミスから保護されているか、型は強制されているか(言い換えると、「文字列」フィールドに「int」を入れさせない) | はい、[OpenAPI v3.0 validation](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#validation) スキーマがベース(1.16でGA) | はい | + +### 一般的な機能 + +CRD、またはアグリゲートAPI、どちらを使ってカスタムリソースを作った場合でも、Kubernetesプラットフォーム外でAPIを実装するのに比べ、多数の機能が提供されます: + +| 機能 | 何を実現するか | +| ---- | -------------- | +| CRUD | 新しいエンドポイントが、HTTP、`kubectl`を通じて、基本的なCRUD処理をサポート | +| Watch | 新しいエンドポイントが、HTTPを通じて、KubernetesのWatch処理をサポート | +| Discovery | kubectlやダッシュボードのようなクライアントが、自動的にリソースの一覧表示、個別表示、フィールドの編集処理を提供 | +| json-patch | 新しいエンドポイントが`Content-Type: application/json-patch+json`を用いたPATCHをサポート | +| merge-patch | 新しいエンドポイントが`Content-Type: application/merge-patch+json`を用いたPATCHをサポート | +| HTTPS | 新しいエンドポイントがHTTPSを利用 | +| ビルトイン認証 | 拡張機能へのアクセスに認証のため、コアAPIサーバー(アグリゲーションレイヤー)を利用 | +| ビルトイン認可 | 拡張機能へのアクセスにコアAPIサーバーで使われている認可機構を再利用(例、RBAC) | +| ファイナライザー | 外部リソースの削除が終わるまで、拡張リソースの削除をブロック | +| Admission Webhooks | 拡張リソースの作成/更新/削除処理時に、デフォルト値の設定、バリデーションを実施 | +| UI/CLI 表示 | kubectl、ダッシュボードで拡張リソースを表示 | +| 未設定 vs 空設定 | クライアントは、フィールドの未設定とゼロ値を区別することができる | +| クライアントライブラリーの生成 | Kubernetesは、一般的なクライアントライブラリーと、タイプ固有のクライアントライブラリーを生成するツールを提供 | +| ラベルとアノテーション | ツールがコアリソースとカスタムリソースの編集方法を知っているオブジェクト間で、共通のメタデータを提供 | + +## カスタムリソースのインストール準備 + +クラスターにカスタムリソースを追加する前に、いくつか認識しておくべき事項があります。 + +### サードパーティのコードと新しい障害点 + +CRDを作成しても、勝手に新しい障害点が追加されてしまうことはありませんが(たとえば、サードパーティのコードをAPIサーバーで実行することによって)、パッケージ(たとえば、チャート)またはその他のインストールバンドルには、多くの場合、CRDと新しいカスタムリソースのビジネスロジックを実装するサードパーティコードが入ったDeploymentが含まれます。 + +アグリゲートAPIサーバーのインストールすると、常に新しいDeploymentが付いてきます。 + +### ストレージ + +カスタムリソースは、ConfigMapと同じ方法でストレージの容量を消費します。多数のカスタムリソースを作成すると、APIサーバーのストレージ容量を超えてしまうかもしれません。 + +アグリゲートAPIサーバーも、メインのAPIサーバーと同じストレージを利用するかもしれません。その場合、同じ問題が発生しえます。 + +### 認証、認可、そして監査 + +CRDでは、APIサーバーのビルトインリソースと同じ認証、認可、そして監査ロギングの仕組みを利用します。 + +もしRBACを使っている場合、ほとんどのRBACのロールは新しいリソースへのアクセスを許可しません。(クラスター管理者ロール、もしくはワイルドカードで作成されたロールを除く)新しいリソースには、明示的にアクセスを許可する必要があります。多くの場合、CRDおよびアグリゲートAPIには、追加するタイプの新しいロール定義がバンドルされています。 + +アグリゲートAPIサーバーでは、APIサーバーのビルトインリソースと同じ認証、認可、そして監査の仕組みを使う場合と使わない場合があります。 + +## カスタムリソースへのアクセス + +Kubernetesの[クライアントライブラリー](/docs/reference/using-api/client-libraries/)を使い、カスタムリソースにアクセスすることが可能です。全てのクライアントライブラリーがカスタムリソースをサポートしているわけでは無いですが、GoとPythonのライブラリーはサポートしています。 + +カスタムリソースは、下記のような方法で操作できます: + +- kubectl +- kubernetesの動的クライアント +- 自作のRESTクライアント +- [Kubernetesクライアント生成ツール](https://github.com/kubernetes/code-generator)を使い生成したクライアント(生成は高度な作業ですが、一部のプロジェクトは、CRDまたはAAとともにクライアントを提供する場合があります) + +{{% /capture %}} + +{{% capture whatsnext %}} + +* [Kubernetes APIをアグリゲーションレイヤーで拡張する方法](/ja/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)について学ぶ +* [Kubernetes APIをCustomResourceDefinitionで拡張する方法](/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/)について学ぶ + +{{% /capture %}} diff --git a/content/ja/docs/concepts/overview/_index.md b/content/ja/docs/concepts/overview/_index.md index 93a6320fa5da2..5bcc15f96bbf8 100755 --- 
a/content/ja/docs/concepts/overview/_index.md +++ b/content/ja/docs/concepts/overview/_index.md @@ -1,5 +1,4 @@ --- -title: "Overview" +title: "概要" weight: 20 --- - diff --git a/content/ja/docs/concepts/overview/components.md b/content/ja/docs/concepts/overview/components.md index 52f644b22f4cd..3824bde0c6d8c 100644 --- a/content/ja/docs/concepts/overview/components.md +++ b/content/ja/docs/concepts/overview/components.md @@ -8,7 +8,15 @@ card: --- {{% capture overview %}} +Kubernetesをデプロイすると、クラスターが展開されます。 +{{< glossary_definition term_id="cluster" length="all" prepend="クラスターは、">}} + このドキュメントでは、Kubernetesクラスターが機能するために必要となるさまざまなコンポーネントの概要を説明します。 + +すべてのコンポーネントが結び付けられたKubernetesクラスターの図を次に示します。 + +![Kubernetesのコンポーネント](/images/docs/components-of-kubernetes.png) + {{% /capture %}} {{% capture body %}} @@ -106,7 +114,8 @@ Kubernetesによって開始されたコンテナは、DNS検索にこのDNSサ {{% /capture %}} {{% capture whatsnext %}} -* [ノード](/docs/concepts/architecture/nodes/) について学ぶ -* [kube-scheduler](/docs/concepts/scheduling/kube-scheduler/) について学ぶ -* etcdの公式 [ドキュメント](https://etcd.io/docs/) を読む +* [ノード](/ja/docs/concepts/architecture/nodes/)について学ぶ +* [コントローラー](/docs/concepts/architecture/controller/)について学ぶ +* [kube-scheduler](/ja/docs/concepts/scheduling/kube-scheduler/)について学ぶ +* etcdの公式 [ドキュメント](https://etcd.io/docs/)を読む {{% /capture %}} diff --git a/content/ja/docs/concepts/overview/kubernetes-api.md b/content/ja/docs/concepts/overview/kubernetes-api.md index 43fe3feb9dad4..01a41309de61d 100644 --- a/content/ja/docs/concepts/overview/kubernetes-api.md +++ b/content/ja/docs/concepts/overview/kubernetes-api.md @@ -67,7 +67,7 @@ APIが、システムリソースと動作について明確かつ一貫した APIとソフトウエアのバージョニングは、間接的にしか関連していないことに注意してください。[APIとリリースバージョニング提案](https://git.k8s.io/community/contributors/design-proposals/release/versioning.md)で、APIとソフトウェアのバージョニングの関連について記載しています。 -異なるバージョンのAPIは、異なるレベル(版)の安定性とサポートを持っています。それぞれのレベル(版)の基準は、[API変更ドキュメント](https://git.k8s.io/community/contributors/devel/api_changes.md#alpha-beta-and-stable-versions)に詳細が記載されています。下記に簡潔にまとめます: +異なるバージョンのAPIは、異なるレベル(版)の安定性とサポートを持っています。それぞれのレベル(版)の基準は、[API変更ドキュメント](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions)に詳細が記載されています。下記に簡潔にまとめます: - アルファレベル(版): - バージョン名に`alpha`を含みます(例、`v1alpha1`)。 diff --git a/content/ja/docs/concepts/overview/working-with-objects/_index.md b/content/ja/docs/concepts/overview/working-with-objects/_index.md index 8661349a3fbc4..d4a9f2e6b6d07 100755 --- a/content/ja/docs/concepts/overview/working-with-objects/_index.md +++ b/content/ja/docs/concepts/overview/working-with-objects/_index.md @@ -1,5 +1,5 @@ --- -title: "Working with Kubernetes Objects" +title: "Kubernetesのオブジェクトについて" weight: 40 --- diff --git a/content/ja/docs/concepts/scheduling/scheduler-perf-tuning.md b/content/ja/docs/concepts/scheduling/scheduler-perf-tuning.md new file mode 100644 index 0000000000000..8843138e73243 --- /dev/null +++ b/content/ja/docs/concepts/scheduling/scheduler-perf-tuning.md @@ -0,0 +1,74 @@ +--- +title: スケジューラーのパフォーマンスチューニング +content_template: templates/concept +weight: 70 +--- + +{{% capture overview %}} + +{{< feature-state for_k8s_version="1.14" state="beta" >}} + +[kube-scheduler](/docs/concepts/scheduling/kube-scheduler/#kube-scheduler)はKubernetesのデフォルトのスケジューラーです。クラスター内のノード上にPodを割り当てる責務があります。 + +クラスター内に存在するノードで、Podのスケジューリング要求を満たすものはPodに対して_割り当て可能_ なノードと呼ばれます。スケジューラーはPodに対する割り当て可能なノードをみつけ、それらの割り当て可能なノードにスコアをつけます。その中から最も高いスコアのノードを選択し、Podに割り当てるためのいくつかの関数を実行します。スケジューラーは_Binding_ 
と呼ばれる処理中において、APIサーバーに対して割り当てが決まったノードの情報を通知します。 + +このページでは、大規模のKubernetesクラスターにおけるパフォーマンス最適化のためのチューニングについて説明します。 + +{{% /capture %}} + +{{% capture body %}} + +## スコア付けするノードの割合 + +Kubernetes 1.12以前では、Kube-schedulerがクラスター内の全てのノードに対して割り当て可能かをチェックし、実際に割り当て可能なノードのスコア付けをしていました。Kubernetes 1.12では新機能を追加し、ある数の割り当て可能なノードが見つかった時点で、割り当て可能なノードの探索を止めれるようになりました。これにより大規模なクラスターにおけるスケジューラーのパフォーマンスが向上しました。その数はクラスターのサイズの割合(%)として指定されます。この割合は`percentageOfNodesToScore`というオプションの設定項目によって指定可能です。この値の範囲は1から100までです。100より大きい値は100%として扱われます。0を指定したときは、この設定オプションを指定しないものとして扱われます。Kubernetes 1.14では、この値が指定されていないときは、スコア付けするノードの割合をクラスターのサイズに基づいて決定するための機構があります。この機構では100ノードのクラスターに対しては50%の割合とするような線形な式を使用します。5000ノードのクラスターに対しては10%となります。自動で算出される割合の最低値は5%となります。言い換えると、クラスターの規模がどれだけ大きくても、ユーザーがこの値を5未満に設定しない限りスケジューラーは少なくても5%のクラスター内のノードをスコア付けすることになります。 + +`percentageOfNodesToScore`の値を50%に設定する例は下記のとおりです。 + +```yaml +apiVersion: kubescheduler.config.k8s.io/v1alpha1 +kind: KubeSchedulerConfiguration +algorithmSource: + provider: DefaultProvider + +... + +percentageOfNodesToScore: 50 +``` + +{{< note >}} +割り当て可能なノードが50未満のクラスターにおいては、割り当て可能なノードの探索を止めるほどノードが多くないため、スケジューラーは全てのノードをチェックします。 +{{< /note >}} + +**この機能を無効にするためには**、`percentageOfNodesToScore`を100に設定してください。 + + +### percentageOfNodesToScoreのチューニング + +`percentageOfNodesToScore`は1から100の間の範囲である必要があり、デフォルト値はクラスターのサイズに基づいて計算されます。また、クラスターのサイズの最小値は50ノードとハードコードされています。これは数百のノードを持つようなクラスターにおいてこの値を50より低い値に変更しても、スケジューラーが検出する割り当て可能なノードの数に大きな影響を与えないことを意味します。このオプションは意図的なものです。その理由としては、小規模のクラスターにおいてパフォーマンスを著しく改善する可能性が低いためです。1000ノードを超える大規模なクラスターでこの値を低く設定すると、パフォーマンスが著しく改善される可能性があります。 + +この値を設定する際に考慮するべき重要な注意事項として、割り当て可能ノードのチェック対象のノードが少ないと、一部のノードはPodの割り当てのためにスコアリングされなくなります。結果として、高いスコアをつけられる可能性のあるノードがスコアリングフェーズに渡されることがありません。これにより、Podの配置が理想的なものでなくなります。したがって、この値をかなり低い割合に設定すべきではありません。一般的な経験則として、この値を10未満に設定しないことです。スケジューラーのスループットがアプリケーションにとって致命的で、ノードのスコアリングが重要でないときのみ、この値を低く設定するべきです。言いかえると、割り当て可能な限り、Podは任意のノード上で稼働させるのが好ましいです。 + +クラスターが数百のノードを持つ場合やそれに満たない場合でも、この設定オプションのデフォルト値を低くするのを推奨しません。デフォルト値を低くしてもスケジューラーのパフォーマンスを大幅に改善することはありません。 + +### スケジューラーはどのようにノードを探索するか + +このセクションでは、この機能の内部の詳細を理解したい人向けになります。 + +クラスター内の全てのノードに対して平等にPodの割り当ての可能性を持たせるため、スケジューラーはラウンドロビン方式でノードを探索します。複数のノードの配列になっているイメージです。スケジューラーはその配列の先頭から探索を開始し、`percentageOfNodesToScore`によって指定された数のノードを検出するまで、割り当て可能かどうかをチェックしていきます。次のPodでは、スケジューラーは前のPodの割り当て処理でチェックしたところから探索を再開します。 + +ノードが複数のゾーンに存在するとき、スケジューラーは様々なゾーンのノードを探索して、異なるゾーンのノードが割り当て可能かどうかのチェック対象になるようにします。例えば2つのゾーンに6つのノードがある場合を考えます。 + +``` +Zone 1: Node 1, Node 2, Node 3, Node 4 +Zone 2: Node 5, Node 6 +``` + +スケジューラーは、下記の順番でノードの割り当て可能性を評価します。 + +``` +Node 1, Node 5, Node 2, Node 6, Node 3, Node 4 +``` + +全てのノードのチェックを終えたら、1番目のノードに戻ってチェックをします。 + +{{% /capture %}} diff --git a/content/ja/docs/concepts/services-networking/connect-applications-service.md b/content/ja/docs/concepts/services-networking/connect-applications-service.md new file mode 100644 index 0000000000000..1b3ac2e810ff5 --- /dev/null +++ b/content/ja/docs/concepts/services-networking/connect-applications-service.md @@ -0,0 +1,420 @@ +--- +title: サービスとアプリケーションの接続 +content_template: templates/concept +weight: 30 +--- + + +{{% capture overview %}} + +## コンテナを接続するためのKubernetesモデル + +継続的に実行され、複製されたアプリケーションの準備ができたので、ネットワーク上で公開することが可能になります。 +Kubernetesのネットワークのアプローチについて説明する前に、Dockerの「通常の」ネットワーク手法と比較することが重要です。 + +デフォルトでは、Dockerはホストプライベートネットワーキングを使用するため、コンテナは同じマシン上にある場合にのみ他のコンテナと通信できます。 +Dockerコンテナがノード間で通信するには、マシンのIPアドレスにポートを割り当ててから、コンテナに転送またはプロキシする必要があります。 +これは明らかに、コンテナが使用するポートを非常に慎重に調整するか、ポートを動的に割り当てる必要があることを意味します。 + 
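たとえばKubernetesを使わずDocker単体でコンテナを別のノードから到達可能にする場合、次のようにホスト側のポートを明示的に割り当てて公開する必要があります(ポート番号`8080`やイメージ名`nginx`は説明用の仮の値です)。

```shell
# ホストのポート8080をコンテナのポート80に割り当てて公開する
docker run -d -p 8080:80 nginx

# 別のノードからは、コンテナのIPではなくホストのIPと割り当てたポートを経由してアクセスする
curl http://<ホストのIPアドレス>:8080
```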
+複数の開発者間でポートを調整することは大規模に行うことは非常に難しく、ユーザーが制御できないクラスターレベルの問題にさらされます。 +Kubernetesでは、どのホストで稼働するかに関わらず、Podが他のPodと通信できると想定しています。 +すべてのPodに独自のクラスタープライベートIPアドレスを付与するため、Pod間のリンクを明示的に作成したり、コンテナポートをホストポートにマップしたりする必要はありません。 +これは、Pod内のコンテナがすべてlocalhostの相互のポートに到達でき、クラスター内のすべてのPodがNATなしで相互に認識できることを意味します。 +このドキュメントの残りの部分では、このようなネットワークモデルで信頼できるサービスを実行する方法について詳しく説明します。 + +このガイドでは、シンプルなnginxサーバーを使用して概念実証を示します。 +同じ原則が、より完全な[Jenkins CIアプリケーション](https://kubernetes.io/blog/2015/07/strong-simple-ssl-for-kubernetes)で具体化されています。 + +{{% /capture %}} + +{{% capture body %}} + +## Podをクラスターに公開する + +前の例でネットワークモデルを紹介しましたが、再度ネットワークの観点に焦点を当てましょう。 +nginx Podを作成し、コンテナポートの仕様を指定していることに注意してください。 + +{{< codenew file="service/networking/run-my-nginx.yaml" >}} + +これにより、クラスター内のどのノードからでもアクセスできるようになります。 +Podが実行されているノードを確認します: + +```shell +kubectl apply -f ./run-my-nginx.yaml +kubectl get pods -l run=my-nginx -o wide +``` +``` +NAME READY STATUS RESTARTS AGE IP NODE +my-nginx-3800858182-jr4a2 1/1 Running 0 13s 10.244.3.4 kubernetes-minion-905m +my-nginx-3800858182-kna2y 1/1 Running 0 13s 10.244.2.5 kubernetes-minion-ljyd +``` + +PodのIPを確認します: + +```shell +kubectl get pods -l run=my-nginx -o yaml | grep podIP + podIP: 10.244.3.4 + podIP: 10.244.2.5 +``` + +クラスター内の任意のノードにSSH接続し、両方のIPにcurl接続できるはずです。 +コンテナはノードでポート80を使用**していない**ことに注意してください。 +また、Podにトラフィックをルーティングする特別なNATルールもありません。 +つまり、同じcontainerPortを使用して同じノードで複数のnginx Podを実行し、IPを使用してクラスター内の他のPodやノードからそれらにアクセスできます。 +Dockerと同様に、ポートは引き続きホストノードのインターフェイスに公開できますが、ネットワークモデルにより、この必要性は根本的に減少します。 + +興味があれば、これを[どのように達成するか](/docs/concepts/cluster-administration/networking/#how-to-achieve-this)について詳しく読むことができます。 + +## Serviceを作成する + +そのため、フラットでクラスター全体のアドレス空間でnginxを実行するPodがあります。 +理論的には、これらのPodと直接通信することができますが、ノードが停止するとどうなりますか? +Podはそれで死に、Deploymentは異なるIPを持つ新しいものを作成します。 +これは、Serviceが解決する問題です。 + +Kubernetes Serviceは、クラスター内のどこかで実行されるPodの論理セットを定義する抽象化であり、すべて同じ機能を提供します。 +作成されると、各Serviceには一意のIPアドレス(clusterIPとも呼ばれます)が割り当てられます。 +このアドレスはServiceの有効期間に関連付けられており、Serviceが動作している間は変更されません。 +Podは、Serviceと通信するように構成でき、Serviceへの通信は、ServiceのメンバーであるPodに自動的に負荷分散されることを認識できます。 + +2つのnginxレプリカのサービスを`kubectl exposed`で作成できます: + +```shell +kubectl expose deployment/my-nginx +``` +``` +service/my-nginx exposed +``` + +これは次のyamlを`kubectl apply -f`することと同等です: + +{{< codenew file="service/networking/nginx-svc.yaml" >}} + +この仕様は、`run:my-nginx`ラベルを持つ任意のPodのTCPポート80をターゲットとするサービスを作成し、抽象化されたサービスポートでPodを公開します(`targetPort`:はコンテナがトラフィックを受信するポート、`port`:は抽象化されたServiceのポートであり、他のPodがServiceへのアクセスに使用する任意のポートにすることができます)。 +サービス定義でサポートされているフィールドのリストは[Service](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#service-v1-core) APIオブジェクトを参照してください。 + +Serviceを確認します: + +```shell +kubectl get svc my-nginx +``` +``` +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +my-nginx ClusterIP 10.0.162.149 80/TCP 21s +``` + +前述のように、ServiceはPodのグループによってサポートされています。 +これらのPodはエンドポイントを通じて公開されます。 +Serviceのセレクターは継続的に評価され、結果は`my-nginx`という名前のEndpointオブジェクトにPOSTされます。 +Podが終了すると、エンドポイントから自動的に削除され、Serviceのセレクターに一致する新しいPodが自動的にエンドポイントに追加されます。 +エンドポイントを確認し、IPが最初のステップで作成されたPodと同じであることを確認します: + +```shell +kubectl describe svc my-nginx +``` +``` +Name: my-nginx +Namespace: default +Labels: run=my-nginx +Annotations: +Selector: run=my-nginx +Type: ClusterIP +IP: 10.0.162.149 +Port: 80/TCP +Endpoints: 10.244.2.5:80,10.244.3.4:80 +Session Affinity: None +Events: +``` +```shell +kubectl get ep my-nginx +``` +``` +NAME ENDPOINTS AGE +my-nginx 10.244.2.5:80,10.244.3.4:80 1m +``` + +クラスター内の任意のノードから、`:`でnginx Serviceにcurl接続できるようになりました。 +Service 
IPは完全に仮想的なもので、ホスト側のネットワークには接続できないことに注意してください。 +この仕組みに興味がある場合は、[サービスプロキシー](/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies)の詳細をお読みください。 + +## Serviceにアクセスする + +Kubernetesは、環境変数とDNSの2つの主要なService検索モードをサポートしています。 +前者はそのまま使用でき、後者は[CoreDNSクラスタアドオン](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/coredns)を必要とします。 +{{< note >}} +サービス環境変数が望ましくない場合(予想されるプログラム変数と衝突する可能性がある、処理する変数が多すぎる、DNSのみを使用するなど)、[Pod仕様](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core)で`enableServiceLinks`フラグを`false`に設定することでこのモードを無効にできます。 +{{< /note >}} + + +### 環境変数 + +ノードでPodが実行されると、kubeletはアクティブな各サービスの環境変数のセットを追加します。 +これにより、順序付けの問題が発生します。 +理由を確認するには、実行中のnginx Podの環境を調べます(Pod名は環境によって異なります): + +```shell +kubectl exec my-nginx-3800858182-jr4a2 -- printenv | grep SERVICE +``` +``` +KUBERNETES_SERVICE_HOST=10.0.0.1 +KUBERNETES_SERVICE_PORT=443 +KUBERNETES_SERVICE_PORT_HTTPS=443 +``` + +サービスに言及がないことに注意してください。これは、サービスの前にレプリカを作成したためです。 +これのもう1つの欠点は、スケジューラーが両方のPodを同じマシンに配置し、サービスが停止した場合にサービス全体がダウンする可能性があることです。 +2つのPodを強制終了し、Deploymentがそれらを再作成するのを待つことで、これを正しい方法で実行できます。 +今回は、サービスはレプリカの「前」に存在します。 +これにより、スケジューラーレベルのサービスがPodに広がり(すべてのノードの容量が等しい場合)、適切な環境変数が提供されます: + +```shell +kubectl scale deployment my-nginx --replicas=0; kubectl scale deployment my-nginx --replicas=2; + +kubectl get pods -l run=my-nginx -o wide +``` +``` +NAME READY STATUS RESTARTS AGE IP NODE +my-nginx-3800858182-e9ihh 1/1 Running 0 5s 10.244.2.7 kubernetes-minion-ljyd +my-nginx-3800858182-j4rm4 1/1 Running 0 5s 10.244.3.8 kubernetes-minion-905m +``` + +Podは強制終了されて再作成されるため、異なる名前が付いていることに気付くでしょう。 + +```shell +kubectl exec my-nginx-3800858182-e9ihh -- printenv | grep SERVICE +``` +``` +KUBERNETES_SERVICE_PORT=443 +MY_NGINX_SERVICE_HOST=10.0.162.149 +KUBERNETES_SERVICE_HOST=10.0.0.1 +MY_NGINX_SERVICE_PORT=80 +KUBERNETES_SERVICE_PORT_HTTPS=443 +``` + +### DNS + +Kubernetesは、DNS名を他のServiceに自動的に割り当てるDNSクラスターアドオンサービスを提供します。 +クラスターで実行されているかどうかを確認できます: + +```shell +kubectl get services kube-dns --namespace=kube-system +``` +``` +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +kube-dns ClusterIP 10.0.0.10 53/UDP,53/TCP 8m +``` + +実行されていない場合は、[有効にする](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/kube-dns/README.md#how-do-i-configure-it)ことができます。 +このセクションの残りの部分では、寿命の長いIP(my-nginx)を持つServiceと、そのIPに名前を割り当てたDNSサーバー(CoreDNSクラスターアドオン)があることを前提としているため、標準的な方法(gethostbynameなど)を使用してクラスター内の任意のPodからServiceに通信できます。 +curlアプリケーションを実行して、これをテストしてみましょう: + +```shell +kubectl run curl --image=radial/busyboxplus:curl -i --tty +``` +``` +Waiting for pod default/curl-131556218-9fnch to be running, status is Pending, pod ready: false +Hit enter for command prompt +``` + +次に、Enterキーを押して`nslookup my-nginx`を実行します: + +```shell +[ root@curl-131556218-9fnch:/ ]$ nslookup my-nginx +Server: 10.0.0.10 +Address 1: 10.0.0.10 + +Name: my-nginx +Address 1: 10.0.162.149 +``` + +## Serviceを安全にする + +これまでは、クラスター内からnginxサーバーにアクセスしただけでした。 +サービスをインターネットに公開する前に、通信チャネルが安全であることを確認する必要があります。 +これには、次のものが必要です: + +* https用の自己署名証明書(既にID証明書を持っている場合を除く) +* 証明書を使用するように構成されたnginxサーバー +* Podが証明書にアクセスできるようにする[Secret](/docs/concepts/configuration/secret/) + +これらはすべて[nginx httpsの例](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/https-nginx/)から取得できます。 +これにはツールをインストールする必要があります。 +これらをインストールしたくない場合は、後で手動の手順に従ってください。つまり: + +```shell +make keys KEY=/tmp/nginx.key CERT=/tmp/nginx.crt +kubectl create secret tls nginxsecret --key /tmp/nginx.key --cert /tmp/nginx.crt +``` +``` +secret/nginxsecret created +``` +```shell 
+kubectl get secrets +``` +``` +NAME TYPE DATA AGE +default-token-il9rc kubernetes.io/service-account-token 1 1d +nginxsecret Opaque 2 1m +``` +以下は、(Windows上など)makeの実行で問題が発生した場合に実行する手動の手順です: + +```shell +# 公開秘密鍵ペアを作成します +openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /d/tmp/nginx.key -out /d/tmp/nginx.crt -subj "/CN=my-nginx/O=my-nginx" +# キーをbase64エンコードに変換します +cat /d/tmp/nginx.crt | base64 +cat /d/tmp/nginx.key | base64 +``` +前のコマンドの出力を使用して、次のようにyamlファイルを作成します。 +base64でエンコードされた値はすべて1行である必要があります。 + +```yaml +apiVersion: "v1" +kind: "Secret" +metadata: + name: "nginxsecret" + namespace: "default" +data: + nginx.crt: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURIekNDQWdlZ0F3SUJBZ0lKQUp5M3lQK0pzMlpJTUEwR0NTcUdTSWIzRFFFQkJRVUFNQ1l4RVRBUEJnTlYKQkFNVENHNW5hVzU0YzNaak1SRXdEd1lEVlFRS0V3aHVaMmx1ZUhOMll6QWVGdzB4TnpFd01qWXdOekEzTVRKYQpGdzB4T0RFd01qWXdOekEzTVRKYU1DWXhFVEFQQmdOVkJBTVRDRzVuYVc1NGMzWmpNUkV3RHdZRFZRUUtFd2h1CloybHVlSE4yWXpDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBSjFxSU1SOVdWM0IKMlZIQlRMRmtobDRONXljMEJxYUhIQktMSnJMcy8vdzZhU3hRS29GbHlJSU94NGUrMlN5ajBFcndCLzlYTnBwbQppeW1CL3JkRldkOXg5UWhBQUxCZkVaTmNiV3NsTVFVcnhBZW50VWt1dk1vLzgvMHRpbGhjc3paenJEYVJ4NEo5Ci82UVRtVVI3a0ZTWUpOWTVQZkR3cGc3dlVvaDZmZ1Voam92VG42eHNVR0M2QURVODBpNXFlZWhNeVI1N2lmU2YKNHZpaXdIY3hnL3lZR1JBRS9mRTRqakxCdmdONjc2SU90S01rZXV3R0ljNDFhd05tNnNTSzRqYUNGeGpYSnZaZQp2by9kTlEybHhHWCtKT2l3SEhXbXNhdGp4WTRaNVk3R1ZoK0QrWnYvcW1mMFgvbVY0Rmo1NzV3ajFMWVBocWtsCmdhSXZYRyt4U1FVQ0F3RUFBYU5RTUU0d0hRWURWUjBPQkJZRUZPNG9OWkI3YXc1OUlsYkROMzhIYkduYnhFVjcKTUI4R0ExVWRJd1FZTUJhQUZPNG9OWkI3YXc1OUlsYkROMzhIYkduYnhFVjdNQXdHQTFVZEV3UUZNQU1CQWY4dwpEUVlKS29aSWh2Y05BUUVGQlFBRGdnRUJBRVhTMW9FU0lFaXdyMDhWcVA0K2NwTHI3TW5FMTducDBvMm14alFvCjRGb0RvRjdRZnZqeE04Tzd2TjB0clcxb2pGSW0vWDE4ZnZaL3k4ZzVaWG40Vm8zc3hKVmRBcStNZC9jTStzUGEKNmJjTkNUekZqeFpUV0UrKzE5NS9zb2dmOUZ3VDVDK3U2Q3B5N0M3MTZvUXRUakViV05VdEt4cXI0Nk1OZWNCMApwRFhWZmdWQTRadkR4NFo3S2RiZDY5eXM3OVFHYmg5ZW1PZ05NZFlsSUswSGt0ejF5WU4vbVpmK3FqTkJqbWZjCkNnMnlwbGQ0Wi8rUUNQZjl3SkoybFIrY2FnT0R4elBWcGxNSEcybzgvTHFDdnh6elZPUDUxeXdLZEtxaUMwSVEKQ0I5T2wwWW5scE9UNEh1b2hSUzBPOStlMm9KdFZsNUIyczRpbDlhZ3RTVXFxUlU9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K" + nginx.key: 
"LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ2RhaURFZlZsZHdkbFIKd1V5eFpJWmVEZWNuTkFhbWh4d1NpeWF5N1AvOE9ta3NVQ3FCWmNpQ0RzZUh2dGtzbzlCSzhBZi9WemFhWm9zcApnZjYzUlZuZmNmVUlRQUN3WHhHVFhHMXJKVEVGSzhRSHA3VkpMcnpLUC9QOUxZcFlYTE0yYzZ3MmtjZUNmZitrCkU1bEVlNUJVbUNUV09UM3c4S1lPNzFLSWVuNEZJWTZMMDUrc2JGQmd1Z0ExUE5JdWFubm9UTWtlZTRuMG4rTDQKb3NCM01ZUDhtQmtRQlAzeE9JNHl3YjREZXUraURyU2pKSHJzQmlIT05Xc0RadXJFaXVJMmdoY1kxeWIyWHI2UAozVFVOcGNSbC9pVG9zQngxcHJHclk4V09HZVdPeGxZZmcvbWIvNnBuOUYvNWxlQlkrZStjSTlTMkQ0YXBKWUdpCkwxeHZzVWtGQWdNQkFBRUNnZ0VBZFhCK0xkbk8ySElOTGo5bWRsb25IUGlHWWVzZ294RGQwci9hQ1Zkank4dlEKTjIwL3FQWkUxek1yall6Ry9kVGhTMmMwc0QxaTBXSjdwR1lGb0xtdXlWTjltY0FXUTM5SjM0VHZaU2FFSWZWNgo5TE1jUHhNTmFsNjRLMFRVbUFQZytGam9QSFlhUUxLOERLOUtnNXNrSE5pOWNzMlY5ckd6VWlVZWtBL0RBUlBTClI3L2ZjUFBacDRuRWVBZmI3WTk1R1llb1p5V21SU3VKdlNyblBESGtUdW1vVlVWdkxMRHRzaG9reUxiTWVtN3oKMmJzVmpwSW1GTHJqbGtmQXlpNHg0WjJrV3YyMFRrdWtsZU1jaVlMbjk4QWxiRi9DSmRLM3QraTRoMTVlR2ZQegpoTnh3bk9QdlVTaDR2Q0o3c2Q5TmtEUGJvS2JneVVHOXBYamZhRGR2UVFLQmdRRFFLM01nUkhkQ1pKNVFqZWFKClFGdXF4cHdnNzhZTjQyL1NwenlUYmtGcVFoQWtyczJxWGx1MDZBRzhrZzIzQkswaHkzaE9zSGgxcXRVK3NHZVAKOWRERHBsUWV0ODZsY2FlR3hoc0V0L1R6cEdtNGFKSm5oNzVVaTVGZk9QTDhPTm1FZ3MxMVRhUldhNzZxelRyMgphRlpjQ2pWV1g0YnRSTHVwSkgrMjZnY0FhUUtCZ1FEQmxVSUUzTnNVOFBBZEYvL25sQVB5VWs1T3lDdWc3dmVyClUycXlrdXFzYnBkSi9hODViT1JhM05IVmpVM25uRGpHVHBWaE9JeXg5TEFrc2RwZEFjVmxvcG9HODhXYk9lMTAKMUdqbnkySmdDK3JVWUZiRGtpUGx1K09IYnRnOXFYcGJMSHBzUVpsMGhucDBYSFNYVm9CMUliQndnMGEyOFVadApCbFBtWmc2d1BRS0JnRHVIUVV2SDZHYTNDVUsxNFdmOFhIcFFnMU16M2VvWTBPQm5iSDRvZUZKZmcraEppSXlnCm9RN3hqWldVR3BIc3AyblRtcHErQWlSNzdyRVhsdlhtOElVU2FsbkNiRGlKY01Pc29RdFBZNS9NczJMRm5LQTQKaENmL0pWb2FtZm1nZEN0ZGtFMXNINE9MR2lJVHdEbTRpb0dWZGIwMllnbzFyb2htNUpLMUI3MkpBb0dBUW01UQpHNDhXOTVhL0w1eSt5dCsyZ3YvUHM2VnBvMjZlTzRNQ3lJazJVem9ZWE9IYnNkODJkaC8xT2sybGdHZlI2K3VuCnc1YytZUXRSTHlhQmd3MUtpbGhFZDBKTWU3cGpUSVpnQWJ0LzVPbnlDak9OVXN2aDJjS2lrQ1Z2dTZsZlBjNkQKckliT2ZIaHhxV0RZK2Q1TGN1YSt2NzJ0RkxhenJsSlBsRzlOZHhrQ2dZRUF5elIzT3UyMDNRVVV6bUlCRkwzZAp4Wm5XZ0JLSEo3TnNxcGFWb2RjL0d5aGVycjFDZzE2MmJaSjJDV2RsZkI0VEdtUjZZdmxTZEFOOFRwUWhFbUtKCnFBLzVzdHdxNWd0WGVLOVJmMWxXK29xNThRNTBxMmk1NVdUTThoSDZhTjlaMTltZ0FGdE5VdGNqQUx2dFYxdEYKWSs4WFJkSHJaRnBIWll2NWkwVW1VbGc9Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K" +``` +ファイルを使用してSecretを作成します: + +```shell +kubectl apply -f nginxsecrets.yaml +kubectl get secrets +``` +``` +NAME TYPE DATA AGE +default-token-il9rc kubernetes.io/service-account-token 1 1d +nginxsecret Opaque 2 1m +``` + +次に、nginxレプリカを変更して、シークレットの証明書とServiceを使用してhttpsサーバーを起動し、両方のポート(80と443)を公開します: + +{{< codenew file="service/networking/nginx-secure-app.yaml" >}} + +nginx-secure-appマニフェストに関する注目すべき点: + +- 同じファイルにDeploymentとServiceの両方が含まれています。 +- [nginxサーバー](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/https-nginx/default.conf)はポート80のHTTPトラフィックと443のHTTPSトラフィックを処理し、nginx Serviceは両方のポートを公開します。 +- 各コンテナは`/etc/nginx/ssl`にマウントされたボリュームを介してキーにアクセスできます。 + これは、nginxサーバーが起動する*前に*セットアップされます。 + +```shell +kubectl delete deployments,svc my-nginx; kubectl create -f ./nginx-secure-app.yaml +``` + +この時点で、任意のノードからnginxサーバーに到達できます。 + +```shell +kubectl get pods -o yaml | grep -i podip + podIP: 10.244.3.5 +node $ curl -k https://10.244.3.5 +... +
<h1>Welcome to nginx!</h1>
+``` + +最後の手順でcurlに`-k`パラメーターを指定したことに注意してください。 +これは、証明書の生成時にnginxを実行しているPodについて何も知らないためです。 +CNameの不一致を無視するようcurlに指示する必要があります。 +Serviceを作成することにより、証明書で使用されるCNameを、Service検索中にPodで使用される実際のDNS名にリンクしました。 +これをPodからテストしましょう(簡単にするために同じシークレットを再利用しています。PodはServiceにアクセスするためにnginx.crtのみを必要とします): + +{{< codenew file="service/networking/curlpod.yaml" >}} + +```shell +kubectl apply -f ./curlpod.yaml +kubectl get pods -l app=curlpod +``` +``` +NAME READY STATUS RESTARTS AGE +curl-deployment-1515033274-1410r 1/1 Running 0 1m +``` +```shell +kubectl exec curl-deployment-1515033274-1410r -- curl https://my-nginx --cacert /etc/nginx/ssl/nginx.crt +... +Welcome to nginx! +... +``` + +## Serviceを公開する + +アプリケーションの一部では、Serviceを外部IPアドレスに公開したい場合があります。 +Kubernetesは、NodePortとLoadBalancerの2つの方法をサポートしています。 +前のセクションで作成したServiceはすでに`NodePort`を使用しているため、ノードにパブリックIPがあれば、nginx HTTPSレプリカはインターネット上のトラフィックを処理する準備ができています。 + +```shell +kubectl get svc my-nginx -o yaml | grep nodePort -C 5 + uid: 07191fb3-f61a-11e5-8ae5-42010af00002 +spec: + clusterIP: 10.0.162.149 + ports: + - name: http + nodePort: 31704 + port: 8080 + protocol: TCP + targetPort: 80 + - name: https + nodePort: 32453 + port: 443 + protocol: TCP + targetPort: 443 + selector: + run: my-nginx +``` +```shell +kubectl get nodes -o yaml | grep ExternalIP -C 1 + - address: 104.197.41.11 + type: ExternalIP + allocatable: +-- + - address: 23.251.152.56 + type: ExternalIP + allocatable: +... + +$ curl https://: -k +... +
<h1>Welcome to nginx!</h1>
+``` + +クラウドロードバランサーを使用するようにサービスを再作成しましょう。 +`my-nginx`サービスの`Type`を`NodePort`から`LoadBalancer`に変更するだけです: + +```shell +kubectl edit svc my-nginx +kubectl get svc my-nginx +``` +``` +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +my-nginx ClusterIP 10.0.162.149 162.222.184.144 80/TCP,81/TCP,82/TCP 21s +``` +``` +curl https:// -k +... +Welcome to nginx! +``` + +`EXTERNAL-IP`列のIPアドレスは、パブリックインターネットで利用可能なものです。 +`CLUSTER-IP`は、クラスター/プライベートクラウドネットワーク内でのみ使用できます。 + +AWSでは、type `LoadBalancer`はIPではなく(長い)ホスト名を使用するELBが作成されます。 +実際、標準の`kubectl get svc`の出力に収まるには長すぎるので、それを確認するには`kubectl describe service my-nginx`を実行する必要があります。 +次のようなものが表示されます: + +```shell +kubectl describe service my-nginx +... +LoadBalancer Ingress: a320587ffd19711e5a37606cf4a74574-1142138393.us-east-1.elb.amazonaws.com +... +``` + +{{% /capture %}} + +{{% capture whatsnext %}} + +Kubernetesは、複数のクラスターおよびクラウドプロバイダーにまたがるフェデレーションサービスもサポートし、可用性の向上、フォールトトレランスの向上、サービスのスケーラビリティの向上を実現します。 +詳細については[フェデレーションサービスユーザーガイド](/docs/concepts/cluster-administration/federation-service-discovery/)を参照してください。 + +{{% /capture %}} diff --git a/content/ja/docs/concepts/services-networking/ingress.md b/content/ja/docs/concepts/services-networking/ingress.md new file mode 100644 index 0000000000000..d71b87f3e25f9 --- /dev/null +++ b/content/ja/docs/concepts/services-networking/ingress.md @@ -0,0 +1,403 @@ +--- +title: Ingress +content_template: templates/concept +weight: 40 +--- + +{{% capture overview %}} +{{< feature-state for_k8s_version="v1.1" state="beta" >}} +{{< glossary_definition term_id="ingress" length="all" >}} +{{% /capture %}} + +{{% capture body %}} + +## 用語 + +まずわかりやすくするために、このガイドでは次の用語を定義します。 + +- ノード: Kubernetes内のワーカーマシンで、クラスターの一部です。 + +- クラスター: Kubernetesによって管理されているコンテナ化されたアプリケーションを実行させるノードのセットです。この例や、多くのKubernetesによるデプロイでは、クラスター内のノードはパブリックインターネットとして公開されていません。 + +- エッジルーター: クラスターでファイアウォールのポリシーを強制するルーターです。エッジルーターはクラウドプロバイダーやハードウェアの物理的な一部として管理されたゲートウェイとなります。 + +- クラスターネットワーク: 物理的または論理的なリンクのセットで、Kubernetesの[ネットワークモデル](/docs/concepts/cluster-administration/networking/)によって、クラスター内でのコミュニケーションを司るものです。 + +- Service: {{< glossary_tooltip text="ラベル" term_id="label" >}}セレクターを使ったPodのセットを特定するKubernetes {{< glossary_tooltip term_id="service" >}}です。特に言及がない限り、Serviceはクラスターネットワーク内でのみ疎通可能な仮想IPを持つと想定されます。 + +## Ingressとは何か + +Ingressはクラスター外からクラスター内{{< link text="Service" url="/docs/concepts/services-networking/service/" >}}へのHTTPとHTTPSのルートを公開します。トラフィックのルーティングはIngressリソース上で定義されるルールによって制御されます。 + +```none + internet + | + [ Ingress ] + --|-----|-- + [ Services ] +``` + +IngressはServiceに対して、外部疎通できるURL、負荷分散トラフィック、SSL/TLS終端の機能や、名前ベースの仮想ホスティングを提供するように構成できます。[Ingressコントローラー](/docs/concepts/services-networking/ingress-controllers)は通常はロードバランサーを使用してIngressの機能を実現しますが、エッジルーターや、追加のフロントエンドを構成してトラフィックの処理を支援することもできます。 + +Ingressは任意のポートやプロトコルを公開しません。HTTPやHTTPS以外のServiceをインターネットに公開するときは、[Service.Type=NodePort](/docs/concepts/services-networking/service/#nodeport)や[Service.Type=LoadBalancer](/docs/concepts/services-networking/service/#loadbalancer)のServiceタイプを使用することが多いです。 + +## Ingressを使用する上での前提条件 + +Ingressの機能を提供するために[Ingressコントローラー](/docs/concepts/services-networking/ingress-controllers)を用意する必要があります。Ingressリソースを作成するのみでは何の効果もありません。 + +[ingress-nginx](https://kubernetes.github.io/ingress-nginx/deploy/)のようなIngressコントローラーのデプロイが必要な場合があります。ユーザーはいくつかの[Ingressコントローラー](/docs/concepts/services-networking/ingress-controllers)の中から選択できます。 + +理想的には、全てのIngressコントローラーはリファレンスの仕様を満たすべきです。しかし実際には、いくつかのIngressコントローラーは微妙に異なる動作をします。 + +{{< note >}} +Ingressコントローラーのドキュメントを確認して、選択する際の注意点について理解してください。 +{{< /note 
>}} + +## Ingressリソース + +Ingressリソースの最小構成の例は下記のとおりです。 + +```yaml +apiVersion: networking.k8s.io/v1beta1 +kind: Ingress +metadata: + name: test-ingress + annotations: + nginx.ingress.kubernetes.io/rewrite-target: / +spec: + rules: + - http: + paths: + - path: /testpath + backend: + serviceName: test + servicePort: 80 +``` + +他の全てのKubernetesリソースと同様に、Ingressは`apiVersion`、`kind`や`metadata`フィールドが必要です。設定ファイルの利用に関する一般的な情報は、[アプリケーションのデプロイ](/docs/tasks/run-application/run-stateless-application-deployment/)、[コンテナーの設定](/docs/tasks/configure-pod-container/configure-pod-configmap/)、[リソースの管理](/docs/concepts/cluster-administration/manage-deployment/)を参照してください。 +Ingressでは、Ingressコントローラーに依存しているいくつかのオプションの設定をするためにアノテーションを使うことが多いです。その例としては、[rewrite-targetアノテーション](https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/rewrite/README.md)などがあります。 +[Ingressコントローラー](/docs/concepts/services-networking/ingress-controllers)の種類が異なれば、サポートするアノテーションも異なります。サポートされているアノテーションについて学ぶために、ユーザーが使用するIngressコントローラーのドキュメントを確認してください。 + +Ingress [Spec](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)は、ロードバランサーやプロキシーサーバーを設定するために必要な全ての情報を持っています。最も重要なものとして、外部からくる全てのリクエストに対して一致したルールのリストを含みます。IngressリソースはHTTPトラフィックに対してのルールのみサポートしています。 + +### Ingressのルール + +各HTTPルールは下記の情報を含みます。 + +* オプションで設定可能なホスト名。上記のリソースの例では、ホスト名が指定されていないと、そのルールは指定されたIPアドレスを経由する全てのインバウンドHTTPトラフィックに適用されます。ホスト名が指定されていると(例: foo.bar.com)、そのルールはホストに対して適用されます。 +* パスのリスト(例: `/testpath`)。各パスには`serviceName`と`servicePort`で定義されるバックエンドが関連づけられます。ロードバランサーがトラフィックを関連づけられたServiceに転送するために、外部からくるリクエストのホスト名とパスが条件と一致させる必要があります。 +* [Serviceドキュメント](/docs/concepts/services-networking/service/)に書かれているように、バックエンドはServiceとポート名の組み合わせとなります。Ingressで設定されたホスト名とパスのルールに一致するHTTP(とHTTPS)のリクエストは、リスト内のバックエンドに対して送信されます。 + +Ingressコントローラーでは、デフォルトのバックエンドが設定されていることがあります。これはSpec内で指定されているパスに一致しないようなリクエストのためのバックエンドです。 + +### デフォルトのバックエンド + +ルールが設定されていないIngressは、全てのトラフィックをデフォルトのバックエンドに転送します。このデフォルトのバックエンドは、[Ingressコントローラー](/docs/concepts/services-networking/ingress-controllers)のオプション設定であり、Ingressリソースでは指定されていません。 + +IngressオブジェクトでHTTPリクエストが1つもホスト名とパスの条件に一致しない時、そのトラフィックはデフォルトのバックエンドに転送されます。 + +## Ingressのタイプ + +### 単一ServiceのIngress +ユーザーは単一のServiceを公開できるという、Kubernetesのコンセプトがあります([Ingressの代替案](#alternatives)を参照してください)。 +また、Ingressでこれを実現できます。それはルールを設定せずに*デフォルトのバックエンド* を指定することにより可能です。 + +{{< codenew file="service/networking/ingress.yaml" >}} + +`kubectl apply -f`を実行してIngressを作成し、その作成したIngressの状態を確認することができます。 + +```shell +kubectl get ingress test-ingress +``` + +``` +NAME HOSTS ADDRESS PORTS AGE +test-ingress * 107.178.254.228 80 59s +``` + +`107.178.254.228`はIngressコントローラーによって割り当てられたIPで、このIngressを利用するためのものです。 + +{{< note >}} +IngressコントローラーとロードバランサーがIPアドレス割り当てるのに1、2分ほどかかります。この間、ADDRESSの情報は``となっているのを確認できます。 +{{< /note >}} + +### リクエストのシンプルなルーティング + +ファンアウト設定では単一のIPアドレスのトラフィックを、リクエストされたHTTP URIに基づいて1つ以上のServiceに転送します。Ingressによって、ユーザーはロードバランサーの数を少なくできます。例えば、下記のように設定します。 + +```none +foo.bar.com -> 178.91.123.132 -> / foo service1:4200 + / bar service2:8080 +``` + +Ingressを下記のように設定します。 + +```yaml +apiVersion: networking.k8s.io/v1beta1 +kind: Ingress +metadata: + name: simple-fanout-example + annotations: + nginx.ingress.kubernetes.io/rewrite-target: / +spec: + rules: + - host: foo.bar.com + http: + paths: + - path: /foo + backend: + serviceName: service1 + servicePort: 4200 + - path: /bar + backend: + serviceName: service2 + servicePort: 8080 +``` + +Ingressを`kubectl apply -f`によって作成したとき: + +```shell +kubectl describe ingress simple-fanout-example +``` + +``` 
+Name: simple-fanout-example +Namespace: default +Address: 178.91.123.132 +Default backend: default-http-backend:80 (10.8.2.3:8080) +Rules: + Host Path Backends + ---- ---- -------- + foo.bar.com + /foo service1:4200 (10.8.0.90:4200) + /bar service2:8080 (10.8.0.91:8080) +Annotations: + nginx.ingress.kubernetes.io/rewrite-target: / +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal ADD 22s loadbalancer-controller default/test +``` + +IngressコントローラーはService(`service1`、`service2`)が存在する限り、Ingressの条件を満たす実装固有のロードバランサーを構築します。 +構築が完了すると、ADDRESSフィールドでロードバランサーのアドレスを確認できます。 + +{{< note >}} +ユーザーが使用している[Ingressコントローラー](/docs/concepts/services-networking/ingress-controllers)に依存しますが、ユーザーはdefault-http-backend[Service](/docs/concepts/services-networking/service/)の作成が必要な場合があります。 +{{< /note >}} + +### 名前ベースの仮想ホスティング + +名前ベースの仮想ホストは、HTTPトラフィックを同一のIPアドレスの複数のホスト名に転送することをサポートしています。 + +```none +foo.bar.com --| |-> foo.bar.com service1:80 + | 178.91.123.132 | +bar.foo.com --| |-> bar.foo.com service2:80 +``` + +下記のIngress設定は、ロードバランサーに対して、[Hostヘッダー](https://tools.ietf.org/html/rfc7230#section-5.4)に基づいてリクエストを転送するように指示するものです。 + +```yaml +apiVersion: networking.k8s.io/v1beta1 +kind: Ingress +metadata: + name: name-virtual-host-ingress +spec: + rules: + - host: foo.bar.com + http: + paths: + - backend: + serviceName: service1 + servicePort: 80 + - host: bar.foo.com + http: + paths: + - backend: + serviceName: service2 + servicePort: 80 +``` + +rules項目でのホストの設定がないIngressを作成すると、IngressコントローラーのIPアドレスに対するwebトラフィックは、要求されている名前ベースの仮想ホストなしにマッチさせることができます。 + +例えば、下記のIngressリソースは`first.bar.com`に対するトラフィックを`service1`へ、`second.foo.com`に対するトラフィックを`service2`へ、リクエストにおいてホスト名が指定されていない(リクエストヘッダーがないことを意味します)トラフィックは`service3`へ転送します。 + +```yaml +apiVersion: networking.k8s.io/v1beta1 +kind: Ingress +metadata: + name: name-virtual-host-ingress +spec: + rules: + - host: first.bar.com + http: + paths: + - backend: + serviceName: service1 + servicePort: 80 + - host: second.foo.com + http: + paths: + - backend: + serviceName: service2 + servicePort: 80 + - http: + paths: + - backend: + serviceName: service3 + servicePort: 80 +``` + +### TLS + +TLSの秘密鍵と証明書を含んだ{{< glossary_tooltip term_id="secret" >}}を指定することにより、Ingressをセキュアにできます。現在Ingressは単一のTLSポートである443番ポートのみサポートし、TLS終端を行うことを想定しています。IngressのTLS設定のセクションで異なるホストを指定すると、それらのホストはSNI TLSエクステンション(IngressコントローラーがSNIをサポートしている場合)を介して指定されたホスト名に対し、同じポート上で多重化されます。TLSのSecretは`tls.crt`と`tls.key`というキーを含む必要があり、TLSを使用するための証明書と秘密鍵を含む値となります。下記が例となります。 + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: testsecret-tls + namespace: default +data: + tls.crt: base64 encoded cert + tls.key: base64 encoded key +type: kubernetes.io/tls +``` + +IngressでこのSecretを参照すると、クライアントとロードバランサー間の通信にTLSを使用するようIngressコントローラーに指示することになります。作成したTLS Secretは、`sslexample.foo.com`の完全修飾ドメイン名(FQDN)とも呼ばれる共通名(CN)を含む証明書から作成したものであることを確認する必要があります。 + +```yaml +apiVersion: networking.k8s.io/v1beta1 +kind: Ingress +metadata: + name: tls-example-ingress +spec: + tls: + - hosts: + - sslexample.foo.com + secretName: testsecret-tls + rules: + - host: sslexample.foo.com + http: + paths: + - path: / + backend: + serviceName: service1 + servicePort: 80 +``` + +{{< note >}} +Ingressコントローラーによって、サポートされるTLSの機能に違いがあります。利用する環境でTLSがどのように動作するかを理解するために、[nginx](https://git.k8s.io/ingress-nginx/README.md#https)や、[GCE](https://git.k8s.io/ingress-gce/README.md#frontend-https)、他のプラットフォーム固有のIngressコントローラーのドキュメントを確認してください。 +{{< /note >}} + +### 負荷分散 + 
+Ingressコントローラーは、負荷分散アルゴリズムやバックエンドの重みスキームなど、すべてのIngressに適用されるいくつかの負荷分散ポリシーの設定とともにブートストラップされます。発展した負荷分散のコンセプト(例: セッションの永続化、動的重み付けなど)はIngressによってサポートされていません。代わりに、それらの機能はService用のロードバランサーを介して利用できます。 + +Ingressによってヘルスチェックの機能が直接に公開されていない場合でも、Kubernetesにおいて、同等の機能を提供する[Readiness Probe](/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/)のようなコンセプトが存在することは注目に値します。コントローラーがどのようにヘルスチェックを行うかについては、コントローラーのドキュメントを参照してください([nginx](https://git.k8s.io/ingress-nginx/README.md)、[GCE](https://git.k8s.io/ingress-gce/README.md#health-checks))。 + +## Ingressの更新 + +リソースを編集することで、既存のIngressに対して新しいホストを追加することができます。 + +```shell +kubectl describe ingress test +``` + +``` +Name: test +Namespace: default +Address: 178.91.123.132 +Default backend: default-http-backend:80 (10.8.2.3:8080) +Rules: + Host Path Backends + ---- ---- -------- + foo.bar.com + /foo service1:80 (10.8.0.90:80) +Annotations: + nginx.ingress.kubernetes.io/rewrite-target: / +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal ADD 35s loadbalancer-controller default/test +``` + +```shell +kubectl edit ingress test +``` + +このコマンドを実行すると既存の設定をYAMLフォーマットで編集するエディターが表示されます。新しいホストを追加するために、リソースを修正してください。 + +```yaml +spec: + rules: + - host: foo.bar.com + http: + paths: + - backend: + serviceName: service1 + servicePort: 80 + path: /foo + - host: bar.baz.com + http: + paths: + - backend: + serviceName: service2 + servicePort: 80 + path: /foo +.. +``` + +変更を保存した後、kubectlはAPIサーバー内のリソースを更新し、Ingressコントローラーに対してロードバランサーの再設定を指示します。 + +変更内容を確認してください。 + +```shell +kubectl describe ingress test +``` + +``` +Name: test +Namespace: default +Address: 178.91.123.132 +Default backend: default-http-backend:80 (10.8.2.3:8080) +Rules: + Host Path Backends + ---- ---- -------- + foo.bar.com + /foo service1:80 (10.8.0.90:80) + bar.baz.com + /foo service2:80 (10.8.0.91:80) +Annotations: + nginx.ingress.kubernetes.io/rewrite-target: / +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal ADD 45s loadbalancer-controller default/test +``` + +修正されたIngressのYAMLファイルに対して`kubectl replace -f`を実行することで、同様の結果を得られます。 + +## アベイラビリティーゾーンをまたいだ障害について + +障害のあるドメインをまたいでトラフィックを分散する手法は、クラウドプロバイダーによって異なります。詳細に関して、[Ingress コントローラー](/docs/concepts/services-networking/ingress-controllers)のドキュメントを参照してください。複数のクラスターにおいてIngressをデプロイする方法の詳細に関しては[Kubernetes Cluster Federationのドキュメント](https://github.com/kubernetes-sigs/federation-v2)を参照してください。 + +## 将来追加予定の内容 + +Ingressと関連するリソースの今後の開発については[SIG Network](https://github.com/kubernetes/community/tree/master/sig-network)で行われている議論を確認してください。様々なIngressコントローラーの開発については[Ingress リポジトリー](https://github.com/kubernetes/ingress/tree/master)を確認してください。 + +## Ingressの代替案 {#alternatives} + +Ingressリソースに直接関与しない複数の方法でServiceを公開できます。 + +下記の2つの使用を検討してください。 +* [Service.Type=LoadBalancer](/docs/concepts/services-networking/service/#loadbalancer) +* [Service.Type=NodePort](/docs/concepts/services-networking/service/#nodeport) + +{{% /capture %}} + +{{% capture whatsnext %}} +* [Ingressコントローラー](/docs/concepts/services-networking/ingress-controllers/)について学ぶ +* [MinikubeとNGINXコントローラーでIngressのセットアップを行う](/docs/tasks/access-application-cluster/ingress-minikube) +{{% /capture %}} diff --git a/content/ja/docs/concepts/services-networking/service.md b/content/ja/docs/concepts/services-networking/service.md index a7f6447d29f1a..71c8795733b8d 100644 --- a/content/ja/docs/concepts/services-networking/service.md +++ b/content/ja/docs/concepts/services-networking/service.md @@ -134,6 +134,13 @@ link-local (169.254.0.0/16 and 
224.0.0.0/24 for IPv4, fe80::/64 for IPv6)に設 ExternalName Serviceはセレクターの代わりにDNS名を使用する特殊なケースのServiceです。さらなる情報は、このドキュメントの後で紹介する[ExternalName](#externalname)を参照ください。 +### エンドポイントスライス +{{< feature-state for_k8s_version="v1.16" state="alpha" >}} + +エンドポイントスライスは、Endpointsに対してよりスケーラブルな代替手段を提供できるAPIリソースです。概念的にはEndpointsに非常に似ていますが、エンドポイントスライスを使用すると、ネットワークエンドポイントを複数のリソースに分割できます。デフォルトでは、エンドポイントスライスは、100個のエンドポイントに到達すると「いっぱいである」と見なされ、その時点で追加のエンドポイントスライスが作成され、追加のエンドポイントが保存されます。 + +エンドポイントスライスは、[エンドポイントスライスのドキュメント](/docs/concepts/services-networking/endpoint-slices/)にて詳しく説明されている追加の属性と機能を提供します。 + ## 仮想IPとサービスプロキシー {#virtual-ips-and-service-proxies} Kubernetesクラスターの各Nodeは`kube-proxy`を稼働させています。`kube-proxy`は[`ExternalName`](#externalname)タイプ以外の`Service`用に仮想IPを実装する責務があります。 @@ -149,12 +156,6 @@ Serviceにおいてプロキシーを使う理由はいくつかあります。 * いくつかのアプリケーションではDNSルックアップを1度だけ行い、その結果を無期限にキャッシュする。 * アプリケーションとライブラリーが適切なDNS名の再解決を行ったとしても、DNSレコード上の0もしくは低い値のTTLがDNSに負荷をかけることがあり、管理が難しい。 -### バージョン互換性 - -Kubernetes v1.0から、[user-spaceプロキシーモード](#proxy-mode-userspace)を利用できるようになっています。 -v1.1ではiptablesモードでのプロキシーを追加し、v1.2では、kube-proxyにおいてiptablesモードがデフォルトとなりました。 -v1.8では、ipvsプロキシーモードが追加されました。 - ### user-spaceプロキシーモード {#proxy-mode-userspace} このモードでは、kube-proxyはServiceやEndpointオブジェクトの追加・削除をチェックするために、Kubernetes Masterを監視します。 @@ -389,12 +390,11 @@ spec: port: 80 targetPort: 9376 clusterIP: 10.0.171.239 - loadBalancerIP: 78.11.24.19 type: LoadBalancer status: loadBalancer: ingress: - - ip: 146.148.47.155 + - ip: 192.0.2.127 ``` 外部のロードバランサーからのトラフィックはバックエンドのPodに直接転送されます。クラウドプロバイダーはどのようにそのリクエストをバランシングするかを決めます。 @@ -437,9 +437,6 @@ metadata: cloud.google.com/load-balancer-type: "Internal" [...] ``` - -Kubernetes1.7.0から1.7.3のMasterに対しては、`cloud.google.com/load-balancer-type: "internal"`を使用します。 -さらなる情報については、[docs](https://cloud.google.com/kubernetes-engine/docs/internal-load-balancing)を参照してください。 {{% /tab %}} {{% tab name="AWS" %}} ```yaml @@ -481,6 +478,15 @@ metadata: [...] ``` {{% /tab %}} +{{% tab name="Tencent Cloud" %}} +```yaml +[...] +metadata: + annotations: + service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx +[...] 
+``` +{{% /tab %}} {{< /tabs >}} @@ -630,13 +636,11 @@ AWS上でのELB Service用のアクセスログを管理するためにはいく # ELBに追加される予定のセキュリティーグループのリスト ``` -#### AWSでのNetwork Load Balancerのサポート [α版] {#aws-nlb-support} +#### AWSでのNetwork Load Balancerのサポート {#aws-nlb-support} -{{< warning >}} -これはα版の機能で、プロダクション環境でのクラスターでの使用はまだ推奨しません。 -{{< /warning >}} +{{< feature-state for_k8s_version="v1.15" state="beta" >}} -Kubernetes v1.9.0から、ServiceとAWS Network Load Balancer(NLB)を組み合わせることができます。AWSでのネットワークロードバランサーを使用するためには、`service.beta.kubernetes.io/aws-load-balancer-type`というアノテーションの値を`nlb`に設定してください。 +AWSでNetwork Load Balancerを使用するには、値を`nlb`に設定してアノテーション`service.beta.kubernetes.io/aws-load-balancer-type`を付与します。 ```yaml metadata: @@ -681,6 +685,38 @@ spec: {{< /note >}} +#### Tencent Kubernetes Engine(TKE)におけるその他のCLBアノテーション + +以下に示すように、TKEでCloud Load Balancerを管理するためのその他のアノテーションがあります。 + +```yaml + metadata: + name: my-service + annotations: + # 指定したノードでロードバランサーをバインドします + service.kubernetes.io/qcloud-loadbalancer-backends-label: key in (value1, value2) + # 既存のロードバランサーのID + service.kubernetes.io/tke-existed-lbid:lb-6swtxxxx + + # ロードバランサー(LB)のカスタムパラメーターは、LBタイプの変更をまだサポートしていません + service.kubernetes.io/service.extensiveParameters: "" + + # LBリスナーのカスタムパラメーター + service.kubernetes.io/service.listenerParameters: "" + + # ロードバランサーのタイプを指定します + # 有効な値: classic(Classic Cloud Load Balancer)またはapplication(Application Cloud Load Balancer) + service.kubernetes.io/loadbalance-type: xxxxx + # パブリックネットワーク帯域幅の課金方法を指定します + # 有効な値: TRAFFIC_POSTPAID_BY_HOUR(bill-by-traffic)およびBANDWIDTH_POSTPAID_BY_HOUR(bill-by-bandwidth) + service.kubernetes.io/qcloud-loadbalancer-internet-charge-type: xxxxxx + # 帯域幅の値を指定します(値の範囲:[1-2000] Mbps)。 + service.kubernetes.io/qcloud-loadbalancer-internet-max-bandwidth-out: "10" + # この注釈が設定されている場合、ロードバランサーはポッドが実行されているノードのみを登録します + # そうでない場合、すべてのノードが登録されます + service.kubernetes.io/local-svc-only-bind-node-with-pod: true +``` + ### ExternalName タイプ {#externalname} ExternalNameタイプのServiceは、ServiceをDNS名とマッピングし、`my-service`や`cassandra`というような従来のラベルセレクターとはマッピングしません。 @@ -708,6 +744,12 @@ IPアドレスをハードコードする場合、[Headless Service](#headless-s `my-service`へのアクセスは、他のServiceと同じ方法ですが、再接続する際はプロキシーや転送を介して行うよりも、DNSレベルで行われることが決定的に異なる点となります。 後にユーザーが使用しているデータベースをクラスター内に移行することになった後は、Podを起動させ、適切なラベルセレクターやEndpointを追加し、Serviceの`type`を変更します。 +{{< warning >}} +HTTPやHTTPSなどの一般的なプロトコルでExternalNameを使用する際に問題が発生する場合があります。ExternalNameを使用する場合、クラスター内のクライアントが使用するホスト名は、ExternalNameが参照する名前とは異なります。 + +ホスト名を使用するプロトコルの場合、この違いによりエラーまたは予期しない応答が発生する場合があります。HTTPリクエストには、オリジンサーバーが認識しない`Host:`ヘッダーがあります。TLSサーバーは、クライアントが接続したホスト名に一致する証明書を提供できません。 +{{< /warning >}} + {{< note >}} このセクションは、[Alen Komljen](https://akomljen.com/)による[Kubernetes Tips - Part1](https://akomljen.com/kubernetes-tips-part-1/)というブログポストを参考にしています。 @@ -905,5 +947,6 @@ Kubernetesプロジェクトは、現在利用可能なClusterIP、NodePortやLo * [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/)を参照してください。 * [Ingress](/docs/concepts/services-networking/ingress/)を参照してください。 +* [Endpoint Slices](/docs/concepts/services-networking/endpoint-slices/)を参照してください。 {{% /capture %}} diff --git a/content/ja/docs/concepts/storage/_index.md b/content/ja/docs/concepts/storage/_index.md index 7e0dd19b1251a..4b70c7d04f556 100755 --- a/content/ja/docs/concepts/storage/_index.md +++ b/content/ja/docs/concepts/storage/_index.md @@ -1,5 +1,5 @@ --- -title: "Storage" +title: "ストレージ" weight: 70 --- diff --git a/content/ja/docs/concepts/workloads/_index.md b/content/ja/docs/concepts/workloads/_index.md index 
ca394ebd0029d..41bb9d33d23d2 100644 --- a/content/ja/docs/concepts/workloads/_index.md +++ b/content/ja/docs/concepts/workloads/_index.md @@ -1,5 +1,5 @@ --- -title: "Workloads" +title: "ワークロード" weight: 50 --- diff --git a/content/ja/docs/concepts/workloads/controllers/_index.md b/content/ja/docs/concepts/workloads/controllers/_index.md index 6aaa7405b532c..65c91d6280e5b 100644 --- a/content/ja/docs/concepts/workloads/controllers/_index.md +++ b/content/ja/docs/concepts/workloads/controllers/_index.md @@ -1,5 +1,4 @@ --- -title: "Controllers" +title: "コントローラー" weight: 20 --- - diff --git a/content/ja/docs/concepts/workloads/controllers/deployment.md b/content/ja/docs/concepts/workloads/controllers/deployment.md new file mode 100644 index 0000000000000..1d465cb70dafb --- /dev/null +++ b/content/ja/docs/concepts/workloads/controllers/deployment.md @@ -0,0 +1,999 @@ +--- +title: Deployment +feature: + title: 自動化されたロールアウトとロールバック + description: > + Kubernetesはアプリケーションや設定への変更を段階的に行い、アプリケーションの状態を監視しながら、全てのインスタンスが同時停止しないようにします。更新に問題が起きたとき、Kubernetesは変更のロールバックを行います。進化を続けるDeploymnetのエコシステムを活用してください。 + +content_template: templates/concept +weight: 30 +--- + +{{% capture overview %}} + +_Deployment_ コントローラーは[Pod](/docs/concepts/workloads/pods/pod/)と[ReplicaSet](/docs/concepts/workloads/controllers/replicaset/)の宣言的なアップデート機能を提供します。 + +ユーザーはDeploymentにおいて_理想的な状態_ を定義し、Deploymentコントローラーは指定された頻度で現在の状態を理想的な状態に変更させます。ユーザーはDeploymentを定義して、新しいReplicaSetを作成したり、既存のDeploymentを削除して新しいDeploymentで全てのリソースを適用できます。 + +{{< note >}} +Deploymentによって作成されたReplicaSetを管理しないでください。ユーザーのユースケースが下記の項目をカバーできていない場合はメインのKubernetesリポジトリーにイシューを作成することを検討してください。 +{{< /note >}} + +{{% /capture %}} + + +{{% capture body %}} + +## ユースケース + +下記の項目はDeploymentの典型的なユースケースです。 + +* ReplicaSetをロールアウトするために[Deploymentの作成](#creating-a-deployment)を行う: ReplicaSetはバックグラウンドでPodを作成します。Podの作成が完了したかどうかは、ロールアウトのステータスを確認してください。 +* DeploymentのPodTemplateSpecを更新することにより[Podの新しい状態を宣言する](#updating-a-deployment): 新しいReplicaSetが作成され、Deploymentは指定された頻度で古いReplicaSetから新しいReplicaSetへのPodの移行を管理します。新しいReplicaSetはDeploymentのリビジョンを更新します。 +* Deploymentの現在の状態が不安定な場合、[Deploymentのロールバック](#rolling-back-a-deployment)をする: ロールバックによる各更新作業は、Deploymentのリビジョンを更新します。 +* より多くの負荷をさばけるように、[Deploymentをスケールアップ](#scaling-a-deployment)する +* PodTemplateSpecに対する複数の修正を適用するために[Deploymentを停止(Pause)し](#pausing-and-resuming-a-deployment)、それを再開して新しいロールアウトを開始する。 +* 今後必要としない[古いReplicaSetのクリーンアップ](#clean-up-policy) + +## Deploymentの作成 {#creating-a-deployment} + +下記の内容はDeploymentの例です。これは`nginx`Podのレプリカを3つ持つReplicaSetを作成します。 + +{{< codenew file="controllers/nginx-deployment.yaml" >}} + +この例において、 + +* `nginx-deployment`という名前のDeploymentが作成され、`.metadata.name`フィールドで名前を指定します。 +* Deploymentは3つのレプリカPodを作成し、`replicas`フィールドによってレプリカ数を指定します。 +* `selector`フィールドは、Deploymentが管理するPodのラベルを定義します。このケースにおいて、ユーザーはPodテンプレートにて定義されたラベル(`app: nginx`)を選択します。しかし、PodTemplate自体がそのルールを満たす限り、さらに洗練された方法でセレクターを指定することができます。 + {{< note >}} + `matchLabels`フィールドは、キーとバリューのペアのマップとなります。`matchLabels`マップにおいて、{key, value}というペアは、keyというフィールドの値が"key"で、その演算子が"In"で、値の配列が"value"のみ含むような`matchExpressions`の要素と等しいです。 + `matchLabels`と`matchExpressions`の両方が設定された場合、条件に一致するには両方とも満たす必要があります。 + {{< /note >}} +* `template`フィールドは、下記のサブフィールドを持ちます。: + * Podは`labels`フィールドによって指定された`app: nginx`というラベルがつけられる + * PodTemplateの仕様もしくは、`.template.spec`フィールドは、このPodは`nginx`という名前のコンテナーを1つ稼働させ、それは`nginx`というさせ、[Docker Hub](https://hub.docker.com/)にある`nginx`のバージョン1.7.9を使うことを示します + * 1つのコンテナを作成し、`name`フィールドを使って`nginx`という名前をつけます + + 上記のDeploymentを作成するために、以下に示すステップにしたがってください。 + 
作成を始める前に、ユーザーのKubernetesクラスターが稼働していることを確認してください。 + + 1. 下記のコマンドを実行してDeploymentを作成してください。 + + {{< note >}} + 実行したコマンドを`kubernetes.io/change-cause`というアノテーションに記録するために`--record`フラグを指定できます。これは将来的な問題の調査のために有効です。例えば、各Deploymentのリビジョンにおいて実行されたコマンドを見るときに便利です。 + {{< /note >}} + + ```shell + kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml + ``` + + 2. Deploymentが作成されたことを確認するために、`kubectl get deployment`を実行してください。Deploymentがまだ作成中の場合、コマンドの実行結果は下記のとおりです。 + ```shell + NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE + nginx-deployment 3 0 0 0 1s + ``` + ユーザーのクラスターにおいてDeploymentを調査するとき、下記のフィールドが出力されます。 + + * `NAME` クラスター内のDeploymentの名前を表示する + * `DESIRED` アプリケーションの理想的な_replicas_ の値を表示する: これはDeploymentを作成したときに定義したもので、これが_理想的な状態_ と呼ばれるものです。 + * `CURRENT` 現在稼働中のレプリカ数 + * `UP-TO-DATE` 理想的な状態にするために、アップデートが完了したレプリカ数 + * `AVAILABLE` ユーザーが利用可能なレプリカ数 + * `AGE` アプリケーションが稼働してからの時間 + + 上記のyamlの例だと、`.spec.replicas`フィールドの値によると、理想的なレプリカ数は3です。 + + 3. Deploymentのロールアウトステータスを確認するために、`kubectl rollout status deployment.v1.apps/nginx-deployment`を実行してください。コマンドの実行結果は下記のとおりです。 + ```shell + Waiting for rollout to finish: 2 out of 3 new replicas have been updated... + deployment.apps/nginx-deployment successfully rolled out + ``` + + 4. 数秒後、再度`kubectl get deployments`を実行してください。コマンドの実行結果は下記のとおりです。 + ```shell + NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE + nginx-deployment 3 3 3 3 18s + ``` + Deploymentが3つ全てのレプリカを作成して、全てのレプリカが最新(Podが最新のPodテンプレートを含んでいる)になり、利用可能となっていることを確認してください。 + + 5. Deploymentによって作成されたReplicaSet (`rs`)を確認するには`kubectl get rs`を実行してください。コマンドの実行結果は下記のとおりです。 + + ```shell + NAME DESIRED CURRENT READY AGE + nginx-deployment-75675f5897 3 3 3 18s + ``` + ReplicaSetの名前は`[Deployment名]-[ランダム文字列]`という形式になることに注意してください。ランダム文字列はランダムに生成され、pod-template-hashをシードとして使用します。 + + 6. 各Podにラベルが自動的に付けられるのを確認するには`kubectl get pods --show-labels`を実行してください。コマンドの実行結果は下記のとおりです。 + ```shell + NAME READY STATUS RESTARTS AGE LABELS + nginx-deployment-75675f5897-7ci7o 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453 + nginx-deployment-75675f5897-kzszj 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453 + nginx-deployment-75675f5897-qqcnn 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453 + ``` + 作成されたReplicaSetは`nginx`Podを3つ作成することを保証します。 + + {{< note >}} + Deploymentに対して適切なセレクターとPodテンプレートのラベルを設定する必要があります(このケースでは`app: nginx`)。ラベルやセレクターを他のコントローラーと重複させないでください(他のDeploymentやStatefulSetを含む)。Kubernetesはユーザがラベルを重複させることを止めないため、複数のコントローラーでセレクターの重複が発生すると、コントローラー間で衝突し予期せぬふるまいをすることになります。 + {{< /note >}} + +### pod-template-hashラベル + +{{< note >}} +このラベルを変更しないでください。 +{{< /note >}} + +`pod-template-hash`ラベルはDeploymentコントローラーによってDeploymentが作成し適用した各ReplicaSetに対して追加されます。 + +このラベルはDeploymentが管理するReplicaSetが重複しないことを保証します。このラベルはReplicaSetの`PodTemplate`をハッシュ化することにより生成され、生成されたハッシュ値はラベル値としてReplicaSetセレクター、Podテンプレートラベル、ReplicaSetが作成した全てのPodに対して追加されます。 + +## Deploymentの更新 + +{{< note >}} +Deploymentのロールアウトは、DeploymentのPodテンプレート(この場合`.spec.template`)が変更された場合にのみトリガーされます。例えばテンプレートのラベルもしくはコンテナーイメージが更新された場合です。Deploymentのスケールのような更新では、ロールアウトはトリガーされません。 +{{< /note >}} + +Deploymentを更新するには下記のステップに従ってください。 + +1. 
nginxのPodで、`nginx:1.7.9`イメージの代わりに`nginx:1.9.1`を使うように更新します。 + + ```shell + kubectl --record deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 + ``` + + 実行結果は下記のとおりです。 + ``` + deployment.apps/nginx-deployment image updated + ``` + + また、Deploymentを`編集`して、`.spec.template.spec.containers[0].image`を`nginx:1.7.9`から`nginx:1.9.1`に変更することができます。 + + ```shell + kubectl edit deployment.v1.apps/nginx-deployment + ``` + + 実行結果は下記のとおりです。 + ``` + deployment.apps/nginx-deployment edited + ``` + +2. ロールアウトのステータスを確認するには、下記のコマンドを実行してください。 + + ```shell + kubectl rollout status deployment.v1.apps/nginx-deployment + ``` + + 実行結果は下記のとおりです。 + ``` + Waiting for rollout to finish: 2 out of 3 new replicas have been updated... + ``` + もしくは + ``` + deployment.apps/nginx-deployment successfully rolled out + ``` + +更新されたDeploymentのさらなる情報を取得するには、下記を確認してください。 + +* ロールアウトが成功したあと、`kubectl get deployments`を実行してDeploymentを確認できます。 + 実行結果は下記のとおりです。 + ``` + NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE + nginx-deployment 3 3 3 3 36s + ``` + +* Deploymentが新しいReplicaSetを作成してPodを更新させたり、新しいReplicaSetのレプリカを3にスケールアップさせたり、古いReplicaSetのレプリカを0にスケールダウンさせるのを確認するには`kubectl get rs`を実行してください。 + + ```shell + kubectl get rs + ``` + + 実行結果は下記のとおりです。 + ``` + NAME DESIRED CURRENT READY AGE + nginx-deployment-1564180365 3 3 3 6s + nginx-deployment-2035384211 0 0 0 36s + ``` + +* `get pods`を実行させると、新しいPodのみ確認できます。 + + ```shell + kubectl get pods + ``` + + 実行結果は下記のとおりです。 + ``` + NAME READY STATUS RESTARTS AGE + nginx-deployment-1564180365-khku8 1/1 Running 0 14s + nginx-deployment-1564180365-nacti 1/1 Running 0 14s + nginx-deployment-1564180365-z9gth 1/1 Running 0 14s + ``` + + 次にPodを更新させたいときは、DeploymentのPodテンプレートを再度更新するだけです。 + + Deploymentは、Podが更新されている間に特定の数のPodのみ停止状態になることを保証します。デフォルトでは、目標とするPod数の少なくとも25%が停止状態になることを保証します(25% max unavailable)。 + + また、DeploymentはPodが更新されている間に、目標とするPod数を特定の数まで超えてPodを稼働させることを保証します。デフォルトでは、目標とするPod数に対して最大でも25%を超えてPodを稼働させることを保証します(25% max surge)。 + + 例えば、上記で説明したDeploymentの状態を注意深く見ると、最初に新しいPodが作成され、次に古いPodが削除されるのを確認できます。十分な数の新しいPodが稼働するまでは、Deploymentは古いPodを削除しません。また十分な数の古いPodが削除しない限り新しいPodは作成されません。少なくとも2つのPodが利用可能で、最大でもトータルで4つのPodが利用可能になっていることを保証します。 + +* Deploymentの詳細情報を取得します。 + ```shell + kubectl describe deployments + ``` + 実行結果は下記のとおりです。 + ``` + Name: nginx-deployment + Namespace: default + CreationTimestamp: Thu, 30 Nov 2017 10:56:25 +0000 + Labels: app=nginx + Annotations: deployment.kubernetes.io/revision=2 + Selector: app=nginx + Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable + StrategyType: RollingUpdate + MinReadySeconds: 0 + RollingUpdateStrategy: 25% max unavailable, 25% max surge + Pod Template: + Labels: app=nginx + Containers: + nginx: + Image: nginx:1.9.1 + Port: 80/TCP + Environment: + Mounts: + Volumes: + Conditions: + Type Status Reason + ---- ------ ------ + Available True MinimumReplicasAvailable + Progressing True NewReplicaSetAvailable + OldReplicaSets: + NewReplicaSet: nginx-deployment-1564180365 (3/3 replicas created) + Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal ScalingReplicaSet 2m deployment-controller Scaled up replica set nginx-deployment-2035384211 to 3 + Normal ScalingReplicaSet 24s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 1 + Normal ScalingReplicaSet 22s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 2 + Normal ScalingReplicaSet 22s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 2 + Normal 
ScalingReplicaSet 19s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 1 + Normal ScalingReplicaSet 19s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 3 + Normal ScalingReplicaSet 14s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 0 + ``` + 最初にDeploymentを作成した時、ReplicaSet(nginx-deployment-2035384211)を作成してすぐにレプリカ数を3にスケールするのを確認できます。Deploymentを更新すると新しいReplicaSet(nginx-deployment-1564180365)を作成してレプリカ数を1にスケールアップし、古いReplicaSeetを2にスケールダウンさせます。これは常に最低でも2つのPodが利用可能で、かつ最大4つのPodが作成されている状態にするためです。Deploymentは同じローリングアップ戦略に従って新しいReplicaSetのスケールアップと古いReplicaSetのスケールダウンを続けます。最終的に新しいReplicaSetを3にスケールアップさせ、古いReplicaSetを0にスケールダウンさせます。 + +### ロールオーバー (リアルタイムでの複数のPodの更新) + +Deploymentコントローラーにより、新しいDeploymentが観測される度にReplicaSetが作成され、理想とするレプリカ数のPodを作成します。Deploymentが更新されると、既存のReplicaSetが管理するPodのラベルが`.spec.selector`にマッチするが、テンプレートが`.spec.template`にマッチしない場合はスケールダウンされます。最終的に、新しいReplicaSetは`.spec.replicas`の値にスケールアップされ、古いReplicaSetは0にスケールダウンされます。 + +Deploymentのロールアウトが進行中にDeploymentを更新すると、Deploymentは更新する毎に新しいReplicaSetを作成してスケールアップさせ、以前にスケールアップしたReplicaSetのロールオーバーを行います。Deploymentは更新前のReplicaSetを古いReplicaSetのリストに追加し、スケールダウンを開始します。 + +例えば、5つのレプリカを持つ`nginx:1.7.9`のDeploymentを作成し、`nginx:1.7.9`の3つのレプリカが作成されているときに5つのレプリカを持つ`nginx:1.9.1`に更新します。このケースではDeploymentは作成済みの`nginx:1.7.9`の3つのPodをすぐに削除し、`nginx:1.9.1`のPodの作成を開始します。`nginx:1.7.9`の5つのレプリカを全て作成するのを待つことはありません。 + +### ラベルセレクターの更新 + +通常、ラベルセレクターを更新することは推奨されません。事前にラベルセレクターの使い方を計画しておきましょう。いかなる場合であっても更新が必要なときは十分に注意を払い、変更時の影響範囲を把握しておきましょう。 + +{{< note >}} +`apps/v1`API バージョンにおいて、Deploymentのラベルセレクターは作成後に不変となります。 +{{< /note >}} + +* セレクターの追加は、Deployment Specのテンプレートラベルも新しいラベルで更新する必要があります。そうでない場合はバリデーションエラーが返されます。この変更は重複がない更新となります。これは新しいセレクターは古いセレクターを持つReplicaSetとPodを選択せず、結果として古い全てのReplicaSetがみなし子状態になり、新しいReplicaSetを作成することを意味します。 +* セレクターの更新により、セレクターキー内の既存の値が変更されます。これにより、セレクターの追加と同じふるまいをします。 +* セレクターの削除により、Deploymentのセレクターから存在している値を削除します。これはPodテンプレートのラベルに関する変更を要求しません。既存のReplicaSetはみなし子状態にならず、新しいReplicaSetは作成されませんが、削除されたラベルは既存のPodとReplicaSetでは残り続けます。 + +## Deploymentのロールバック {#rolling-back-a-deployment} + +Deploymentのロールバックを行いたい場合があります。例えば、Deploymentがクラッシュ状態になりそれがループしたりする不安定なときです。デフォルトではユーザーがいつでもロールバックできるようにDeploymentの全てのロールアウト履歴がシステムに保持されます(リビジョン履歴の上限は設定することで変更可能です)。 + +{{< note >}} +Deploymentのリビジョンは、Deploymentのロールアウトがトリガーされた時に作成されます。これはDeploymentのPodテンプレート(`.spec.template`)が変更されたときのみ新しいリビジョンが作成されることを意味します。Deploymentのスケーリングなど、他の種類の更新においてはDeploymentのリビジョンは作成されません。これは手動もしくはオートスケーリングを同時に行うことができるようにするためです。これは過去のリビジョンにロールバックするとき、DeploymentのPodテンプレートの箇所のみロールバックされることを意味します。 +{{< /note >}} + +* `nginx:1.9.1`の代わりに`nginx:1.91`というイメージに更新して、Deploymentの更新中にタイプミスをしたと仮定します。 + + ```shell + kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.91 --record=true + ``` + + 実行結果は下記のとおりです。 + ``` + deployment.apps/nginx-deployment image updated + ``` + +* このロールアウトはうまくいきません。ユーザーロールアウトのステータスを見ることでロールアウトがうまくいくか確認できます。 + + ```shell + kubectl rollout status deployment.v1.apps/nginx-deployment + ``` + + 実行結果は下記のとおりです。 + ``` + Waiting for rollout to finish: 1 out of 3 new replicas have been updated... 
+ ``` + +* ロールアウトのステータスの確認は、Ctrl-Cを押すことで停止できます。ロールアウトがうまく行かないときは、[Deploymentのステータス](#deployment-status)を読んでさらなる情報を得てください。 + +* 古いレプリカ数(`nginx-deployment-1564180365` and `nginx-deployment-2035384211`)が2になっていることを確認でき、新しいレプリカ数(nginx-deployment-3066724191)は1になっています。 + + ```shell + kubectl get rs + ``` + + 実行結果は下記のとおりです。 + ``` + NAME DESIRED CURRENT READY AGE + nginx-deployment-1564180365 3 3 3 25s + nginx-deployment-2035384211 0 0 0 36s + nginx-deployment-3066724191 1 1 0 6s + ``` + +* 作成されたPodを確認していると、新しいReplicaSetによって作成された1つのPodはコンテナイメージのpullに失敗し続けているのがわかります。 + + ```shell + kubectl get pods + ``` + + 実行結果は下記のとおりです。 + ``` + NAME READY STATUS RESTARTS AGE + nginx-deployment-1564180365-70iae 1/1 Running 0 25s + nginx-deployment-1564180365-jbqqo 1/1 Running 0 25s + nginx-deployment-1564180365-hysrc 1/1 Running 0 25s + nginx-deployment-3066724191-08mng 0/1 ImagePullBackOff 0 6s + ``` + + {{< note >}} + Deploymentコントローラーは、この悪い状態のロールアウトを自動的に停止し、新しいReplicaSetのスケールアップを止めます。これはユーザーが指定したローリングアップデートに関するパラメータ(特に`maxUnavailable`)に依存します。デフォルトではKubernetesがこの値を25%に設定します。 + {{< /note >}} + +* Deploymentの詳細情報を取得します。 + ```shell + kubectl describe deployment + ``` + + 実行結果は下記のとおりです。 + ``` + Name: nginx-deployment + Namespace: default + CreationTimestamp: Tue, 15 Mar 2016 14:48:04 -0700 + Labels: app=nginx + Selector: app=nginx + Replicas: 3 desired | 1 updated | 4 total | 3 available | 1 unavailable + StrategyType: RollingUpdate + MinReadySeconds: 0 + RollingUpdateStrategy: 25% max unavailable, 25% max surge + Pod Template: + Labels: app=nginx + Containers: + nginx: + Image: nginx:1.91 + Port: 80/TCP + Host Port: 0/TCP + Environment: + Mounts: + Volumes: + Conditions: + Type Status Reason + ---- ------ ------ + Available True MinimumReplicasAvailable + Progressing True ReplicaSetUpdated + OldReplicaSets: nginx-deployment-1564180365 (3/3 replicas created) + NewReplicaSet: nginx-deployment-3066724191 (1/1 replicas created) + Events: + FirstSeen LastSeen Count From SubobjectPath Type Reason Message + --------- -------- ----- ---- ------------- -------- ------ ------- + 1m 1m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-2035384211 to 3 + 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 1 + 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 2 + 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 2 + 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 1 + 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 3 + 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 0 + 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-3066724191 to 1 + ``` + + これを修正するために、Deploymentを安定した状態の過去のリビジョンに更新する必要があります。 + +### Deploymentのロールアウト履歴の確認 + +ロールアウトの履歴を確認するには、下記の手順に従って下さい。 + +1. 
最初に、Deploymentのリビジョンを確認します。 + ```shell + kubectl rollout history deployment.v1.apps/nginx-deployment + ``` + 実行結果は下記のとおりです。 + ``` + deployments "nginx-deployment" + REVISION CHANGE-CAUSE + 1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml --record=true + 2 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true + 3 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.91 --record=true + ``` + + `CHANGE-CAUSE`はリビジョンの作成時にDeploymentの`kubernetes.io/change-cause`アノテーションからリビジョンにコピーされます。下記の手段により`CHANGE-CAUSE`メッセージを指定できます。 + + * `kubectl annotate deployment.v1.apps/nginx-deployment kubernetes.io/change-cause="image updated to 1.9.1"`の実行によりアノテーションを追加する。 + * リソースの変更時に`kubectl`コマンドの内容を記録するために`--record`フラグを追加する。 + * リソースのマニフェストを手動で編集する。 + +2. 各リビジョンの詳細を確認するためには下記のコマンドを実行してください。 + ```shell + kubectl rollout history deployment.v1.apps/nginx-deployment --revision=2 + ``` + + 実行結果は下記のとおりです。 + ``` + deployments "nginx-deployment" revision 2 + Labels: app=nginx + pod-template-hash=1159050644 + Annotations: kubernetes.io/change-cause=kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true + Containers: + nginx: + Image: nginx:1.9.1 + Port: 80/TCP + QoS Tier: + cpu: BestEffort + memory: BestEffort + Environment Variables: + No volumes. + ``` + +### 過去のリビジョンにロールバックする {#rolling-back-to-a-previous-revision} +現在のリビジョンから過去のリビジョン(リビジョン番号2)にロールバックさせるには、下記の手順に従ってください。 + +1. 現在のリビジョンから過去のリビジョンにロールバックします。 + ```shell + kubectl rollout undo deployment.v1.apps/nginx-deployment + ``` + + 実行結果は下記のとおりです。 + ``` + deployment.apps/nginx-deployment + ``` + その他に、`--to-revision`を指定することにより特定のリビジョンにロールバックできます。 + + ```shell + kubectl rollout undo deployment.v1.apps/nginx-deployment --to-revision=2 + ``` + + 実行結果は下記のとおりです。 + ``` + deployment.apps/nginx-deployment + ``` + + ロールアウトに関連したコマンドのさらなる情報は[`kubectl rollout`](/docs/reference/generated/kubectl/kubectl-commands#rollout)を参照してください。 + + Deploymentが過去の安定したリビジョンにロールバックされました。Deploymentコントローラーによって、リビジョン番号2にロールバックする`DeploymentRollback`イベントが作成されたのを確認できます。 + +2. ロールバックが成功し、Deploymentが正常に稼働していることを確認するために、下記のコマンドを実行してください。 + ```shell + kubectl get deployment nginx-deployment + ``` + + 実行結果は下記のとおりです。 + ``` + NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE + nginx-deployment 3 3 3 3 30m + ``` +3. 
Deploymentの詳細情報を取得します。 + ```shell + kubectl describe deployment nginx-deployment + ``` + 実行結果は下記のとおりです。 + ``` + Name: nginx-deployment + Namespace: default + CreationTimestamp: Sun, 02 Sep 2018 18:17:55 -0500 + Labels: app=nginx + Annotations: deployment.kubernetes.io/revision=4 + kubernetes.io/change-cause=kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true + Selector: app=nginx + Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable + StrategyType: RollingUpdate + MinReadySeconds: 0 + RollingUpdateStrategy: 25% max unavailable, 25% max surge + Pod Template: + Labels: app=nginx + Containers: + nginx: + Image: nginx:1.9.1 + Port: 80/TCP + Host Port: 0/TCP + Environment: + Mounts: + Volumes: + Conditions: + Type Status Reason + ---- ------ ------ + Available True MinimumReplicasAvailable + Progressing True NewReplicaSetAvailable + OldReplicaSets: + NewReplicaSet: nginx-deployment-c4747d96c (3/3 replicas created) + Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal ScalingReplicaSet 12m deployment-controller Scaled up replica set nginx-deployment-75675f5897 to 3 + Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 1 + Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 2 + Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 2 + Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 1 + Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 3 + Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 0 + Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-595696685f to 1 + Normal DeploymentRollback 15s deployment-controller Rolled back deployment "nginx-deployment" to revision 2 + Normal ScalingReplicaSet 15s deployment-controller Scaled down replica set nginx-deployment-595696685f to 0 + ``` + +## Deploymentのスケーリング {#scaling-a-deployment} +下記のコマンドを実行させてDeploymentをスケールできます。 + +```shell +kubectl scale deployment.v1.apps/nginx-deployment --replicas=10 +``` + +実行結果は下記のとおりです。 +``` +deployment.apps/nginx-deployment scaled +``` + +クラスター内で[水平Podオートスケーラー](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/)が有効になっていると仮定します。ここでDeploymentのオートスケーラーを設定し、稼働しているPodのCPU使用量に基づいて、ユーザーが稼働させたいPodのレプリカ数の最小値と最大値を設定できます。 + +```shell +kubectl autoscale deployment.v1.apps/nginx-deployment --min=10 --max=15 --cpu-percent=80 +``` +実行結果は下記のとおりです。 +``` +deployment.apps/nginx-deployment scaled +``` + +### 比例スケーリング + +Deploymentのローリングアップデートは、同時に複数のバージョンのアプリケーションの稼働をサポートします。ユーザーやオートスケーラーがロールアウト中(更新中もしくは一時停止中)のDeploymentのローリングアップデートを行うとき、Deploymentコントローラーはリスクを削減するために既存のアクティブなReplicaSetのレプリカのバランシングを行います。これを*比例スケーリング* と呼びます。 + +レプリカ数が10、[maxSurge](#max-surge)=3、[maxUnavailable](#max-unavailable)=2であるDeploymentが稼働している例です。 + +* Deployment内で10のレプリカが稼働していることを確認します。 + ```shell + kubectl get deploy + ``` + 実行結果は下記のとおりです。 + + ``` + NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE + nginx-deployment 10 10 10 10 50s + ``` + +* クラスター内で、解決できない新しいイメージに更新します。 +* You update to a new image which happens to be unresolvable from inside the cluster. 
+ ```shell + kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:sometag + ``` + + 実行結果は下記のとおりです。 + The output is similar to this: + ``` + deployment.apps/nginx-deployment image updated + ``` + +* イメージの更新は新しいReplicaSet nginx-deployment-1989198191へのロールアウトを開始させます。しかしロールアウトは、上述した`maxUnavailable`の要求によりブロックされます。ここでロールアウトのステータスを確認します。 + ```shell + kubectl get rs + ``` + 実行結果は下記のとおりです。 + ``` + NAME DESIRED CURRENT READY AGE + nginx-deployment-1989198191 5 5 0 9s + nginx-deployment-618515232 8 8 8 1m + ``` + +* 次にDeploymentのスケーリングをするための新しい要求が発生します。オートスケーラーはDeploymentのレプリカ数を15に増やします。Deploymentコントローラーは新しい5つのレプリカをどこに追加するか決める必要がでてきます。比例スケーリングを使用していない場合、5つのレプリカは全て新しいReplicaSetに追加されます。比例スケーリングでは、追加されるレプリカは全てのReplicaSetに分散されます。比例割合が大きいものはレプリカ数の大きいReplicaSetとなり、比例割合が低いときはレプリカ数の小さいReplicaSetとなります。残っているレプリカはもっとも大きいレプリカ数を持つReplicaSetに追加されます。レプリカ数が0のReplicaSetはスケールアップされません。 + +上記の例では、3つのレプリカが古いReplicaSetに追加され、2つのレプリカが新しいReplicaSetに追加されました。ロールアウトの処理では、新しいレプリカ数のPodが正常になったと仮定すると、最終的に新しいReplicaSetに全てのレプリカを移動させます。これを確認するためには下記のコマンドを実行して下さい。 + + ```shell + kubectl get deploy + ``` + 実行結果は下記のとおりです。 + ``` + NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE + nginx-deployment 15 18 7 8 7m + ``` +  ロールアウトのステータスでレプリカがどのように各ReplicaSetに追加されるか確認できます。 + ```shell + kubectl get rs + ``` + 実行結果は下記のとおりです。 + ``` + NAME DESIRED CURRENT READY AGE + nginx-deployment-1989198191 7 7 0 7m + nginx-deployment-618515232 11 11 11 7m + ``` + +## Deployment更新の一時停止と再開 {#pausing-and-resuming-a-deployment} + +ユーザーは1つ以上の更新処理をトリガーする前に更新の一時停止と再開ができます。これにより、不必要なロールアウトを実行することなく一時停止と再開を行う間に複数の修正を反映できます。 + +* 例えば、作成直後のDeploymentを考えます。 + Deploymentの詳細情報を確認します。 + ```shell + kubectl get deploy + ``` + 実行結果は下記のとおりです。 + ``` + NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE + nginx 3 3 3 3 1m + ``` + ロールアウトのステータスを確認します。 + ```shell + kubectl get rs + ``` + 実行結果は下記のとおりです。 + ``` + NAME DESIRED CURRENT READY AGE + nginx-2142116321 3 3 3 1m + ``` + +* 下記のコマンドを実行して更新処理の一時停止を行います。 + ```shell + kubectl rollout pause deployment.v1.apps/nginx-deployment + ``` + + 実行結果は下記のとおりです。 + ``` + deployment.apps/nginx-deployment paused + ``` + +* 次にDeploymentのイメージを更新します。 + ```shell + kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 + ``` + + 実行結果は下記のとおりです。 + ``` + deployment.apps/nginx-deployment image updated + ``` + +* 新しいロールアウトが開始されていないことを確認します。 + ```shell + kubectl rollout history deployment.v1.apps/nginx-deployment + ``` + + 実行結果は下記のとおりです。 + ``` + deployments "nginx" + REVISION CHANGE-CAUSE + 1 + ``` +* Deploymentの更新に成功したことを確認するためにロールアウトのステータスを確認します。 + ```shell + kubectl get rs + ``` + 実行結果は下記のとおりです。 + ``` + NAME DESIRED CURRENT READY AGE + nginx-2142116321 3 3 3 2m + ``` + +* ユーザーは何度も更新を行えます。例えばDeploymentが使用するリソースを更新します。 + ```shell + kubectl set resources deployment.v1.apps/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi + ``` + + 実行結果は下記のとおりです。 + ``` + deployment.apps/nginx-deployment resource requirements updated + ``` + 一時停止する前の初期状態では更新処理は機能しますが、Deploymentが一時停止されている間は新しい更新処理は反映されません。 + +* 最後に、Deploymentの稼働を再開させ、新しいReplicaSetが更新内容を全て反映させているのを確認します。 + ```shell + kubectl rollout resume deployment.v1.apps/nginx-deployment + ``` + 実行結果は下記のとおりです。 + ``` + deployment.apps/nginx-deployment resumed + ``` +* 更新処理が完了するまでロールアウトのステータスを確認します。 + ```shell + kubectl get rs -w + ``` + 実行結果は下記のとおりです。 + ``` + NAME DESIRED CURRENT READY AGE + nginx-2142116321 2 2 2 2m + nginx-3926361531 2 2 0 6s + nginx-3926361531 2 2 1 18s + nginx-2142116321 1 2 2 2m + nginx-2142116321 1 2 2 2m + nginx-3926361531 3 2 1 18s + nginx-3926361531 3 2 1 18s + 
nginx-2142116321 1 1 1 2m + nginx-3926361531 3 3 1 18s + nginx-3926361531 3 3 2 19s + nginx-2142116321 0 1 1 2m + nginx-2142116321 0 1 1 2m + nginx-2142116321 0 0 0 2m + nginx-3926361531 3 3 3 20s + ``` +* 最新のロールアウトのステータスを確認します。 + ```shell + kubectl get rs + ``` + + 実行結果は下記のとおりです。 + ``` + NAME DESIRED CURRENT READY AGE + nginx-2142116321 0 0 0 2m + nginx-3926361531 3 3 3 28s + ``` +{{< note >}} +一時停止したDeploymentの稼働を再開させない限り、ユーザーはDeploymentのロールバックはできません。 +{{< /note >}} + +## Deploymentのステータス {#deployment-status} + +Deploymentは、そのライフサイクルの間に様々な状態に遷移します。新しいReplicaSetへのロールアウト中は[進行中](#progressing-deployment)になり、その後は[完了](#complete-deployment)し、また[失敗](#failed-deployment)にもなります。 + +### Deploymentの更新処理 {#progressing-deployment} + +下記のタスクが実行中のとき、KubernetesはDeploymentの状態を_progressing_ にします。 + +* Deploymentが新しいReplicaSetを作成する。 +* Deploymentが新しいReplicaSetをスケールアップさせている。 +* Deploymentが古いReplicaSetをスケールダウンさせている。 +* 新しいPodが準備中もしくは利用可能な状態になる(少なくとも[MinReadySeconds](#min-ready-seconds)の間は準備中になります)。 + +ユーザーは`kubectl rollout status`を実行してDeploymentの進行状態を確認できます。 + +### Deploymentの更新処理の完了 {#complete-deployment} + +Deploymentが下記の状態になったとき、KubernetesはDeploymentのステータスを_complete_ にします。 + +* Deploymentの全てのレプリカが、指定された最新のバージョンに更新される。これはユーザーが指定した更新処理が完了したことを意味します。 +* Deploymentの全てのレプリカが利用可能になる。 +* Deploymentの古いレプリカが1つも稼働していない。 + +`kubectl rollout status`を実行して、Deploymentの更新が完了したことを確認できます。ロールアウトが正常に完了すると`kubectl rollout status`の終了コードが0で返されます。 + +```shell +kubectl rollout status deployment.v1.apps/nginx-deployment +``` +実行結果は下記のとおりです。 +``` +Waiting for rollout to finish: 2 of 3 updated replicas are available... +deployment.apps/nginx-deployment successfully rolled out +$ echo $? +0 +``` + +### Deploymentの更新処理の失敗 {#failed-deployment} + +新しいReplicaSetのデプロイが完了せず、更新処理が止まる場合があります。これは主に下記の要因によるものです。 + +* 不十分なリソースの割り当て +* ReadinessProbeの失敗 +* コンテナイメージの取得ができない +* 不十分なパーミッション +* リソースリミットのレンジ +* アプリケーションランタイムの設定の不備 + +このような状況を検知する1つの方法として、Deploymentのリソース定義でデッドラインのパラメータを指定します([`.spec.progressDeadlineSeconds`](#progress-deadline-seconds))。`.spec.progressDeadlineSeconds`はDeploymentの更新が停止したことを示す前にDeploymentコントローラーが待つ秒数を示します。 + +下記の`kubectl`コマンドでリソース定義に`progressDeadlineSeconds`を設定します。これはDeploymentの更新が止まってから10分後に、コントローラーが失敗を通知させるためです。 + +```shell +kubectl patch deployment.v1.apps/nginx-deployment -p '{"spec":{"progressDeadlineSeconds":600}}' +``` +実行結果は下記のとおりです。 +``` +deployment.apps/nginx-deployment patched +``` +一度デッドラインを超過すると、DeploymentコントローラーはDeploymentの`.status.conditions`に下記のDeploymentConditionを追加します。 + +* Type=Progressing +* Status=False +* Reason=ProgressDeadlineExceeded + +ステータスの状態に関するさらなる情報は[Kubernetes APIの規則](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties)を参照してください。 + +{{< note >}} +Kubernetesは進行が停止したDeploymentに対して、ステータス状態を報告する以外のアクションを実行しません。高レベルのオーケストレーターはこれを利用して、状態に応じて行動できます。例えば、前のバージョンへのDeploymentのロールバックが挙げられます。 +{{< /note >}} + +{{< note >}} +Deploymentを一時停止すると、Kubernetesはユーザーが指定したデッドラインを超えたかどうかをチェックしません。ユーザーはロールアウトの途中でもDeploymentを安全に一時停止でき、デッドラインを超えたイベントをトリガーすることなく再開できます。 +{{< /note >}} + +設定したタイムアウトの秒数が小さかったり、一時的なエラーとして扱える他の種類のエラーが原因となり、Deploymentで一時的なエラーが出る場合があります。例えば、リソースの割り当てが不十分な場合を考えます。Deploymentの詳細情報を確認すると、下記のセクションが表示されます。 + +```shell +kubectl describe deployment nginx-deployment +``` +実行結果は下記のとおりです。 +``` +<...> +Conditions: + Type Status Reason + ---- ------ ------ + Available True MinimumReplicasAvailable + Progressing True ReplicaSetUpdated + ReplicaFailure True FailedCreate +<...> +``` + +`kubectl get deployment nginx-deployment -o
yaml`を実行すると、Deploymentのステータスは下記のようになります。 + +``` +status: + availableReplicas: 2 + conditions: + - lastTransitionTime: 2016-10-04T12:25:39Z + lastUpdateTime: 2016-10-04T12:25:39Z + message: Replica set "nginx-deployment-4262182780" is progressing. + reason: ReplicaSetUpdated + status: "True" + type: Progressing + - lastTransitionTime: 2016-10-04T12:25:42Z + lastUpdateTime: 2016-10-04T12:25:42Z + message: Deployment has minimum availability. + reason: MinimumReplicasAvailable + status: "True" + type: Available + - lastTransitionTime: 2016-10-04T12:25:39Z + lastUpdateTime: 2016-10-04T12:25:39Z + message: 'Error creating: pods "nginx-deployment-4262182780-" is forbidden: exceeded quota: + object-counts, requested: pods=1, used: pods=3, limited: pods=2' + reason: FailedCreate + status: "True" + type: ReplicaFailure + observedGeneration: 3 + replicas: 2 + unavailableReplicas: 2 +``` + +最後に、一度Deploymentの更新処理のデッドラインを越えると、KubernetesはDeploymentのステータスと進行中の状態を更新します。 + +``` +Conditions: + Type Status Reason + ---- ------ ------ + Available True MinimumReplicasAvailable + Progressing False ProgressDeadlineExceeded + ReplicaFailure True FailedCreate +``` + +Deploymentか他のリソースコントローラーのスケールダウンを行うか、使用している名前空間内でリソースの割り当てを増やすことで、リソースの割り当て不足の問題に対処できます。割り当て条件を満たすと、DeploymentコントローラーはDeploymentのロールアウトを完了させ、Deploymentのステータスが成功状態になるのを確認できます(`Status=True`と`Reason=NewReplicaSetAvailable`)。 + +``` +Conditions: + Type Status Reason + ---- ------ ------ + Available True MinimumReplicasAvailable + Progressing True NewReplicaSetAvailable +``` + +`Status=True`の`Type=Available`は、Deploymentが最小可用性の状態であることを意味します。最小可用性は、Deploymentの更新戦略において指定されているパラメータにより決定されます。`Status=True`の`Type=Progressing`は、Deploymentのロールアウトの途中で、更新処理が進行中であるか、更新処理が完了し、必要な最小数のレプリカが利用可能であることを意味します(各TypeのReason項目を確認してください。このケースでは、`Reason=NewReplicaSetAvailable`はDeploymentの更新が完了したことを意味します)。 + +`kubectl rollout status`を実行してDeploymentが更新に失敗したかどうかを確認できます。`kubectl rollout status`はDeploymentが更新処理のデッドラインを超えたときに0以外の終了コードを返します。 + +```shell +kubectl rollout status deployment.v1.apps/nginx-deployment +``` +実行結果は下記のとおりです。 +``` +Waiting for rollout to finish: 2 out of 3 new replicas have been updated... +error: deployment "nginx" exceeded its progress deadline +$ echo $? 
+1 + ``` + +### 失敗したDeploymentの操作 + +更新完了したDeploymentに適用した全てのアクションは、更新失敗したDeploymentに対しても適用されます。スケールアップ、スケールダウンができ、前のリビジョンへのロールバックや、Deploymentのテンプレートに複数の更新を適用させる必要があるときは一時停止もできます。 + +## 古いリビジョンのクリーンアップポリシー {#clean-up-policy} + +Deploymentが管理する古いReplicaSetをいくつ保持するかを指定するために、`.spec.revisionHistoryLimit`フィールドを設定できます。この値を超えた古いReplicaSetはバックグラウンドでガーベージコレクションの対象となって削除されます。デフォルトではこの値は10です。 + +{{< note >}} +このフィールドを明示的に0に設定すると、Deploymentの全ての履歴を削除します。従って、Deploymentはロールバックできません。 +{{< /note >}} + +## カナリアパターンによるデプロイ + +Deploymentを使って一部のユーザーやサーバーに対してリリースのロールアウトをしたいとき、[リソースの管理](/docs/concepts/cluster-administration/manage-deployment/#canary-deployments)に記載されているカナリアパターンに従って、リリース毎に1つずつ、複数のDeploymentを作成できます。 + +## Deployment Specの記述 + +他の全てのKubernetesの設定と同様に、Deploymentは`apiVersion`、`kind`や`metadata`フィールドを必要とします。設定ファイルの利用に関する情報は[アプリケーションのデプロイ](/docs/tutorials/stateless-application/run-stateless-application-deployment/)を参照してください。コンテナーの設定に関しては[リソースを管理するためのkubectlの使用](/docs/concepts/overview/working-with-objects/object-management/)を参照してください。 + +Deploymentは[`.spec`セクション](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)も必要とします。 + +### Podテンプレート + +`.spec.template`と`.spec.selector`は`.spec`における必須のフィールドです。 + +`.spec.template`は[Podテンプレート](/docs/concepts/workloads/pods/pod-overview/#pod-templates)です。これは.spec内でネストされていないことと、`apiVersion`や`kind`を持たないことを除いては[Pod](/docs/concepts/workloads/pods/pod/)と同じスキーマとなります。 + +Podの必須フィールドに加えて、Deployment内のPodテンプレートでは適切なラベルと再起動ポリシーを設定しなくてはなりません。ラベルは他のコントローラーと重複しないようにしてください。ラベルについては、[セレクター](#selector)を参照してください。 + +[`.spec.template.spec.restartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy)には`Always`のみが許可されます。`Always`は、テンプレートで指定されていない場合のデフォルト値です。 + +### レプリカ数 + +`.spec.replicas`は理想的なPodの数を指定するオプションのフィールドです。デフォルトは1です。 + +### セレクター {#selector} + +`.spec.selector`は必須フィールドで、Deploymentによって対象とされるPodの[ラベルセレクター](/docs/concepts/overview/working-with-objects/labels/)を指定します。 + +`.spec.selector`は`.spec.template.metadata.labels`と一致している必要があり、一致しない場合はAPIによって拒否されます。 + +`apps/v1`バージョンにおいて、`.spec.selector`と`.metadata.labels`が指定されていない場合、`.spec.template.metadata.labels`の値に初期化されません。そのため`.spec.selector`と`.metadata.labels`を明示的に指定する必要があります。また`apps/v1`のDeploymentにおいて`.spec.selector`は作成後に不変になります。 + +Deploymentのテンプレートが`.spec.template`と異なる場合や、`.spec.replicas`の値を超えてPodが稼働している場合、Deploymentはセレクターに一致するラベルを持つPodを削除します。Podの数が理想状態より少ない場合Deploymentは`.spec.template`をもとに新しいPodを作成します。 + +{{< note >}} +ユーザーは、Deploymentのセレクターに一致するラベルを持つPodを、直接作成したり他のDeploymentやReplicaSetやReplicationControllerによって作成するべきではありません。作成した場合は最初のDeploymentが、ラベルに一致する新しいPodを作成したとみなしてしまいます。Kubernetesはユーザーがこれを行ってもエラーなどを出さず、処理を止めません。 +{{< /note >}} + +セレクターが重複する複数のコントローラーを持つとき、そのコントローラーは互いに競合状態となり、正しくふるまいません。 + +### 更新戦略 + +`.spec.strategy`は古いPodから新しいPodに置き換える際の更新戦略を指定します。`.spec.strategy.type`は"Recreate"もしくは"RollingUpdate"を指定できます。デフォルトは"RollingUpdate"です。 + +#### Deploymentの再作成 + +`.spec.strategy.type==Recreate`と指定されているとき、既存の全てのPodは新しいPodが作成される前に削除されます。 + +#### Deploymentのローリングアップデート + +`.spec.strategy.type==RollingUpdate`と指定されているとき、Deploymentは[ローリングアップデート](/docs/tasks/run-application/rolling-update-replication-controller/)によりPodを更新します。ローリングアップデートの処理をコントロールするために`maxUnavailable`と`maxSurge`を指定できます。 + +##### maxUnavailable + +`.spec.strategy.rollingUpdate.maxUnavailable`はオプションのフィールドで、更新処理において利用不可となる最大のPod数を指定します。値は絶対値(例: 5)を指定するか、理想状態のPodのパーセンテージを指定します(例: 10%)。パーセンテージを指定した場合、絶対値は小数切り捨てされて計算されます。`.spec.strategy.rollingUpdate.maxSurge`が0に指定されている場合、この値を0にできません。デフォルトでは25%です。 +
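+下記は、`RollingUpdate`戦略で`maxUnavailable`を指定する場合のマニフェストの一例です(Deployment名やイメージなどの値は、あくまで説明用の仮のものです)。
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: nginx-deployment   # 説明用の仮の名前です
+spec:
+  replicas: 10
+  selector:
+    matchLabels:
+      app: nginx
+  strategy:
+    type: RollingUpdate
+    rollingUpdate:
+      maxUnavailable: 30%  # 更新中に利用不可となってよいPodの最大割合(仮の値)
+  template:
+    metadata:
+      labels:
+        app: nginx
+    spec:
+      containers:
+      - name: nginx
+        image: nginx:1.9.1
+```
+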
+例えば、この値が30%と指定されているとき、ローリングアップデートが開始すると古いReplicaSetはすぐに理想状態の70%にスケールダウンされます。一度新しいPodが稼働できる状態になると、古いReplicaSetはさらにスケールダウンされ、続いて新しいReplicaSetがスケールアップされます。この間、利用可能なPodの総数は理想状態のPodの少なくとも70%以上になるように保証されます。 + +##### maxSurge + +`.spec.strategy.rollingUpdate.maxSurge`はオプションのフィールドで、理想状態のPod数を超えて作成できる最大のPod数を指定します。値は絶対値(例: 5)を指定するか、理想状態のPodのパーセンテージを指定します(例: 10%)。パーセンテージを指定した場合、絶対値は小数切り上げで計算されます。`MaxUnavailable`が0に指定されている場合、この値を0にできません。デフォルトでは25%です。 + +例えば、この値が30%と指定されているとき、ローリングアップデートが開始すると新しいReplicaSetはすぐに更新されます。このとき古いPodと新しいPodの総数は理想状態の130%を超えないように更新されます。一度古いPodが削除されると、新しいReplicaSetはさらにスケールアップされます。この間、利用可能なPodの総数は理想状態のPodに対して最大130%になるように保証されます。 + +### progressDeadlineSeconds + +`.spec.progressDeadlineSeconds`はオプションのフィールドで、システムがDeploymentの[更新に失敗](#failed-deployment)したと判断するまでに待つ秒数を指定します。更新に失敗したと判断されたとき、リソースのステータスは`Type=Progressing`、`Status=False`かつ`Reason=ProgressDeadlineExceeded`となるのを確認できます。DeploymentコントローラーはDeploymentの更新をリトライし続けます。今後、自動的なロールバックが実装されたとき、更新失敗状態になるとすぐにDeploymentコントローラーがロールバックを行うようになります。 + +この値が指定されているとき、`.spec.minReadySeconds`より大きい値を指定する必要があります。 + +### minReadySeconds {#min-ready-seconds} + +`.spec.minReadySeconds`はオプションのフィールドで、新しく作成されたPodが利用可能となるために、最低どれくらいの秒数コンテナーがクラッシュすることなく稼働し続ければよいかを指定するものです。デフォルトでは0です(Podは作成されるとすぐに利用可能と判断されます)。Podが利用可能と判断された場合についてさらに学ぶために[Container Probes](/docs/concepts/workloads/pods/pod-lifecycle/#container-probes)を参照してください。 + +### rollbackTo + +`.spec.rollbackTo`は、`extensions/v1beta1`と`apps/v1beta1`のAPIバージョンにおいて非推奨で、`apps/v1beta2`以降のAPIバージョンではサポートされません。かわりに、[前のリビジョンへのロールバック](#rolling-back-to-a-previous-revision)で説明されているように`kubectl rollout undo`を使用するべきです。 + +### リビジョン履歴の保持上限 + +Deploymentのリビジョン履歴は、Deploymentが管理するReplicaSetに保持されています。 + +`.spec.revisionHistoryLimit`はオプションのフィールドで、ロールバック可能な古いReplicaSetの数を指定します。この古いReplicaSetは`etcd`内のリソースを消費し、`kubectl get rs`の出力結果を見にくくします。Deploymentの各リビジョンの設定はReplicaSetに保持されます。このため一度古いReplicaSetが削除されると、そのリビジョンのDeploymentにロールバックすることができなくなります。デフォルトでは10個の古いReplicaSetが保持されます。しかし、この値の最適値は新しいDeploymentの更新頻度と安定性に依存します。 + +さらに詳しく言うと、この値を0にすると、0のレプリカを持つ古い全てのReplicaSetが削除されます。このケースでは、リビジョン履歴が完全に削除されているため新しいDeploymentのロールアウトを完了することができません。 + +### paused + +`.spec.paused`はオプションのboolean値で、Deploymentの一時停止と再開のための値です。一時停止されているものと、そうでないものとの違いは、一時停止されているDeploymentはPodTemplateSpecのいかなる変更があってもロールアウトがトリガーされないことです。デフォルトではDeploymentは一時停止していない状態で作成されます。 + +## Deploymentの代替案 +### kubectl rolling-update + +[`kubectl rolling-update`](/docs/reference/generated/kubectl/kubectl-commands#rolling-update)によって、同様の形式でPodとReplicationControllerを更新できます。しかしDeploymentの使用が推奨されます。なぜならDeploymentの作成は宣言的であり、ローリングアップデートが完了した後でも過去のリビジョンにロールバックできるなど、いくつかの追加機能があるためです。 + +{{% /capture %}} diff --git a/content/ja/docs/concepts/workloads/pods/_index.md b/content/ja/docs/concepts/workloads/pods/_index.md index a105f18fb3327..7f62f167e8e7a 100755 --- a/content/ja/docs/concepts/workloads/pods/_index.md +++ b/content/ja/docs/concepts/workloads/pods/_index.md @@ -1,5 +1,4 @@ --- -title: "Pods" +title: "Pod" weight: 10 --- - diff --git a/content/ja/docs/concepts/workloads/pods/init-containers.md b/content/ja/docs/concepts/workloads/pods/init-containers.md index 6fd5f321b5ba9..9dde5bc7ebbb2 100644 --- a/content/ja/docs/concepts/workloads/pods/init-containers.md +++ b/content/ja/docs/concepts/workloads/pods/init-containers.md @@ -91,7 +91,7 @@ spec: command: ['sh', '-c', 'echo The app is running!
&& sleep 3600'] ``` -古いアノテーション構文がKubernetes1.6と1.7において有効ですが、1.6では新しい構文にも対応しています。Kubernetes1.8以降では新しい構文はを使用する必要があります。KubernetesではInitコンテナの宣言を`spec`に移行させました。 +古いアノテーション構文がKubernetes1.6と1.7において有効ですが、1.6では新しい構文にも対応しています。Kubernetes1.8以降では新しい構文を使用する必要があります。KubernetesではInitコンテナの宣言を`spec`に移行させました。 ```yaml apiVersion: v1 diff --git a/content/ja/docs/concepts/workloads/pods/pod-overview.md b/content/ja/docs/concepts/workloads/pods/pod-overview.md index f46b8f25003c7..f1f48a57b64c4 100644 --- a/content/ja/docs/concepts/workloads/pods/pod-overview.md +++ b/content/ja/docs/concepts/workloads/pods/pod-overview.md @@ -17,11 +17,11 @@ card: {{% capture body %}} ## Podについて理解する -*Pod* は、Kubernetesの基本的なビルディングブロックとなります。Kubernetesオブジェクトモデルの中で、ユーザーが作成し、デプロイ可能なシンプルで最も最小のユニットです。単一のPodはクラスター上で稼働する単一のプロセスを表現します。 +*Pod* は、Kubernetesアプリケーションの基本的な実行単位です。これは、作成またはデプロイするKubernetesオブジェクトモデルの中で最小かつ最も単純な単位です。Podは、{{< glossary_tooltip term_id="cluster" >}}で実行されているプロセスを表します。 -単一のPodは、アプリケーションコンテナ(いくつかの場合においては複数のコンテナ)や、ストレージリソース、ユニークなネットワークIPや、コンテナがどのように稼働すべきか統制するためのオプションをカプセル化します。単一のPodは、ある単一のDeploymentのユニット(単一のコンテナもしくはリソースを共有する、密接に連携された少数のコンテナ群を含むような*Kubernetes内でのアプリケーションの単一のインスタンス*) を表現します。 +Podは、アプリケーションのコンテナ(いくつかの場合においては複数のコンテナ)、ストレージリソース、ユニークなネットワークIP、およびコンテナの実行方法を管理するオプションをカプセル化します。Podはデプロイメントの単位、すなわち*Kubernetesのアプリケーションの単一インスタンス* で、単一の{{< glossary_tooltip term_id="container" >}}または密結合なリソースを共有する少数のコンテナで構成される場合があります。 -> [Docker](https://www.docker.com)はKubernetesのPod内で使われる最も一般的なコンテナランタイムですが、Podは他のコンテナランタイムも同様にサポートしています。 +[Docker](https://www.docker.com)はKubernetesのPod内で使われる最も一般的なコンテナランタイムですが、Podは他の[コンテナランタイム](/ja/docs/setup/production-environment/container-runtimes/)も同様にサポートしています。 Kubernetesクラスター内でのPodは2つの主な方法で使うことができます。 @@ -30,11 +30,10 @@ Kubernetesクラスター内でのPodは2つの主な方法で使うことがで * **協調して稼働させる必要がある複数のコンテナを稼働させるPod** : 単一のPodは、リソースを共有する必要があるような、密接に連携した複数の同じ環境にあるコンテナからなるアプリケーションをカプセル化することもできます。 これらの同じ環境にあるコンテナ群は、サービスの結合力の強いユニットを構成することができます。 -- 1つのコンテナが、共有されたボリュームからファイルをパブリックな場所に送信し、一方では分割された*サイドカー* コンテナがそれらのファイルを更新します。そのPodはそれらのコンテナとストレージリソースを、単一の管理可能なエンティティとしてまとめます。 -[Kubernetes Blog](http://kubernetes.io/blog)にて、Podのユースケースに関するいくつかの追加情報を見ることができます。 -さらなる情報を得たい場合は、下記のページを参照ください。 +[Kubernetes Blog](http://kubernetes.io/blog)にて、Podのユースケースに関するいくつかの追加情報を見ることができます。さらなる情報を得たい場合は、下記のページを参照ください。 -* [The Distributed System Toolkit: Patterns for Composite Containers](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns) -* [Container Design Patterns](https://kubernetes.io/blog/2016/06/container-design-patterns) + * [The Distributed System Toolkit: Patterns for Composite Containers](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns) + * [Container Design Patterns](https://kubernetes.io/blog/2016/06/container-design-patterns) 各Podは、与えられたアプリケーションの単一のインスタンスを稼働するためのものです。もしユーザーのアプリケーションを水平にスケールさせたい場合(例: 複数インスタンスを稼働させる)、複数のPodを使うべきです。1つのPodは各インスタンスに対応しています。 Kubernetesにおいて、これは一般的に_レプリケーション_ と呼ばれます。 @@ -50,7 +49,6 @@ Podは凝集性の高いサービスのユニットを構成するような複 ユーザーは、コンテナ群が密接に連携するような、特定のインスタンスにおいてのみこのパターンを使用するべきです。 例えば、ユーザーが共有ボリューム内にあるファイル用のWebサーバとして稼働するコンテナと、下記のダイアグラムにあるような、リモートのソースからファイルを更新するような分離された*サイドカー* コンテナを持っているような場合です。 - {{< figure src="/images/docs/pod.svg" alt="Podのダイアグラム" width="50%" >}} Podは、Podによって構成されたコンテナ群のために2種類の共有リソースを提供します。 *ネットワーキング* と*ストレージ* です。 @@ -61,24 +59,24 @@ Podは、Podによって構成されたコンテナ群のために2種類の共 #### ストレージ -単一のPodは共有されたストレージ*ボリューム* のセットを指定できます。Pod内の全てのコンテナは、その共有されたボリュームにアクセスでき、コンテナ間でデータを共有することを可能にします。ボリュームもまた、もしPod内のコンテナの1つが再起動が必要になった場合に備えて、データを永続化できます。 +単一のPodは共有されたストレージ{{< glossary_tooltip 
term_id="volume" >}}のセットを指定できます。Pod内の全てのコンテナは、その共有されたボリュームにアクセスでき、コンテナ間でデータを共有することを可能にします。ボリュームもまた、もしPod内のコンテナの1つが再起動が必要になった場合に備えて、データを永続化できます。 単一のPod内での共有ストレージをKubernetesがどう実装しているかについてのさらなる情報については、[Volumes](/docs/concepts/storage/volumes/)を参照してください。 ## Podを利用する ユーザーはまれに、Kubenetes内で独立したPodを直接作成する場合があります(シングルトンPodなど)。 -これはPodが比較的、一時的な使い捨てエンティティとしてデザインされているためです。Podが作成された時(ユーザーによって直接的、またはコントローラーによって間接的に作成された場合)、ユーザーのクラスター内の単一のNode上で稼働するようにスケジューリングされます。そのPodはプロセスが停止されたり、Podオブジェクトが削除されたり、Podがリソースの欠如のために*追い出され* たり、Nodeが故障するまでNode上に残り続けます。 +これはPodが比較的、一時的な使い捨てエンティティとしてデザインされているためです。Podが作成された時(ユーザーによって直接的、またはコントローラーによって間接的に作成された場合)、ユーザーのクラスター内の単一の{{< glossary_tooltip term_id="node" >}}上で稼働するようにスケジューリングされます。そのPodはプロセスが停止されたり、Podオブジェクトが削除されたり、Podがリソースの欠如のために*追い出され* たり、ノードが故障するまでノード上に残り続けます。 {{< note >}} 単一のPod内でのコンテナを再起動することと、そのPodを再起動することを混同しないでください。Podはそれ自体は実行されませんが、コンテナが実行される環境であり、削除されるまで存在し続けます。 {{< /note >}} -Podは、Podそれ自体によって自己修復しません。もし、稼働されていないNode上にPodがスケジュールされた場合や、スケジューリング操作自体が失敗した場合、Podが削除されます。同様に、Podはリソースの欠如や、Nodeのメンテナンスによる追い出しがあった場合はそこで停止します。Kubernetesは*コントローラー* と呼ばれる高レベルの抽象概念を使用し、それは比較的使い捨て可能なPodインスタンスの管理を行います。 +Podは、Podそれ自体によって自己修復しません。もし、稼働されていないノード上にPodがスケジュールされた場合や、スケジューリング操作自体が失敗した場合、Podが削除されます。同様に、Podはリソースの欠如や、ノードのメンテナンスによる追い出しがあった場合はそこで停止します。Kubernetesは*コントローラー* と呼ばれる高レベルの抽象概念を使用し、それは比較的使い捨て可能なPodインスタンスの管理を行います。 このように、Podを直接使うのは可能ですが、コントローラーを使用したPodを管理する方がより一般的です。KubernetesがPodのスケーリングと修復機能を実現するためにコントローラーをどのように使うかに関する情報は[Podとコントローラー](#pods-and-controllers)を参照してください。 ### Podとコントローラー -単一のコントローラーは、ユーザーのために複数のPodを作成・管理し、レプリケーションやロールアウト、クラスターのスコープ内で自己修復の機能をハンドリングします。例えば、もしNodeが故障した場合、コントローラーは異なるNode上にPodを置き換えるようにスケジューリングすることで、自動的にリプレース可能となります。 +単一のコントローラーは、ユーザーのために複数のPodを作成・管理し、レプリケーションやロールアウト、クラスターのスコープ内で自己修復の機能をハンドリングします。例えば、もしノードが故障した場合、コントローラーは異なるノード上にPodを置き換えるようにスケジューリングすることで、自動的にリプレース可能となります。 1つまたはそれ以上のPodを含むコントローラーの例は下記の通りです。 @@ -115,7 +113,8 @@ spec: {{% /capture %}} {{% capture whatsnext %}} -* Podの振る舞いに関して学ぶには下記を参照してください。 - * [Podの停止](/docs/concepts/workloads/pods/pod/#termination-of-pods) - * [Podのライフサイクル](/docs/concepts/workloads/pods/pod-lifecycle/) +* [Pod](/ja/docs/concepts/workloads/pods/pod/)について更に学びましょう +* Podの振る舞いに関して学ぶには下記を参照してください + * [Podの停止](/ja/docs/concepts/workloads/pods/pod/#podの終了) + * [Podのライフサイクル](/ja/docs/concepts/workloads/pods/pod-lifecycle/) {{% /capture %}} diff --git a/content/ja/docs/reference/_index.md b/content/ja/docs/reference/_index.md index 25c9a1d73e2a9..d5f8120dbdb48 100644 --- a/content/ja/docs/reference/_index.md +++ b/content/ja/docs/reference/_index.md @@ -1,6 +1,6 @@ --- title: リファレンス -linkTitle: "Reference" +linkTitle: "リファレンス" main_menu: true weight: 70 content_template: templates/concept @@ -14,15 +14,15 @@ content_template: templates/concept {{% capture body %}} -## API Reference +## APIリファレンス * [Kubernetes API概要](/docs/reference/using-api/api-overview/) - Kubernetes APIの概要です。 * Kubernetes APIバージョン + * [1.16](/docs/reference/generated/kubernetes-api/v1.16/) * [1.15](/docs/reference/generated/kubernetes-api/v1.15/) * [1.14](/docs/reference/generated/kubernetes-api/v1.14/) * [1.13](/docs/reference/generated/kubernetes-api/v1.13/) * [1.12](/docs/reference/generated/kubernetes-api/v1.12/) - * [1.11](/docs/reference/generated/kubernetes-api/v1.11/) ## APIクライアントライブラリー @@ -47,8 +47,6 @@ content_template: templates/concept * [kube-controller-manager](/docs/admin/kube-controller-manager/) - Kubernetesに同梱された、コアのコントロールループを埋め込むデーモンです。 * [kube-proxy](/docs/admin/kube-proxy/) - 単純なTCP/UDPストリームのフォワーディングや、一連のバックエンド間でTCP/UDPのラウンドロビンでのフォワーディングを実行できます。 
* [kube-scheduler](/docs/admin/kube-scheduler/) - 可用性、パフォーマンス、およびキャパシティを管理するスケジューラーです。 -* [federation-apiserver](/docs/admin/federation-apiserver/) - 連合クラスターのためのAPIサーバーです。 -* [federation-controller-manager](/docs/admin/federation-controller-manager/) - 連合Kubernetesクラスターに同梱された、コアのコントロールループを埋め込むデーモンです。 ## 設計のドキュメント diff --git a/content/ja/docs/reference/command-line-tools-reference/_index.md b/content/ja/docs/reference/command-line-tools-reference/_index.md new file mode 100644 index 0000000000000..89d64ce646db7 --- /dev/null +++ b/content/ja/docs/reference/command-line-tools-reference/_index.md @@ -0,0 +1,5 @@ +--- +title: コマンドラインツールのリファレンス +weight: 60 +toc-hide: true +--- diff --git a/content/ja/docs/reference/command-line-tools-reference/feature-gates.md b/content/ja/docs/reference/command-line-tools-reference/feature-gates.md index d97950f46220d..92fb9868df944 100644 --- a/content/ja/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/ja/docs/reference/command-line-tools-reference/feature-gates.md @@ -6,17 +6,16 @@ content_template: templates/concept {{% capture overview %}} このページでは管理者がそれぞれのKubernetesコンポーネントで指定できるさまざまなフィーチャーゲートの概要について説明しています。 + +各機能におけるステージの説明については、[機能のステージ](#feature-stages)を参照してください。 {{% /capture %}} {{% capture body %}} - ## 概要 -フィーチャーゲートはアルファ機能または実験的機能を記述するkey=valueのペアのセットです。 +フィーチャーゲートはアルファ機能または実験的機能を記述するkey=valueのペアのセットです。管理者は各コンポーネントで`--feature-gates`コマンドラインフラグを使用することで機能をオンまたはオフにできます。 -管理者は各コンポーネントで`--feature-gates`コマンドラインフラグを使用することで機能をオンまたはオフにできます。各コンポーネントはそれぞれのコンポーネント固有のフィーチャーゲートの設定をサポートします。 -すべてのコンポーネントのフィーチャーゲートの全リストを表示するには`-h`フラグを使用します。 -kubeletなどのコンポーネントにフィーチャーゲートを設定するには以下のようにリストの機能ペアを`--feature-gates`フラグを使用して割り当てます。 +各コンポーネントはそれぞれのコンポーネント固有のフィーチャーゲートの設定をサポートします。すべてのコンポーネントのフィーチャーゲートの全リストを表示するには`-h`フラグを使用します。kubeletなどのコンポーネントにフィーチャーゲートを設定するには以下のようにリストの機能ペアを`--feature-gates`フラグを使用して割り当てます。 ```shell --feature-gates="...,DynamicKubeletConfig=true" @@ -26,23 +25,24 @@ kubeletなどのコンポーネントにフィーチャーゲートを設定す - 「導入開始バージョン」列は機能が導入されたとき、またはリリース段階が変更されたときのKubernetesリリースバージョンとなります。 - 「最終利用可能バージョン」列は空ではない場合はフィーチャーゲートを使用できる最後のKubernetesリリースバージョンとなります。 +- アルファまたはベータ状態の機能は[AlphaまたはBetaのフィーチャーゲート](#feature-gates-for-alpha-or-beta-features)に載っています。 +- 安定している機能は、[graduatedまたはdeprecatedのフィーチャーゲート](#feature-gates-for-graduated-or-deprecated-features)に載っています。 +- graduatedまたはdeprecatedのフィーチャーゲートには、非推奨および廃止された機能もリストされています。 + +### AlphaまたはBetaのフィーチャーゲート {#feature-gates-for-alpha-or-beta-features} + +{{< table caption="AlphaまたはBetaのフィーチャーゲート" >}} | 機能名 | デフォルト値 | ステージ | 導入開始バージョン | 最終利用可能バージョン | |---------|---------|-------|-------|-------| -| `Accelerators` | `false` | Alpha | 1.6 | 1.10 | -| `AdvancedAuditing` | `false` | Alpha | 1.7 | 1.7 | -| `AdvancedAuditing` | `true` | Beta | 1.8 | 1.11 | -| `AdvancedAuditing` | `true` | GA | 1.12 | - | -| `AffinityInAnnotations` | `false` | Alpha | 1.6 | 1.7 | -| `AllowExtTrafficLocalEndpoints` | `false` | Beta | 1.4 | 1.6 | -| `AllowExtTrafficLocalEndpoints` | `true` | GA | 1.7 | - | | `APIListChunking` | `false` | Alpha | 1.8 | 1.8 | | `APIListChunking` | `true` | Beta | 1.9 | | | `APIResponseCompression` | `false` | Alpha | 1.7 | | | `AppArmor` | `true` | Beta | 1.4 | | | `AttachVolumeLimit` | `true` | Alpha | 1.11 | 1.11 | | `AttachVolumeLimit` | `true` | Beta | 1.12 | | -| `BlockVolume` | `false` | Alpha | 1.9 | | +| `BalanceAttachedNodeVolumes` | `false` | Alpha | 1.11 | | +| `BlockVolume` | `false` | Alpha | 1.9 | 1.12 | | `BlockVolume` | `true` | Beta | 1.13 | - | | `BoundServiceAccountTokenVolume` | `false` | Alpha | 
1.13 | | | `CPUManager` | `false` | Alpha | 1.8 | 1.9 | @@ -53,7 +53,8 @@ kubeletなどのコンポーネントにフィーチャーゲートを設定す | `CSIBlockVolume` | `true` | Beta | 1.14 | | | `CSIDriverRegistry` | `false` | Alpha | 1.12 | 1.13 | | `CSIDriverRegistry` | `true` | Beta | 1.14 | | -| `CSIInlineVolume` | `false` | Alpha | 1.15 | - | +| `CSIInlineVolume` | `false` | Alpha | 1.15 | 1.15 | +| `CSIInlineVolume` | `true` | Beta | 1.16 | - | | `CSIMigration` | `false` | Alpha | 1.14 | | | `CSIMigrationAWS` | `false` | Alpha | 1.14 | | | `CSIMigrationAzureDisk` | `false` | Alpha | 1.15 | | @@ -62,99 +63,67 @@ kubeletなどのコンポーネントにフィーチャーゲートを設定す | `CSIMigrationOpenStack` | `false` | Alpha | 1.14 | | | `CSINodeInfo` | `false` | Alpha | 1.12 | 1.13 | | `CSINodeInfo` | `true` | Beta | 1.14 | | -| `CSIPersistentVolume` | `false` | Alpha | 1.9 | 1.9 | -| `CSIPersistentVolume` | `true` | Beta | 1.10 | 1.12 | -| `CSIPersistentVolume` | `true` | GA | 1.13 | - | | `CustomCPUCFSQuotaPeriod` | `false` | Alpha | 1.12 | | -| `CustomPodDNS` | `false` | Alpha | 1.9 | 1.9 | -| `CustomPodDNS` | `true` | Beta| 1.10 | 1.13 | -| `CustomPodDNS` | `true` | GA | 1.14 | - | -| `CustomResourcePublishOpenAPI` | `false` | Alpha| 1.14 | 1.14 | -| `CustomResourcePublishOpenAPI` | `true` | Beta| 1.15 | | -| `CustomResourceSubresources` | `false` | Alpha | 1.10 | 1.11 | -| `CustomResourceSubresources` | `true` | Beta | 1.11 | - | -| `CustomResourceValidation` | `false` | Alpha | 1.8 | 1.8 | -| `CustomResourceValidation` | `true` | Beta | 1.9 | | -| `CustomResourceWebhookConversion` | `false` | Alpha | 1.13 | 1.14 | -| `CustomResourceWebhookConversion` | `true` | Beta | 1.15 | | -| `DebugContainers` | `false` | Alpha | 1.10 | | +| `CustomResourceDefaulting` | `false` | Alpha| 1.15 | 1.15 | +| `CustomResourceDefaulting` | `true` | Beta | 1.16 | | | `DevicePlugins` | `false` | Alpha | 1.8 | 1.9 | | `DevicePlugins` | `true` | Beta | 1.10 | | +| `DryRun` | `false` | Alpha | 1.12 | 1.12 | | `DryRun` | `true` | Beta | 1.13 | | | `DynamicAuditing` | `false` | Alpha | 1.13 | | | `DynamicKubeletConfig` | `false` | Alpha | 1.4 | 1.10 | | `DynamicKubeletConfig` | `true` | Beta | 1.11 | | -| `DynamicProvisioningScheduling` | `false` | Alpha | 1.11 | 1.11 | -| `DynamicVolumeProvisioning` | `true` | Alpha | 1.3 | 1.7 | -| `DynamicVolumeProvisioning` | `true` | GA | 1.8 | | -| `EnableEquivalenceClassCache` | `false` | Alpha | 1.8 | | -| `ExpandCSIVolumes` | `false` | Alpha | 1.14 | | +| `EndpointSlice` | `false` | Alpha | 1.16 | | +| `EphemeralContainers` | `false` | Alpha | 1.16 | | +| `ExpandCSIVolumes` | `false` | Alpha | 1.14 | 1.15 | +| `ExpandCSIVolumes` | `true` | Beta | 1.16 | | | `ExpandInUsePersistentVolumes` | `false` | Alpha | 1.11 | 1.14 | | `ExpandInUsePersistentVolumes` | `true` | Beta | 1.15 | | | `ExpandPersistentVolumes` | `false` | Alpha | 1.8 | 1.10 | | `ExpandPersistentVolumes` | `true` | Beta | 1.11 | | -| `ExperimentalCriticalPodAnnotation` | `false` | Alpha | 1.5 | | | `ExperimentalHostUserNamespaceDefaulting` | `false` | Beta | 1.5 | | -| `GCERegionalPersistentDisk` | `true` | Beta | 1.10 | 1.12 | -| `GCERegionalPersistentDisk` | `true` | GA | 1.13 | - | -| `HugePages` | `false` | Alpha | 1.8 | 1.9 | -| `HugePages` | `true` | Beta| 1.10 | 1.13 | -| `HugePages` | `true` | GA | 1.14 | | +| `EvenPodsSpread` | `false` | Alpha | 1.16 | | +| `HPAScaleToZero` | `false` | Alpha | 1.16 | | | `HyperVContainer` | `false` | Alpha | 1.10 | | -| `Initializers` | `false` | Alpha | 1.7 | 1.13 | -| `Initializers` | - | Deprecated | 1.14 | | -| `KubeletConfigFile` | 
`false` | Alpha | 1.8 | 1.9 | -| `KubeletPluginsWatcher` | `false` | Alpha | 1.11 | 1.11 | -| `KubeletPluginsWatcher` | `true` | Beta | 1.12 | 1.12 | -| `KubeletPluginsWatcher` | `true` | GA | 1.13 | - | | `KubeletPodResources` | `false` | Alpha | 1.13 | 1.14 | | `KubeletPodResources` | `true` | Beta | 1.15 | | +| `LegacyNodeRoleBehavior` | `true` | Alpha | 1.16 | | | `LocalStorageCapacityIsolation` | `false` | Alpha | 1.7 | 1.9 | -| `LocalStorageCapacityIsolation` | `true` | Beta| 1.10 | | -| `LocalStorageCapacityIsolationFSQuotaMonitoring` | `false` | Alpha| 1.15 | | +| `LocalStorageCapacityIsolation` | `true` | Beta | 1.10 | | +| `LocalStorageCapacityIsolationFSQuotaMonitoring` | `false` | Alpha | 1.15 | | | `MountContainers` | `false` | Alpha | 1.9 | | -| `MountPropagation` | `false` | Alpha | 1.8 | 1.9 | -| `MountPropagation` | `true` | Beta | 1.10 | 1.11 | -| `MountPropagation` | `true` | GA | 1.12 | | +| `NodeDisruptionExclusion` | `false` | Alpha | 1.16 | | | `NodeLease` | `false` | Alpha | 1.12 | 1.13 | | `NodeLease` | `true` | Beta | 1.14 | | | `NonPreemptingPriority` | `false` | Alpha | 1.15 | | -| `PersistentLocalVolumes` | `false` | Alpha | 1.7 | 1.9 | -| `PersistentLocalVolumes` | `true` | Beta | 1.10 | 1.13 | -| `PersistentLocalVolumes` | `true` | GA | 1.14 | | -| `PodPriority` | `false` | Alpha | 1.8 | 1.10 | -| `PodPriority` | `true` | Beta | 1.11 | 1.13 | -| `PodPriority` | `true` | GA | 1.14 | | -| `PodReadinessGates` | `false` | Alpha | 1.11 | 1.11 | -| `PodReadinessGates` | `true` | Beta | 1.12 | 1.13 | -| `PodReadinessGates` | `true` | GA | 1.14 | - | -| `PodShareProcessNamespace` | `false` | Alpha | 1.10 | | +| `PodOverhead` | `false` | Alpha | 1.16 | - | +| `PodShareProcessNamespace` | `false` | Alpha | 1.10 | 1.11 | | `PodShareProcessNamespace` | `true` | Beta | 1.12 | | | `ProcMountType` | `false` | Alpha | 1.12 | | -| `PVCProtection` | `false` | Alpha | 1.9 | 1.9 | +| `QOSReserved` | `false` | Alpha | 1.11 | | | `RemainingItemCount` | `false` | Alpha | 1.15 | | -| `ResourceLimitsPriorityFunction` | `false` | Alpha | 1.9 | | | `RequestManagement` | `false` | Alpha | 1.15 | | +| `ResourceLimitsPriorityFunction` | `false` | Alpha | 1.9 | | | `ResourceQuotaScopeSelectors` | `false` | Alpha | 1.11 | 1.11 | | `ResourceQuotaScopeSelectors` | `true` | Beta | 1.12 | | | `RotateKubeletClientCertificate` | `true` | Beta | 1.8 | | | `RotateKubeletServerCertificate` | `false` | Alpha | 1.7 | 1.11 | | `RotateKubeletServerCertificate` | `true` | Beta | 1.12 | | | `RunAsGroup` | `true` | Beta | 1.14 | | +| `RuntimeClass` | `false` | Alpha | 1.12 | 1.13 | | `RuntimeClass` | `true` | Beta | 1.14 | | +| `ScheduleDaemonSetPods` | `false` | Alpha | 1.11 | 1.11 | +| `ScheduleDaemonSetPods` | `true` | Beta | 1.12 | | | `SCTPSupport` | `false` | Alpha | 1.12 | | -| `ServerSideApply` | `false` | Alpha | 1.14 | | +| `ServerSideApply` | `false` | Alpha | 1.14 | 1.15 | +| `ServerSideApply` | `true` | Beta | 1.16 | | | `ServiceLoadBalancerFinalizer` | `false` | Alpha | 1.15 | | | `ServiceNodeExclusion` | `false` | Alpha | 1.8 | | -| `StorageObjectInUseProtection` | `true` | Beta | 1.10 | 1.10 | -| `StorageObjectInUseProtection` | `true` | GA | 1.11 | | +| `StartupProbe` | `false` | Alpha | 1.16 | | | `StorageVersionHash` | `false` | Alpha | 1.14 | 1.14 | | `StorageVersionHash` | `true` | Beta | 1.15 | | -| `StreamingProxyRedirects` | `true` | Beta | 1.5 | | -| `SupportIPVSProxyMode` | `false` | Alpha | 1.8 | 1.8 | -| `SupportIPVSProxyMode` | `false` | Beta | 1.9 | 1.9 | -| 
`SupportIPVSProxyMode` | `true` | Beta | 1.10 | 1.10 | -| `SupportIPVSProxyMode` | `true` | GA | 1.11 | | +| `StreamingProxyRedirects` | `false` | Beta | 1.5 | 1.5 | +| `StreamingProxyRedirects` | `true` | Beta | 1.6 | | | `SupportNodePidsLimit` | `false` | Alpha | 1.14 | 1.14 | | `SupportNodePidsLimit` | `true` | Beta | 1.15 | | | `SupportPodPidsLimit` | `false` | Alpha | 1.10 | 1.13 | @@ -169,24 +138,106 @@ kubeletなどのコンポーネントにフィーチャーゲートを設定す | `TokenRequestProjection` | `false` | Alpha | 1.11 | 1.11 | | `TokenRequestProjection` | `true` | Beta | 1.12 | | | `TTLAfterFinished` | `false` | Alpha | 1.12 | | -| `VolumePVCDataSource` | `false` | Alpha | 1.15 | | -| `VolumeScheduling` | `false` | Alpha | 1.9 | 1.9 | -| `VolumeScheduling` | `true` | Beta | 1.10 | 1.12 | -| `VolumeScheduling` | `true` | GA | 1.13 | | +| `TopologyManager` | `false` | Alpha | 1.16 | | +| `ValidateProxyRedirects` | `false` | Alpha | 1.10 | 1.13 | +| `ValidateProxyRedirects` | `true` | Beta | 1.14 | | +| `VolumePVCDataSource` | `false` | Alpha | 1.15 | 1.15 | +| `VolumePVCDataSource` | `true` | Beta | 1.16 | | | `VolumeSubpathEnvExpansion` | `false` | Alpha | 1.14 | 1.14 | | `VolumeSubpathEnvExpansion` | `true` | Beta | 1.15 | | | `VolumeSnapshotDataSource` | `false` | Alpha | 1.12 | - | -| `ScheduleDaemonSetPods` | `false` | Alpha | 1.11 | 1.11 | -| `ScheduleDaemonSetPods` | `true` | Beta | 1.12 | | -| `WatchBookmark` | `false` | Alpha | 1.15 | | +| `WatchBookmark` | `false` | Alpha | 1.15 | 1.15 | +| `WatchBookmark` | `true` | Beta | 1.16 | | | `WindowsGMSA` | `false` | Alpha | 1.14 | | +| `WindowsGMSA` | `true` | Beta | 1.16 | | +| `WinDSR` | `false` | Alpha | 1.14 | | +| `WinOverlay` | `false` | Alpha | 1.14 | | +{{< /table >}} + +### GraduatedまたはDeprecatedのフィーチャーゲート {#feature-gates-for-graduated-or-deprecated-features} + +{{< table caption="GraduatedまたはDeprecatedのフィーチャーゲート" >}} + +| 機能名 | デフォルト値 | ステージ | 導入開始バージョン | 最終利用可能バージョン | +|---------|---------|-------|-------|-------| +| `Accelerators` | `false` | Alpha | 1.6 | 1.10 | +| `Accelerators` | - | Deprecated | 1.11 | - | +| `AdvancedAuditing` | `false` | Alpha | 1.7 | 1.7 | +| `AdvancedAuditing` | `true` | Beta | 1.8 | 1.11 | +| `AdvancedAuditing` | `true` | GA | 1.12 | - | +| `AffinityInAnnotations` | `false` | Alpha | 1.6 | 1.7 | +| `AffinityInAnnotations` | - | Deprecated | 1.8 | - | +| `AllowExtTrafficLocalEndpoints` | `false` | Beta | 1.4 | 1.6 | +| `AllowExtTrafficLocalEndpoints` | `true` | GA | 1.7 | - | +| `CSIPersistentVolume` | `false` | Alpha | 1.9 | 1.9 | +| `CSIPersistentVolume` | `true` | Beta | 1.10 | 1.12 | +| `CSIPersistentVolume` | `true` | GA | 1.13 | - | +| `CustomPodDNS` | `false` | Alpha | 1.9 | 1.9 | +| `CustomPodDNS` | `true` | Beta| 1.10 | 1.13 | +| `CustomPodDNS` | `true` | GA | 1.14 | - | +| `CustomResourcePublishOpenAPI` | `false` | Alpha| 1.14 | 1.14 | +| `CustomResourcePublishOpenAPI` | `true` | Beta| 1.15 | 1.15 | +| `CustomResourcePublishOpenAPI` | `true` | GA | 1.16 | - | +| `CustomResourceSubresources` | `false` | Alpha | 1.10 | 1.10 | +| `CustomResourceSubresources` | `true` | Beta | 1.11 | 1.15 | +| `CustomResourceSubresources` | `true` | GA | 1.16 | - | +| `CustomResourceValidation` | `false` | Alpha | 1.8 | 1.8 | +| `CustomResourceValidation` | `true` | Beta | 1.9 | 1.15 | +| `CustomResourceValidation` | `true` | GA | 1.16 | - | +| `CustomResourceWebhookConversion` | `false` | Alpha | 1.13 | 1.14 | +| `CustomResourceWebhookConversion` | `true` | Beta | 1.15 | 1.15 | +| `CustomResourceWebhookConversion` | `true` | GA | 1.16 | 
- | +| `DynamicProvisioningScheduling` | `false` | Alpha | 1.11 | 1.11 | +| `DynamicProvisioningScheduling` | - | Deprecated| 1.12 | - | +| `DynamicVolumeProvisioning` | `true` | Alpha | 1.3 | 1.7 | +| `DynamicVolumeProvisioning` | `true` | GA | 1.8 | - | +| `EnableEquivalenceClassCache` | `false` | Alpha | 1.8 | 1.14 | +| `EnableEquivalenceClassCache` | - | Deprecated | 1.15 | - | +| `ExperimentalCriticalPodAnnotation` | `false` | Alpha | 1.5 | 1.12 | +| `ExperimentalCriticalPodAnnotation` | `false` | Deprecated | 1.13 | - | +| `GCERegionalPersistentDisk` | `true` | Beta | 1.10 | 1.12 | +| `GCERegionalPersistentDisk` | `true` | GA | 1.13 | - | +| `HugePages` | `false` | Alpha | 1.8 | 1.9 | +| `HugePages` | `true` | Beta| 1.10 | 1.13 | +| `HugePages` | `true` | GA | 1.14 | - | +| `Initializers` | `false` | Alpha | 1.7 | 1.13 | +| `Initializers` | - | Deprecated | 1.14 | - | +| `KubeletConfigFile` | `false` | Alpha | 1.8 | 1.9 | +| `KubeletConfigFile` | - | Deprecated | 1.10 | - | +| `KubeletPluginsWatcher` | `false` | Alpha | 1.11 | 1.11 | +| `KubeletPluginsWatcher` | `true` | Beta | 1.12 | 1.12 | +| `KubeletPluginsWatcher` | `true` | GA | 1.13 | - | +| `MountPropagation` | `false` | Alpha | 1.8 | 1.9 | +| `MountPropagation` | `true` | Beta | 1.10 | 1.11 | +| `MountPropagation` | `true` | GA | 1.12 | - | +| `PersistentLocalVolumes` | `false` | Alpha | 1.7 | 1.9 | +| `PersistentLocalVolumes` | `true` | Beta | 1.10 | 1.13 | +| `PersistentLocalVolumes` | `true` | GA | 1.14 | - | +| `PodPriority` | `false` | Alpha | 1.8 | 1.10 | +| `PodPriority` | `true` | Beta | 1.11 | 1.13 | +| `PodPriority` | `true` | GA | 1.14 | - | +| `PodReadinessGates` | `false` | Alpha | 1.11 | 1.11 | +| `PodReadinessGates` | `true` | Beta | 1.12 | 1.13 | +| `PodReadinessGates` | `true` | GA | 1.14 | - | +| `PVCProtection` | `false` | Alpha | 1.9 | 1.9 | +| `PVCProtection` | - | Deprecated | 1.10 | - | +| `StorageObjectInUseProtection` | `true` | Beta | 1.10 | 1.10 | +| `StorageObjectInUseProtection` | `true` | GA | 1.11 | - | +| `SupportIPVSProxyMode` | `false` | Alpha | 1.8 | 1.8 | +| `SupportIPVSProxyMode` | `false` | Beta | 1.9 | 1.9 | +| `SupportIPVSProxyMode` | `true` | Beta | 1.10 | 1.10 | +| `SupportIPVSProxyMode` | `true` | GA | 1.11 | - | +| `VolumeScheduling` | `false` | Alpha | 1.9 | 1.9 | +| `VolumeScheduling` | `true` | Beta | 1.10 | 1.12 | +| `VolumeScheduling` | `true` | GA | 1.13 | - | +| `VolumeSubpath` | `true` | GA | 1.13 | - | +{{< /table >}} ## 機能を使用する -### 機能ステージ +### 機能のステージ {#feature-stages} -機能には *Alpha* 、 *Beta* 、 *GA* の段階があります。 -*Alpha* 機能とは: +機能には*Alpha* 、*Beta* 、*GA* の段階があります。*Alpha* 機能とは: * デフォルトでは無効になっています。 * バグがあるかもしれません。機能を有効にするとバグが発生する可能性があります。 @@ -207,8 +258,9 @@ kubeletなどのコンポーネントにフィーチャーゲートを設定す GAになってからさらなる変更を加えることは現実的ではない場合があります。 {{< /note >}} -*GA* 機能とは(*GA* 機能は *安定版* 機能とも呼ばれます): +*GA* 機能とは(*GA* 機能は*安定版* 機能とも呼ばれます): +* 機能は常に有効となり、無効にすることはできません。 * フィーチャーゲートの設定は不要になります。 * 機能の安定版は後続バージョンでリリースされたソフトウェアで使用されます。 @@ -224,6 +276,7 @@ GAになってからさらなる変更を加えることは現実的ではない - `APIResponseCompression`:`LIST`や`GET`リクエストのAPIレスポンスを圧縮します。 - `AppArmor`: Dockerを使用する場合にLinuxノードでAppArmorによる強制アクセスコントロールを有効にします。詳細は[AppArmorチュートリアル](/docs/tutorials/clusters/apparmor/)で確認できます。 - `AttachVolumeLimit`: ボリュームプラグインを有効にすることでノードにアタッチできるボリューム数の制限を設定できます。 +- `BalanceAttachedNodeVolumes`: スケジューリング中にバランスのとれたリソース割り当てを考慮するノードのボリュームカウントを含めます。判断を行う際に、CPU、メモリー使用率、およびボリュームカウントが近いノードがスケジューラーによって優先されます。 - `BlockVolume`: 
PodでRawブロックデバイスの定義と使用を有効にします。詳細は[Rawブロックボリュームのサポート](/docs/concepts/storage/persistent-volumes/#raw-block-volume-support)で確認できます。 - `BoundServiceAccountTokenVolume`: ServiceAccountTokenVolumeProjectionによって構成される計画ボリュームを使用するにはServiceAccountボリュームを移行します。詳細は[Service Account Token Volumes](https://git.k8s.io/community/contributors/design-proposals/storage/svcacct-token-volume-source.md)で確認できます。 - `CPUManager`: コンテナレベルのCPUアフィニティサポートを有効します。[CPUマネジメントポリシー](/docs/tasks/administer-cluster/cpu-management-policies/)を見てください。 @@ -242,39 +295,49 @@ GAになってからさらなる変更を加えることは現実的ではない 詳細については[`csi`ボリュームタイプ](/docs/concepts/storage/volumes/#csi)ドキュメントを確認してください。 - `CustomCPUCFSQuotaPeriod`: ノードがCPUCFSQuotaPeriodを変更できるようにします。 - `CustomPodDNS`: `dnsConfig`プロパティを使用したPodのDNS設定のカスタマイズを有効にします。詳細は[PodのDNS構成](/docs/concepts/services-networking/dns-pod-service/#pods-dns-config)で確認できます。 +- `CustomResourceDefaulting`: OpenAPI v3バリデーションスキーマにおいて、デフォルト値のCRDサポートを有効にします。 - `CustomResourcePublishOpenAPI`: CRDのOpenAPI仕様での公開を有効にします。 - `CustomResourceSubresources`: [CustomResourceDefinition](/docs/concepts/api-extension/custom-resources/)から作成されたリソースの`/status`および`/scale`サブリソースを有効にします。 -- `CustomResourceValidation`: [CustomResourceDefinition](/docs/concepts/api-extension/custom-resources/)から作成されたリソースのスキーマによる検証を有効にする。 +- `CustomResourceValidation`: [CustomResourceDefinition](/docs/concepts/api-extension/custom-resources/)から作成されたリソースのスキーマによる検証を有効にします。 - `CustomResourceWebhookConversion`: [CustomResourceDefinition](/docs/concepts/api-extension/custom-resources/)から作成されたリソースのWebhookベースの変換を有効にします。 -- `DebugContainers`: Podのネームスペースで「デバッグ」コンテナを実行できるようにして実行中のPodのトラブルシューティングを行います。 - `DevicePlugins`: [device-plugins](/docs/concepts/cluster-administration/device-plugins/)によるノードでのリソースプロビジョニングを有効にします。 - `DryRun`: サーバーサイドでの[dry run](/docs/reference/using-api/api-concepts/#dry-run)リクエストを有効にします。 - `DynamicAuditing`: [動的監査](/docs/tasks/debug-application-cluster/audit/#dynamic-backend)を有効にします。 - `DynamicKubeletConfig`: kubeletの動的構成を有効にします。[kubeletの再設定](/docs/tasks/administer-cluster/reconfigure-kubelet/)を参照してください。 - `DynamicProvisioningScheduling`: デフォルトのスケジューラーを拡張してボリュームトポロジーを認識しPVプロビジョニングを処理します。この機能は、v1.12の`VolumeScheduling`機能に完全に置き換えられました。 - `DynamicVolumeProvisioning`(*非推奨*): Podへの永続ボリュームの[動的プロビジョニング](/docs/concepts/storage/dynamic-provisioning/)を有効にします。 +- `EnableAggregatedDiscoveryTimeout` (*非推奨*): 集約されたディスカバリーコールで5秒のタイムアウトを有効にします。 - `EnableEquivalenceClassCache`: Podをスケジュールするときにスケジューラーがノードの同等をキャッシュできるようにします。 +- `EphemeralContainers`: 稼働するPodに{{< glossary_tooltip text="ephemeral containers" term_id="ephemeral-container" >}}を追加する機能を有効にします。 +- `EvenPodsSpread`: Podをトポロジードメイン全体で均等にスケジュールできるようにします。[Even Pods Spread](/docs/concepts/configuration/even-pods-spread)をご覧ください。 - `ExpandInUsePersistentVolumes`: 使用中のPVCのボリューム拡張を有効にします。[使用中のPersistentVolumeClaimのサイズ変更](/docs/concepts/storage/persistent-volumes/#resizing-an-in-use-persistentvolumeclaim)を参照してください。 - `ExpandPersistentVolumes`: 永続ボリュームの拡張を有効にします。[永続ボリューム要求の拡張](/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims)を参照してください。 - `ExperimentalCriticalPodAnnotation`: [スケジューリングが保証されるよう](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/)に特定のpodへの *クリティカル* の注釈を加える設定を有効にします。 - `ExperimentalHostUserNamespaceDefaultingGate`: ホストするデフォルトのユーザー名前空間を有効にします。これは他のホストの名前空間やホストのマウントを使用しているコンテナ、特権を持つコンテナ、または名前空間のない特定の機能(たとえば`MKNODE`、`SYS_MODULE`など)を使用しているコンテナ用です。これはDockerデーモンでユーザー名前空間の再マッピングが有効になっている場合にのみ有効にすべきです。 +- `EndpointSlice`: 
よりスケーラブルで拡張可能なネットワークエンドポイントのエンドポイントスライスを有効にします。対応するAPIとコントローラーを有効にする必要があります。[Enabling Endpoint Slices](/docs/tasks/administer-cluster/enabling-endpoint-slices/)をご覧ください。 - `GCERegionalPersistentDisk`: GCEでリージョナルPD機能を有効にします。 - `HugePages`: 事前に割り当てられた[huge pages](/docs/tasks/manage-hugepages/scheduling-hugepages/)の割り当てと消費を有効にします。 - `HyperVContainer`: Windowsコンテナの[Hyper-Vによる分離](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container)を有効にします。 +- `HPAScaleToZero`: カスタムメトリクスまたは外部メトリクスを使用するときに、`HorizontalPodAutoscaler`リソースの`minReplicas`を0に設定できるようにします。 - `KubeletConfigFile`: 設定ファイルを使用して指定されたファイルからのkubelet設定の読み込みを有効にします。詳細は[設定ファイルによるkubeletパラメーターの設定](/docs/tasks/administer-cluster/kubelet-config-file/)で確認できます。 - `KubeletPluginsWatcher`: 調査ベースのプラグイン監視ユーティリティを有効にしてkubeletが[CSIボリュームドライバー](/docs/concepts/storage/volumes/#csi)などのプラグインを検出できるようにします。 - `KubeletPodResources`: kubeletのpodのリソースgrpcエンドポイントを有効にします。詳細は[デバイスモニタリングのサポート](https://git.k8s.io/community/keps/sig-node/compute-device-assignment.md)で確認できます。 +- `LegacyNodeRoleBehavior`: 無効にすると、サービスロードバランサーの従来の動作とノードの中断により機能固有のラベルが優先され、`node-role.kubernetes.io/master`ラベルが無視されます。 - `LocalStorageCapacityIsolation`: [ローカルの一時ストレージ](/docs/concepts/configuration/manage-compute-resources-container/)の消費を有効にして、[emptyDirボリューム](/docs/concepts/storage/volumes/#emptydir)の`sizeLimit`プロパティも有効にします。 - `LocalStorageCapacityIsolationFSQuotaMonitoring`: `LocalStorageCapacityIsolation`が[ローカルの一時ストレージ](/docs/concepts/configuration/manage-compute-resources-container/)で有効になっていて、[emptyDirボリューム](/docs/concepts/storage/volumes/#emptydir)のbacking filesystemがプロジェクトクォータをサポートし有効になっている場合、プロジェクトクォータを使用して、パフォーマンスと精度を向上させるために、ファイルシステムへのアクセスではなく[emptyDirボリューム](/docs/concepts/storage/volumes/#emptydir)ストレージ消費を監視します。 - `MountContainers`: ホスト上のユーティリティコンテナをボリュームマウンターとして使用できるようにします。 - `MountPropagation`: あるコンテナによってマウントされたボリュームを他のコンテナまたはpodに共有できるようにします。詳細は[マウントの伝播](/docs/concepts/storage/volumes/#mount-propagation)で確認できます。 +- `NodeDisruptionExclusion`: ノードラベル`node.kubernetes.io/exclude-disruption`の使用を有効にします。これにより、ゾーン障害時にノードが退避するのを防ぎます。 - `NodeLease`: 新しいLease APIを有効にしてノードヘルスシグナルとして使用できるノードのハートビートをレポートします。 - `NonPreemptingPriority`: PriorityClassとPodのNonPreemptingオプションを有効にします。 - `PersistentLocalVolumes`: Podで`local`ボリュームタイプの使用を有効にします。`local`ボリュームを要求する場合、podアフィニティを指定する必要があります。 +- `PodOverhead`: [PodOverhead](/docs/concepts/configuration/pod-overhead/)機能を有効にして、Podのオーバーヘッドを考慮するようにします。 - `PodPriority`: [優先度](/docs/concepts/configuration/pod-priority-preemption/)に基づいてPodの再スケジューリングとプリエンプションを有効にします。 - `PodReadinessGates`: Podのreadinessの評価を拡張するために`PodReadinessGate`フィールドの設定を有効にします。詳細は[Pod readiness gate](/docs/concepts/workloads/pods/pod-lifecycle/#pod-readiness-gate)で確認できます。 +- `PodShareProcessNamespace`: Podで実行されているコンテナ間で単一のプロセス名前空間を共有するには、Podで`shareProcessNamespace`の設定を有効にします。 詳細については、[Pod内のコンテナ間でプロセス名前空間を共有する](/docs/tasks/configure-pod-container/share-process-namespace/)をご覧ください。 - `ProcMountType`: コンテナのProcMountTypeの制御を有効にします。 - `PVCProtection`: 永続ボリューム要求(PVC)がPodでまだ使用されているときに削除されないようにします。詳細は[ここ](/docs/tasks/administer-cluster/storage-object-in-use-protection/)で確認できます。 +- `QOSReserved`: QoSレベルでのリソース予約を許可して、低いQoSレベルのポッドが高いQoSレベルで要求されたリソースにバーストするのを防ぎます(現時点ではメモリのみ)。 - `ResourceLimitsPriorityFunction`: 入力したPodのCPU制限とメモリ制限の少なくとも1つを満たすノードに対して最低スコアを1に割り当てるスケジューラー優先機能を有効にします。その目的は同じスコアを持つノード間の関係を断つことです。 - `RequestManagement`: 各サーバーで優先順位付けと公平性を備えたリクエストの並行性の管理機能を有効にしました。 - `ResourceQuotaScopeSelectors`: リソース割当のスコープセレクターを有効にします。 @@ -287,6 +350,7 @@ 
GAになってからさらなる変更を加えることは現実的ではない - `ServerSideApply`: APIサーバーで[サーバーサイドApply(SSA)](/docs/reference/using-api/api-concepts/#server-side-apply)のパスを有効にします。 - `ServiceLoadBalancerFinalizer`: サービスロードバランサーのファイナライザー保護を有効にします。 - `ServiceNodeExclusion`: クラウドプロバイダーによって作成されたロードバランサーからのノードの除外を有効にします。"`alpha.service-controller.kubernetes.io/exclude-balancer`"キーでラベル付けされている場合ノードは除外の対象となります。 +- `StartupProbe`: kubeletで[startup](/docs/concepts/workloads/pods/pod-lifecycle/#when-should-you-use-a-startup-probe)プローブを有効にします。 - `StorageObjectInUseProtection`: PersistentVolumeまたはPersistentVolumeClaimオブジェクトがまだ使用されている場合、それらの削除を延期します。 - `StorageVersionHash`: apiserversがディスカバリーでストレージのバージョンハッシュを公開できるようにします。 - `StreamingProxyRedirects`: ストリーミングリクエストのバックエンド(kubelet)からのリダイレクトをインターセプト(およびフォロー)するようAPIサーバーに指示します。ストリーミングリクエストの例には`exec`、`attach`、`port-forward`リクエストが含まれます。 @@ -304,5 +368,10 @@ GAになってからさらなる変更を加えることは現実的ではない - `VolumeSubpathEnvExpansion`: 環境変数を`subPath`に展開するための`subPathExpr`フィールドを有効にします。 - `WatchBookmark`: ブックマークイベントの監視サポートを有効にします。 - `WindowsGMSA`: GMSA資格仕様をpodからコンテナランタイムに渡せるようにします。 +- `WinDSR`: kube-proxyがWindows用のDSRロードバランサーを作成できるようにします。 +- `WinOverlay`: kube-proxyをWindowsのオーバーレイモードで実行できるようにします。 {{% /capture %}} +{{% capture whatsnext %}} +* Kubernetesの[非推奨ポリシー](/docs/reference/using-api/deprecation-policy/)では、機能とコンポーネントを削除するためのプロジェクトのアプローチを説明しています。 +{{% /capture %}} diff --git a/content/ja/docs/reference/glossary/cluster.md b/content/ja/docs/reference/glossary/cluster.md new file mode 100644 index 0000000000000..e88814a730945 --- /dev/null +++ b/content/ja/docs/reference/glossary/cluster.md @@ -0,0 +1,18 @@ +--- +title: クラスター +id: cluster +date: 2019-06-15 +full_link: +short_description: > + + Kubernetesが管理するコンテナ化されたアプリケーションを実行する、ノードと呼ばれるマシンの集合です。クラスターには、少なくとも1つのワーカーノードと少なくとも1つのマスターノードがあります。 + +aka: +tags: +- fundamental +- operation +--- +Kubernetesが管理するコンテナ化されたアプリケーションを実行する、ノードと呼ばれるマシンの集合です。クラスターには、少なくとも1つのワーカーノードと少なくとも1つのマスターノードがあります。 + + +ワーカーノードは、アプリケーションのコンポーネントであるPodをホストします。マスターノードは、クラスター内のワーカーノードとPodを管理します。複数のマスターノードを使用して、クラスターにフェイルオーバーと高可用性を提供します。 \ No newline at end of file diff --git a/content/ja/docs/reference/glossary/ingress.md b/content/ja/docs/reference/glossary/ingress.md new file mode 100755 index 0000000000000..56b13b29402e6 --- /dev/null +++ b/content/ja/docs/reference/glossary/ingress.md @@ -0,0 +1,19 @@ +--- +title: Ingress +id: ingress +date: 2018-04-12 +full_link: /docs/ja/concepts/services-networking/ingress/ +short_description: > + クラスター内のServiceに対する外部からのアクセス(主にHTTP)を管理するAPIオブジェクトです。 + +aka: +tags: +- networking +- architecture +- extension +--- + クラスター内のServiceに対する外部からのアクセス(主にHTTP)を管理するAPIオブジェクトです。 + + + +Ingressは負荷分散、SSL終端、名前ベースの仮想ホスティングの機能を提供します。 diff --git a/content/ja/docs/reference/kubectl/_index.md b/content/ja/docs/reference/kubectl/_index.md new file mode 100755 index 0000000000000..7b6c2d720b12a --- /dev/null +++ b/content/ja/docs/reference/kubectl/_index.md @@ -0,0 +1,5 @@ +--- +title: "kubectl CLI" +weight: 60 +--- + diff --git a/content/ja/docs/reference/kubectl/cheatsheet.md b/content/ja/docs/reference/kubectl/cheatsheet.md new file mode 100644 index 0000000000000..044e16bffa318 --- /dev/null +++ b/content/ja/docs/reference/kubectl/cheatsheet.md @@ -0,0 +1,384 @@ +--- +title: kubectlチートシート +content_template: templates/concept +card: + name: reference + weight: 30 +--- + +{{% capture overview %}} + +[Kubectl概要](/docs/reference/kubectl/overview/)と[JsonPathガイド](/docs/reference/kubectl/jsonpath)も合わせてご覧ください。 + +このページは`kubectl`コマンドの概要です。 + +{{% /capture %}} + +{{% capture body %}} 
+ +# kubectl - チートシート + +## Kubectlコマンドの補完 + +### BASH + +```bash +source <(kubectl completion bash) # 現在のbashシェルにコマンド補完を設定するには、最初にbash-completionパッケージをインストールする必要があります。 +echo "source <(kubectl completion bash)" >> ~/.bashrc # bashシェルでのコマンド補完を永続化するために.bashrcに追記します。 +``` + +また、エイリアスを使用している場合にも`kubectl`コマンドを補完できます。 + +```bash +alias k=kubectl +complete -F __start_kubectl k +``` + +### ZSH + +```bash +source <(kubectl completion zsh) # 現在のzshシェルでコマンド補完を設定します +echo "if [ $commands[kubectl] ]; then source <(kubectl completion zsh); fi" >> ~/.zshrc # zshシェルでのコマンド補完を永続化するために.zshrcに追記します。 +``` + +## Kubectlコンテキストの設定 + +`kubectl`がどのKubernetesクラスターと通信するかを設定します。 +設定ファイル詳細については[kubeconfigを使用した複数クラスターとの認証](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)をご覧ください。 + +```bash +kubectl config view # マージされたkubeconfigの設定を表示します。 + +# 複数のkubeconfigファイルを同時に読み込む場合はこのように記述します。 +KUBECONFIG=~/.kube/config:~/.kube/kubconfig2 + +kubectl config view + +# e2eユーザのパスワードを取得します。 +kubectl config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}' + +kubectl config view -o jsonpath='{.users[].name}' # 最初のユーザー名を表示します +kubectl config view -o jsonpath='{.users[*].name}' # ユーザー名のリストを表示します +kubectl config get-contexts # コンテキストのリストを表示します +kubectl config current-context # 現在のコンテキストを表示します +kubectl config use-context my-cluster-name # デフォルトのコンテキストをmy-cluster-nameに設定します + +# basic認証をサポートする新たなクラスターをkubeconfigに追加します +kubectl config set-credentials kubeuser/foo.kubernetes.com --username=kubeuser --password=kubepassword + +# 現在のコンテキストでkubectlのサブコマンドのネームスペースを永続的に変更します +kubectl config set-context --current --namespace=ggckad-s2 + +# 特定のユーザー名と名前空間を使用してコンテキストを設定します +kubectl config set-context gce --user=cluster-admin --namespace=foo \ + && kubectl config use-context gce + +kubectl config unset users.foo # ユーザーfooを削除します +``` + +## Apply + +`apply`はKubernetesリソースを定義するファイルを通じてアプリケーションを管理します。`kubectl apply`を実行して、クラスター内のリソースを作成および更新します。これは、本番環境でKubernetesアプリケーションを管理する推奨方法です。 +詳しくは[Kubectl Book](https://kubectl.docs.kubernetes.io)をご覧ください。 + + +## Objectの作成 + +Kubernetesのマニフェストファイルは、jsonまたはyamlで定義できます。ファイル拡張子として、`.yaml`や`.yml`、`.json`が使えます。 + +```bash +kubectl apply -f ./my-manifest.yaml # リソースを作成します +kubectl apply -f ./my1.yaml -f ./my2.yaml # 複数のファイルからリソースを作成します +kubectl apply -f ./dir # dirディレクトリ内のすべてのマニフェストファイルからリソースを作成します +kubectl apply -f https://git.io/vPieo # urlで公開されているファイルからリソースを作成します +kubectl create deployment nginx --image=nginx # 単一のnginx Deploymentを作成します +kubectl explain pods,svc # PodおよびServiceマニフェストのドキュメントを取得します + +# 標準入力から複数のYAMLオブジェクトを作成します + +cat < pod.yaml +kubectl attach my-pod -i # 実行中のコンテナに接続します +kubectl port-forward my-pod 5000:6000 # ローカルマシンのポート5000を、my-podのポート6000に転送します +kubectl exec my-pod -- ls / # 既存のPodでコマンドを実行(単一コンテナの場合) +kubectl exec my-pod -c my-container -- ls / # 既存のPodでコマンドを実行(複数コンテナがある場合) +kubectl top pod POD_NAME --containers # 特定のPodとそのコンテナのメトリクスを表示します +``` + +## ノードおよびクラスターとの対話処理 + +```bash +kubectl cordon my-node # my-nodeにスケーリングされないように設定します +kubectl drain my-node # メンテナンスの準備としてmy-nodeで動作中のPodを空にします +kubectl uncordon my-node # my-nodeにスケーリングされるように設定します +kubectl top node my-node # 特定のノードのメトリクスを表示します +kubectl cluster-info # Kubernetesクラスターのマスターとサービスのアドレスを表示します +kubectl cluster-info dump # 現在のクラスター状態を標準出力にダンプします +kubectl cluster-info dump --output-directory=/path/to/cluster-state # 現在のクラスター状態を/path/to/cluster-stateにダンプします + +# special-userキーとNoScheduleエフェクトを持つTaintが既に存在する場合、その値は指定されたとおりに置き換えられます +kubectl taint nodes foo dedicated=special-user:NoSchedule +``` + +### リソースタイプ + 
+サポートされているすべてのリソースタイプを、それらが[API group](/docs/concepts/overview/kubernetes-api/#api-groups)か[Namespaced](/docs/concepts/overview/working-with-objects/namespaces)、[Kind](/docs/concepts/overview/working-with-objects/kubernetes-objects)に関わらずその短縮名をリストします。 + +```bash +kubectl api-resources +``` + +APIリソースを探索するためのその他の操作: + +```bash +kubectl api-resources --namespaced=true # 名前空間付きのすべてのリソースを表示します +kubectl api-resources --namespaced=false # 名前空間のないすべてのリソースを表示します +kubectl api-resources -o name # すべてのリソースを単純な出力(リソース名のみ)で表示します +kubectl api-resources -o wide # すべてのリソースを拡張された形(別名 "wide")で表示します +kubectl api-resources --verbs=list,get # "list"および"get"操作をサポートするすべてのリソースを表示します +kubectl api-resources --api-group=extensions # "extensions" APIグループのすべてのリソースを表示します +``` + +### 出力のフォーマット + +特定の形式で端末ウィンドウに詳細を出力するには、サポートされている`kubectl`コマンドに`-o`または`--output`フラグを追加します。 + +出力フォーマット | 説明 +---------------- | ----------- +`-o=custom-columns=` | カスタムカラムを使用してコンマ区切りのテーブルを表示します +`-o=custom-columns-file=` | ``ファイル内のカスタムカラムテンプレートを使用してテーブルを表示します +`-o=json` | JSON形式のAPIオブジェクトを出力します +`-o=jsonpath=