From b2d803d71eaeec4a7635fdc1fa26850d59d5841a Mon Sep 17 00:00:00 2001 From: Praveen Sastry Date: Fri, 13 Sep 2019 02:42:27 +1000 Subject: [PATCH 01/18] Rename `Flexvolume` to `FlexVolume` in docs (#16333) --- .../2019-01-17-update-volume-snapshot-alpha.md | 2 +- .../2019-03-22-e2e-testing-for-everyone.md | 2 +- .../docs/concepts/policy/pod-security-policy.md | 8 ++++---- .../docs/concepts/storage/persistent-volumes.md | 2 +- .../en/docs/concepts/storage/storage-classes.md | 2 +- content/en/docs/concepts/storage/volumes.md | 16 ++++++++-------- .../en/docs/contribute/style/write-new-topic.md | 2 +- content/en/docs/reference/glossary/flexvolume.md | 12 ++++++------ content/fr/docs/concepts/storage/volumes.md | 16 ++++++++-------- .../concepts/architecture/cloud-controller.md | 2 +- .../docs/concepts/storage/persistent-volumes.md | 4 ++-- 11 files changed, 34 insertions(+), 34 deletions(-) diff --git a/content/en/blog/_posts/2019-01-17-update-volume-snapshot-alpha.md b/content/en/blog/_posts/2019-01-17-update-volume-snapshot-alpha.md index 6b89b3d4c6428..4ffcea965c6dd 100644 --- a/content/en/blog/_posts/2019-01-17-update-volume-snapshot-alpha.md +++ b/content/en/blog/_posts/2019-01-17-update-volume-snapshot-alpha.md @@ -132,7 +132,7 @@ If a user deletes a `VolumeSnapshot` API object in active use by a PVC, the `Vol ## Which volume plugins support Kubernetes Snapshots? -Snapshots are only supported for CSI drivers (not for in-tree or Flexvolume). To use the Kubernetes snapshots feature, ensure that a CSI Driver that implements snapshots is deployed on your cluster. +Snapshots are only supported for CSI drivers (not for in-tree or FlexVolume). To use the Kubernetes snapshots feature, ensure that a CSI Driver that implements snapshots is deployed on your cluster. As of the publishing of this blog post, the following CSI drivers support snapshots: diff --git a/content/en/blog/_posts/2019-03-22-e2e-testing-for-everyone.md b/content/en/blog/_posts/2019-03-22-e2e-testing-for-everyone.md index b9bd6eb44bd2b..7f3f4c5efcb9c 100644 --- a/content/en/blog/_posts/2019-03-22-e2e-testing-for-everyone.md +++ b/content/en/blog/_posts/2019-03-22-e2e-testing-for-everyone.md @@ -8,7 +8,7 @@ date: 2019-03-22 More and more components that used to be part of Kubernetes are now being developed outside of Kubernetes. 
For example, storage drivers used to be compiled into Kubernetes binaries, then were moved into -[stand-alone Flexvolume +[stand-alone FlexVolume binaries](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-storage/flexvolume.md) on the host, and now are delivered as [Container Storage Interface (CSI) drivers](https://github.com/container-storage-interface/spec) diff --git a/content/en/docs/concepts/policy/pod-security-policy.md b/content/en/docs/concepts/policy/pod-security-policy.md index 8c22d902f2b31..1b2211c6b5495 100644 --- a/content/en/docs/concepts/policy/pod-security-policy.md +++ b/content/en/docs/concepts/policy/pod-security-policy.md @@ -34,7 +34,7 @@ administrator to control the following: | Usage of host networking and ports | [`hostNetwork`, `hostPorts`](#host-namespaces) | | Usage of volume types | [`volumes`](#volumes-and-file-systems) | | Usage of the host filesystem | [`allowedHostPaths`](#volumes-and-file-systems) | -| White list of Flexvolume drivers | [`allowedFlexVolumes`](#flexvolume-drivers) | +| White list of FlexVolume drivers | [`allowedFlexVolumes`](#flexvolume-drivers) | | Allocating an FSGroup that owns the pod's volumes | [`fsGroup`](#volumes-and-file-systems) | | Requiring the use of a read only root file system | [`readOnlyRootFilesystem`](#volumes-and-file-systems) | | The user and group IDs of the container | [`runAsUser`, `runAsGroup`, `supplementalGroups`](#users-and-groups) | @@ -463,12 +463,12 @@ to effectively limit access to the specified `pathPrefix`. **ReadOnlyRootFilesystem** - Requires that containers must run with a read-only root filesystem (i.e. no writable layer). -### Flexvolume drivers +### FlexVolume drivers -This specifies a whitelist of Flexvolume drivers that are allowed to be used +This specifies a whitelist of FlexVolume drivers that are allowed to be used by flexvolume. An empty list or nil means there is no restriction on the drivers. Please make sure [`volumes`](#volumes-and-file-systems) field contains the -`flexVolume` volume type; no Flexvolume driver is allowed otherwise. +`flexVolume` volume type; no FlexVolume driver is allowed otherwise. For example: diff --git a/content/en/docs/concepts/storage/persistent-volumes.md b/content/en/docs/concepts/storage/persistent-volumes.md index 75cd04b303909..65a2720cfbe1e 100644 --- a/content/en/docs/concepts/storage/persistent-volumes.md +++ b/content/en/docs/concepts/storage/persistent-volumes.md @@ -351,7 +351,7 @@ In the CLI, the access modes are abbreviated to: | Cinder | ✓ | - | - | | CSI | depends on the driver | depends on the driver | depends on the driver | | FC | ✓ | ✓ | - | -| Flexvolume | ✓ | ✓ | depends on the driver | +| FlexVolume | ✓ | ✓ | depends on the driver | | Flocker | ✓ | - | - | | GCEPersistentDisk | ✓ | ✓ | - | | Glusterfs | ✓ | ✓ | ✓ | diff --git a/content/en/docs/concepts/storage/storage-classes.md b/content/en/docs/concepts/storage/storage-classes.md index 9e3a254a209d6..56eae20437136 100644 --- a/content/en/docs/concepts/storage/storage-classes.md +++ b/content/en/docs/concepts/storage/storage-classes.md @@ -72,7 +72,7 @@ for provisioning PVs. This field must be specified. 
| CephFS | - | - | | Cinder | ✓ | [OpenStack Cinder](#openstack-cinder)| | FC | - | - | -| Flexvolume | - | - | +| FlexVolume | - | - | | Flocker | ✓ | - | | GCEPersistentDisk | ✓ | [GCE PD](#gce-pd) | | Glusterfs | ✓ | [Glusterfs](#glusterfs) | diff --git a/content/en/docs/concepts/storage/volumes.md b/content/en/docs/concepts/storage/volumes.md index b92d4ff12ebd3..ddece09f0d0ea 100644 --- a/content/en/docs/concepts/storage/volumes.md +++ b/content/en/docs/concepts/storage/volumes.md @@ -1205,16 +1205,16 @@ several media types. ## Out-of-Tree Volume Plugins The Out-of-tree volume plugins include the Container Storage Interface (CSI) -and Flexvolume. They enable storage vendors to create custom storage plugins +and FlexVolume. They enable storage vendors to create custom storage plugins without adding them to the Kubernetes repository. -Before the introduction of CSI and Flexvolume, all volume plugins (like +Before the introduction of CSI and FlexVolume, all volume plugins (like volume types listed above) were "in-tree" meaning they were built, linked, compiled, and shipped with the core Kubernetes binaries and extend the core Kubernetes API. This meant that adding a new storage system to Kubernetes (a volume plugin) required checking code into the core Kubernetes code repository. -Both CSI and Flexvolume allow volume plugins to be developed independent of +Both CSI and FlexVolume allow volume plugins to be developed independent of the Kubernetes code base, and deployed (installed) on Kubernetes clusters as extensions. @@ -1371,14 +1371,14 @@ provisioning/delete, attach/detach, mount/unmount and resizing of volumes. In-tree plugins that support CSI Migration and have a corresponding CSI driver implemented are listed in the "Types of Volumes" section above. -### Flexvolume {#flexVolume} +### FlexVolume {#flexVolume} -Flexvolume is an out-of-tree plugin interface that has existed in Kubernetes +FlexVolume is an out-of-tree plugin interface that has existed in Kubernetes since version 1.2 (before CSI). It uses an exec-based model to interface with -drivers. Flexvolume driver binaries must be installed in a pre-defined volume +drivers. FlexVolume driver binaries must be installed in a pre-defined volume plugin path on each node (and in some cases master). -Pods interact with Flexvolume drivers through the `flexvolume` in-tree plugin. +Pods interact with FlexVolume drivers through the `flexvolume` in-tree plugin. More details can be found [here](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-storage/flexvolume.md). ## Mount propagation @@ -1414,7 +1414,7 @@ Its values are: In addition, all volume mounts created by the Container will be propagated back to the host and to all Containers of all Pods that use the same volume. - A typical use case for this mode is a Pod with a Flexvolume or CSI driver or + A typical use case for this mode is a Pod with a FlexVolume or CSI driver or a Pod that needs to mount something on the host using a `hostPath` volume. This mode is equal to `rshared` mount propagation as described in the diff --git a/content/en/docs/contribute/style/write-new-topic.md b/content/en/docs/contribute/style/write-new-topic.md index 8a5d7949e4f9a..22e55e3da8157 100644 --- a/content/en/docs/contribute/style/write-new-topic.md +++ b/content/en/docs/contribute/style/write-new-topic.md @@ -94,7 +94,7 @@ following cases (not an exhaustive list): - The code is not generic enough for users to try out. 
As an example, you can embed the YAML file for creating a Pod which depends on a specific - [Flexvolume](/docs/concepts/storage/volumes#flexvolume) implementation. + [FlexVolume](/docs/concepts/storage/volumes#flexvolume) implementation. - The code is an incomplete example because its purpose is to highlight a portion of a larger file. For example, when describing ways to customize the [PodSecurityPolicy](/docs/tasks/administer-cluster/sysctl-cluster/#podsecuritypolicy) diff --git a/content/en/docs/reference/glossary/flexvolume.md b/content/en/docs/reference/glossary/flexvolume.md index 2b7abc0e866a3..09f352dbc3ad1 100644 --- a/content/en/docs/reference/glossary/flexvolume.md +++ b/content/en/docs/reference/glossary/flexvolume.md @@ -1,22 +1,22 @@ --- -title: Flexvolume +title: FlexVolume id: flexvolume date: 2018-06-25 full_link: /docs/concepts/storage/volumes/#flexvolume short_description: > - Flexvolume is an interface for creating out-of-tree volume plugins. The {{< glossary_tooltip text="Container Storage Interface" term_id="csi" >}} is a newer interface which addresses several problems with Flexvolumes. + FlexVolume is an interface for creating out-of-tree volume plugins. The {{< glossary_tooltip text="Container Storage Interface" term_id="csi" >}} is a newer interface which addresses several problems with FlexVolumes. aka: tags: - storage --- - Flexvolume is an interface for creating out-of-tree volume plugins. The {{< glossary_tooltip text="Container Storage Interface" term_id="csi" >}} is a newer interface which addresses several problems with Flexvolumes. + FlexVolume is an interface for creating out-of-tree volume plugins. The {{< glossary_tooltip text="Container Storage Interface" term_id="csi" >}} is a newer interface which addresses several problems with FlexVolumes. -Flexvolumes enable users to write their own drivers and add support for their volumes in Kubernetes. FlexVolume driver binaries and dependencies must be installed on host machines. This requires root access. The Storage SIG suggests implementing a {{< glossary_tooltip text="CSI" term_id="csi" >}} driver if possible since it addresses the limitations with Flexvolumes. +FlexVolumes enable users to write their own drivers and add support for their volumes in Kubernetes. FlexVolume driver binaries and dependencies must be installed on host machines. This requires root access. The Storage SIG suggests implementing a {{< glossary_tooltip text="CSI" term_id="csi" >}} driver if possible since it addresses the limitations with FlexVolumes. -* [Flexvolume in the Kubernetes documentation](/docs/concepts/storage/volumes/#flexvolume) -* [More information on Flexvolumes](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md) +* [FlexVolume in the Kubernetes documentation](/docs/concepts/storage/volumes/#flexvolume) +* [More information on FlexVolumes](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md) * [Volume Plugin FAQ for Storage Vendors](https://github.com/kubernetes/community/blob/master/sig-storage/volume-plugin-faq.md) diff --git a/content/fr/docs/concepts/storage/volumes.md b/content/fr/docs/concepts/storage/volumes.md index 3a5e7480906f5..062504b97a6b4 100644 --- a/content/fr/docs/concepts/storage/volumes.md +++ b/content/fr/docs/concepts/storage/volumes.md @@ -1054,13 +1054,13 @@ et pas d'isolation entre les conteneurs ou entre les Pods. 
Dans le futur, il est prévu que les volumes `emptyDir` et `hostPath` soient en mesure de demander une certaine quantité d'espace en utilisant une spécification de [ressource](/docs/user-guide/compute-resources) et de sélectionner un type de support à utiliser, pour les clusters qui ont plusieurs types de support. ## Plugins de volume Out-of-Tree -Les plugins de volume Out-of-tree incluent l'interface CSI (Container Storage Interface) et Flexvolume. +Les plugins de volume Out-of-tree incluent l'interface CSI (Container Storage Interface) et FlexVolume. Ils permettent aux fournisseurs de stockage de créer des plugins de stockage personnalisés sans les ajouter au dépôt Kubernetes. -Avant l'introduction de l'interface CSI et Flexvolume, tous les plugins de volume (tels que les types de volume listés plus haut) étaient "in-tree", ce qui signifie qu'ils étaient construits, liés, compilés et livrés avec les binaires de base Kubernetes et étendent l'API Kubernetes de base. +Avant l'introduction de l'interface CSI et FlexVolume, tous les plugins de volume (tels que les types de volume listés plus haut) étaient "in-tree", ce qui signifie qu'ils étaient construits, liés, compilés et livrés avec les binaires de base Kubernetes et étendent l'API Kubernetes de base. Cela signifiait que l'ajout d'un nouveau système de stockage à Kubernetes (un plugin de volume) requérait de vérifier le code dans le dépôt de base de Kubernetes. -CSI et Flexvolume permettent à des plugins de volume d'être développés indépendamment de la base de code Kubernetes et déployés (installés) sur des clusters Kubernetes en tant qu'extensions. +CSI et FlexVolume permettent à des plugins de volume d'être développés indépendamment de la base de code Kubernetes et déployés (installés) sur des clusters Kubernetes en tant qu'extensions. Pour les fournisseurs de stockage qui cherchent à créer un plugin de volume "out-of-tree", se référer à [cette FAQ](https://github.com/kubernetes/community/blob/master/sig-storage/volume-plugin-faq.md). @@ -1185,12 +1185,12 @@ Dans l'état alpha, les opérations et fonctionnalités qui sont supportées inc Les plugins "in-tree" qui supportent la migration CSI et qui ont un pilote CSI correspondant implémenté sont listés dans la section "Types de volumes" au-dessus. -### Flexvolume {#flexVolume} +### FlexVolume {#flexVolume} -Flexvolume est une interface de plugin "out-of-tree" qui existe dans Kubernetes depuis la version 1.2 (avant CSI). -Elle utilise un modèle basé sur exec pour s'interfacer avec les pilotes. Les binaires de pilote Flexvolume doivent être installés dans un chemin de volume de plugin prédéfini sur chaque nœud (et dans certains cas le nœud maître). +FlexVolume est une interface de plugin "out-of-tree" qui existe dans Kubernetes depuis la version 1.2 (avant CSI). +Elle utilise un modèle basé sur exec pour s'interfacer avec les pilotes. Les binaires de pilote FlexVolume doivent être installés dans un chemin de volume de plugin prédéfini sur chaque nœud (et dans certains cas le nœud maître). -Les Pods interagissent avec les pilotes Flexvolume à travers le plugin "in-tree" `flexvolume` +Les Pods interagissent avec les pilotes FlexVolume à travers le plugin "in-tree" `flexvolume` Plus de détails sont disponibles [ici](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-storage/flexvolume.md). ## Propagation de montage @@ -1217,7 +1217,7 @@ Ses valeurs sont : * `Bidirectional` - Ce montage de volume se comporte de la même manière que le montage `HostToContainer`. 
De plus, tous les montages de volume créés par le conteneur seront propagés à l'hôte et à tous les conteneurs des autres Pods qui utilisent le même volume. - Un cas d'utilisation typique pour ce mode est un Pod avec un Flexvolume ou un pilote CSI, ou un Pod qui nécessite de monter quelque chose sur l'hôte en utilisant un volume `hostPath`. + Un cas d'utilisation typique pour ce mode est un Pod avec un FlexVolume ou un pilote CSI, ou un Pod qui nécessite de monter quelque chose sur l'hôte en utilisant un volume `hostPath`. Ce mode est équivalent à une propagation de montage `rshared` tel que décrit dans la [documentation du noyau Linux](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt) diff --git a/content/id/docs/concepts/architecture/cloud-controller.md b/content/id/docs/concepts/architecture/cloud-controller.md index 16052ba1f74d4..7c6d0feb4d154 100644 --- a/content/id/docs/concepts/architecture/cloud-controller.md +++ b/content/id/docs/concepts/architecture/cloud-controller.md @@ -51,7 +51,7 @@ Pada versi 1.9, CCM menjalankan pengendali-pengendali dari daftar sebelumnya seb Volume Controller secara sengaja tidak dipilih sebagai bagian dari CCM. Hal ini adalah karena kerumitan untuk melakukannya, dan mempertimbangkan usaha-usaha yang sedang berlangsung untuk memisahkan logika volume yang spesifik vendor dari KCM, sehingga diputuskan bahwa Volume Contoller tidak akan dipisahkan dari KCM ke CCM. {{< /note >}} -Rencana awal untuk mendukung volume menggunakan CCM adalah dengan menggunakan Flexvolume untuk mendukung penambahan volume secara _pluggable_. Namun, ada sebuah usaha lain yang diberi nama Container Storage Interface (CSI) yang sedang berlangsung untuk menggantikan Flexvolume. +Rencana awal untuk mendukung volume menggunakan CCM adalah dengan menggunakan FlexVolume untuk mendukung penambahan volume secara _pluggable_. Namun, ada sebuah usaha lain yang diberi nama Container Storage Interface (CSI) yang sedang berlangsung untuk menggantikan FlexVolume. Mempertimbangkan dinamika tersebut, kami memutuskan untuk mengambil tindakan sementara hingga CSI siap digunakan. diff --git a/content/id/docs/concepts/storage/persistent-volumes.md b/content/id/docs/concepts/storage/persistent-volumes.md index e3d04b089c642..a650aa6496d5a 100644 --- a/content/id/docs/concepts/storage/persistent-volumes.md +++ b/content/id/docs/concepts/storage/persistent-volumes.md @@ -256,7 +256,7 @@ Tipe-tipe `PersistentVolume` (PV) diimplementasikan sebagai _plugin_. 
Kubernete * AzureFile * AzureDisk * FC (Fibre Channel) -* Flexvolume +* FlexVolume * Flocker * NFS * iSCSI @@ -338,7 +338,7 @@ Pada CLI, mode-mode akses tersebut disingkat menjadi: | CephFS | ✓ | ✓ | ✓ | | Cinder | ✓ | - | - | | FC | ✓ | ✓ | - | -| Flexvolume | ✓ | ✓ | depends on the driver | +| FlexVolume | ✓ | ✓ | depends on the driver | | Flocker | ✓ | - | - | | GCEPersistentDisk | ✓ | ✓ | - | | Glusterfs | ✓ | ✓ | ✓ | From 9e9eb252acdee0ed49e1f2ed664fae6182f425f3 Mon Sep 17 00:00:00 2001 From: Omer Levi Hevroni Date: Thu, 12 Sep 2019 19:46:27 +0300 Subject: [PATCH 02/18] Fixed CRD schema - missing root type (#15725) From 90899f2bca8d1fe1c3a1ffa0d935f72e5a0cc86a Mon Sep 17 00:00:00 2001 From: Gyuho Lee Date: Thu, 12 Sep 2019 09:58:29 -0700 Subject: [PATCH 03/18] content/en/docs: highlight known etcd client issue (#16156) Signed-off-by: Gyuho Lee --- .../administer-cluster/configure-upgrade-etcd.md | 16 ++++++++++++++-- 1 file changed, 14 insertions(+), 2 deletions(-) diff --git a/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md b/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md index 68fc7c2fc372f..d576493dd30ad 100644 --- a/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md +++ b/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md @@ -203,7 +203,7 @@ If the majority of etcd members have permanently failed, the etcd cluster is con As of Kubernetes v1.13.0, etcd2 is no longer supported as a storage backend for new or existing Kubernetes clusters. The timeline for Kubernetes support for -etcd2 and etcd3 is as follows: +etcd2 and etcd3 is as follows: - Kubernetes v1.0: etcd2 only - Kubernetes v1.5.1: etcd3 support added, new clusters still default to etcd2 @@ -219,11 +219,23 @@ v1.13.x, etcd v2 data MUST by migrated to the v3 storage backend, and kube-apiserver invocations changed to use `--storage-backend=etcd3`. The process for migrating from etcd2 to etcd3 is highly dependent on how the -etcd cluster was deployed and configured, as well as how the Kubernetes +etcd cluster was deployed and configured, as well as how the Kubernetes cluster was deployed and configured. We recommend that you consult your cluster provider's documentation to see if there is a predefined solution. If your cluster was created via `kube-up.sh` and is still using etcd2 as its storage backend, please consult the [Kubernetes v1.12 etcd cluster upgrade docs](https://v1-12.docs.kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#upgrading-and-rolling-back-etcd-clusters) +## Known issue: etcd client balancer with secure endpoints + +The etcd v3 client, released in etcd v3.3.13 or earlier, has a [critical bug](https://github.com/kubernetes/kubernetes/issues/72102) which affects the kube-apiserver and HA deployments. The etcd client balancer failover does not properly work against secure endpoints. As a result, etcd servers may fail or disconnect briefly from the kube-apiserver. This affects kube-apiserver HA deployments. + +The fix was made in [etcd v3.4](https://github.com/etcd-io/etcd/pull/10911) (and backported to v3.3.14 or later): the new client now creates its own credential bundle to correctly set authority target in dial function. + +Because the fix requires gRPC dependency upgrade (to v1.23.0), downstream Kubernetes [did not backport etcd upgrades](https://github.com/kubernetes/kubernetes/issues/72102#issuecomment-526645978). 
Which means the [etcd fix in kube-apiserver](https://github.com/etcd-io/etcd/pull/10911/commits/db61ee106ca9363ba3f188ecf27d1a8843da33ab) is only available from Kubernetes 1.16. + +To urgently fix this bug for Kubernetes 1.15 or earlier, build a custom kube-apiserver. You can make local changes to [`vendor/google.golang.org/grpc/credentials/credentials.go`](https://github.com/kubernetes/kubernetes/blob/7b85be021cd2943167cd3d6b7020f44735d9d90b/vendor/google.golang.org/grpc/credentials/credentials.go#L135) with [etcd@db61ee106](https://github.com/etcd-io/etcd/pull/10911/commits/db61ee106ca9363ba3f188ecf27d1a8843da33ab). + +See ["kube-apiserver 1.13.x refuses to work when first etcd-server is not available"](https://github.com/kubernetes/kubernetes/issues/72102). + {{% /capture %}} From f7a969fd7b49b9928bef7cb74558b7f546f95c91 Mon Sep 17 00:00:00 2001 From: Maru Newby Date: Thu, 12 Sep 2019 19:00:30 +0200 Subject: [PATCH 04/18] Fix title of audit task (#16230) * Fix title of audit task The content the title refers to indicates an intent of describing how to configure auditing for multiple apiservers rather than multiple clusters. * Update content/en/docs/tasks/debug-application-cluster/audit.md Co-Authored-By: Tim Bannister --- content/en/docs/tasks/debug-application-cluster/audit.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/tasks/debug-application-cluster/audit.md b/content/en/docs/tasks/debug-application-cluster/audit.md index 51471175ad207..e8bb3dbda9fc8 100644 --- a/content/en/docs/tasks/debug-application-cluster/audit.md +++ b/content/en/docs/tasks/debug-application-cluster/audit.md @@ -323,7 +323,7 @@ Administrators should be aware that allowing write access to this feature grants Currently, this feature has performance implications for the apiserver in the form of increased cpu and memory usage. This should be nominal for a small number of sinks, and performance impact testing will be done to understand its scope before the API progresses to beta. -## Multi-cluster setup +## Setup for multiple API servers If you're extending the Kubernetes API with the [aggregation layer][kube-aggregator], you can also set up audit logging for the aggregated apiserver. To do this, pass the configuration options in the From 2ded739ec17e0c2ddc00ec749b5589e75b00bd58 Mon Sep 17 00:00:00 2001 From: Adam Wolfe Gordon Date: Thu, 12 Sep 2019 11:02:28 -0600 Subject: [PATCH 05/18] Update DaemonSet deletion documentation (#16235) * Update DaemonSet deletion documentation The "Updating a DaemonSet" section referred to pre-1.6 behavior, where rolling updates of DaemonSets were not supported and thus orphaned pods from deleted DaemonSets would not be replaced by a new DaemonSet. Describe the new behavior, where orphaned pods can be adopted by a new DaemonSet and may be replaced depending on the update strategy in use. * Tweak language around Pod replacement after DaemonSet deletion * Update note about DaemonSet rolling updates No need to call out the version in which rolling updates for DaemonSets were introduced given how long they've been supported. 
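A minimal sketch of the flow described above, assuming a DaemonSet named `fluentd-elasticsearch` in the `kube-system` namespace and a manifest file `daemonset.yaml` (both names are illustrative, not taken from the patch):

```shell
# Delete the DaemonSet but leave its Pods running on the nodes.
kubectl delete daemonset fluentd-elasticsearch -n kube-system --cascade=false

# Re-create a DaemonSet with the same selector; it adopts the orphaned Pods
# and replaces them only as its updateStrategy dictates.
kubectl apply -f daemonset.yaml
```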
--- .../docs/concepts/workloads/controllers/daemonset.md | 10 ++++------ 1 file changed, 4 insertions(+), 6 deletions(-) diff --git a/content/en/docs/concepts/workloads/controllers/daemonset.md b/content/en/docs/concepts/workloads/controllers/daemonset.md index 37b9fac0f3157..62687e8cdae4c 100644 --- a/content/en/docs/concepts/workloads/controllers/daemonset.md +++ b/content/en/docs/concepts/workloads/controllers/daemonset.md @@ -194,14 +194,12 @@ You can modify the Pods that a DaemonSet creates. However, Pods do not allow al fields to be updated. Also, the DaemonSet controller will use the original template the next time a node (even with the same name) is created. - You can delete a DaemonSet. If you specify `--cascade=false` with `kubectl`, then the Pods -will be left on the nodes. You can then create a new DaemonSet with a different template. -The new DaemonSet with the different template will recognize all the existing Pods as having -matching labels. It will not modify or delete them despite a mismatch in the Pod template. -You will need to force new Pod creation by deleting the Pod or deleting the node. +will be left on the nodes. If you subsequently create a new DaemonSet with the same selector, +the new DaemonSet adopts the existing Pods. If any Pods need replacing the DaemonSet replaces +them according to its `updateStrategy`. -In Kubernetes version 1.6 and later, you can [perform a rolling update](/docs/tasks/manage-daemon/update-daemon-set/) on a DaemonSet. +You can [perform a rolling update](/docs/tasks/manage-daemon/update-daemon-set/) on a DaemonSet. ## Alternatives to DaemonSet From 9baad86759c601b8a5cfbf9033a7f37a3bf16034 Mon Sep 17 00:00:00 2001 From: houjun Date: Fri, 13 Sep 2019 01:04:28 +0800 Subject: [PATCH 06/18] Add StorageObjectInUseProtection to the default enabled admission plugins (#16261) --- .../reference/access-authn-authz/admission-controllers.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/en/docs/reference/access-authn-authz/admission-controllers.md b/content/en/docs/reference/access-authn-authz/admission-controllers.md index 41e8a0acbcf05..1124ce351a384 100644 --- a/content/en/docs/reference/access-authn-authz/admission-controllers.md +++ b/content/en/docs/reference/access-authn-authz/admission-controllers.md @@ -89,10 +89,10 @@ To see which admission plugins are enabled: kube-apiserver -h | grep enable-admission-plugins ``` -In 1.14, they are: +In 1.15, they are: ```shell -NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, PersistentVolumeClaimResize, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota +NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, PersistentVolumeClaimResize, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota, StorageObjectInUseProtection ``` ## What does each admission controller do? 
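The list above is only the compiled-in default; the kube-apiserver flags documented on the same page let administrators adjust it explicitly. A minimal sketch of such an invocation (the plugin choices here are illustrative, not a recommendation):

```shell
# Excerpt of a kube-apiserver command line; only the admission-plugin flags are shown.
kube-apiserver \
  --enable-admission-plugins=NodeRestriction,PodSecurityPolicy \
  --disable-admission-plugins=StorageObjectInUseProtection
```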
From dc87550f49bb8bffe187844e5c925723a519be8d Mon Sep 17 00:00:00 2001 From: Josh Gavant <4421720+joshgav@users.noreply.github.com> Date: Thu, 12 Sep 2019 12:06:29 -0500 Subject: [PATCH 07/18] correct link to storage provisioner lib repo (#16269) --- content/en/docs/concepts/storage/storage-classes.md | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/content/en/docs/concepts/storage/storage-classes.md b/content/en/docs/concepts/storage/storage-classes.md index 56eae20437136..dc75e2433d826 100644 --- a/content/en/docs/concepts/storage/storage-classes.md +++ b/content/en/docs/concepts/storage/storage-classes.md @@ -92,14 +92,15 @@ alongside Kubernetes). You can also run and specify external provisioners, which are independent programs that follow a [specification](https://git.k8s.io/community/contributors/design-proposals/storage/volume-provisioning.md) defined by Kubernetes. Authors of external provisioners have full discretion over where their code lives, how the provisioner is shipped, how it needs to be -run, what volume plugin it uses (including Flex), etc. The repository [kubernetes-incubator/external-storage](https://github.com/kubernetes-incubator/external-storage) +run, what volume plugin it uses (including Flex), etc. The repository +[kubernetes-sigs/sig-storage-lib-external-provisioner](https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner) houses a library for writing external provisioners that implements the bulk of -the specification plus various community-maintained external provisioners. +the specification. Some external provisioners are listed under the repository +[kubernetes-incubator/external-storage](https://github.com/kubernetes-incubator/external-storage). -For example, NFS doesn't provide an internal provisioner, but an external provisioner -can be used. Some external provisioners are listed under the repository [kubernetes-incubator/external-storage](https://github.com/kubernetes-incubator/external-storage). -There are also cases when 3rd party storage vendors provide their own external -provisioner. +For example, NFS doesn't provide an internal provisioner, but an external +provisioner can be used. There are also cases when 3rd party storage +vendors provide their own external provisioner. ### Reclaim Policy From d8adaa4d35b11d027161cf262b91499be9a2a15c Mon Sep 17 00:00:00 2001 From: houjun Date: Fri, 13 Sep 2019 01:08:28 +0800 Subject: [PATCH 08/18] Fix error links (#16278) --- .../docs/concepts/configuration/scheduling-framework.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/content/en/docs/concepts/configuration/scheduling-framework.md b/content/en/docs/concepts/configuration/scheduling-framework.md index ca179c6cdf8c1..7e1fd970e3ad0 100644 --- a/content/en/docs/concepts/configuration/scheduling-framework.md +++ b/content/en/docs/concepts/configuration/scheduling-framework.md @@ -139,7 +139,7 @@ happens before the scheduler actually binds the Pod to the Node, and it exists to prevent race conditions while the scheduler waits for the bind to succeed. This is the last step in a scheduling cycle. Once a Pod is in the reserved -state, it will either trigger [Un-reserve](#un-reserve) plugins (on failure) or +state, it will either trigger [Unreserve](#unreserve) plugins (on failure) or [Post-bind](#post-bind) plugins (on success) at the end of the binding cycle. *Note: This concept used to be referred to as "assume".* @@ -154,13 +154,13 @@ can do one of three things. 1. 
**deny** \ If any permit plugin denies a Pod, it is returned to the scheduling queue. - This will trigger [Un-reserve](#un-reserve) plugins. + This will trigger [Unreserve](#unreserve) plugins. 1. **wait** (with a timeout) \ If a permit plugin returns "wait", then the Pod is kept in the permit phase until a [plugin approves it](#frameworkhandle). If a timeout occurs, **wait** becomes **deny** and the Pod is returned to the scheduling queue, triggering - [un-reserve](#un-reserve) plugins. + [Unreserve](#unreserve) plugins. **Approving a Pod binding** @@ -175,7 +175,7 @@ These plugins are used to perform any work required before a Pod is bound. For example, a pre-bind plugin may provision a network volume and mount it on the target node before allowing the Pod to run there. -If any pre-bind plugin returns an error, the Pod is [rejected](#un-reserve) and +If any pre-bind plugin returns an error, the Pod is [rejected](#unreserve) and returned to the scheduling queue. ### Bind From 2df5c5c68bb22af361ed57dcbc2b8fab9def9eff Mon Sep 17 00:00:00 2001 From: widearea101 <50230602+widearea101@users.noreply.github.com> Date: Thu, 12 Sep 2019 19:10:28 +0200 Subject: [PATCH 09/18] Update deployment.md (#16280) --- content/en/docs/concepts/workloads/controllers/deployment.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/concepts/workloads/controllers/deployment.md b/content/en/docs/concepts/workloads/controllers/deployment.md index 587e9658e0284..dec61e6af3119 100644 --- a/content/en/docs/concepts/workloads/controllers/deployment.md +++ b/content/en/docs/concepts/workloads/controllers/deployment.md @@ -229,7 +229,7 @@ up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas. Next time you want to update these Pods, you only need to update the Deployment's Pod template again. Deployment ensures that only a certain number of Pods are down while they are being updated. By default, - it ensures that at least 25% of the desired number of Pods are up (25% max unavailable). + it ensures that at least 75% of the desired number of Pods are up (25% max unavailable). Deployment also ensures that only a certain number of Pods are created above the desired number of Pods. By default, it ensures that at most 25% of the desired number of Pods are up (25% max surge). From e54743410d3967e00190e4111ae8a6c0d2cab7ba Mon Sep 17 00:00:00 2001 From: mohamed chiheb ben jemaa Date: Thu, 12 Sep 2019 18:14:29 +0100 Subject: [PATCH 10/18] ssue with k8s.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/ (#16285) * change livenessprobe restart concept to container * Update content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-probes.md Co-Authored-By: Tim Bannister --- .../configure-liveness-readiness-probes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-probes.md b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-probes.md index fc9f9c3473acc..7c0a5fecfa423 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-probes.md +++ b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-probes.md @@ -286,7 +286,7 @@ to 1 second. Minimum value is 1. considered successful after having failed. Defaults to 1. Must be 1 for liveness. Minimum value is 1. 
* `failureThreshold`: When a Pod starts and the probe fails, Kubernetes will -try `failureThreshold` times before giving up. Giving up in case of liveness probe means restarting the Pod. In case of readiness probe the Pod will be marked Unready. +try `failureThreshold` times before giving up. Giving up in case of liveness probe means restarting the container. In case of readiness probe the Pod will be marked Unready. Defaults to 3. Minimum value is 1. [HTTP probes](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#httpgetaction-v1-core) From de90bd100c335178fd18dc4174640c1a2375b718 Mon Sep 17 00:00:00 2001 From: k1eran Date: Thu, 12 Sep 2019 18:16:28 +0100 Subject: [PATCH 11/18] Update kubeadm-init-phase.md (#16335) typo --- .../en/docs/reference/setup-tools/kubeadm/kubeadm-init-phase.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init-phase.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init-phase.md index 7cbbfcbddb1c9..3e3f76bb3acd4 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init-phase.md +++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init-phase.md @@ -37,7 +37,7 @@ Can be used to create all required certificates by kubeadm. ## kubeadm init phase kubeconfig {#cmd-phase-kubeconfig} -You can create all required kubeconfig files by calling the `all` subcommand or call then individually. +You can create all required kubeconfig files by calling the `all` subcommand or call them individually. {{< tabs name="tab-kubeconfig" >}} {{< tab name="kubeconfig" include="generated/kubeadm_init_phase_kubeconfig.md" />}} From a7d6c57fee2cc77cb68dcbd378cc4cac675f9091 Mon Sep 17 00:00:00 2001 From: Jingyi Hu Date: Fri, 13 Sep 2019 12:58:28 -0700 Subject: [PATCH 12/18] Update announcing-etcd-3.4.md (#16205) Remove mentioning of using concurrent read in compaction. The original PR https://github.com/etcd-io/etcd/pull/11021 was superseded by https://github.com/etcd-io/etcd/pull/11034, which no long uses concurrent read in compaction. --- content/en/blog/_posts/2019-08-30-announcing-etcd-3.4.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blog/_posts/2019-08-30-announcing-etcd-3.4.md b/content/en/blog/_posts/2019-08-30-announcing-etcd-3.4.md index 07004b78d1124..1358bdd6d0272 100644 --- a/content/en/blog/_posts/2019-08-30-announcing-etcd-3.4.md +++ b/content/en/blog/_posts/2019-08-30-announcing-etcd-3.4.md @@ -18,7 +18,7 @@ etcd v3.4 includes a number of performance improvements for large scale Kubernet In particular, etcd experienced performance issues with a large number of concurrent read transactions even when there is no write (e.g. `“read-only range request ... took too long to execute”`). Previously, the storage backend commit operation on pending writes blocks incoming read transactions, even when there was no pending write. Now, the commit [does not block reads](https://github.com/etcd-io/etcd/pull/9296) which improve long-running read transaction performance. -We further made [backend read transactions fully concurrent](https://github.com/etcd-io/etcd/pull/10523). Previously, ongoing long-running read transactions block writes and upcoming reads. With this change, write throughput is increased by 70% and P99 write latency is reduced by 90% in the presence of long-running reads. 
We also ran [Kubernetes 5000-node scalability test on GCE](https://prow.k8s.io/view/gcs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-scale-performance/1130745634945503235) with this change and observed similar improvements. For example, in the very beginning of the test where there are a lot of long-running “LIST pods”, the P99 latency of “POST clusterrolebindings” is [reduced by 97.4%](https://github.com/etcd-io/etcd/pull/10523#issuecomment-499262001). This non-blocking read transaction is now [used for compaction](https://github.com/etcd-io/etcd/pull/11034), which, combined with the reduced compaction batch size, reduces the P99 server request latency during compaction. +We further made [backend read transactions fully concurrent](https://github.com/etcd-io/etcd/pull/10523). Previously, ongoing long-running read transactions block writes and upcoming reads. With this change, write throughput is increased by 70% and P99 write latency is reduced by 90% in the presence of long-running reads. We also ran [Kubernetes 5000-node scalability test on GCE](https://prow.k8s.io/view/gcs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-scale-performance/1130745634945503235) with this change and observed similar improvements. For example, in the very beginning of the test where there are a lot of long-running “LIST pods”, the P99 latency of “POST clusterrolebindings” is [reduced by 97.4%](https://github.com/etcd-io/etcd/pull/10523#issuecomment-499262001). More improvements have been made to lease storage. We enhanced [lease expire/revoke performance](https://github.com/etcd-io/etcd/pull/9418) by storing lease objects more efficiently, and made [lease look-up operation non-blocking](https://github.com/etcd-io/etcd/pull/9229) with current lease grant/revoke operation. And etcd v3.4 introduces [lease checkpoint](https://github.com/etcd-io/etcd/pull/9924) as an experimental feature to persist remaining time-to-live values through consensus. This ensures short-lived lease objects are not auto-renewed after leadership election. This also prevents lease object pile-up when the time-to-live value is relatively large (e.g. [1-hour TTL never expired in Kubernetes use case](https://github.com/kubernetes/kubernetes/issues/65497)). From 17e360c389b6d6795ff4e78172a6726bc9b377fb Mon Sep 17 00:00:00 2001 From: icheikhrouhou <38262569+icheikhrouhou@users.noreply.github.com> Date: Mon, 16 Sep 2019 09:58:37 +0200 Subject: [PATCH 13/18] translate tasks pod storage (#16358) --- .../configure-volume-storage.md | 135 ++++++++++++++++++ content/fr/examples/pods/storage/redis.yaml | 14 ++ 2 files changed, 149 insertions(+) create mode 100644 content/fr/docs/tasks/configure-pod-container/configure-volume-storage.md create mode 100644 content/fr/examples/pods/storage/redis.yaml diff --git a/content/fr/docs/tasks/configure-pod-container/configure-volume-storage.md b/content/fr/docs/tasks/configure-pod-container/configure-volume-storage.md new file mode 100644 index 0000000000000..45cec7cba0b66 --- /dev/null +++ b/content/fr/docs/tasks/configure-pod-container/configure-volume-storage.md @@ -0,0 +1,135 @@ +--- +title: Configurer un pod en utilisant un volume pour le stockage +content_template: templates/task +weight: 50 +--- + +{{% capture overview %}} + +Cette page montre comment configurer un Pod pour utiliser un Volume pour le stockage. + +Le système de fichiers d'un conteneur ne vit que tant que le conteneur vit. Ainsi, quand un conteneur se termine et redémarre, les modifications apportées au système de fichiers sont perdues. 
Pour un stockage plus consistant et indépendant du conteneur, vous pouvez utiliser un +[Volume](/fr/docs/concepts/storage/volumes/). +C'est particulièrement important pour les applications Stateful, telles que les key-value stores (comme par exemple Redis) et les bases de données. + +{{% /capture %}} + +{{% capture prerequisites %}} + +{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} + +{{% /capture %}} + +{{% capture steps %}} + +## Configurer un volume pour un Pod + +Dans cet exercice, vous créez un pod qui contient un seul conteneur. Ce Pod a un Volume de type +[emptyDir](/fr/docs/concepts/storage/volumes/#emptydir) qui dure toute la vie du Pod, même si le conteneur se termine et redémarre. +Voici le fichier de configuration du Pod : + +{{< codenew file="pods/storage/redis.yaml" >}} + +1. Créez le Pod : + + ```shell + kubectl apply -f https://k8s.io/examples/pods/storage/redis.yaml + ``` + +1. Vérifiez que le conteneur du pod est en cours d'exécution, puis surveillez les modifications apportées au pod : + + ```shell + kubectl get pod redis --watch + ``` + + La sortie ressemble à ceci : + + ```shell + NAME READY STATUS RESTARTS AGE + redis 1/1 Running 0 13s + ``` + +1. Dans un autre terminal, accédez à la console shell du conteneur en cours d'exécution : + + ```shell + kubectl exec -it redis -- /bin/bash + ``` + +1. Dans votre shell, allez dans `/data/redis`, puis créez un fichier : + + ```shell + root@redis:/data# cd /data/redis/ + root@redis:/data/redis# echo Hello > test-file + ``` + +1. Dans votre shell, listez les processus en cours d'exécution : + + ```shell + root@redis:/data/redis# apt-get update + root@redis:/data/redis# apt-get install procps + root@redis:/data/redis# ps aux + ``` + + La sortie ressemble à ceci : + + ```shell + USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND + redis 1 0.1 0.1 33308 3828 ? Ssl 00:46 0:00 redis-server *:6379 + root 12 0.0 0.0 20228 3020 ? Ss 00:47 0:00 /bin/bash + root 15 0.0 0.0 17500 2072 ? R+ 00:48 0:00 ps aux + ``` + +1. Dans votre shell, arrêtez le processus Redis : + + ```shell + root@redis:/data/redis# kill + ``` + + où `` est l'ID de processus Redis (PID). + +1. Dans votre terminal initial, surveillez les changements apportés au Pod de Redis. Éventuellement, +vous verrez quelque chose comme ça : + + ```shell + NAME READY STATUS RESTARTS AGE + redis 1/1 Running 0 13s + redis 0/1 Completed 0 6m + redis 1/1 Running 1 6m + ``` + +A ce stade, le conteneur est terminé et redémarré. C'est dû au fait que le Pod de Redis a une +[restartPolicy](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core) +fixé à `Always`. + +1. Accédez à la console shell du conteneur redémarré : + + ```shell + kubectl exec -it redis -- /bin/bash + ``` + +1. Dans votre shell, allez dans `/data/redis`, et vérifiez que `test-file` est toujours là. + ```shell + root@redis:/data/redis# cd /data/redis/ + root@redis:/data/redis# ls + test-file + ``` + +1. Supprimez le pod que vous avez créé pour cet exercice : + + ```shell + kubectl delete pod redis + ``` + +{{% /capture %}} + +{{% capture whatsnext %}} + +* Voir [Volume](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#volume-v1-core). + +* Voir [Pod](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core). 
+ +* En plus du stockage sur disque local fourni par `emptyDir`, Kubernetes supporte de nombreuses solutions de stockage connectées au réseau, y compris PD sur GCE et EBS sur EC2, qui sont préférés pour les données critiques et qui s'occuperont des autres détails tels que le montage et le démontage sur les nœuds. Voir [Volumes](/fr/docs/concepts/storage/volumes/) pour plus de détails. + +{{% /capture %}} + + diff --git a/content/fr/examples/pods/storage/redis.yaml b/content/fr/examples/pods/storage/redis.yaml new file mode 100644 index 0000000000000..cb06456d4b315 --- /dev/null +++ b/content/fr/examples/pods/storage/redis.yaml @@ -0,0 +1,14 @@ +apiVersion: v1 +kind: Pod +metadata: + name: redis +spec: + containers: + - name: redis + image: redis + volumeMounts: + - name: redis-storage + mountPath: /data/redis + volumes: + - name: redis-storage + emptyDir: {} From 4395684e07b8b0bed5fc716d18c5f2e3ceeddc33 Mon Sep 17 00:00:00 2001 From: icheikhrouhou <38262569+icheikhrouhou@users.noreply.github.com> Date: Mon, 16 Sep 2019 10:46:37 +0200 Subject: [PATCH 14/18] docs | tasks | configure-pod-container | assign cpu (#15065) * tasks cpu resource * paraphrasing --- .../assign-cpu-resource.md | 255 ++++++++++++++++++ .../pods/resource/cpu-request-limit-2.yaml | 17 ++ .../pods/resource/cpu-request-limit.yaml | 17 ++ 3 files changed, 289 insertions(+) create mode 100644 content/fr/docs/tasks/configure-pod-container/assign-cpu-resource.md create mode 100644 content/fr/examples/pods/resource/cpu-request-limit-2.yaml create mode 100644 content/fr/examples/pods/resource/cpu-request-limit.yaml diff --git a/content/fr/docs/tasks/configure-pod-container/assign-cpu-resource.md b/content/fr/docs/tasks/configure-pod-container/assign-cpu-resource.md new file mode 100644 index 0000000000000..a925a4c0bce75 --- /dev/null +++ b/content/fr/docs/tasks/configure-pod-container/assign-cpu-resource.md @@ -0,0 +1,255 @@ +--- +title: Allouer des ressources CPU aux conteneurs et aux pods +content_template: templates/task +weight: 20 +--- + +{{% capture overview %}} + +Cette page montre comment assigner une *demande* (request en anglais) de CPU et une *limite* de CPU à un conteneur. +Un conteneur est garanti d'avoir autant de CPU qu'il le demande, mais n'est pas autorisé à utiliser plus de CPU que sa limite. + + +{{% /capture %}} + + +{{% capture prerequisites %}} + +{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} + +Chaque nœud de votre cluster doit avoir au moins 1 CPU. + +Pour certaines des étapes de cette page, vous devez lancer [metrics-server](https://github.com/kubernetes-incubator/metrics-server) dans votre cluster. Si le serveur de métriques est déja lancé, +vous pouvez sauter ces étapes. + +Si vous utilisez minikube, exécutez la commande suivante pour activer metrics-server : + +```shell +minikube addons enable metrics-server +``` + +Pour voir si metrics-server (ou un autre fournisseur de l'API des métriques de ressources `metrics.k8s.io`) est lancé, tapez la commande suivante: + +```shell +kubectl get apiservices +``` + +Si l'API de métriques de ressources est disponible, la sortie inclura une +référence à `metrics.k8s.io`. + + +```shell +NAME +v1beta1.metrics.k8s.io +``` + +{{% /capture %}} + + +{{% capture steps %}} + +## Créer un namespace + +Créez un namespace de manière à ce que les ressources que vous créez dans cet exercice soient isolés du reste de votre cluster. 
+ +```shell +kubectl create namespace cpu-example +``` + +## Spécifier une demande de CPU et une limite de CPU + +Pour spécifier une demande de CPU pour un conteneur, incluez le champ `resources:requests`. +dans le manifeste des ressources du conteneur. Pour spécifier une limite de CPU, incluez `resources:limits`. + +Dans cet exercice, vous allez créer un Pod qui a un seul conteneur. Le conteneur a une demande de 0.5 CPU et une limite de 1 CPU. Voici le fichier de configuration du Pod : + +{{< codenew file="pods/resource/cpu-request-limit.yaml" >}} + +La section `args` du fichier de configuration fournit des arguments pour le conteneur lorsqu'il démarre. L'argument `-cpus "2"` demande au conteneur d'utiliser 2 CPUs. + +Créez le Pod: + +```shell +kubectl apply -f https://k8s.io/examples/pods/resource/cpu-request-limit.yaml --namespace=cpu-example +``` + +Vérifiez que le Pod fonctionne : + +```shell +kubectl get pod cpu-demo --namespace=cpu-example +``` + +Consultez des informations détaillées sur le Pod : + +```shell +kubectl get pod cpu-demo --output=yaml --namespace=cpu-example +``` + +La sortie indique que le conteneur dans le Pod a une demande CPU de 500 milliCPU. +et une limite de CPU de 1 CPU. + +```yaml +resources: + limits: + cpu: "1" + requests: + cpu: 500m +``` + +Utilisez `kubectl top` pour récupérer les métriques du pod : + +```shell +kubectl top pod cpu-demo --namespace=cpu-example +``` + +La sortie montre que le Pod utilise 974 milliCPU, ce qui est légèrement inférieur à +la limite de 1 CPU spécifiée dans le fichier de configuration du Pod. + +``` +NAME CPU(cores) MEMORY(bytes) +cpu-demo 974m +``` + +Souvenez-vous qu'en réglant `-cpu "2"`, vous avez configuré le conteneur pour faire en sorte qu'il utilise 2 CPU, mais que le conteneur ne peut utiliser qu'environ 1 CPU. L'utilisation du CPU du conteneur est entravée, car le conteneur tente d'utiliser plus de ressources CPU que sa limite. + +{{< note >}} +Une autre explication possible de la la restriction du CPU est que le Nœud pourrait ne pas avoir +suffisamment de ressources CPU disponibles. Rappelons que les conditions préalables à cet exercice exigent que chacun de vos Nœuds doit avoir au moins 1 CPU. +Si votre conteneur fonctionne sur un nœud qui n'a qu'un seul CPU, le conteneur ne peut pas utiliser plus que 1 CPU, quelle que soit la limite de CPU spécifiée pour le conteneur. +{{< /note >}} + +## Unités de CPU + +La ressource CPU est mesurée en unités *CPU*. Un CPU, à Kubernetes, est équivalent à: + +* 1 AWS vCPU +* 1 GCP Core +* 1 Azure vCore +* 1 Hyperthread sur un serveur physique avec un processeur Intel qui a de l'hyperthreading. + +Les valeurs fractionnelles sont autorisées. Un conteneur qui demande 0,5 CPU est garanti deux fois moins CPU par rapport à un conteneur qui demande 1 CPU. Vous pouvez utiliser le suffixe m pour signifier milli. Par exemple 100m CPU, 100 milliCPU, et 0.1 CPU sont tous égaux. Une précision plus fine que 1m n'est pas autorisée. + +Le CPU est toujours demandé en tant que quantité absolue, jamais en tant que quantité relative, 0.1 est la même quantité de CPU sur une machine single-core, dual-core ou 48-core. + +Supprimez votre pod : + +```shell +kubectl delete pod cpu-demo --namespace=cpu-example +``` + +## Spécifier une demande de CPU trop élevée pour vos nœuds. + +Les demandes et limites de CPU sont associées aux conteneurs, mais il est utile de réfléchir à la demande et à la limite de CPU d'un pod. La demande de CPU pour un Pod est la somme des demandes de CPU pour tous les conteneurs du Pod. 
De même, la limite de CPU pour les un Pod est la somme des limites de CPU pour tous les conteneurs du Pod. + +L'ordonnancement des pods est basé sur les demandes. Un Pod est prévu pour se lancer sur un Nœud uniquement si le nœud dispose de suffisamment de ressources CPU pour satisfaire la demande de CPU du Pod. + +Dans cet exercice, vous allez créer un Pod qui a une demande de CPU si importante qu'elle dépassera la capacité de n'importe quel nœud de votre cluster. Voici le fichier de configuration d'un Pod +qui a un seul conteneur. Le conteneur nécessite 100 CPU, ce qui est susceptible de dépasser la capacité de tous les nœuds de votre cluster. + +{{< codenew file="pods/resource/cpu-request-limit-2.yaml" >}} + +Créez le Pod : + +```shell +kubectl apply -f https://k8s.io/examples/pods/resource/cpu-request-limit-2.yaml --namespace=cpu-example +``` + +Affichez l'état du Pod : + +```shell +kubectl get pod cpu-demo-2 --namespace=cpu-example +``` + +La sortie montre que l'état du Pod est en attente. En d'autres termes, le Pod n'a pas été +planifié pour tourner sur n'importe quel Nœud, et il restera à l'état PENDING indéfiniment : + + +```shell +kubectl get pod cpu-demo-2 --namespace=cpu-example +NAME READY STATUS RESTARTS AGE +cpu-demo-2 0/1 Pending 0 7m +``` + +Afficher des informations détaillées sur le Pod, y compris les événements: + + +```shell +kubectl describe pod cpu-demo-2 --namespace=cpu-example +``` + +la sortie signale que le conteneur ne peut pas être planifié en raison d'une quantité insuffisante de ressources de CPU sur les Nœuds : + + +```shell +Events: + Reason Message + ------ ------- + FailedScheduling No nodes are available that match all of the following predicates:: Insufficient cpu (3). +``` + +Supprimez votre Pod : + +```shell +kubectl delete pod cpu-demo-2 --namespace=cpu-example +``` + +## Si vous ne spécifiez pas de limite CPU + +Si vous ne spécifiez pas de limite CPU pour un conteneur, une de ces situations s'applique : + +* Le conteneur n'a pas de limite maximale quant aux ressources CPU qu'il peut utiliser. Le conteneur +pourrait utiliser toutes les ressources CPU disponibles sur le nœud où il est lancé. + +* Le conteneur est lancé dans un namespace qui a une limite par défaut de CPU, ainsi le conteneur reçoit automatiquement cette limite par défaut. Les administrateurs du cluster peuvent utiliser un +[LimitRange](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#limitrange-v1-core/) +pour spécifier une valeur par défaut pour la limite de CPU. + +## Motivation pour les demandes et les limites du CPU + +En configurant les demandes et les limites de CPU des conteneurs qui se lancent sur votre cluster, +vous pouvez utiliser efficacement les ressources CPU disponibles sur les Nœuds de votre cluster. +En gardant une demande faible de CPU de pod, vous donnez au Pod une bonne chance d'être ordonnancé. +En ayant une limite CPU supérieure à la demande de CPU, vous accomplissez deux choses : + +* Le Pod peut avoir des pics d'activité où il utilise les ressources CPU qui se sont déjà disponible. +* La quantité de ressources CPU qu'un Pod peut utiliser pendant une pic d'activité est limitée à une quantité raisonnable. 
+ +## Nettoyage + +Supprimez votre namespace : + +```shell +kubectl delete namespace cpu-example +``` + +{{% /capture %}} + +{{% capture whatsnext %}} + + +### Pour les développeurs d'applications + +* [Allocation des ressources mémoire aux conteneurs et aux pods](/fr/docs/tasks/configure-pod-container/assign-memory-resource/) + +* [Configuration de la qualité de service pour les pods](/docs/tasks/configure-pod-container/quality-service-pod/) + +### Pour les administrateurs de cluster + +* [Configuration des demandes et des limites de mémoire par défaut pour un Namespace](/docs/tasks/administer-cluster/memory-default-namespace/) + +* [Configuration des demandes et des limites par défaut de CPU pour un Namespace](/docs/tasks/administer-cluster/cpu-default-namespace/) + +* [Configuration des contraintes de mémoire minimales et maximales pour un Namespace](/docs/tasks/administer-cluster/memory-constraint-namespace/) + +* [Configuration des contraintes minimales et maximales du CPU pour un Namespace](/docs/tasks/administer-cluster/cpu-constraint-namespace/) + +* [Configuration des quotas de mémoire et de CPU pour un Namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/) + +* [Configuration du quota de pods pour un Namespace](/docs/tasks/administer-cluster/quota-pod-namespace/) + +* [Configuration des quotas pour les objets API](/docs/tasks/administer-cluster/quota-api-object/) + +{{% /capture %}} + + + diff --git a/content/fr/examples/pods/resource/cpu-request-limit-2.yaml b/content/fr/examples/pods/resource/cpu-request-limit-2.yaml new file mode 100644 index 0000000000000..f505c77fbb91b --- /dev/null +++ b/content/fr/examples/pods/resource/cpu-request-limit-2.yaml @@ -0,0 +1,17 @@ +apiVersion: v1 +kind: Pod +metadata: + name: cpu-demo-2 + namespace: cpu-example +spec: + containers: + - name: cpu-demo-ctr-2 + image: vish/stress + resources: + limits: + cpu: "100" + requests: + cpu: "100" + args: + - -cpus + - "2" diff --git a/content/fr/examples/pods/resource/cpu-request-limit.yaml b/content/fr/examples/pods/resource/cpu-request-limit.yaml new file mode 100644 index 0000000000000..2cc0b2cf4f635 --- /dev/null +++ b/content/fr/examples/pods/resource/cpu-request-limit.yaml @@ -0,0 +1,17 @@ +apiVersion: v1 +kind: Pod +metadata: + name: cpu-demo + namespace: cpu-example +spec: + containers: + - name: cpu-demo-ctr + image: vish/stress + resources: + limits: + cpu: "1" + requests: + cpu: "0.5" + args: + - -cpus + - "2" From b1b1d11f9db73d6d19b65735783d6dc4b0be310c Mon Sep 17 00:00:00 2001 From: Julie K Date: Mon, 16 Sep 2019 10:48:37 +0200 Subject: [PATCH 15/18] docs-fr | reference | glossary | customresourcedefinition (#16052) * docs-fr | reference | glossary | customresourcedefinition * docs-fr | reference | glossary | customresourcedefinition [update] --- .../glossary/customresourcedefinition.md | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+) create mode 100755 content/fr/docs/reference/glossary/customresourcedefinition.md diff --git a/content/fr/docs/reference/glossary/customresourcedefinition.md b/content/fr/docs/reference/glossary/customresourcedefinition.md new file mode 100755 index 0000000000000..ca61ab7240235 --- /dev/null +++ b/content/fr/docs/reference/glossary/customresourcedefinition.md @@ -0,0 +1,19 @@ +--- +title: CustomResourceDefinition +id: CustomResourceDefinition +date: 2018-04-12 +full_link: fr/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/ +short_description: > + Définition d'une ressource personnalisée qui est ajoutée au 
serveur d'API Kubernetes sans construire un serveur personnalisé complet.
+
+aka:
+tags:
+- fundamental
+- operation
+- extension
+---
+ Définition d'une ressource personnalisée qui est ajoutée au serveur d'API Kubernetes sans construire un serveur personnalisé complet.
+
+
+
+Les définitions de ressources personnalisées s'ajoutent aux ressources natives (ou "d'origine", "de base") de Kubernetes quand celles-ci ne peuvent pas répondre à vos besoins.

From e3a04f302910a625b22dd20dcf7d8e0a86857ee3 Mon Sep 17 00:00:00 2001
From: Julie K
Date: Mon, 16 Sep 2019 10:50:38 +0200
Subject: [PATCH 16/18] docs-fr | docs | reference | glossary | container-lifecycle-hooks (#16148)

---
 .../glossary/container-lifecycle-hooks.md | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)
 create mode 100644 content/fr/docs/reference/glossary/container-lifecycle-hooks.md

diff --git a/content/fr/docs/reference/glossary/container-lifecycle-hooks.md b/content/fr/docs/reference/glossary/container-lifecycle-hooks.md
new file mode 100644
index 0000000000000..bd04c2ce3e988
--- /dev/null
+++ b/content/fr/docs/reference/glossary/container-lifecycle-hooks.md
@@ -0,0 +1,17 @@
+---
+title: Container Lifecycle Hooks
+id: container-lifecycle-hooks
+date: 2018-10-08
+full_link: fr/docs/concepts/containers/container-lifecycle-hooks/
+short_description: >
+  Les hooks (ou déclencheurs) du cycle de vie exposent les événements du cycle de vie de la gestion du conteneur et permettent à l'utilisateur d'exécuter du code lorsque ces événements se produisent.
+
+aka:
+tags:
+- extension
+---
+ Les hooks (ou déclencheurs) du cycle de vie exposent les événements du cycle de vie de la gestion du {{< glossary_tooltip text="conteneur" term_id="container" >}} et permettent à l'utilisateur d'exécuter du code lorsque ces événements se produisent.
+
+
+
+Deux hooks (ou déclencheurs) sont exposés aux conteneurs : PostStart, qui s'exécute immédiatement après la création d'un conteneur, et PreStop, qui est appelé immédiatement avant qu'un conteneur ne soit terminé.

From 48d73a6443d3bcfe8972cc54bd44009453f1de81 Mon Sep 17 00:00:00 2001
From: Geunho Kim
Date: Tue, 17 Sep 2019 10:36:23 +0900
Subject: [PATCH 17/18] Fix links to locate Korean docs (#16277)

* Fix links to locate Korean docs

* Fix typo
---
 content/ko/docs/home/_index.md                 | 12 +++++-----
 .../ko/docs/tasks/tools/install-minikube.md    |  2 +-
 content/ko/docs/tutorials/_index.md            | 22 +++++++++----------
 .../expose/expose-intro.html                   |  8 +++----
 .../kubernetes-basics/scale/scale-intro.html   |  2 +-
 .../stateless-application/guestbook.md         |  6 ++---
 6 files changed, 26 insertions(+), 26 deletions(-)

diff --git a/content/ko/docs/home/_index.md b/content/ko/docs/home/_index.md
index a2a01163aeb1b..d542f5a1ddaf0 100644
--- a/content/ko/docs/home/_index.md
+++ b/content/ko/docs/home/_index.md
@@ -21,32 +21,32 @@ cards:
   title: "기초 이해하기"
   description: "쿠버네티스와 쿠버네티스의 기본 개념을 배운다."
   button: "개념 배우기"
-  button_path: "/docs/concepts"
+  button_path: "/ko/docs/concepts"
 - name: tutorials
   title: "쿠버네티스 사용해보기"
   description: "쿠버네티스에 애플리케이션을 배포하는 방법을 튜토리얼을 따라하며 배운다."
   button: "튜토리얼 보기"
-  button_path: "/docs/tutorials"
+  button_path: "/ko/docs/tutorials"
 - name: setup
   title: "클러스터 구축하기"
   description: "보유한 자원과 요구에 맞게 동작하는 쿠버네티스를 구축한다."
   button: "쿠버네티스 구축하기"
-  button_path: "/docs/setup"
+  button_path: "/ko/docs/setup"
 - name: tasks
   title: "쿠버네티스 사용법 배우기"
   description: "일반적인 태스크와 이를 수행하는 방법을 여러 단계로 구성된 짧은 시퀀스를 통해 살펴본다."
button: "태스크 보기" - button_path: "/docs/tasks" + button_path: "/ko/docs/tasks" - name: reference title: 레퍼런스 정보 찾기 description: 용어, 커맨드 라인 구문, API 자원 종류, 그리고 설치 툴 문서를 살펴본다. button: 레퍼런스 보기 - button_path: /docs/reference + button_path: "/ko/docs/reference" - name: contribute title: 문서에 기여하기 description: 이 프로젝트가 처음인 사람이든, 오래 활동한 사람이든 상관없이 누구나 기여할 수 있다. button: 문서에 기여하기 - button_path: /docs/contribute + button_path: "/ko/docs/contribute" - name: download title: 쿠버네티스 내려받기 description: 쿠버네티스를 설치하거나 최신의 버전으로 업그레이드하는 경우, 현재 릴리스 노트를 참고한다. diff --git a/content/ko/docs/tasks/tools/install-minikube.md b/content/ko/docs/tasks/tools/install-minikube.md index 5d80c63981d2b..17b8e041d53f5 100644 --- a/content/ko/docs/tasks/tools/install-minikube.md +++ b/content/ko/docs/tasks/tools/install-minikube.md @@ -9,7 +9,7 @@ card: {{% capture overview %}} -이 페이지는 단일 노드 쿠버네티스 클러스터를 노트북의 가상 머신에서 구동하는 도구인 [Minikube](/docs/tutorials/hello-minikube)의 설치 방법을 설명한다. +이 페이지는 단일 노드 쿠버네티스 클러스터를 노트북의 가상 머신에서 구동하는 도구인 [Minikube](/ko/docs/tutorials/hello-minikube)의 설치 방법을 설명한다. {{% /capture %}} diff --git a/content/ko/docs/tutorials/_index.md b/content/ko/docs/tutorials/_index.md index cd772dd93d71e..e5af6c2afdad9 100644 --- a/content/ko/docs/tutorials/_index.md +++ b/content/ko/docs/tutorials/_index.md @@ -8,11 +8,11 @@ content_template: templates/concept {{% capture overview %}} 쿠버네티스 문서의 본 섹션은 튜토리얼을 포함하고 있다. -튜토리얼은 개별 [작업](/docs/tasks) 단위보다 더 큰 목표를 달성하기 +튜토리얼은 개별 [작업](/ko/docs/tasks) 단위보다 더 큰 목표를 달성하기 위한 방법을 보여준다. 일반적으로 튜토리얼은 각각 순차적 단계가 있는 여러 섹션으로 구성된다. 각 튜토리얼을 따라하기 전에, 나중에 참조할 수 있도록 -[표준 용어집](/docs/reference/glossary/) 페이지를 북마크하기를 권한다. +[표준 용어집](/ko/docs/reference/glossary/) 페이지를 북마크하기를 권한다. {{% /capture %}} @@ -30,23 +30,23 @@ content_template: templates/concept ## 구성 -* [Configuring Redis Using a ConfigMap](/docs/tutorials/configuration/configure-redis-using-configmap/) +* [컨피그 맵을 사용해서 Redis 설정하기](/ko/docs/tutorials/configuration/configure-redis-using-configmap/) ## 상태 유지를 하지 않는(stateless) 애플리케이션 -* [Exposing an External IP Address to Access an Application in a Cluster](/docs/tutorials/stateless-application/expose-external-ip-address/) +* [외부 IP 주소를 노출하여 클러스터의 애플리케이션에 접속하기](/ko/docs/tutorials/stateless-application/expose-external-ip-address/) -* [Example: Deploying PHP Guestbook application with Redis](/docs/tutorials/stateless-application/guestbook/) +* [예시: Redis를 사용한 PHP 방명록 애플리케이션 배포하기](/ko/docs/tutorials/stateless-application/guestbook/) ## 상태 유지가 필요한(stateful) 애플리케이션 -* [스테이트풀셋 기본](/docs/tutorials/stateful-application/basic-stateful-set/) +* [스테이트풀셋 기본](/ko/docs/tutorials/stateful-application/basic-stateful-set/) -* [Example: WordPress and MySQL with Persistent Volumes](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/) +* [예시: WordPress와 MySQL을 퍼시스턴트 볼륨에 배포하기](/ko/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/) -* [Example: Deploying Cassandra with Stateful Sets](/docs/tutorials/stateful-application/cassandra/) +* [예시: 카산드라를 스테이트풀셋으로 배포하기](/ko/docs/tutorials/stateful-application/cassandra/) -* [Running ZooKeeper, A CP Distributed System](/docs/tutorials/stateful-application/zookeeper/) +* [분산 시스템 코디네이터 ZooKeeper 실행하기](/ko/docs/tutorials/stateful-application/zookeeper/) ## CI/CD 파이프라인 @@ -60,11 +60,11 @@ content_template: templates/concept ## 클러스터 -* [AppArmor](/docs/tutorials/clusters/apparmor/) +* [AppArmor](/ko/docs/tutorials/clusters/apparmor/) ## 서비스 -* [Using Source IP](/docs/tutorials/services/source-ip/) +* [소스 IP 주소 
이용하기](/ko/docs/tutorials/services/source-ip/) {{% /capture %}} diff --git a/content/ko/docs/tutorials/kubernetes-basics/expose/expose-intro.html b/content/ko/docs/tutorials/kubernetes-basics/expose/expose-intro.html index cf49a82505092..1c763fbd7092a 100644 --- a/content/ko/docs/tutorials/kubernetes-basics/expose/expose-intro.html +++ b/content/ko/docs/tutorials/kubernetes-basics/expose/expose-intro.html @@ -28,9 +28,9 @@

목표

쿠버네티스 서비스들에 대한 개요

-

쿠버네티스 파드들 은 언젠가는 죽게된다. 실제 파드들은 생명주기를 갖는다. 워커 노드가 죽으면, 노드 상에서 동작하는 파드들 또한 종료된다. 레플리카 셋은 여러분의 애플리케이션이 지속적으로 동작할 수 있도록 새로운 파드들의 생성을 통해 동적으로 클러스터를 미리 지정해 둔 상태로 되돌려 줄 수도 있다. 또 다른 예시로서, 3개의 복제본을 갖는 이미지 처리용 백엔드를 고려해 보자. 그 복제본들은 교체 가능한 상태이다. 그래서 프론트엔드 시스템은 하나의 파드가 소멸되어 재생성이 되더라도, 백엔드 복제본들에 의한 영향을 받아서는 안된다. 즉, 동일 노드 상의 파드들이라 할지라도, 쿠버네티스 클러스터 내 각 파드는 유일한 IP 주소를 가지며, 여러분의 애플리케이션들이 지속적으로 기능할 수 있도록 파드들 속에서 발생하는 변화에 대해 자동으로 조정해 줄 방법이 있어야 한다.

+

쿠버네티스 파드들 은 언젠가는 죽게된다. 실제 파드들은 생명주기를 갖는다. 워커 노드가 죽으면, 노드 상에서 동작하는 파드들 또한 종료된다. 레플리카 셋은 여러분의 애플리케이션이 지속적으로 동작할 수 있도록 새로운 파드들의 생성을 통해 동적으로 클러스터를 미리 지정해 둔 상태로 되돌려 줄 수도 있다. 또 다른 예시로서, 3개의 복제본을 갖는 이미지 처리용 백엔드를 고려해 보자. 그 복제본들은 교체 가능한 상태이다. 그래서 프론트엔드 시스템은 하나의 파드가 소멸되어 재생성이 되더라도, 백엔드 복제본들에 의한 영향을 받아서는 안된다. 즉, 동일 노드 상의 파드들이라 할지라도, 쿠버네티스 클러스터 내 각 파드는 유일한 IP 주소를 가지며, 여러분의 애플리케이션들이 지속적으로 기능할 수 있도록 파드들 속에서 발생하는 변화에 대해 자동으로 조정해 줄 방법이 있어야 한다.

-

쿠버네티스에서 서비스는 하나의 논리적인 파드 셋과 그 파드들에 접근할 수 있는 정책을 정의하는 추상적 개념이다. 서비스는 종속적인 파드들 사이를 느슨하게 결합되도록 해준다. 서비스는 모든 쿠버네티스 오브젝트들과 같이 YAML (보다 선호하는) 또는 JSON을 이용하여 정의된다. 서비스가 대상으로 하는 파드 셋은 보통 LabelSelector에 의해 결정된다 (여러분이 왜 스펙에 selector가 포함되지 않은 서비스를 필요로 하게 될 수도 있는지에 대해 아래에서 확인해 보자).

+

쿠버네티스에서 서비스는 하나의 논리적인 파드 셋과 그 파드들에 접근할 수 있는 정책을 정의하는 추상적 개념이다. 서비스는 종속적인 파드들 사이를 느슨하게 결합되도록 해준다. 서비스는 모든 쿠버네티스 오브젝트들과 같이 YAML (보다 선호하는) 또는 JSON을 이용하여 정의된다. 서비스가 대상으로 하는 파드 셋은 보통 LabelSelector에 의해 결정된다 (여러분이 왜 스펙에 selector가 포함되지 않은 서비스를 필요로 하게 될 수도 있는지에 대해 아래에서 확인해 보자).

비록 각 파드들이 고유의 IP를 갖고 있기는 하지만, 그 IP들은 서비스의 도움없이 클러스터 외부로 노출되어질 수 없다. 서비스들은 여러분의 애플리케이션들에게 트래픽이 실릴 수 있도록 허용해준다. 서비스들은 ServiceSpec에서 type을 지정함으로써 다양한 방식들로 노출시킬 수 있다:

    @@ -39,7 +39,7 @@

    쿠버네티스 서비스들에 대한 개요

  • LoadBalancer - (지원 가능한 경우) 기존 클라우드에서 외부용 로드밸런서를 생성하고 서비스에 고정된 공인 IP를 할당해준다. NodePort의 상위 집합이다.
  • ExternalName - 이름으로 CNAME 레코드를 반환함으로써 임의의 이름(스펙에서 externalName으로 명시)을 이용하여 서비스를 노출시켜준다. 프록시는 사용되지 않는다. 이 방식은 kube-dns 버전 1.7 이상에서 지원 가능하다.
-

다른 서비스 타입들에 대한 추가 정보는 소스 IP 이용하기 튜토리얼에서 확인 가능하다. 또한 서비스들로 애플리케이션에 접속하기도 참고해 보자.

+

다른 서비스 타입들에 대한 추가 정보는 소스 IP 이용하기 튜토리얼에서 확인 가능하다. 또한 서비스들로 애플리케이션에 접속하기도 참고해 보자.

부가적으로, spec에 selector를 정의하지 않고 말아넣은 서비스들의 몇 가지 유즈케이스들이 있음을 주의하자. selector 없이 생성된 서비스는 상응하는 엔드포인트 오브젝트들 또한 생성하지 않는다. 이로써 사용자들로 하여금 하나의 서비스를 특정한 엔드포인트에 매핑 시킬수 있도록 해준다. selector를 생략하게 되는 또 다른 가능성은 여러분이 type: ExternalName을 이용하겠다고 확고하게 의도하는 경우이다.

@@ -73,7 +73,7 @@

서비스와 레이블

서비스는 파드 셋에 걸쳐서 트래픽을 라우트한다. 여러분의 애플리케이션에 영향을 주지 않으면서 쿠버네티스에서 파드들이 죽게도 하고, 복제가 되게도 해주는 추상적 개념이다. 종속적인 파드들 사이에서의 디스커버리와 라우팅은 (하나의 애플리케이션에서 프론트엔드와 백엔드 컴포넌트와 같은) 쿠버네티스 서비스들에 의해 처리된다.

-

서비스는 쿠버네티스의 객체들에 대해 논리 연산을 허용해주는 기본 그룹핑 단위인, 레이블과 셀렉터를 이용하여 파드 셋과 매치시킨다. 레이블은 오브젝트들에 붙여진 키/밸류 쌍으로 다양한 방식으로 이용 가능하다:

+

서비스는 쿠버네티스의 객체들에 대해 논리 연산을 허용해주는 기본 그룹핑 단위인, 레이블과 셀렉터를 이용하여 파드 셋과 매치시킨다. 레이블은 오브젝트들에 붙여진 키/밸류 쌍으로 다양한 방식으로 이용 가능하다:

  • 개발, 테스트, 그리고 상용환경에 대한 객체들의 지정
  • 임베디드된 버전 태그들
  •
diff --git a/content/ko/docs/tutorials/kubernetes-basics/scale/scale-intro.html b/content/ko/docs/tutorials/kubernetes-basics/scale/scale-intro.html
index 7744bf75e727b..e411c4f8abfdf 100644
--- a/content/ko/docs/tutorials/kubernetes-basics/scale/scale-intro.html
+++ b/content/ko/docs/tutorials/kubernetes-basics/scale/scale-intro.html
@@ -27,7 +27,7 @@

    목표

    애플리케이션을 스케일하기

    -

    지난 모듈에서 디플로이먼트를 만들고, +

    지난 모듈에서 디플로이먼트를 만들고, 서비스를 통해서 디플로이먼트를 외부에 노출시켜 봤다. 해당 디플로이먼트는 애플리케이션을 구동하기 위해 단 하나의 파드(Pod)만을 생성했었다. 트래픽이 증가하면, 사용자 요청에 맞추어 애플리케이션의 규모를 diff --git a/content/ko/docs/tutorials/stateless-application/guestbook.md b/content/ko/docs/tutorials/stateless-application/guestbook.md index 0c2517592c19c..663e01dc5baba 100644 --- a/content/ko/docs/tutorials/stateless-application/guestbook.md +++ b/content/ko/docs/tutorials/stateless-application/guestbook.md @@ -359,9 +359,9 @@ Google Compute Engine 또는 Google Kubernetes Engine과 같은 일부 클라우 {{% /capture %}} {{% capture whatsnext %}} -* [ELK 로깅과 모니터링](/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk/)을 방명록 애플리케이션에 추가하기 -* [쿠버네티스 기초](/docs/tutorials/kubernetes-basics/) 튜토리얼을 완료 -* [MySQL과 Wordpress을 위한 퍼시스턴트 볼륨](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/#visit-your-new-wordpress-blog)을 사용하여 블로그 생성하는데 쿠버네티스 이용하기 +* [ELK 로깅과 모니터링](/ko/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk/)을 방명록 애플리케이션에 추가하기 +* [쿠버네티스 기초](/ko/docs/tutorials/kubernetes-basics/) 튜토리얼을 완료 +* [MySQL과 Wordpress을 위한 퍼시스턴트 볼륨](/ko/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/#visit-your-new-wordpress-blog)을 사용하여 블로그 생성하는데 쿠버네티스 이용하기 * [애플리케이션 접속](/docs/concepts/services-networking/connect-applications-service/)에 대해 더 알아보기 * [자원 관리](/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively)에 대해 더 알아보기 {{% /capture %}} From 67c20b3d379297b5df3c7c192831da26d43290c9 Mon Sep 17 00:00:00 2001 From: Jordan Liggitt Date: Tue, 17 Sep 2019 12:30:24 -0400 Subject: [PATCH 18/18] Clarify deprecated API versions and releases (#16403) --- ...19-07-18-some-apis-are-being-deprecated.md | 43 ++++++++----------- 1 file changed, 17 insertions(+), 26 deletions(-) diff --git a/content/en/blog/_posts/2019-07-18-some-apis-are-being-deprecated.md b/content/en/blog/_posts/2019-07-18-some-apis-are-being-deprecated.md index bc9a3f3e15fd3..68bd0dcae13ab 100644 --- a/content/en/blog/_posts/2019-07-18-some-apis-are-being-deprecated.md +++ b/content/en/blog/_posts/2019-07-18-some-apis-are-being-deprecated.md @@ -10,32 +10,23 @@ slug: api-deprecations-in-1-16 As the Kubernetes API evolves, APIs are periodically reorganized or upgraded. When APIs evolve, the old API is deprecated and eventually removed. -The 1.16 release will deprecate APIs for four services: - -* NetworkPolicy -* PodSecurityPolicy -* DaemonSet, Deployment, StatefulSet, and ReplicaSet -* Ingress - -None of these resources will be removed from Kubernetes or deprecated in any way. -However, to continue using these resources, you must use a current version of -the Kubernetes API. - -# Migration Details - -* NetworkPolicy: will no longer be served from **extensions/v1beta1** in **v1.16**. - * Migrate to the networking.k8s.io/v1 API, available since v1.8. Existing persisted - data can be retrieved/updated via the networking.k8s.io/v1 API. -* PodSecurityPolicy: will no longer be served from **extensions/v1beta1** in **v1.16**. - * Migrate to the policy/v1beta1 API, available since v1.10. Existing persisted - data can be retrieved/updated via the policy/v1beta1 API. -* DaemonSet, Deployment, StatefulSet, and ReplicaSet: will no longer be served -from **extensions/v1beta1**, **apps/v1beta1**, or **apps/v1beta2** in **v1.16**. - * Migrate to the apps/v1 API, available since v1.9. Existing persisted data - can be retrieved/updated via the apps/v1 API. -* Ingress: will no longer be served from **extensions/v1beta1** in **v1.18**. 
-  * Migrate to the networking.k8s.io/v1beta1 API, serving Ingress since v1.14.
-  Existing persisted data can be retrieved/updated via the networking.k8s.io/v1beta1 API.
+The **v1.16** release will stop serving the following deprecated API versions in favor of newer and more stable API versions:
+
+* NetworkPolicy (in the **extensions/v1beta1** API group)
+  * Migrate to use the **networking.k8s.io/v1** API, available since v1.8.
+    Existing persisted data can be retrieved/updated via the **networking.k8s.io/v1** API.
+* PodSecurityPolicy (in the **extensions/v1beta1** API group)
+  * Migrate to use the **policy/v1beta1** API, available since v1.10.
+    Existing persisted data can be retrieved/updated via the **policy/v1beta1** API.
+* DaemonSet, Deployment, StatefulSet, and ReplicaSet (in the **extensions/v1beta1** and **apps/v1beta2** API groups)
+  * Migrate to use the **apps/v1** API, available since v1.9.
+    Existing persisted data can be retrieved/updated via the **apps/v1** API.
+
+The **v1.20** release will stop serving the following deprecated API versions in favor of newer and more stable API versions:
+
+* Ingress (in the **extensions/v1beta1** API group)
+  * Migrate to use the **networking.k8s.io/v1beta1** API, serving Ingress since v1.14.
+    Existing persisted data can be retrieved/updated via the **networking.k8s.io/v1beta1** API.
 
 # What To Do
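+
+As a quick, illustrative check (a minimal sketch only — the grep pattern is arbitrary and the output depends on your cluster and kubectl version), you can ask the API server which versions it serves and confirm that existing objects are readable through the newer APIs:
+
+```shell
+# List the API versions the cluster currently serves for the affected groups
+kubectl api-versions | grep -E '^(apps|extensions|networking\.k8s\.io|policy)/'
+
+# Read existing Deployments explicitly through the apps/v1 API
+# (fully-qualified syntax: <resource>.<version>.<group>)
+kubectl get deployments.v1.apps --all-namespaces
+
+# Read existing NetworkPolicies explicitly through networking.k8s.io/v1
+kubectl get networkpolicies.v1.networking.k8s.io --all-namespaces
+```
+
+If these reads succeed, the remaining work is typically to update the `apiVersion` field in your manifests (and, for workloads moving to apps/v1, to set the now-required `spec.selector`).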