Merge pull request #16419 from simplytunde/merged-master-dev-1.16
Merged master into dev 1.16 before release
simplytunde authored Sep 18, 2019
2 parents 234ff08 + e95ffa6 commit b2bc619
Showing 34 changed files with 587 additions and 111 deletions.
@@ -132,7 +132,7 @@ If a user deletes a `VolumeSnapshot` API object in active use by a PVC, the `Vol

## Which volume plugins support Kubernetes Snapshots?

Snapshots are only supported for CSI drivers (not for in-tree or Flexvolume). To use the Kubernetes snapshots feature, ensure that a CSI Driver that implements snapshots is deployed on your cluster.
Snapshots are only supported for CSI drivers (not for in-tree or FlexVolume). To use the Kubernetes snapshots feature, ensure that a CSI Driver that implements snapshots is deployed on your cluster.
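As an illustration (not from the original post), a minimal `VolumeSnapshot` sketch, assuming the alpha `snapshot.storage.k8s.io/v1alpha1` API and hypothetical snapshot class and PVC names:

```yaml
apiVersion: snapshot.storage.k8s.io/v1alpha1   # alpha snapshot API assumed for this sketch
kind: VolumeSnapshot
metadata:
  name: demo-snapshot                          # hypothetical snapshot name
spec:
  snapshotClassName: csi-example-snapclass     # hypothetical VolumeSnapshotClass backed by a CSI driver
  source:
    kind: PersistentVolumeClaim
    name: demo-pvc                             # hypothetical PVC to snapshot
```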

As of the publishing of this blog post, the following CSI drivers support snapshots:

@@ -8,7 +8,7 @@ date: 2019-03-22
More and more components that used to be part of Kubernetes are now
being developed outside of Kubernetes. For example, storage drivers
used to be compiled into Kubernetes binaries, then were moved into
[stand-alone Flexvolume
[stand-alone FlexVolume
binaries](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-storage/flexvolume.md)
on the host, and now are delivered as [Container Storage Interface
(CSI) drivers](https://github.com/container-storage-interface/spec)
43 changes: 17 additions & 26 deletions content/en/blog/_posts/2019-07-18-some-apis-are-being-deprecated.md
@@ -10,32 +10,23 @@ slug: api-deprecations-in-1-16
As the Kubernetes API evolves, APIs are periodically reorganized or upgraded.
When APIs evolve, the old API is deprecated and eventually removed.

The 1.16 release will deprecate APIs for four services:

* NetworkPolicy
* PodSecurityPolicy
* DaemonSet, Deployment, StatefulSet, and ReplicaSet
* Ingress

None of these resources will be removed from Kubernetes or deprecated in any way.
However, to continue using these resources, you must use a current version of
the Kubernetes API.

# Migration Details

* NetworkPolicy: will no longer be served from **extensions/v1beta1** in **v1.16**.
* Migrate to the networking.k8s.io/v1 API, available since v1.8. Existing persisted
data can be retrieved/updated via the networking.k8s.io/v1 API.
* PodSecurityPolicy: will no longer be served from **extensions/v1beta1** in **v1.16**.
* Migrate to the policy/v1beta1 API, available since v1.10. Existing persisted
data can be retrieved/updated via the policy/v1beta1 API.
* DaemonSet, Deployment, StatefulSet, and ReplicaSet: will no longer be served
from **extensions/v1beta1**, **apps/v1beta1**, or **apps/v1beta2** in **v1.16**.
* Migrate to the apps/v1 API, available since v1.9. Existing persisted data
can be retrieved/updated via the apps/v1 API.
* Ingress: will no longer be served from **extensions/v1beta1** in **v1.18**.
* Migrate to the networking.k8s.io/v1beta1 API, serving Ingress since v1.14.
Existing persisted data can be retrieved/updated via the networking.k8s.io/v1beta1 API.
The **v1.16** release will stop serving the following deprecated API versions in favor of newer and more stable API versions:

* NetworkPolicy (in the **extensions/v1beta1** API group)
  * Migrate to use the **networking.k8s.io/v1** API, available since v1.8.
    Existing persisted data can be retrieved/updated via the **networking.k8s.io/v1** API.
* PodSecurityPolicy (in the **extensions/v1beta1** API group)
  * Migrate to use the **policy/v1beta1** API, available since v1.10.
    Existing persisted data can be retrieved/updated via the **policy/v1beta1** API.
* DaemonSet, Deployment, StatefulSet, and ReplicaSet (in the **extensions/v1beta1**, **apps/v1beta1**, and **apps/v1beta2** API groups)
  * Migrate to use the **apps/v1** API, available since v1.9.
    Existing persisted data can be retrieved/updated via the **apps/v1** API (see the manifest sketch after this list).
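As an illustration (not part of the original post), a minimal sketch of a Deployment written against **apps/v1**; the name, labels, and image are hypothetical, and note that `spec.selector` is required in **apps/v1** and must match the Pod template labels:

```yaml
apiVersion: apps/v1               # previously extensions/v1beta1, apps/v1beta1, or apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment          # hypothetical example name
spec:
  replicas: 2
  selector:                       # required in apps/v1; must match the template labels
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17         # hypothetical image tag
```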

The **v1.20** release will stop serving the following deprecated API versions in favor of newer and more stable API versions:

* Ingress (in the **extensions/v1beta1** API group)
  * Migrate to use the **networking.k8s.io/v1beta1** API, serving Ingress since v1.14.
    Existing persisted data can be retrieved/updated via the **networking.k8s.io/v1beta1** API (see the sketch after this list).
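Likewise, a hedged sketch of an Ingress written against **networking.k8s.io/v1beta1** (the host and backing Service are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1beta1   # previously extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress                 # hypothetical example name
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: example-service  # hypothetical backing Service
          servicePort: 80
```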

# What To Do

2 changes: 1 addition & 1 deletion content/en/blog/_posts/2019-08-30-announcing-etcd-3.4.md
@@ -18,7 +18,7 @@ etcd v3.4 includes a number of performance improvements for large scale Kubernet

In particular, etcd experienced performance issues with a large number of concurrent read transactions even when there are no writes (e.g. `“read-only range request ... took too long to execute”`). Previously, the storage backend commit operation on pending writes blocked incoming read transactions, even when there was no pending write. Now, the commit [does not block reads](https://github.com/etcd-io/etcd/pull/9296), which improves long-running read transaction performance.

We further made [backend read transactions fully concurrent](https://github.com/etcd-io/etcd/pull/10523). Previously, ongoing long-running read transactions block writes and upcoming reads. With this change, write throughput is increased by 70% and P99 write latency is reduced by 90% in the presence of long-running reads. We also ran [Kubernetes 5000-node scalability test on GCE](https://prow.k8s.io/view/gcs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-scale-performance/1130745634945503235) with this change and observed similar improvements. For example, in the very beginning of the test where there are a lot of long-running “LIST pods”, the P99 latency of “POST clusterrolebindings” is [reduced by 97.4%](https://github.com/etcd-io/etcd/pull/10523#issuecomment-499262001). This non-blocking read transaction is now [used for compaction](https://github.com/etcd-io/etcd/pull/11034), which, combined with the reduced compaction batch size, reduces the P99 server request latency during compaction.
We further made [backend read transactions fully concurrent](https://github.com/etcd-io/etcd/pull/10523). Previously, ongoing long-running read transactions blocked writes and upcoming reads. With this change, write throughput is increased by 70% and P99 write latency is reduced by 90% in the presence of long-running reads. We also ran a [Kubernetes 5000-node scalability test on GCE](https://prow.k8s.io/view/gcs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-scale-performance/1130745634945503235) with this change and observed similar improvements. For example, in the very beginning of the test where there are a lot of long-running “LIST pods”, the P99 latency of “POST clusterrolebindings” is [reduced by 97.4%](https://github.com/etcd-io/etcd/pull/10523#issuecomment-499262001).

More improvements have been made to lease storage. We enhanced [lease expire/revoke performance](https://github.com/etcd-io/etcd/pull/9418) by storing lease objects more efficiently, and made [lease look-up operation non-blocking](https://github.com/etcd-io/etcd/pull/9229) with current lease grant/revoke operation. And etcd v3.4 introduces [lease checkpoint](https://github.com/etcd-io/etcd/pull/9924) as an experimental feature to persist remaining time-to-live values through consensus. This ensures short-lived lease objects are not auto-renewed after leadership election. This also prevents lease object pile-up when the time-to-live value is relatively large (e.g. [1-hour TTL never expired in Kubernetes use case](https://github.com/kubernetes/kubernetes/issues/65497)).

@@ -139,7 +139,7 @@ happens before the scheduler actually binds the Pod to the Node, and it exists
to prevent race conditions while the scheduler waits for the bind to succeed.

This is the last step in a scheduling cycle. Once a Pod is in the reserved
state, it will either trigger [Un-reserve](#un-reserve) plugins (on failure) or
state, it will either trigger [Unreserve](#unreserve) plugins (on failure) or
[Post-bind](#post-bind) plugins (on success) at the end of the binding cycle.

*Note: This concept used to be referred to as "assume".*
@@ -154,13 +154,13 @@ can do one of three things.

1. **deny** \
If any permit plugin denies a Pod, it is returned to the scheduling queue.
This will trigger [Un-reserve](#un-reserve) plugins.
This will trigger [Unreserve](#unreserve) plugins.

1. **wait** (with a timeout) \
If a permit plugin returns "wait", then the Pod is kept in the permit phase
until a [plugin approves it](#frameworkhandle). If a timeout occurs, **wait**
becomes **deny** and the Pod is returned to the scheduling queue, triggering
[un-reserve](#un-reserve) plugins.
[Unreserve](#unreserve) plugins.

**Approving a Pod binding**

Expand All @@ -175,7 +175,7 @@ These plugins are used to perform any work required before a Pod is bound. For
example, a pre-bind plugin may provision a network volume and mount it on the
target node before allowing the Pod to run there.

If any pre-bind plugin returns an error, the Pod is [rejected](#un-reserve) and
If any pre-bind plugin returns an error, the Pod is [rejected](#unreserve) and
returned to the scheduling queue.

### Bind
8 changes: 4 additions & 4 deletions content/en/docs/concepts/policy/pod-security-policy.md
@@ -34,7 +34,7 @@ administrator to control the following:
| Usage of host networking and ports | [`hostNetwork`, `hostPorts`](#host-namespaces) |
| Usage of volume types | [`volumes`](#volumes-and-file-systems) |
| Usage of the host filesystem | [`allowedHostPaths`](#volumes-and-file-systems) |
| White list of Flexvolume drivers | [`allowedFlexVolumes`](#flexvolume-drivers) |
| Whitelist of FlexVolume drivers | [`allowedFlexVolumes`](#flexvolume-drivers) |
| Allocating an FSGroup that owns the pod's volumes | [`fsGroup`](#volumes-and-file-systems) |
| Requiring the use of a read only root file system | [`readOnlyRootFilesystem`](#volumes-and-file-systems) |
| The user and group IDs of the container | [`runAsUser`, `runAsGroup`, `supplementalGroups`](#users-and-groups) |
@@ -463,12 +463,12 @@ to effectively limit access to the specified `pathPrefix`.
**ReadOnlyRootFilesystem** - Requires that containers must run with a read-only
root filesystem (i.e. no writable layer).

### Flexvolume drivers
### FlexVolume drivers

This specifies a whitelist of Flexvolume drivers that are allowed to be used
This specifies a whitelist of FlexVolume drivers that are allowed to be used
by FlexVolume. An empty list or nil means there is no restriction on the drivers.
Please make sure the [`volumes`](#volumes-and-file-systems) field contains the
`flexVolume` volume type; no Flexvolume driver is allowed otherwise.
`flexVolume` volume type; no FlexVolume driver is allowed otherwise.

For example:
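A minimal sketch (the driver names are hypothetical) of a policy that allows only specific FlexVolume drivers:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: allow-flex-volumes        # hypothetical policy name
spec:
  fsGroup:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
    - flexVolume                  # the flexVolume type must itself be allowed
  allowedFlexVolumes:
    - driver: example/lvm         # hypothetical driver names
    - driver: example/cifs
```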

2 changes: 1 addition & 1 deletion content/en/docs/concepts/storage/persistent-volumes.md
@@ -351,7 +351,7 @@ In the CLI, the access modes are abbreviated to:
| Cinder | ✓ | - | - |
| CSI | depends on the driver | depends on the driver | depends on the driver |
| FC | ✓ | ✓ | - |
| Flexvolume | ✓ | ✓ | depends on the driver |
| FlexVolume | ✓ | ✓ | depends on the driver |
| Flocker | ✓ | - | - |
| GCEPersistentDisk | ✓ | ✓ | - |
| Glusterfs | ✓ | ✓ | ✓ |
15 changes: 8 additions & 7 deletions content/en/docs/concepts/storage/storage-classes.md
@@ -72,7 +72,7 @@ for provisioning PVs. This field must be specified.
| CephFS | - | - |
| Cinder | ✓ | [OpenStack Cinder](#openstack-cinder)|
| FC | - | - |
| Flexvolume | - | - |
| FlexVolume | - | - |
| Flocker | ✓ | - |
| GCEPersistentDisk | ✓ | [GCE PD](#gce-pd) |
| Glusterfs | ✓ | [Glusterfs](#glusterfs) |
@@ -92,14 +92,15 @@ alongside Kubernetes). You can also run and specify external provisioners,
which are independent programs that follow a [specification](https://git.k8s.io/community/contributors/design-proposals/storage/volume-provisioning.md)
defined by Kubernetes. Authors of external provisioners have full discretion
over where their code lives, how the provisioner is shipped, how it needs to be
run, what volume plugin it uses (including Flex), etc. The repository [kubernetes-incubator/external-storage](https://github.com/kubernetes-incubator/external-storage)
run, what volume plugin it uses (including Flex), etc. The repository
[kubernetes-sigs/sig-storage-lib-external-provisioner](https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner)
houses a library for writing external provisioners that implements the bulk of
the specification plus various community-maintained external provisioners.
the specification. Some external provisioners are listed under the repository
[kubernetes-incubator/external-storage](https://github.com/kubernetes-incubator/external-storage).
For example, NFS doesn't provide an internal provisioner, but an external provisioner
can be used. Some external provisioners are listed under the repository [kubernetes-incubator/external-storage](https://github.com/kubernetes-incubator/external-storage).
There are also cases when 3rd party storage vendors provide their own external
provisioner.
For example, NFS doesn't provide an internal provisioner, but an external
provisioner can be used. There are also cases when 3rd party storage
vendors provide their own external provisioner.
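For instance, a hedged sketch of a StorageClass that delegates to an external NFS provisioner; the provisioner name and parameters are hypothetical and depend on which provisioner you actually deploy:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-nfs
provisioner: example.com/external-nfs   # hypothetical name registered by the external provisioner
parameters:
  server: nfs-server.example.com        # hypothetical provisioner-specific parameters
  path: /share
  readOnly: "false"
```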
### Reclaim Policy
16 changes: 8 additions & 8 deletions content/en/docs/concepts/storage/volumes.md
@@ -1205,16 +1205,16 @@ several media types.

## Out-of-Tree Volume Plugins
The Out-of-tree volume plugins include the Container Storage Interface (CSI)
and Flexvolume. They enable storage vendors to create custom storage plugins
and FlexVolume. They enable storage vendors to create custom storage plugins
without adding them to the Kubernetes repository.

Before the introduction of CSI and Flexvolume, all volume plugins (like
Before the introduction of CSI and FlexVolume, all volume plugins (like
volume types listed above) were "in-tree" meaning they were built, linked,
compiled, and shipped with the core Kubernetes binaries and extended the core
Kubernetes API. This meant that adding a new storage system to Kubernetes (a
volume plugin) required checking code into the core Kubernetes code repository.

Both CSI and Flexvolume allow volume plugins to be developed independent of
Both CSI and FlexVolume allow volume plugins to be developed independent of
the Kubernetes code base, and deployed (installed) on Kubernetes clusters as
extensions.

@@ -1368,14 +1368,14 @@ provisioning/delete, attach/detach, mount/unmount and resizing of volumes.
In-tree plugins that support CSI Migration and have a corresponding CSI driver implemented
are listed in the "Types of Volumes" section above.

### Flexvolume {#flexVolume}
### FlexVolume {#flexVolume}

Flexvolume is an out-of-tree plugin interface that has existed in Kubernetes
FlexVolume is an out-of-tree plugin interface that has existed in Kubernetes
since version 1.2 (before CSI). It uses an exec-based model to interface with
drivers. Flexvolume driver binaries must be installed in a pre-defined volume
drivers. FlexVolume driver binaries must be installed in a pre-defined volume
plugin path on each node (and in some cases master).

Pods interact with Flexvolume drivers through the `flexvolume` in-tree plugin.
Pods interact with FlexVolume drivers through the `flexvolume` in-tree plugin.
More details can be found [here](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-storage/flexvolume.md).
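As a hedged illustration (the `<vendor>/<driver>` name is hypothetical and must match the driver binary installed on the node), a Pod that consumes a FlexVolume might declare it like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: flexvolume-example        # hypothetical example
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    flexVolume:
      driver: "example.com/lvm"   # hypothetical vendor/driver name
      fsType: "ext4"
```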

## Mount propagation
@@ -1411,7 +1411,7 @@ Its values are:
In addition, all volume mounts created by the Container will be propagated
back to the host and to all Containers of all Pods that use the same volume.

A typical use case for this mode is a Pod with a Flexvolume or CSI driver or
A typical use case for this mode is a Pod with a FlexVolume or CSI driver or
a Pod that needs to mount something on the host using a `hostPath` volume.

This mode is equal to `rshared` mount propagation as described in the
10 changes: 4 additions & 6 deletions content/en/docs/concepts/workloads/controllers/daemonset.md
@@ -194,14 +194,12 @@ You can modify the Pods that a DaemonSet creates. However, Pods do not allow al
fields to be updated. Also, the DaemonSet controller will use the original template the next
time a node (even with the same name) is created.


You can delete a DaemonSet. If you specify `--cascade=false` with `kubectl`, then the Pods
will be left on the nodes. You can then create a new DaemonSet with a different template.
The new DaemonSet with the different template will recognize all the existing Pods as having
matching labels. It will not modify or delete them despite a mismatch in the Pod template.
You will need to force new Pod creation by deleting the Pod or deleting the node.
will be left on the nodes. If you subsequently create a new DaemonSet with the same selector,
the new DaemonSet adopts the existing Pods. If any Pods need replacing, the DaemonSet replaces
them according to its `updateStrategy`.

In Kubernetes version 1.6 and later, you can [perform a rolling update](/docs/tasks/manage-daemon/update-daemon-set/) on a DaemonSet.
You can [perform a rolling update](/docs/tasks/manage-daemon/update-daemon-set/) on a DaemonSet.
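For example, a hedged DaemonSet sketch (names and image are hypothetical) with an explicit `RollingUpdate` strategy:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-agent              # hypothetical example name
spec:
  selector:
    matchLabels:
      app: example-agent
  updateStrategy:
    type: RollingUpdate            # replace Pods node by node when the template changes
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: example-agent
    spec:
      containers:
      - name: agent
        image: busybox             # hypothetical image
        command: ["sleep", "3600"]
```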

## Alternatives to DaemonSet

@@ -229,7 +229,7 @@ up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas.
Next time you want to update these Pods, you only need to update the Deployment's Pod template again.
Deployment ensures that only a certain number of Pods are down while they are being updated. By default,
it ensures that at least 25% of the desired number of Pods are up (25% max unavailable).
it ensures that at least 75% of the desired number of Pods are up (25% max unavailable).
Deployment also ensures that only a certain number of Pods are created above the desired number of Pods.
By default, it ensures that at most 125% of the desired number of Pods are up (25% max surge).
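A hedged sketch (name and image are hypothetical) that spells out these defaults explicitly on a Deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment       # hypothetical example name
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%        # at least 75% of desired Pods stay available
      maxSurge: 25%              # at most 125% of desired Pods exist during the update
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: app
        image: nginx:1.17        # hypothetical image
```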
2 changes: 1 addition & 1 deletion content/en/docs/contribute/style/write-new-topic.md
@@ -94,7 +94,7 @@ following cases (not an exhaustive list):
- The code is not generic enough for users to try out. As an example, you can
embed the YAML
file for creating a Pod which depends on a specific
[Flexvolume](/docs/concepts/storage/volumes#flexvolume) implementation.
[FlexVolume](/docs/concepts/storage/volumes#flexvolume) implementation.
- The code is an incomplete example because its purpose is to highlight a
portion of a larger file. For example, when describing ways to
customize the [PodSecurityPolicy](/docs/tasks/administer-cluster/sysctl-cluster/#podsecuritypolicy)
@@ -89,10 +89,10 @@ To see which admission plugins are enabled:
kube-apiserver -h | grep enable-admission-plugins
```

In 1.14, they are:
In 1.15, they are:

```shell
NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, PersistentVolumeClaimResize, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota
NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, PersistentVolumeClaimResize, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota, StorageObjectInUseProtection
```

## What does each admission controller do?