
Official 1.15 Release Docs #14984

Merged
merged 35 commits on Jun 19, 2019

35 commits
111e6d1
initial commit
jimangel Mar 27, 2019
22441ec
Update content on kube-dns to coredns configmap translation (#13826)
rajansandeep Apr 25, 2019
99ae92b
Readded the link to Instana and Sysdig, which seem to have gone missi…
noctarius May 6, 2019
a1f811d
Updated self-hosting documentation (#13866)
May 7, 2019
c7c55c5
Watch bookmarks documentation (#14379)
wojtek-t May 20, 2019
b8759a7
Add docs for volume cloning (#14591)
j-griffith May 31, 2019
817ee7e
Add support for quotas for ephemeral storage monitoring. (#14268)
RobertKrawitz May 31, 2019
7acab64
Add documentation for PVC in use protection (#14700)
xing-yang Jun 4, 2019
40ed466
Add a placeholder doc (#14643)
gnufied Jun 4, 2019
6e75d5c
Update CSI migration docs with Azure Disk/File details (#14707)
ddebroy Jun 6, 2019
19e9d31
kubeadm-1.15-certs-renewal (#14716)
fabriziopandini Jun 6, 2019
e7b5f0e
move podresources endpoint to beta (#14622)
dashpole Jun 6, 2019
3ad640e
Add webhook admission outline (#14671)
liggitt Jun 10, 2019
fc86f8f
Add custom resource quota example (#14492)
liggitt Jun 10, 2019
b1a0711
kubeadm: Document new v1beta2 config format (#14607)
rosti Jun 10, 2019
1ab3957
CSI Inline Ephemeral Documentation Update (#14704)
vladimirvivien Jun 10, 2019
1223857
Docs for feature: PDB support for custom resource with scale subresou…
mortent Jun 10, 2019
d1bdefd
VolumeSubpathEnvExpansion Beta Documentation (#13846)
Jun 10, 2019
b51345a
kubeadm-setup: update all setup related documents for 1.15 (#14594)
neolit123 Jun 10, 2019
9e102b5
Add a section for service load balancer cleanup
MrHohn May 23, 2019
b495bd1
promote AWS-NLB Support from alpha to beta (#14451)
M00nF1sh Jun 11, 2019
71e69fd
Added explanation of alpha non-preempting PriorityClasses to the "Pod…
vllry Jun 11, 2019
c22345d
Merge pull request #14496 from MrHohn/svc-lb-finalizer
jimangel Jun 11, 2019
5532ab3
Add a user document for the scheduling framework (#14388)
bsalamat Jun 11, 2019
e45144f
Graduate node PIDS limiting to beta (#14425)
RobertKrawitz Jun 11, 2019
3d1d270
Drop .travis.yml from dev-1.15 branch (#14812)
tengqm Jun 11, 2019
2a0f39f
kubeadm: update the reference documentation for 1.15 (#14596)
neolit123 Jun 11, 2019
d9b1970
Update HPA Algorithm Docs for v1.15 (#14728)
gjtempleton Jun 11, 2019
57f6eee
concepts/extend-kubernetes/api-extension: add 1.15 features (#14583)
sttts Jun 11, 2019
71a7828
kubeadm-tasks: include v1.14->v1.15 upgrade document (#14595)
neolit123 Jun 11, 2019
21d3206
Document webhook and kube-aggerator port configuration (#14674)
jpbetz Jun 11, 2019
b42f019
Create nodelocaldns.md to describe NodeLocal DNSCache feature. (#14625)
prameshj Jun 11, 2019
8781518
update with master content resolving merge conflicts
makoscafee Jun 19, 2019
f04d0cf
updated config.toml for 1.15
makoscafee Jun 19, 2019
455f312
Merge branch 'master' into dev-1.15
makoscafee Jun 19, 2019
@@ -17,14 +17,14 @@ kubeadm has configuration options to specify configuration information for cloud
in-tree cloud provider can be configured using kubeadm as shown below:

```diff
-apiVersion: kubeadm.k8s.io/v1beta1
+apiVersion: kubeadm.k8s.io/v1beta2
 kind: InitConfiguration
 nodeRegistration:
   kubeletExtraArgs:
     cloud-provider: "openstack"
     cloud-config: "/etc/kubernetes/cloud.conf"
 ---
-apiVersion: kubeadm.k8s.io/v1beta1
+apiVersion: kubeadm.k8s.io/v1beta2
 kind: ClusterConfiguration
 kubernetesVersion: v1.13.0
 apiServer:
```
@@ -384,6 +384,70 @@ The scheduler ensures that the sum of the resource requests of the scheduled Con

For container-level isolation, if a Container's writable layer and logs usage exceeds its storage limit, the Pod will be evicted. For pod-level isolation, if the sum of the local ephemeral storage usage from all containers and also the Pod's emptyDir volumes exceeds the limit, the Pod will be evicted.

### Monitoring ephemeral-storage consumption

When local ephemeral storage is used, the kubelet monitors it on an
ongoing basis. Monitoring is performed by periodically scanning each
emptyDir volume, log directory, and writable layer. Starting with
Kubernetes 1.15, emptyDir volumes (but not log directories or writable
layers) may, at the cluster operator's option, be managed by use of
[project
quotas](http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html).
Project quotas were originally implemented in XFS and have more
recently been ported to ext4. Project quotas can be used for both
monitoring and enforcement; as of Kubernetes 1.15, they are available
as alpha functionality for monitoring only.

Quotas are faster and more accurate than directory scanning. When a
directory is assigned to a project, all files created under that
directory are created in the project, and the kernel merely has to
keep track of how many blocks are in use by files in that project. If
a file is created and then deleted while a file descriptor to it
remains open, it continues to consume space. This space is tracked by
the quota, but is not seen by a directory scan.
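The deleted-but-open case can be observed outside Kubernetes with a few lines of Python (the file is a throwaway temporary, purely for illustration): once unlinked, the file disappears from any directory scan, yet its blocks stay allocated until the descriptor is closed, which is exactly the usage a quota keeps counting.

```python
import os
import tempfile

# Create a temporary file and write 1 MiB into it.
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * (1 << 20))

# Unlink it while the descriptor is still open:
# a directory scan (e.g. du) can no longer find the file...
os.unlink(path)
print(os.path.exists(path))    # False

# ...but the inode still holds 1 MiB until the descriptor closes,
# so the space remains in use (and counted by a quota).
print(os.fstat(fd).st_size)    # 1048576
os.close(fd)
```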

Kubernetes uses project IDs starting from 1048576. The IDs in use are
registered in `/etc/projects` and `/etc/projid`. If project IDs in
this range are used for other purposes on the system, those project
IDs must be registered in `/etc/projects` and `/etc/projid` to prevent
Kubernetes from using them.
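For illustration, reserving a project ID that is used for some other purpose might look like the following (the ID, project name, and path here are hypothetical):

```
# /etc/projects — project ID to directory mapping
1048578:/var/local/scratch

# /etc/projid — project name to ID mapping
scratch:1048578
```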

To enable use of project quotas, the cluster operator must do the
following:

* Enable the `LocalStorageCapacityIsolationFSQuotaMonitoring=true`
  feature gate in the kubelet configuration. It defaults to `false`
  in Kubernetes 1.15, so it must be explicitly set to `true`.

* Ensure that the root partition (or optional runtime partition) is
built with project quotas enabled. All XFS filesystems support
project quotas, but ext4 filesystems must be built specially.

* Ensure that the root partition (or optional runtime partition) is
mounted with project quotas enabled.
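The first step can be satisfied through the kubelet configuration file; a minimal sketch (merge the `featureGates` entry into your existing configuration rather than replacing it):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  LocalStorageCapacityIsolationFSQuotaMonitoring: true
```

The same gate can alternatively be passed on the kubelet command line via `--feature-gates=LocalStorageCapacityIsolationFSQuotaMonitoring=true`.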

#### Building and mounting filesystems with project quotas enabled

XFS filesystems require no special action when building; they are
automatically built with project quotas enabled.

Ext4 filesystems must be created with quotas enabled, and project
quotas must then be turned on in the filesystem:

```shell
sudo mkfs.ext4 other_ext4fs_args... -E quotatype=prjquota /dev/block_device
sudo tune2fs -O project -Q prjquota /dev/block_device
```

To mount the filesystem, both ext4fs and XFS require the `prjquota`
option set in `/etc/fstab`:

```
/dev/block_device /var/kubernetes_data defaults,prjquota 0 0
```
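Whether an fstab entry actually carries the `prjquota` option can be checked mechanically; a small illustrative helper (not part of Kubernetes), applied to the example entry above:

```python
def has_prjquota(fstab_line: str) -> bool:
    """Return True if an /etc/fstab entry's mount options include prjquota."""
    # Scan every whitespace-separated field for a comma-separated option
    # list containing "prjquota"; this tolerates entries that omit the
    # filesystem-type column, as in the example above.
    return any(
        "prjquota" in field.split(",")
        for field in fstab_line.split()
    )

print(has_prjquota("/dev/block_device /var/kubernetes_data defaults,prjquota 0 0"))  # True
print(has_prjquota("/dev/sda1 / ext4 defaults 0 1"))                                 # False
```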


## Extended resources

Extended resources are fully-qualified resource names outside the
Expand Down
56 changes: 56 additions & 0 deletions content/en/docs/concepts/configuration/pod-priority-preemption.md
@@ -77,6 +77,13 @@ when a cluster is under resource pressure. For this reason, it is not
recommended to disable preemption.
{{< /note >}}

{{< note >}}
In Kubernetes 1.15 and later,
if the feature `NonPreemptingPriority` is enabled,
PriorityClasses have the option to set `preemptionPolicy: Never`.
This will prevent pods of that PriorityClass from preempting other pods.
{{< /note >}}

In Kubernetes 1.11 and later, preemption is controlled by a kube-scheduler flag
`disablePreemption`, which is set to `false` by default.
If you want to disable preemption despite the above note, you can set
@@ -145,6 +152,55 @@ globalDefault: false
description: "This priority class should be used for XYZ service pods only."
```

### Non-preempting PriorityClasses (alpha) {#non-preempting-priority-class}

Kubernetes 1.15 adds the `PreemptionPolicy` field as an alpha feature.
It is disabled by default in 1.15,
and requires the `NonPreemptingPriority` [feature gate](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/) to be enabled.

Pods with `PreemptionPolicy: Never` will be placed in the scheduling queue
ahead of lower-priority pods,
but they cannot preempt other pods.
A non-preempting pod waiting to be scheduled will stay in the scheduling queue,
until sufficient resources are free,
and it can be scheduled.
Non-preempting pods,
like other pods,
are subject to scheduler back-off.
This means that if the scheduler tries these pods and they cannot be scheduled,
they will be retried with lower frequency,
allowing other pods with lower priority to be scheduled before them.

Non-preempting pods may still be preempted by other high-priority pods.

`PreemptionPolicy` defaults to `PreemptLowerPriority`,
which allows pods of that PriorityClass to preempt lower-priority pods
(the existing default behavior).
If `PreemptionPolicy` is set to `Never`,
pods in that PriorityClass will be non-preempting.

An example use case is for data science workloads.
A user may submit a job that they want to be prioritized above other workloads,
but does not wish to discard existing work by preempting running pods.
The high-priority job with `PreemptionPolicy: Never` will be scheduled
ahead of other queued pods,
as soon as sufficient cluster resources "naturally" become free.

#### Example Non-preempting PriorityClass

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority-nonpreempting
value: 1000000
preemptionPolicy: Never
globalDefault: false
description: "This priority class will not cause other pods to be preempted."
```
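A Pod opts in to the class through its `priorityClassName` field; a minimal sketch (the pod and image names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: data-analysis-job
spec:
  priorityClassName: high-priority-nonpreempting
  containers:
  - name: worker
    image: busybox
```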

## Pod priority

After you have one or more PriorityClasses, you can create Pods that specify one