Merge pull request #37810 from kubernetes/dev-1.26
Official 1.26 Release Docs
krol3 authored Dec 10, 2022
2 parents 31ace2f + 126f81b commit 8233fa2
Showing 62 changed files with 1,923 additions and 532 deletions.
20 changes: 10 additions & 10 deletions config.toml
@@ -138,10 +138,10 @@ time_format_default = "January 02, 2006 at 3:04 PM PST"
description = "Production-Grade Container Orchestration"
showedit = true

latest = "v1.25"
latest = "v1.26"

fullversion = "v1.25.0"
version = "v1.25"
fullversion = "v1.26.0"
version = "v1.26"
githubbranch = "main"
docsbranch = "main"
deprecated = false
@@ -180,6 +180,13 @@ js = [
"script"
]

[[params.versions]]
fullversion = "v1.26.0"
version = "v1.26"
githubbranch = "v1.26.0"
docsbranch = "main"
url = "https://kubernetes.io"

[[params.versions]]
fullversion = "v1.25.0"
version = "v1.25"
@@ -208,13 +215,6 @@ githubbranch = "v1.22.11"
docsbranch = "release-1.22"
url = "https://v1-22.docs.kubernetes.io"

-[[params.versions]]
-fullversion = "v1.21.14"
-version = "v1.21"
-githubbranch = "v1.21.14"
-docsbranch = "release-1.21"
-url = "https://v1-21.docs.kubernetes.io"

# User interface configuration
[params.ui]
# Enable to show the side bar menu in its compact state.
5 changes: 2 additions & 3 deletions content/en/blog/_posts/2022-05-03-kubernetes-release-1.24.md
@@ -32,7 +32,7 @@ Existing beta APIs and new versions of existing beta APIs will continue to be en

Release artifacts are [signed](https://github.com/kubernetes/enhancements/issues/3031) using [cosign](https://github.com/sigstore/cosign)
signatures,
-and there is experimental support for [verifying image signatures](/docs/tasks/administer-cluster/verify-signed-images/).
+and there is experimental support for [verifying image signatures](/docs/tasks/administer-cluster/verify-signed-artifacts/).
Signing and verification of release artifacts is part of [increasing software supply chain security for the Kubernetes release process](https://github.com/kubernetes/enhancements/issues/3027).

### OpenAPI v3
@@ -84,8 +84,7 @@ that enables the caller of a function to control all aspects of logging (output
### Avoiding Collisions in IP allocation to Services

Kubernetes 1.24 introduces a new opt-in feature that allows you to
-[soft-reserve a range for static IP address assignments](/docs/concepts/services-networking/service/#service-ip-static-sub-range)
-to Services.
+soft-reserve a range for static IP address assignments to Services.
With the manual enablement of this feature, the cluster will prefer automatic assignment from
the pool of Service IP addresses, thereby reducing the risk of collision.
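
For example, with the feature enabled you can still request a specific address explicitly by
setting `spec.clusterIP` (a sketch; the address and the 10.96.0.0/12 range are illustrative
and must fall within your cluster's `--service-cluster-ip-range`):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - port: 80
  # Statically requested ClusterIP; with the ServiceIPStaticSubrange feature
  # enabled, dynamic allocation prefers the upper band of the range, making
  # collisions with low addresses like this one less likely
  clusterIP: 10.96.0.20
```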

@@ -203,7 +203,7 @@ JAMES LAVERACK: This is really about encouraging the use of stable APIs. There w

JAMES LAVERACK: That's correct. There's no breaking changes in beta APIs other than the ones we've documented this release. It's only new things.

-**CRAIG BOX: Now in this release, [the artifacts are signed](https://github.com/kubernetes/enhancements/issues/3031) using Cosign signatures, and there is [experimental support for verification of those signatures](https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-images/). What needed to happen to make that process possible?**
+**CRAIG BOX: Now in this release, [the artifacts are signed](https://github.com/kubernetes/enhancements/issues/3031) using Cosign signatures, and there is [experimental support for verification of those signatures](https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/). What needed to happen to make that process possible?**

JAMES LAVERACK: This was a huge process from the other half of SIG Release. SIG Release has the release team, but it also has the release engineering team that handles the mechanics of actually pushing releases out. They have spent, and one of my friends over there, Adolfo, has spent a lot of time trying to bring us in line with [SLSA](https://slsa.dev/) compliance. I believe we're [looking now at Level 3 compliance](https://github.com/kubernetes/enhancements/issues/3027).

80 changes: 80 additions & 0 deletions content/en/docs/concepts/architecture/leases.md
@@ -0,0 +1,80 @@
---
title: Leases
content_type: concept
weight: 30
---

<!-- overview -->

Distributed systems often have a need for "leases", which provide a mechanism to lock shared resources and coordinate activity between nodes.
In Kubernetes, the "lease" concept is represented by `Lease` objects in the `coordination.k8s.io` API group, which are used for system-critical
capabilities like node heartbeats and component-level leader election.

<!-- body -->

## Node Heartbeats

Kubernetes uses the Lease API to communicate kubelet node heartbeats to the Kubernetes API server.
For every `Node`, there is a `Lease` object with a matching name in the `kube-node-lease`
namespace. Under the hood, every kubelet heartbeat is an UPDATE request to this `Lease` object, updating
the `spec.renewTime` field for the Lease. The Kubernetes control plane uses the time stamp of this field
to determine the availability of this `Node`.
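
You can inspect these leases directly. For example (the node name `kind-control-plane` is
illustrative; substitute one of your own node names):

```shell
# Each Node has a Lease with a matching name in the kube-node-lease namespace
kubectl get lease -n kube-node-lease

# Show when the kubelet last renewed this node's heartbeat
kubectl -n kube-node-lease get lease kind-control-plane \
  -o jsonpath='{.spec.renewTime}'
```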

See [Node Lease objects](/docs/concepts/architecture/nodes/#heartbeats) for more details.

## Leader Election

Leases are also used in Kubernetes to ensure only one instance of a component is running at any given time.
This is used by control plane components like `kube-controller-manager` and `kube-scheduler` in
HA configurations, where only one instance of the component should be actively running while the other
instances are on standby.
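
For example, you can check which instance currently holds the `kube-scheduler` lock
(the Lease name assumes the default leader election configuration; the holder identity
in the output will vary by cluster):

```shell
# Print the current leader of the kube-scheduler leader election
kubectl -n kube-system get lease kube-scheduler \
  -o jsonpath='{.spec.holderIdentity}'
```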

## API Server Identity

{{< feature-state for_k8s_version="v1.26" state="beta" >}}

Starting in Kubernetes v1.26, each `kube-apiserver` uses the Lease API to publish its identity to the
rest of the system. While not particularly useful on its own, this provides a mechanism for clients to
discover how many instances of `kube-apiserver` are operating the Kubernetes control plane.
The existence of kube-apiserver leases enables future capabilities that may require coordination
among kube-apiserver instances.

You can inspect Leases owned by each kube-apiserver by checking for lease objects in the `kube-system` namespace
with the name `kube-apiserver-<sha256-hash>`. Alternatively you can use the label selector `k8s.io/component=kube-apiserver`:

```shell
$ kubectl -n kube-system get lease -l k8s.io/component=kube-apiserver
NAME                                        HOLDER                                                                           AGE
kube-apiserver-c4vwjftbvpc5os2vvzle4qg27a   kube-apiserver-c4vwjftbvpc5os2vvzle4qg27a_9cbf54e5-1136-44bd-8f9a-1dcd15c346b4   5m33s
kube-apiserver-dz2dqprdpsgnm756t5rnov7yka   kube-apiserver-dz2dqprdpsgnm756t5rnov7yka_84f2a85d-37c1-4b14-b6b9-603e62e4896f   4m23s
kube-apiserver-fyloo45sdenffw2ugwaz3likua   kube-apiserver-fyloo45sdenffw2ugwaz3likua_c5ffa286-8a9a-45d4-91e7-61118ed58d2e   4m43s
```

The SHA256 hash used in the lease name is based on the OS hostname as seen by kube-apiserver. Each kube-apiserver should be
configured to use a hostname that is unique within the cluster. New instances of kube-apiserver that use the same hostname
will take over existing Leases using a new holder identity, as opposed to instantiating new Lease objects. You can check the
hostname used by kube-apiserver by checking the value of the `kubernetes.io/hostname` label:

```shell
$ kubectl -n kube-system get lease kube-apiserver-c4vwjftbvpc5os2vvzle4qg27a -o yaml
```

```yaml
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  creationTimestamp: "2022-11-30T15:37:15Z"
  labels:
    k8s.io/component: kube-apiserver
    kubernetes.io/hostname: kind-control-plane
  name: kube-apiserver-c4vwjftbvpc5os2vvzle4qg27a
  namespace: kube-system
  resourceVersion: "18171"
  uid: d6c68901-4ec5-4385-b1ef-2d783738da6c
spec:
  holderIdentity: kube-apiserver-c4vwjftbvpc5os2vvzle4qg27a_9cbf54e5-1136-44bd-8f9a-1dcd15c346b4
  leaseDurationSeconds: 3600
  renewTime: "2022-11-30T18:04:27.912073Z"
```

Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.
2 changes: 1 addition & 1 deletion content/en/docs/concepts/architecture/nodes.md
@@ -456,7 +456,7 @@ Message: Pod was terminated in response to imminent node shutdown.

## Non Graceful node shutdown {#non-graceful-node-shutdown}

{{< feature-state state="alpha" for_k8s_version="v1.24" >}}
{{< feature-state state="beta" for_k8s_version="v1.26" >}}

A node shutdown action may not be detected by kubelet's Node Shutdown Manager,
either because the command does not trigger the inhibitor locks mechanism used by
15 changes: 15 additions & 0 deletions content/en/docs/concepts/containers/images.md
@@ -167,6 +167,9 @@ Credentials can be provided in several ways:
- Configuring Nodes to Authenticate to a Private Registry
  - all pods can read any configured private registries
  - requires node configuration by cluster administrator
- Kubelet Credential Provider to dynamically fetch credentials for private registries
  - kubelet can be configured to use a credential provider exec plugin
    for the respective private registry.
- Pre-pulled Images
  - all pods can use any images cached on a node
  - requires root access to all nodes to set up
@@ -187,6 +190,18 @@ For an example of configuring a private container image registry, see the
[Pull an Image from a Private Registry](/docs/tasks/configure-pod-container/pull-image-private-registry)
task. That example uses a private registry in Docker Hub.

### Kubelet credential provider for authenticated image pulls {#kubelet-credential-provider}

{{< note >}}
This approach is especially suitable when the kubelet needs to fetch registry credentials dynamically,
most commonly for registries provided by cloud providers where auth tokens are short-lived.
{{< /note >}}

You can configure the kubelet to invoke a plugin binary to dynamically fetch registry credentials for a container image.
This is the most robust and versatile way to fetch credentials for private registries, but also requires kubelet-level configuration to enable.

See [Configure a kubelet image credential provider](/docs/tasks/administer-cluster/kubelet-credential-provider/) for more details.
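
As a sketch, a credential provider configuration might look like the following (the plugin
binary name and image pattern are illustrative assumptions; the kubelet is pointed at this
file with `--image-credential-provider-config` and at the plugin directory with
`--image-credential-provider-bin-dir`):

```yaml
apiVersion: kubelet.config.k8s.io/v1
kind: CredentialProviderConfig
providers:
  # The plugin binary, looked up in the --image-credential-provider-bin-dir
  # directory; the name here is an illustrative example
  - name: ecr-credential-provider
    # Only images matching these patterns are routed to this plugin
    matchImages:
      - "*.dkr.ecr.*.amazonaws.com"
    # How long the kubelet may cache credentials returned by the plugin
    defaultCacheDuration: "12h"
    # Version of the exec protocol spoken between the kubelet and the plugin
    apiVersion: credentialprovider.kubelet.k8s.io/v1
```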

### Interpretation of config.json {#config-json}

The interpretation of `config.json` varies between the original Docker
Expand Down
@@ -8,7 +8,7 @@ weight: 20
---

<!-- overview -->
{{< feature-state for_k8s_version="v1.10" state="beta" >}}
{{< feature-state for_k8s_version="v1.26" state="stable" >}}

Kubernetes provides a [device plugin framework](https://git.k8s.io/design-proposals-archive/resource-management/device-plugin.md)
that you can use to advertise system hardware resources to the
@@ -145,8 +145,8 @@ The general workflow of a device plugin includes the following steps:
### Handling kubelet restarts

A device plugin is expected to detect kubelet restarts and re-register itself with the new
-kubelet instance. In the current implementation, a new kubelet instance deletes all the existing Unix sockets
-under `/var/lib/kubelet/device-plugins` when it starts. A device plugin can monitor the deletion
+kubelet instance. A new kubelet instance deletes all the existing Unix sockets under
+`/var/lib/kubelet/device-plugins` when it starts. A device plugin can monitor the deletion
of its Unix socket and re-register itself upon such an event.

## Device plugin deployment
@@ -165,16 +165,28 @@ Pod onto Nodes, to restart the daemon Pod after failure, and to help automate up

## API compatibility

-Kubernetes device plugin support is in beta. The API may change before stabilization,
-in incompatible ways. As a project, Kubernetes recommends that device plugin developers:
+Previously, the versioning scheme required the Device Plugin's API version to match
+exactly the Kubelet's version. Since the graduation of this feature to Beta in v1.12,
+this is no longer a hard requirement. The API is versioned and has been stable since
+the Beta graduation of this feature. Because of this, kubelet upgrades should be seamless,
+but the API may still change before stabilization, so upgrades are not guaranteed to be
+non-breaking.

-* Watch for changes in future releases.
+{{< caution >}}
+Although the Device Manager component of Kubernetes is a generally available feature,
+the _device plugin API_ is not stable. For information on the device plugin API and
+version compatibility, read [Device Plugin API versions](/docs/reference/node/device-plugin-api-versions/).
+{{< /caution >}}

+As a project, Kubernetes recommends that device plugin developers:

+* Watch for Device Plugin API changes in future releases.
* Support multiple versions of the device plugin API for backward/forward compatibility.

-If you enable the DevicePlugins feature and run device plugins on nodes that need to be upgraded to
-a Kubernetes release with a newer device plugin API version, upgrade your device plugins
-to support both versions before upgrading these nodes. Taking that approach will
-ensure the continuous functioning of the device allocations during the upgrade.
+To run device plugins on nodes that need to be upgraded to a Kubernetes release with
+a newer device plugin API version, upgrade your device plugins to support both versions
+before upgrading these nodes. Taking that approach will ensure the continuous functioning
+of the device allocations during the upgrade.

## Monitoring device plugin resources

2 changes: 2 additions & 0 deletions content/en/docs/concepts/scheduling-eviction/_index.md
@@ -26,8 +26,10 @@ of terminating one or more Pods on Nodes.
* [Pod Topology Spread Constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
* [Taints and Tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/)
* [Scheduling Framework](/docs/concepts/scheduling-eviction/scheduling-framework)
* [Dynamic Resource Allocation](/docs/concepts/scheduling-eviction/dynamic-resource-allocation)
* [Scheduler Performance Tuning](/docs/concepts/scheduling-eviction/scheduler-perf-tuning/)
* [Resource Bin Packing for Extended Resources](/docs/concepts/scheduling-eviction/resource-bin-packing/)
* [Pod Scheduling Readiness](/docs/concepts/scheduling-eviction/pod-scheduling-readiness/)

## Pod Disruption
