From 1b39aa7cd7495dae041421e97f1eb37aa7df3830 Mon Sep 17 00:00:00 2001
From: Jan Safranek
Date: Fri, 20 Jan 2023 17:39:43 +0100
Subject: [PATCH] Add draft of volume reconstruction KEP

---
 .../3756-volume-reconstruction/README.md | 905 ++++++++++++++++++
 .../3756-volume-reconstruction/kep.yaml  |  44 +
 2 files changed, 949 insertions(+)
 create mode 100644 keps/sig-storage/3756-volume-reconstruction/README.md
 create mode 100644 keps/sig-storage/3756-volume-reconstruction/kep.yaml

diff --git a/keps/sig-storage/3756-volume-reconstruction/README.md b/keps/sig-storage/3756-volume-reconstruction/README.md
new file mode 100644
index 000000000000..ed6a71eff306
--- /dev/null
+++ b/keps/sig-storage/3756-volume-reconstruction/README.md
@@ -0,0 +1,905 @@

# KEP-3756: Robust VolumeManager reconstruction after kubelet restart

- [Release Signoff Checklist](#release-signoff-checklist)
- [Summary](#summary)
- [Motivation](#motivation)
  - [Goals](#goals)
  - [Non-Goals](#non-goals)
- [Proposal](#proposal)
  - [User Stories (Optional)](#user-stories-optional)
    - [Story 1](#story-1)
    - [Story 2](#story-2)
  - [Notes/Constraints/Caveats (Optional)](#notesconstraintscaveats-optional)
  - [Risks and Mitigations](#risks-and-mitigations)
- [Design Details](#design-details)
  - [Test Plan](#test-plan)
    - [Prerequisite testing updates](#prerequisite-testing-updates)
    - [Unit tests](#unit-tests)
    - [Integration tests](#integration-tests)
    - [e2e tests](#e2e-tests)
  - [Graduation Criteria](#graduation-criteria)
  - [Upgrade / Downgrade Strategy](#upgrade--downgrade-strategy)
  - [Version Skew Strategy](#version-skew-strategy)
- [Production Readiness Review Questionnaire](#production-readiness-review-questionnaire)
  - [Feature Enablement and Rollback](#feature-enablement-and-rollback)
  - [Rollout, Upgrade and Rollback Planning](#rollout-upgrade-and-rollback-planning)
  - [Monitoring Requirements](#monitoring-requirements)
  - [Dependencies](#dependencies)
  - [Scalability](#scalability)
  - [Troubleshooting](#troubleshooting)
- [Implementation History](#implementation-history)
- [Drawbacks](#drawbacks)
- [Alternatives](#alternatives)
- [Infrastructure Needed (Optional)](#infrastructure-needed-optional)

## Release Signoff Checklist

Items marked with (R) are required *prior to targeting to a milestone / release*.
- [ ] (R) Enhancement issue in release milestone, which links to KEP dir in [kubernetes/enhancements] (not the initial KEP PR)
- [ ] (R) KEP approvers have approved the KEP status as `implementable`
- [ ] (R) Design details are appropriately documented
- [ ] (R) Test plan is in place, giving consideration to SIG Architecture and SIG Testing input (including test refactors)
  - [ ] e2e Tests for all Beta API Operations (endpoints)
  - [ ] (R) Ensure GA e2e tests meet requirements for [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
  - [ ] (R) Minimum Two Week Window for GA e2e tests to prove flake free
- [ ] (R) Graduation criteria is in place
  - [ ] (R) [all GA Endpoints](https://github.com/kubernetes/community/pull/1806) must be hit by [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
- [ ] (R) Production readiness review completed
- [ ] (R) Production readiness review approved
- [ ] "Implementation History" section is up-to-date for milestone
- [ ] User-facing documentation has been created in [kubernetes/website], for publication to [kubernetes.io]
- [ ] Supporting documentation—e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes

[kubernetes.io]: https://kubernetes.io/
[kubernetes/enhancements]: https://git.k8s.io/enhancements
[kubernetes/kubernetes]: https://git.k8s.io/kubernetes
[kubernetes/website]: https://git.k8s.io/website

## Summary

After kubelet is restarted, it loses track of all volumes it mounted for
running Pods. It tries to restore this state from the API server, where kubelet
can find Pods that _should_ be running, and from the host's OS, where it can
find the actually mounted volumes. We know this process is imperfect, and this
KEP reworks it. While the work is technically a bugfix, it changes large parts
of kubelet, and we'd like to have it behind a feature gate to provide users a
way to fall back to the old implementation in case of problems.

This work started as part of
[KEP 1710](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1710-selinux-relabeling)
and even went alpha in v1.26, but we'd like to have a separate feature + feature
gate to be able to graduate VolumeManager reconstruction faster.

## Motivation

### Goals

* During kubelet startup, allow it to populate additional information about
  _how_ existing volumes are mounted.
  [KEP 1710](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1710-selinux-relabeling)
  needs to know what mount options the previous kubelet used when mounting
  the volumes, to be able to tell whether they need any change.
* Fix [#105536](https://github.com/kubernetes/kubernetes/issues/105536): Volumes
  are not cleaned up (unmounted) after kubelet restart, which needs a similar
  VolumeManager refactoring.
* In general, make volume cleanup more robust.

### Non-Goals

## Introduction

*VolumeManager* is the part of kubelet that mounts volumes that should be
mounted (i.e. a Pod that needs the volume exists) and unmounts volumes that are
not needed any longer (all Pods that used them were deleted).

VolumeManager keeps two caches:
* *DesiredStateOfWorld* (DSW) contains volumes that should be mounted.
* *ActualStateOfWorld* (ASW) contains currently mounted volumes.

VolumeManager then compares these two caches and tries to move ASW towards DSW.
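To make the relationship between the two caches concrete, here is a minimal,
illustrative Go sketch of the reconcile idea. The types and names are
simplified stand-ins, not kubelet's actual `volumemanager` API.

```go
// Illustrative sketch only: simplified stand-ins for VolumeManager's DSW/ASW
// caches and its reconciliation loop. The real types live in kubelet's
// volumemanager package and track much more state per volume.
package main

import "fmt"

// state maps a unique volume name to "present in this cache".
type state map[string]bool

// reconcile moves ASW towards DSW: mount what is desired but not mounted,
// unmount what is mounted but no longer desired.
func reconcile(dsw, asw state) {
	for vol := range dsw {
		if !asw[vol] {
			fmt.Println("mounting", vol)
			asw[vol] = true
		}
	}
	for vol := range asw {
		if !dsw[vol] {
			fmt.Println("unmounting", vol)
			delete(asw, vol)
		}
	}
}

func main() {
	dsw := state{"pvc-web": true}                 // a Pod still needs pvc-web
	asw := state{"pvc-web": true, "pvc-db": true} // pvc-db's Pod was deleted
	reconcile(dsw, asw)                           // pvc-db gets unmounted
}
```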
Both caches exist only in memory and are lost when the kubelet process dies.
It's relatively easy to populate DSW - just list all Pods from the API server
(plus static Pods) and collect their volumes.

*Volume reconstruction* is a process where kubelet tries to create a single
valid `PersistentVolumeSpec` or `VolumeSpec` for a volume from the OS,
typically from the mount table, by looking at what's mounted at
`/var/lib/kubelet/pods/*/volumes/XYZ`. This process is imperfect; it populates
only the `(Persistent)VolumeSpec` fields that are necessary to unmount the
volume (i.e. to call `volumePlugin.TearDown` and `UnmountDevice`).

Today, kubelet populates VolumeManager's DSW first, from static Pods and Pods
received from the API server. ASW is populated from the OS after DSW is
complete and **only volumes missing in DSW are added there**. In other words,
kubelet reconstructs only the volumes of Pods that were running before kubelet
restarted but had been deleted by the time kubelet started again. (If a Pod is
still Running, its volumes are in DSW.)

We assumed that this was enough, because if a volume is in DSW, the
VolumeManager will try to mount the volume, and it will eventually reach ASW.

We needed to add
[a complex workaround](https://github.com/kubernetes/kubernetes/pull/110670)
to actually unmount a volume if it's initially in DSW, but the user deletes all
Pods that need it before the volume reaches ASW.

## Proposal

We propose to reverse the kubelet startup process.

1. Quickly reconstruct ASW from the OS and add **all** found volumes to ASW
   when kubelet starts. "Quickly" means the process should look only at the OS
   and files/directories in `/var/lib/kubelet/` and it should not require the
   API server or any network calls. In particular, the API server may not be
   available at this stage of kubelet startup. (A sketch of this scan is shown
   at the end of this section.)
2. Once all mounted volumes are in ASW, start populating DSW from static Pods
   and Pods received from the API server.
3. When connection to the API server becomes available, complete the
   information in ASW with data from the API server (e.g. from `node.status`).
   This typically happens in parallel to the previous step.

Benefits:

* All volumes are reconstructed from the OS. As a result, ASW can contain the
  real information about how the volumes are mounted, e.g. their mount options.
  This will help with
  [KEP 1710](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1710-selinux-relabeling).
* Some issues become much easier to fix, e.g.
  * [#105536](https://github.com/kubernetes/kubernetes/issues/105536)
  * We can remove the workarounds for
    [#96635](https://github.com/kubernetes/kubernetes/issues/96635)
    and [#70044](https://github.com/kubernetes/kubernetes/issues/70044),
    as they get fixed naturally by the refactoring.

We also propose to split this work out of
[KEP 1710](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1710-selinux-relabeling),
as it can be useful outside of SELinux relabeling and could graduate
separately. To split the feature, we propose a new feature gate,
`NewVolumeManagerReconstruction`.
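The sketch below shows, under simplifying assumptions, what the quick
reconstruction scan from step 1 could look like: it walks the pod directories
on disk without any API or network calls. The directory layout and the
`reconstructedVolume` type are illustrative only; the real code uses volume
plugins to rebuild a full `VolumeSpec` for each directory it finds.

```go
// Hypothetical sketch of the "reconstruct everything from disk" scan proposed
// above; not the actual kubelet implementation.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// reconstructedVolume is a simplified stand-in for the information kubelet
// can recover purely from the pod directory, without the API server.
type reconstructedVolume struct {
	podUID     string
	pluginName string
	volumeName string
}

// scanPodsDir lists /var/lib/kubelet/pods/<uid>/volumes/<plugin>/<volume>
// and returns everything it finds, to be added to ASW as "uncertain".
func scanPodsDir(root string) ([]reconstructedVolume, error) {
	var out []reconstructedVolume
	pods, err := os.ReadDir(root)
	if err != nil {
		return nil, err
	}
	for _, pod := range pods {
		volsDir := filepath.Join(root, pod.Name(), "volumes")
		plugins, err := os.ReadDir(volsDir)
		if err != nil {
			continue // this pod has no volumes directory
		}
		for _, plugin := range plugins {
			vols, err := os.ReadDir(filepath.Join(volsDir, plugin.Name()))
			if err != nil {
				continue
			}
			for _, vol := range vols {
				out = append(out, reconstructedVolume{
					podUID:     pod.Name(),
					pluginName: plugin.Name(),
					volumeName: vol.Name(),
				})
			}
		}
	}
	return out, nil
}

func main() {
	vols, err := scanPodsDir("/var/lib/kubelet/pods")
	if err != nil {
		fmt.Println("scan failed:", err)
		return
	}
	fmt.Printf("reconstructed %d volumes\n", len(vols))
}
```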
### User Stories (Optional)

#### Story 1

(This is not a new story, we want to keep this behavior.)

As a cluster admin, I want kubelet to resume where it stopped when it was
restarted or its machine was rebooted. It must be able to recognize what
happened in the meantime and either unmount volumes of Pods that were deleted
in the API server or mount volumes for newly created Pods.

### Notes/Constraints/Caveats (Optional)

TODO: delete?

### Risks and Mitigations

The whole VolumeManager startup was rewritten as part of
[KEP 1710](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1710-selinux-relabeling).
It can contain bugs that are not trivial to find, because kubelet can be used
in a number of situations that we don't cover in CI. For example, we found
(and fixed) a case where the API server is itself a static Pod managed by the
kubelet that is just starting. We don't know what other kubelet configurations
people use, so we decided to write a KEP and move the new VolumeManager startup
behind a feature gate.

## Design Details

### Proposed VolumeManager startup

When kubelet starts:

1. VolumeManager starts populating DSW from PodManager, which reads Pods from
   static Pod manifests and/or the API server.

2. In parallel to 1., VolumeManager scans `/var/lib/kubelet/pods/*`,
   reconstructs *all* found volumes and adds them to ASW as *uncertain*. I.e.
   depending on DSW, the VolumeManager will either call `volumePlugin.SetUp()`
   to make sure the volume is fully mounted, or it will call
   `volumePlugin.TearDown()` to make sure the volume is unmounted. Only
   information that is available in the Pod directory on disk is reconstructed
   into ASW at this point.
   * Since the volume reconstruction can be imperfect and can miss
     `devicePath`, VolumeManager adds all reconstructed volumes to the
     `volumesNeedDevicePath` array, so their reconstruction can be finished
     later from `node.status.volumesAttached`.
   * All volumes that failed reconstruction are added to the
     `volumesFailedReconstruction` list.

After **ASW** is populated, VolumeManager starts its reconciliation loop, i.e.
starts comparing ASW and DSW and periodically calls (see the sketch after this
list):

1. `mountOrAttachVolumes()` - mounts (and attaches, if necessary) volumes that
   are in DSW, but not in ASW. This can happen even before DSW is fully
   populated, because ASW is fully populated at this point.

2. `updateReconstructedDevicePaths()` - once kubelet gets connection to the API
   server and reads its own `Node.status`, volumes in `volumesNeedDevicePath`
   (i.e. all reconstructed volumes) are updated from
   `node.status.volumesAttached`. This happens only once,
   `volumesNeedDevicePath` is cleared afterwards.

3. (Only once): Add all reconstructed volumes to `node.status.volumesInUse`.

4. Only after DSW has been fully populated (i.e. it contains at least the Pods
   that were Running when kubelet started), **and** `devicePaths` have been
   populated from `node.status`, can VolumeManager start unmounting volumes.
   It calls:
   1. `unmountVolumes()` - unmounts (`TearDown`) pod local volumes that are in
      ASW and are not in DSW.
   2. `unmountDetachDevices()` - unmounts (`UnmountDevice`) global volume
      mounts of volumes that are in ASW and are not in DSW.
   3. `cleanOrphanVolumes()` - tries to clean up `volumesFailedReconstruction`.
      Here kubelet cannot call the appropriate volume plugin to unmount a
      volume, because kubelet failed to reconstruct the volume spec from
      `/var/lib/kubelet/pods/*/volumes/xyz`. Kubelet at least tries to unmount
      the directory and clean up any orphan files there.
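The following simplified sketch summarizes the ordering constraints above. The
function and field names are illustrative stand-ins for the calls listed in
this section, not kubelet's actual reconciler API.

```go
// Simplified sketch of the ordering rules above; not the real reconciler.
package main

import "fmt"

type reconciler struct {
	dswPopulated       bool // DSW contains at least all Pods running at kubelet start
	devicePathsUpdated bool // devicePaths refreshed from node.status.volumesAttached
}

func (r *reconciler) reconcile() {
	// Mounting is safe immediately, because ASW was reconstructed first.
	r.mountOrAttachVolumes()

	// One-shot: fill in devicePath of reconstructed (uncertain) volumes once
	// the API server is reachable.
	if !r.devicePathsUpdated {
		r.devicePathsUpdated = r.updateReconstructedDevicePaths()
	}

	// Unmounting must wait for both conditions, otherwise kubelet could tear
	// down a volume whose Pod simply has not been seen yet.
	if r.dswPopulated && r.devicePathsUpdated {
		r.unmountVolumes()
		r.unmountDetachDevices()
		r.cleanOrphanVolumes()
	}
}

func (r *reconciler) mountOrAttachVolumes() {
	fmt.Println("mount/attach volumes that are in DSW but not in ASW")
}

func (r *reconciler) updateReconstructedDevicePaths() bool {
	fmt.Println("update devicePaths from node.status.volumesAttached")
	return true // pretend the API server was reachable
}

func (r *reconciler) unmountVolumes() {
	fmt.Println("tear down pod-local volumes that are in ASW but not in DSW")
}

func (r *reconciler) unmountDetachDevices() {
	fmt.Println("unmount global mounts that are no longer needed")
}

func (r *reconciler) cleanOrphanVolumes() {
	fmt.Println("clean up volumes that failed reconstruction")
}

func main() {
	r := &reconciler{dswPopulated: true}
	r.reconcile()
}
```

The important property is that mounting may start as soon as ASW is
reconstructed, while unmounting waits until DSW is fully populated and device
paths were refreshed from `node.status`, so a volume is never torn down just
because its Pod has not been seen yet.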
Note that `mountOrAttachVolumes`, for example, can call
`volumePlugin.MountDevice` / `SetUp()` on a reconstructed volume (because it's
*uncertain*) and finally update ASW, while the VolumeManager is still waiting
for the API server to update `devicePath` of the same volume in ASW (step 2.
above). We made sure that `updateReconstructedDevicePaths` will update the
`devicePath` only for *uncertain* volumes, so it does not overwrite the
*certain* ones.

### Old VolumeManager startup

When kubelet starts:

1. VolumeManager starts populating DSW from PodManager, which reads Pods from
   static Pod manifests and/or the API server.

While the DSW populator runs in another thread, VolumeManager starts its
reconciliation loop. Note that ASW is empty at this point and DSW is still
being populated. The loop compares ASW and DSW and periodically makes the
following calls:

1. `unmountVolumes()` - unmounts (`TearDown`) pod local volumes that are in
   ASW and are not in DSW. Since ASW is initially empty, this call becomes
   useful only later.
2. `mountOrAttachVolumes()` - mounts (and attaches, if necessary) volumes that
   are in DSW, but not in ASW. This will eventually happen for all volumes in
   DSW, because ASW starts empty. This is actually how ASW gets populated.
3. `unmountDetachDevices()` - unmounts (`UnmountDevice`) global volume mounts
   of volumes that are in ASW and are not in DSW.
4. Only once after DSW is fully populated (i.e. it contains at least the Pods
   that were Running when kubelet started):
   1. VolumeManager scans `/var/lib/kubelet/pods/*`
      and reconstructs **only** volumes that are not in DSW. (If a volume is in
      DSW, we expect that it reaches ASW through `mountOrAttachVolumes()`
      above.)
      * `devicePath` of reconstructed volumes is populated from
        `node.status.volumesAttached` right away.
      * In the next reconciliation loop, reconstructed volumes that are not in
        DSW are finally unmounted in step 1. above.
      * TODO: mention the workaround for deleting Pods before their volumes
        reach ASW.
   2. VolumeManager reports all reconstructed volumes in
      `node.status.volumesInUse`.

### Test Plan

[x] I/we understand the owners of the involved components may require updates to
existing tests to make this code solid enough prior to committing the changes necessary
to implement this enhancement.

##### Prerequisite testing updates

##### Unit tests

- ``: `` - ``

##### Integration tests

- :

##### e2e tests

- :

### Graduation Criteria

### Upgrade / Downgrade Strategy

### Version Skew Strategy

## Production Readiness Review Questionnaire

### Feature Enablement and Rollback

###### How can this feature be enabled / disabled in a live cluster?

- [ ] Feature gate (also fill in values in `kep.yaml`)
  - Feature gate name:
  - Components depending on the feature gate:
- [ ] Other
  - Describe the mechanism:
  - Will enabling / disabling the feature require downtime of the control
    plane?
  - Will enabling / disabling the feature require downtime or reprovisioning
    of a node? (Do not assume `Dynamic Kubelet Config` feature is enabled).

###### Does enabling the feature change any default behavior?

###### Can the feature be disabled once it has been enabled (i.e. can we roll back the enablement)?

###### What happens if we reenable the feature if it was previously rolled back?

###### Are there any tests for feature enablement/disablement?

### Rollout, Upgrade and Rollback Planning

###### How can a rollout or rollback fail?
Can it impact already running workloads? + + + +###### What specific metrics should inform a rollback? + + + +###### Were upgrade and rollback tested? Was the upgrade->downgrade->upgrade path tested? + + + +###### Is the rollout accompanied by any deprecations and/or removals of features, APIs, fields of API types, flags, etc.? + + + +### Monitoring Requirements + + + +###### How can an operator determine if the feature is in use by workloads? + + + +###### How can someone using this feature know that it is working for their instance? + + + +- [ ] Events + - Event Reason: +- [ ] API .status + - Condition name: + - Other field: +- [ ] Other (treat as last resort) + - Details: + +###### What are the reasonable SLOs (Service Level Objectives) for the enhancement? + + + +###### What are the SLIs (Service Level Indicators) an operator can use to determine the health of the service? + + + +- [ ] Metrics + - Metric name: + - [Optional] Aggregation method: + - Components exposing the metric: +- [ ] Other (treat as last resort) + - Details: + +###### Are there any missing metrics that would be useful to have to improve observability of this feature? + + + +### Dependencies + + + +###### Does this feature depend on any specific services running in the cluster? + + + +### Scalability + + + +###### Will enabling / using this feature result in any new API calls? + + + +###### Will enabling / using this feature result in introducing new API types? + + + +###### Will enabling / using this feature result in any new calls to the cloud provider? + + + +###### Will enabling / using this feature result in increasing size or count of the existing API objects? + + + +###### Will enabling / using this feature result in increasing time taken by any operations covered by existing SLIs/SLOs? + + + +###### Will enabling / using this feature result in non-negligible increase of resource usage (CPU, RAM, disk, IO, ...) in any components? + + + +### Troubleshooting + + + +###### How does this feature react if the API server and/or etcd is unavailable? + +###### What are other known failure modes? + + + +###### What steps should be taken if SLOs are not being met to determine the problem? + +## Implementation History + + + +## Drawbacks + + + +## Alternatives + + + +## Infrastructure Needed (Optional) + + diff --git a/keps/sig-storage/3756-volume-reconstruction/kep.yaml b/keps/sig-storage/3756-volume-reconstruction/kep.yaml new file mode 100644 index 000000000000..9c8ad9cf40ff --- /dev/null +++ b/keps/sig-storage/3756-volume-reconstruction/kep.yaml @@ -0,0 +1,44 @@ +title: Robust VolumeManager reconstruction after kubelet restart +kep-number: 3756 +authors: + - "@jsafrane" +owning-sig: sig-storage +participating-sigs: +status: provisional +creation-date: 2023-01-20 +reviewers: + - "@msau42" + - "@gnufied" + - "@jingxu97" +approvers: + - "@msau42" +see-also: + - "/keps/sig-storage/1710-selinux-relabeling" +replaces: + +# The target maturity stage in the current dev cycle for this KEP. +stage: beta + +# The most recent milestone for which work toward delivery of this KEP has been +# done. This can be the current (upcoming) milestone, if it is being actively +# worked on. +latest-milestone: "v1.27" + +# The milestone at which this feature was, or is targeted to be, at each stage. 
+milestone: + alpha: "v1.26" # as part of /keps/sig-storage/1710-selinux-relabeling + beta: "v1.27" + stable: "v1.29" + +# The following PRR answers are required at alpha release +# List the feature gate name and the components for which it must be enabled +feature-gates: + - name: NewVolumeManagerReconstruction + components: + - kubelet + +disable-supported: true + +# The following PRR answers are required at beta release +metrics: +# - TODO: my_feature_metric