KEP-596: address review feedback
dobsonj committed Jan 26, 2022
1 parent d3aa8d0 commit 622d4cc
Showing 1 changed file with 19 additions and 6 deletions: keps/sig-storage/596-csi-inline-volumes/README.md
@@ -339,10 +339,13 @@ Since ephemeral volume requests will participate in only the mount/unmount volum
A rollback (or disabling the feature flag) may impact running workloads,
as any pod specs taking advantage of ephemeral inline volumes would fail
to start when the feature is disabled or the CSIDriver does not support it.
+It is recommended to delete any pods using this feature before disabling it.

###### What specific metrics should inform a rollback?

-Number of failures to attach ephemeral inline CSI volumes for supported drivers.
+No rollback criteria based on metrics alone. This feature has been enabled by
+default since 1.16 (beta) and pods that are using ephemeral inline volumes should
+not start failing as a result of this feature moving to GA.

###### Were upgrade and rollback tested? Was the upgrade->downgrade->upgrade path tested?

@@ -417,13 +420,23 @@ are missing a bunch of machinery and tooling and can't do that now.
There is a new `VolumeLifecycleModes` field in CSIDriverSpec that may slightly increase
the size by adding 2 new strings (`Persistent` and `Ephemeral`).
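For illustration, a CSIDriver object opting into both lifecycle modes might look like the following sketch (the driver name is hypothetical):

```yaml
# Hypothetical CSIDriver object declaring support for both lifecycle modes.
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: inline.storage.example.com   # hypothetical driver name
spec:
  volumeLifecycleModes:
    - Persistent
    - Ephemeral
```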

+There is also `pod.spec.volumes` that can contain `csi` items now. Its size is comparable
+to the size of Secrets/ConfigMap volumes inline in Pods. The CSI volume definitions will
+be bigger, as they contain a generic map of `volumeAttributes`, which contains opaque
+parameters for the CSI driver.
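A minimal pod spec using such an inline `csi` volume might look like this sketch; the driver name and `volumeAttributes` keys are illustrative, not from the KEP:

```yaml
# Hypothetical pod using an ephemeral inline CSI volume.
# Driver name and volumeAttributes are illustrative only.
apiVersion: v1
kind: Pod
metadata:
  name: inline-volume-demo
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      csi:
        driver: inline.storage.example.com   # hypothetical driver name
        volumeAttributes:                    # opaque, driver-specific parameters
          foo: bar
```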

###### Will enabling / using this feature result in increasing time taken by any operations covered by existing SLIs/SLOs?

-There should be no increase to the amount of time taken.
+Compared to CSI persistent volumes, there should be no increase in the amount of time
+taken to mount / unmount, as there will be fewer CSI calls required for inline volumes.
+Compared to Secrets and ConfigMaps, inline CSI volumes will be slower to mount / unmount,
+since an external CSI driver is responsible for providing the actual volume.

###### Will enabling / using this feature result in non-negligible increase of resource usage (CPU, RAM, disk, IO, ...) in any components?

-There should be no noticeable increase to resource usage for any components.
+Kubernetes itself should not see any noticeable increase in resource consumption,
+but CSI driver pods will need to be deployed on all the nodes in order to make
+use of this feature.
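CSI node plugins are typically run on every node with a DaemonSet. A minimal sketch follows; the image, driver name, and socket path are assumptions, and a real deployment also needs a node-driver-registrar sidecar and RBAC, omitted here for brevity:

```yaml
# Minimal sketch of deploying a CSI node plugin on every node via a DaemonSet.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-csi-node
spec:
  selector:
    matchLabels:
      app: example-csi-node
  template:
    metadata:
      labels:
        app: example-csi-node
    spec:
      containers:
        - name: csi-driver
          image: example.com/csi-driver:latest   # hypothetical image
          volumeMounts:
            - name: plugin-dir
              mountPath: /csi
      volumes:
        - name: plugin-dir
          hostPath:
            path: /var/lib/kubelet/plugins/inline.storage.example.com
            type: DirectoryOrCreate
```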

### Troubleshooting

@@ -433,9 +446,9 @@

###### What are other known failure modes?

-If the storage system fails to provision volumes, then this will be
-exposed as events for the PVC and (depending on the storage system)
-may also show up in metrics data.
+If the storage system fails to provision inline volumes, there will
+be an event on the affected pod indicating what went wrong when
+mounting the CSI volume.

Failure modes related to individual CSI drivers may require examining
the CSI driver logs.
