CSI: support for in-line volumes in pods. #2273

# In-line CSI volumes in Pods

Author: @jsafrane

## Goal
* Define the API and high-level design for in-line CSI volumes in Pods.
* Make in-line CSI volumes secure for use as ephemeral volumes (such as Secrets or ConfigMap).

## Motivation
Currently, CSI can be used only through a PersistentVolume object. All other persistent volume sources support in-line volumes in Pods, and CSI should be no exception. There are three main drivers:
* We want to move away from in-tree volume plugins to CSI, as designed in a separate proposal https://github.com/kubernetes/community/pull/2199/. In-line volumes should use CSI too.
* CSI drivers can be used to provide ephemeral volumes that inject state, configuration, secrets, identity or similar information into pods, like the Secrets and ConfigMap in-tree volumes do today. We don't want to force users to create PVs for each such volume; they should be usable in-line in pods, like regular Secrets or ephemeral Flex volumes.
* Reach feature parity with Flex and deprecate Flex (i.e. replace it with some CSI-Flex bridge; this bridge is out of scope of this proposal).

> **Reviewer:** You hint at it a bit, but might be good to explicitly add it as a motivation:
> * Reach feature parity with the FlexVolume API and allow for its deprecation.
>
> **@jsafrane (author):** Added a new motivation item.

## API
`VolumeSource` needs to be extended with a CSI volume source:
```go
type VolumeSource struct {
	// <snip>

	// CSI (Container Storage Interface) represents storage that is handled by
	// an external CSI driver (Beta feature).
	// +optional
	CSI *CSIVolumeSource
}

// Represents storage that is managed by an external CSI volume driver (Alpha feature)
type CSIVolumeSource struct {
	// Driver is the name of the driver to use for this volume.
	// Required.
	Driver string

	// VolumeHandle is the unique ID of the volume. It is the volume ID used in
	// all CSI calls, optionally with a prefix based on the VolumeHandlePrefix
	// value.
	// Required.
	VolumeHandle string

	// VolumeHandlePrefix is the type of prefix added to VolumeHandle before using
	// it as the CSI volume ID. It ensures that volumes with the same VolumeHandle
	// in different pods or namespaces get unique CSI volume IDs.
	// Required.
	VolumeHandlePrefix CSIVolumeHandlePrefix

	// Optional: the value to pass to ControllerPublishVolumeRequest.
	// Defaults to false (read/write).
	// +optional
	ReadOnly bool

	// Filesystem type to mount.
	// Must be a filesystem type supported by the host operating system.
	// Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.
	// +optional
	FSType string

	// Attributes of the volume. This corresponds to "volume_attributes" in some
	// CSI calls.
	// +optional
	VolumeAttributes map[string]string

	// ControllerPublishSecretRef is a reference to the secret object containing
	// sensitive information to pass to the CSI driver to complete the CSI
	// ControllerPublishVolume and ControllerUnpublishVolume calls.
	// This field is optional, and may be empty if no secret is required. If the
	// secret object contains more than one secret, all secrets are passed.
	// +optional
	ControllerPublishSecretRef *LocalObjectReference

	// NodeStageSecretRef is a reference to the secret object containing sensitive
	// information to pass to the CSI driver to complete the CSI NodeStageVolume
	// and NodeUnstageVolume calls.
	// This field is optional, and may be empty if no secret is required. If the
	// secret object contains more than one secret, all secrets are passed.
	// +optional
	NodeStageSecretRef *LocalObjectReference

	// NodePublishSecretRef is a reference to the secret object containing
	// sensitive information to pass to the CSI driver to complete the CSI
	// NodePublishVolume and NodeUnpublishVolume calls.
	// This field is optional, and may be empty if no secret is required. If the
	// secret object contains more than one secret, all secrets are passed.
	// +optional
	NodePublishSecretRef *LocalObjectReference
}

type CSIVolumeHandlePrefix string

const (
	// VolumeHandle is prefixed by the Pod UID.
	CSIVolumeHandlePrefixPod CSIVolumeHandlePrefix = "Pod"
	// VolumeHandle is prefixed by the UID of the namespace where the pod is located.
	CSIVolumeHandlePrefixNamespace CSIVolumeHandlePrefix = "Namespace"
	// VolumeHandle is not modified.
	CSIVolumeHandlePrefixNone CSIVolumeHandlePrefix = "None"
)
```

Review comments on this block:

> **Reviewer (on `VolumeHandle`):** VolumeHandlePrefix isn't in the spec anymore.
>
> **Reviewer (on `FSType`):** Given kubernetes/kubernetes#65122, I'm not sure that ext4 should be inferred.

The differences between `CSIVolumeSource` (in-lined in a pod) and `CSIPersistentVolumeSource` (in a PV) are:

* All secret references in in-line volumes can refer only to secrets in the same namespace where the corresponding pod is running. This is common to all other volume sources that refer to secrets, including Flex.
* VolumeHandle in in-line volumes can have a prefix. This prefix (Pod UID, Namespace UID or nothing) is added to the VolumeHandle before each CSI call (a sketch of how the prefix could be applied follows this list). It makes sure that each pod uses a different volume ID for its ephemeral volumes. The prefix must be explicitly set by the pod author; there is no default.
  * Users don't need to think about VolumeHandles used in other pods in their namespace, as each pod gets a unique prefix when `CSIVolumeHandlePrefixPod` is used. A CSI volume ID with this prefix cannot accidentally conflict with a volume ID in another pod.
  * Each pod created by a ReplicaSet, StatefulSet or DaemonSet gets the same copy of the pod template. `CSIVolumeHandlePrefixPod` makes sure that each pod gets its own unique volume ID and thus can get its own volume instance.
  * Without the prefix, a user could guess the volume ID of a secret-like CSI volume of another user and craft a pod with an in-line volume referencing it. The CSI driver, obeying idempotency, must then give the same volume to this pod. If users can use only `CSIVolumeHandlePrefixNamespace` or `CSIVolumeHandlePrefixPod` in their in-line volumes, we can make sure that they can't steal each other's secrets.
  * `PodSecurityPolicy` will be extended to allow / deny users using in-line volumes with no prefix.
* Finally, `CSIVolumeHandlePrefixNone` allows selected users (based on PSP) to use persistent storage volumes in-line in pods.
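
To make the prefix semantics concrete, the following is a minimal sketch (not part of the proposed API) of how the effective CSI volume ID could be derived from `VolumeHandle` and `VolumeHandlePrefix`. The function name and the separator are illustrative assumptions; the proposal only requires that the resulting IDs cannot collide across pods or namespaces.

```go
// effectiveCSIVolumeID is a hypothetical helper showing how a prefix could be
// applied to the user-supplied VolumeHandle before it is used in CSI calls.
func effectiveCSIVolumeID(vol *CSIVolumeSource, podUID, namespaceUID string) string {
	switch vol.VolumeHandlePrefix {
	case CSIVolumeHandlePrefixPod:
		// Each pod gets its own CSI volume ID, even for identical pod templates.
		return podUID + "/" + vol.VolumeHandle
	case CSIVolumeHandlePrefixNamespace:
		// All pods in a namespace share the same CSI volume ID.
		return namespaceUID + "/" + vol.VolumeHandle
	default:
		// CSIVolumeHandlePrefixNone: the handle is used as-is (persistent storage).
		return vol.VolumeHandle
	}
}
```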

## Implementation
### Provisioning/Deletion
N/A; provisioning and deletion work only with PVs, not with in-line volumes.

### Attach/Detach
The current `storage.VolumeAttachment` object contains only a reference to the PV that is being attached. It must be extended with a VolumeSource for in-line volumes in pods.

```go
// VolumeAttachmentSpec is the specification of a VolumeAttachment request.
type VolumeAttachmentSpec struct {
	// <snip>

	// Source represents the volume that should be attached.
	Source VolumeAttachmentSource
}

// VolumeAttachmentSource represents a volume that should be attached, either
// a PersistentVolume or a volume in-lined in a Pod.
// Exactly one member can be set.
type VolumeAttachmentSource struct {
	// Name of the persistent volume to attach.
	// +optional
	PersistentVolumeName *string

	// InlineVolumeSource represents the source location of an in-line volume in a pod to attach.
	// +optional
	InlineVolumeSource *InlineVolumeSource
}

// InlineVolumeSource represents the source location of an in-line volume in a pod.
type InlineVolumeSource struct {
	// VolumeSource is copied from the pod. It ensures that the attacher has enough
	// information to detach a volume when the pod is deleted before detaching.
	// Only CSIVolumeSource can be set.
	// Required.
	VolumeSource v1.VolumeSource

	// Namespace of the pod with the in-line volume. It is used to resolve
	// references to Secrets in VolumeSource.
	// Required.
	Namespace string
}
```

Review comments on this block:

> **@mikedanese:** What does inline mean here? Is this actually inline if the VolumeSource spec isn't embedded in the PodSpec? Maybe I'm misreading the API. What would a pod.yaml look like if I were to use an inline CSI volume?
>
> **@jsafrane (author):** Pod yaml has an in-line volume:
>
> ```yaml
> apiVersion: v1
> kind: Pod
> metadata:
>   name: testpod
> spec:
>   containers:
>     ...
>   volumes:
>     - name: vol
>       csi:
>         driver: io.kubernetes.storage.mock
>         volumeAttributes:
>           name: "Mock Volume 1"
>         volumeHandle: "1"
> ```
>
> This `csi:` gets copied to VolumeAttachment because the Pod does not need to exist at detach time and we need the volume source somewhere to detach volumes.
>
> **Reviewer:** @jsafrane, a couple of things I would like to suggest here:
>
> 1. I don't like the name InlineVolumeSource as it seems to prematurely leak the usage/impl of the component in its name. Maybe
> 2. I am thinking of removing one level of object abstraction by putting PersistentVolumeName and Source directly in the VolumeAttachmentSpec:
>
> ```go
> type VolumeAttachmentSpec struct {
> 	// <snip>
>
> 	// Source represents the volume that should be attached.
> 	PersistentVolumeName *string
>
> 	Source *VolumeAttachmentSource
> }
>
> type VolumeAttachmentSource struct {
> 	// VolumeSource is copied from the pod. It ensures that attacher has enough
> 	// information to detach a volume when the pod is deleted before detaching.
> 	// Only CSIVolumeSource can be set.
> 	// Required.
> 	VolumeSource v1.VolumeSource
>
> 	// Namespace of the pod with in-line volume. It is used to resolve
> 	// references to Secrets in VolumeSource.
> 	// Required.
> 	Namespace string
> }
> ```
>
> Let me know if those changes work for you... If so, I can continue with implementation.

* The A/D controller **copies the whole `VolumeSource`** from the `Pod` into the `VolumeAttachment`. This allows the external CSI attacher to detach volumes for deleted pods without keeping any internal database of attached VolumeSources. (A sketch of a resulting `VolumeAttachment` object is shown at the end of this section.)
> **Reviewer:**
> * what will create VolumeAttachment objects corresponding to pods?
> * what will clean up VolumeAttachment objects after pods are deleted?
> * how will the VolumeAttachment objects be named? is it derived from the pod namespace and name?
> * what is the behavior if two pods both contain identical VolumeSources?
>
> **@jsafrane (author):** The A/D controller creates and cleans up the VolumeAttachment objects; there is no difference there from attaching PVs. They will be named the same as for PVs, i.e. CSI driver name + CSI volume handle + node name, all hashed to conform to naming rules. If two pods contain identical VolumeSources, they both get the volume, assuming the volume is ReadWriteMany. And it will work even if one of the pods uses a PV and the other has the same volume in-lined.
>
> **@liggitt:** The CSI volume handle in a PV is considered trustworthy, because creating a PV is a highly privileged operation. The entity creating the PV is assumed to create unique volume handles, right?
>
> The CSI volume handle in a pod is not particularly trustworthy. What happens if two pods in different namespaces (scheduled to the same node) both specify the same CSI driver and volume handle, but different VolumeAttributes, fstype, etc.? Which one wins and gets its content in the single VolumeAttachment object?
>
> **@jsafrane (author):** The first one wins. It's the same as when two users create a non-CSI in-line volume, such as Ceph RBD or AWS EBS.
>
> Which brings up another topic: we should extend PSP / SCC to allow/disallow individual CSI drivers in-lined in pods, similar to Flex: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#flexvolume-drivers
>
> Added a PSP change.

* Using the whole `VolumeSource` allows us to re-use `VolumeAttachment` for any other in-line volume in the future. We provide validation that this `VolumeSource` contains only `CSIVolumeSource` to clearly state that only CSI is supported now.
* The external CSI attacher must be extended to process either `PersistentVolumeName` or `VolumeSource`.
* Since an in-line volume in a pod can refer to a secret in the same namespace as the pod, **the external attacher may need permissions to read Secrets in any namespace**.
* The CSI `ControllerUnpublishVolume` call (~ volume detach) requires the Secrets to be available at detach time. The current CSI attacher implementation simply expects that the Secrets are available at detach time.
  * Secrets for PVs are "global", out of the user's namespace, so this assumption is probably OK.
  * Secrets for in-line volumes must be in the same namespace as the pod that contains the volume. Users can delete them before the volume is detached. We deliberately choose to let the external attacher fail when such a Secret cannot be found at detach time and keep the volume attached, reporting errors about the missing Secrets to the user.
  * Since access to in-line volumes can be configured by `PodSecurityPolicy` (see below), we expect that the cluster admin gives access to CSI drivers that require secrets at detach time only to educated users who know they should not delete Secrets used in volumes.
  * The number of CSI drivers that require Secrets on detach is probably very limited. No in-tree Kubernetes volume plugin requires them on detach.
  * We will provide clear documentation that using in-line volumes with drivers that require credentials on detach may leave orphaned attached volumes that Kubernetes is not able to detach. It's up to the cluster admin to decide if using such a CSI driver is worth it.

> **Reviewer:** Curious, if it happens, what would be the recovery steps for a cluster admin to clear this condition? It cannot stay orphaned forever.
>
> **Reviewer:** I think this is a good compromise.
>
> Maybe we should specify finer-grained behaviour with respect to pod security policy, though. Maybe that belongs in a different spec. Being able to whitelist a particular CSI driver such as image-driver while making a driver such as nfs-driver PV-only could be handy. Looking at the pod security policy, `volumes` is currently a string array, so it would only allow/disallow all of CSI. Perhaps the PSP volume list could ignore just `csi` and use the `.volume.csi.driver` info instead.
>
> Never mind. I just noticed it was already in one of the more recent versions.
>
> **@jsafrane (author):** @sbezverk, there already is a metric of failed detaches and the admin can look at VolumeAttachments with a failed detach.

### Kubelet (MountDevice/SetUp/TearDown/UnmountDevice)
The in-tree CSI volume plugin calls in kubelet get a universal `volume.Spec`, which contains either a `v1.VolumeSource` from the Pod (for in-line volumes) or a `v1.PersistentVolume`. The CSI volume plugin must be modified to check for the presence of `VolumeSource` or `PersistentVolume` and read the NodeStage/NodePublish secrets from the appropriate source. Kubelet does not need any new permissions; it can already read secrets for pods that it handles. These secrets are needed only for the `MountDevice`/`SetUp` calls and don't need to be cached until `TearDown`/`UnmountDevice`.
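
As a minimal sketch of this dispatch, the hypothetical helper below shows how the CSI volume plugin could resolve the NodePublish secret reference from either side of a `volume.Spec`. It assumes the proposed `CSIVolumeSource` field on `v1.Volume`; the helper name and the exact namespace handling are illustrative assumptions, not part of the proposal.

```go
// nodePublishSecret returns the name and namespace of the NodePublish secret,
// taking it either from the PV (CSIPersistentVolumeSource, whose secret
// reference carries its own namespace) or from the in-line volume (whose secret
// must live in the pod's namespace).
func nodePublishSecret(spec *volume.Spec, podNamespace string) (name, namespace string) {
	if spec.PersistentVolume != nil && spec.PersistentVolume.Spec.CSI != nil {
		if ref := spec.PersistentVolume.Spec.CSI.NodePublishSecretRef; ref != nil {
			return ref.Name, ref.Namespace
		}
		return "", ""
	}
	if spec.Volume != nil && spec.Volume.CSI != nil {
		if ref := spec.Volume.CSI.NodePublishSecretRef; ref != nil {
			// In-line volumes may only reference secrets in the pod's own namespace.
			return ref.Name, podNamespace
		}
	}
	return "", ""
}
```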

### `PodSecurityPolicy`
* `PodSecurityPolicy` must be enhanced to limit which in-line CSI volumes pods may use. It will be modeled on the existing Flex volume policy. There is no default; users can't use in-line CSI volumes unless some CSI drivers are explicitly allowed.
```go
type PodSecurityPolicySpec struct {
	// <snip>

	// AllowedFlexVolumes is a whitelist of allowed Flexvolumes. Empty or nil indicates that all
	// Flexvolumes may be used. This parameter is effective only when the usage of the Flexvolumes
	// is allowed in the "Volumes" field.
	// +optional
	AllowedFlexVolumes []AllowedFlexVolume

	// AllowedCSIVolumes is a whitelist of allowed CSI volumes. Empty or nil indicates that all
	// CSI volumes may be used. This parameter is effective only when the usage of the CSI volumes
	// is allowed in the "Volumes" field.
	// +optional
	AllowedCSIVolumes []AllowedCSIVolume
}

// AllowedCSIVolume represents a single CSI volume that is allowed to be used.
type AllowedCSIVolume struct {
	// Driver is the name of the CSI volume driver.
	Driver string
}
```
* `PodSecurityPolicy` must be extended to allow users to use in-line volumes with no prefixes. This prevents users from stealing data from Secret-like ephemeral volumes in-lined in pods by guessing someone else's volume ID. There is no default; users can't use in-line CSI volumes unless some prefixes are explicitly allowed.
```go
type PodSecurityPolicySpec struct {
	// <snip>

	// AllowedCSIVolumeHandlePrefixes is a whitelist of volume prefixes
	// allowed to be used in CSI volumes in-lined in pods.
	AllowedCSIVolumeHandlePrefixes []core.CSIVolumeHandlePrefix
}
```

* `PodSecurityPolicy` must be extended to allow users to use attachable in-line CSI volumes. This prevents users from leaving orphaned attached volumes when they delete Secrets required to detach volumes. **Kubernetes currently does not know which CSI volumes are attachable. Several options are being considered; this will be handled in a separate proposal.**
```go
type PodSecurityPolicySpec struct {
	// <snip>

	// AllowAttachableCSIVolumes allows users to use attachable CSI volumes
	// in-line in pod definitions.
	AllowAttachableCSIVolumes bool
}
```
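
For illustration, the sketch below shows how a PSP admission check could combine the three proposed fields. The helper is hypothetical; in particular, the default semantics are an assumption (the struct comment above allows all drivers when `AllowedCSIVolumes` is empty, while the prose suggests deny-by-default), and detecting whether a volume is attachable is left out, as noted above.

```go
// csiVolumeAllowed is a hypothetical PSP check for a single in-line CSI volume.
func csiVolumeAllowed(psp *PodSecurityPolicySpec, vol *CSIVolumeSource, attachable bool) bool {
	// Driver whitelist: following the struct comment above, an empty list allows all drivers.
	if len(psp.AllowedCSIVolumes) > 0 {
		found := false
		for _, allowed := range psp.AllowedCSIVolumes {
			if allowed.Driver == vol.Driver {
				found = true
				break
			}
		}
		if !found {
			return false
		}
	}
	// Prefix whitelist: the prefix used by the pod must be explicitly allowed.
	prefixAllowed := false
	for _, p := range psp.AllowedCSIVolumeHandlePrefixes {
		if p == vol.VolumeHandlePrefix {
			prefixAllowed = true
			break
		}
	}
	if !prefixAllowed {
		return false
	}
	// Attachable in-line volumes are allowed only when the policy permits them.
	if attachable && !psp.AllowAttachableCSIVolumes {
		return false
	}
	return true
}
```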

### Security considerations
As written above, the external attacher may require permissions to read Secrets in any namespace. It is up to the CSI driver author to document whether the driver needs such permission (i.e. access to Secrets at attach/detach time), and up to the cluster admin to deploy the driver with these permissions or to restrict the external attacher to reading secrets only in some namespaces.