Storage Class parameters during all CSI calls #387
Comments
Can you elaborate on the endpoint use case? What is this endpoint in your use case? Is it a backend storage detail of your CSI driver, such as an address? Which APIs require the endpoint in your use case: only the controller APIs, or also the node APIs?
Yes, it is the address of the storage API. Volume creation/deletion is achieved through the vendor API calls. Since we make use of …
As explained above, we have use cases for both types of CSI APIs, particularly …
For today's CSI spec, you will have to add a ConfigMap to your driver in order to get this detail in all the calls.
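A minimal sketch of that ConfigMap workaround, assuming the driver projects a ConfigMap key into a file at a hypothetical path such as /etc/csi-driver/endpoint; the driver reads it once at startup, so the value is available in every subsequent CSI call:

```go
// Hypothetical sketch: read the backend endpoint from a ConfigMap-mounted
// file at driver startup. The path and key are assumptions, not CSI-defined.
package main

import (
	"log"
	"os"
	"strings"
)

const endpointPath = "/etc/csi-driver/endpoint" // projected from a ConfigMap

func loadEndpoint() (string, error) {
	data, err := os.ReadFile(endpointPath)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	endpoint, err := loadEndpoint()
	if err != nil {
		log.Fatalf("reading backend endpoint: %v", err)
	}
	log.Printf("using storage backend at %s", endpoint)
	// ... start the CSI gRPC services with this endpoint ...
}
```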
Yes. As mentioned in the description, we deployed two different drivers, each with its own endpoint tied to the driver deployment. This is more of a feature request. Moreover, the only way for vendor CSI drivers to cleanly receive configurable information is through StorageClass parameters, and passing these during all CSI calls would let vendor CSI drivers obtain configurable information such as the endpoint address.
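For contrast, here is a rough sketch of how StorageClass parameters reach a driver today, in the calls that do carry them; the `endpoint` parameter key is an assumption, not anything defined by the CSI spec:

```go
// Sketch: extracting a vendor-specific "endpoint" parameter from the
// map that StorageClass.parameters feeds into CreateVolume.
package driver

import (
	"fmt"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
)

// endpointFromCreate pulls the backend address out of the request.
// StorageClass.parameters arrive verbatim in req.Parameters.
func endpointFromCreate(req *csi.CreateVolumeRequest) (string, error) {
	endpoint, ok := req.GetParameters()["endpoint"]
	if !ok || endpoint == "" {
		return "", fmt.Errorf("StorageClass parameter %q is required", "endpoint")
	}
	return endpoint, nil
}
```

The same lookup is impossible in DeleteVolume or the Node* calls, because those request types carry no parameters map; that gap is the subject of this issue.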
In this case it sounds like the "API address" is related to the SP's storage topology, and there's a desire to map different classes (à la StorageClass) of storage to different backends/racks (or storage segments). I'm not super familiar with how CSI topology has been integrated into the k8s CSI stack, but if topological information is passed along to each of the CSI node/controller calls, then it could be useful to capture the "API address" that way, or at least provide a mapping element from which the proper API address could be derived.
I've given this a little more thought and, you're right, the topology suggestion I made doesn't quite fit. That said, I'm wondering if you really want a single instance of the driver talking to two different backends. For example, if your different backends are for different tenants, and tenant A repeatedly issues a set of RPCs that chokes the driver instance, then tenant B's use of the same driver is impacted. Another example: you want to upgrade the testing cluster backend, but that requires a driver upgrade as well. There's now risk to production, because bumping the driver may impact the stability of the production cluster (yeah, it's probably low risk, but still).

It sounds like your use case is: "I have multiple storage backends, each for a different tenant, at different endpoints, and I want to be able to configure a single CSI driver instance to communicate with both of them depending on the storage class of an existing or to-be-created volume." CSI hasn't actually tried to resolve tenancy issues; we've typically punted to COs for that. I'm not convinced that we should try to tackle multi-tenancy at this level.
@venkatsc +1
@jdef we hope StorageClass parameters can be passed during all CSI calls. Adding the storage URL to a Secret is an option, but I don't think it's suitable; all of our use cases on Kubernetes involve multiple backend storages.
Has this been discussed in the k8s sig-storage channel yet? If not, it probably should be. Tenancy is a CO issue.
True, but providing StorageClass parameters does not change the existing behavior. It enables additional flexibility of CSI driver deployment, as explained in the scenario above. Storage vendors and their customers can choose to deploy a single instance or multiple instances of their driver depending on their requirements.

Additionally, passing parameters during all CSI calls also gives users an opportunity to configure additional required KVs for the aforementioned CSI calls. For example, with our storage, we need a volumeID (akin to a K8s resource name) and a tenant ID (comparable to a K8s namespace). With the current CSI design, we need to append this information to PV.volumeID during volume creation, as the data configured in the StorageClass does not reach beyond the CreateVolume CSI call. This complicates the standard CSI flow for the storage vendor, such as using common e2e test cases, or forces deviations from standard K8s resource definitions (for example, for a pre-provisioned volume, we need the user to configure PV.volumeID: "namespace|volume", as Node* CSI calls don't read information from the StorageClass).
Volume id uniquely identifies a CSI volume. I don't see anything wrong with embedding the tenant id inside the volume id if that's what your implementation needs to uniquely identify volumes across tenants.
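A minimal sketch of that embedding, using the "namespace|volume" shape mentioned above; the separator and helper names are illustrative, not prescribed by CSI:

```go
// Sketch: composing and splitting a volume ID that embeds the tenant,
// so DeleteVolume and Node* calls can recover it from volume_id alone.
// The "tenant|volume" shape and helper names are illustrative.
package driver

import (
	"fmt"
	"strings"
)

const idSeparator = "|"

// makeVolumeID would be called during CreateVolume.
func makeVolumeID(tenant, volume string) string {
	return tenant + idSeparator + volume
}

// splitVolumeID would be called by every later RPC that only sees volume_id.
func splitVolumeID(id string) (tenant, volume string, err error) {
	parts := strings.SplitN(id, idSeparator, 2)
	if len(parts) != 2 {
		return "", "", fmt.Errorf("malformed volume id %q", id)
	}
	return parts[0], parts[1], nil
}
```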
Currently, CSI sends the parameters configured in a storage class in CreateVolumeRequest, ValidateVolumeCapabilitiesRequest, GetCapacityRequest, and CreateSnapshotRequest, but all other request types are missing the KV pairs configured under StorageClass.parameters.
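To make the asymmetry concrete, a short sketch against the CSI Go bindings: the create-side request exposes the parameters map, while the delete-side request carries only the volume ID (plus secrets):

```go
// Sketch of the asymmetry this issue describes: CreateVolumeRequest
// carries StorageClass.parameters, DeleteVolumeRequest does not.
package driver

import (
	csi "github.com/container-storage-interface/spec/lib/go/csi"
)

func showAsymmetry(create *csi.CreateVolumeRequest, del *csi.DeleteVolumeRequest) {
	_ = create.GetParameters() // map[string]string from StorageClass.parameters
	_ = del.GetVolumeId()      // only the volume ID (and secrets) is available
	// DeleteVolumeRequest has no parameters field at all.
}
```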
Use case
We have two storage clusters, production and testing. To use both clusters with the same CSI driver, we need to deploy the same driver twice with different provisioner names (quobyte-csi-prod, quobyte-csi-testing), each pointing to its own endpoint.
With StorageClass parameters missing in some CSI calls (DeleteVolume, PublishVolume, etc.), we have only two options:
1. deploy a separate driver instance per storage cluster, as described above, or
2. inject the parameters into the volume's VolumeHandle in the vendor CSI driver during CreateVolume, to make them available during later stages of CSI calls.