
dynamic provisioning fully support #120

Closed
andyzhangx opened this issue Sep 23, 2020 · 17 comments · Fixed by #259
Labels
help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@andyzhangx
Member

Is your feature request related to a problem?/Why is this needed

Describe the solution you'd like in detail

PR #61 added storage class support, but it is only an empty implementation; full dynamic provisioning support still needs to be added:

  • CreateVolume
    • Should create a new directory under the SMB server
  • DeleteVolume
    • Should delete the current directory under the SMB server

To implement this feature, we need to figure out whether there is an SMB Go client.
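One way to sketch the CreateVolume side of this: derive a per-PVC subdirectory name and encode the SMB source into the volume ID, so DeleteVolume can later find the directory again. All names here (`buildVolumeID`, the ID layout) are illustrative assumptions, not the driver's actual API.

```go
package main

import (
	"fmt"
	"strings"
)

// buildVolumeID composes a volume ID from the SMB source and the newly
// created subdirectory, e.g. "//smb-server/share" + "pvc-1234" ->
// "smb-server/share/pvc-1234". DeleteVolume can split the ID back apart
// to locate the directory to remove.
func buildVolumeID(source, subDir string) string {
	// normalize "//server/share" or "server/share/" to "server/share"
	base := strings.Trim(strings.TrimPrefix(source, "//"), "/")
	return base + "/" + subDir
}

func main() {
	fmt.Println(buildVolumeID("//smb-server/share", "pvc-1234")) // smb-server/share/pvc-1234
}
```

The actual directory creation would then happen against the mounted share (or via an SMB client library), using the subDir name embedded in the ID.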

Describe alternatives you've considered

Additional context

@andyzhangx andyzhangx added kind/feature Categorizes issue or PR as related to a new feature. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. labels Sep 23, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 22, 2020
@blurpy

blurpy commented Jan 8, 2021

Are you going to support the other part of dynamic provisioning - self service using a claim?

We are looking at this driver, but none of our users have access to cluster scoped resources like storage classes and persistent volumes. They do however have access to persistent volume claims.

Ideally users would make a PVC with similar parameters as shown in the PV-example and a PV would be created automatically.

@andyzhangx
Member Author

> Are you going to support the other part of dynamic provisioning - self service using a claim?
>
> We are looking at this driver, but none of our users have access to cluster scoped resources like storage classes and persistent volumes. They do however have access to persistent volume claims.
>
> Ideally users would make a PVC with similar parameters as shown in the PV-example and a PV would be created automatically.

@blurpy if there is already an SMB storage class or SMB PV, they can use this example: https://github.com/kubernetes-csi/csi-driver-smb/blob/master/deploy/example/e2e_usage.md

@blurpy

blurpy commented Jan 8, 2021

That's the example I was talking about, but we don't really want to manually create a PV or storage class for them every time someone needs to mount a samba volume in a pod. It's a large network with hundreds of shares everywhere, so self service would be much preferred.

@andyzhangx
Member Author

andyzhangx commented Jan 8, 2021

> That's the example I was talking about, but we don't really want to manually create a PV or storage class for them every time someone needs to mount a samba volume in a pod. It's a large network with hundreds of shares everywhere, so self service would be much preferred.

@blurpy if you only have one SMB server, the admin could set up an SMB storage class pointing at that server, and users would only need to create a PVC; when the PVC is provisioned, this driver creates a standalone directory under the SMB server. Is that what you want?

@blurpy

blurpy commented Jan 8, 2021

We aren't looking for a way to provision a "new" volume over SMB; we're looking for a way to mount an existing shared network folder in a pod, so apps can be migrated from legacy infrastructure. The pod needs access to the same data.

There are many servers exposing SMB shares, and those of us who manage k8s don't want to be middlemen; we'd rather let users do this in their own namespace. Maybe this driver is not made for this use case?

@andyzhangx
Member Author

Then why not create a common PVC that could be shared between multiple pods? The current scenario is:

  • admin creates an SMB storage class with the SMB server config
  • user creates a PVC based on that storage class, and the pod can then access the SMB server; the createSubDir parameter decides whether to create a new folder inside that SMB server
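The admin-side piece of that flow might look like the sketch below. The server address, secret name, and parameter values here are placeholders, and the exact parameter names (including createSubDir) should be checked against the driver's documentation:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: smb
provisioner: smb.csi.k8s.io
parameters:
  source: //smb-server.default.svc.cluster.local/share
  createSubDir: "true"
  csi.storage.k8s.io/node-stage-secret-name: smbcreds
  csi.storage.k8s.io/node-stage-secret-namespace: default
mountOptions:
  - dir_mode=0777
  - file_mode=0777
```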

@andyzhangx
Member Author

With an SMB storage class, the user only needs to create a PVC; the PV will be created automatically.
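The user-side, self-service part is then just a PVC referencing that class. A minimal sketch, assuming a StorageClass named smb already exists in the cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-smb
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: smb
```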

@blurpy

blurpy commented Jan 8, 2021

From what I can understand from the documentation the storage class defines the share to connect to, so we would have to create a new storage class every time someone needs to mount a different share. Or can we configure a generic storage class and let the user specify url, username and password in the PVC somehow?

Note that we are not looking for a way to create a subdir on a share, we just want to let users mount any share they want inside a pod without involving cluster admins.

@andyzhangx
Member Author

> From what I can understand from the documentation the storage class defines the share to connect to, so we would have to create a new storage class every time someone needs to mount a different share. Or can we configure a generic storage class and let the user specify url, username and password in the PVC somehow?
>
> Note that we are not looking for a way to create a subdir on a share, we just want to let users mount any share they want inside a pod without involving cluster admins.

A PVC is quite a generic object in k8s; there is no support for self-defined parameters in a PVC.

@andyzhangx
Member Author

andyzhangx commented Jan 8, 2021

I think you are asking for the inline volume feature; it's not supported yet:
https://kubernetes-csi.github.io/docs/ephemeral-local-volumes.html#which-feature-should-my-driver-support

apiVersion: v1
kind: Pod
metadata:
  name: some-pod
spec:
  containers:
    ...
  volumes:
      - name: vol
        csi:
          driver: inline.storage.kubernetes.io
          volumeAttributes:
              foo: bar

@blurpy

blurpy commented Jan 8, 2021

OK, thanks for the link. Guess we have to find a different way of accessing these shares for now.

@andyzhangx
Member Author

> OK, thanks for the link. Guess we have to find a different way of accessing these shares for now.

@blurpy I filed an issue here: #198; I could get someone to work on this. The above example is for a single pod, does it work in your scenario?

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 7, 2021
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

> Rotten issues close after 30d of inactivity.
> Reopen the issue with /reopen.
> Mark the issue as fresh with /remove-lifecycle rotten.
>
> Send feedback to sig-contributor-experience at kubernetes/community.
> /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@andyzhangx
Member Author

Refer to kubernetes-csi/csi-driver-nfs#53; this driver should support:

  • creating a subDir in CreateVolume
  • deleting the subDir in DeleteVolume
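The DeleteVolume half of the list above has to recover the subDir from the volume ID created earlier. A sketch, assuming an illustrative "server/share/subDir" ID layout (not the driver's actual format):

```go
package main

import (
	"fmt"
	"strings"
)

// parseVolumeID splits "server/share/subDir" into the share path and the
// subDir; DeleteVolume would then remove subDir under the mounted share.
func parseVolumeID(volumeID string) (share, subDir string, err error) {
	i := strings.LastIndex(volumeID, "/")
	if i <= 0 {
		return "", "", fmt.Errorf("invalid volume id %q", volumeID)
	}
	return volumeID[:i], volumeID[i+1:], nil
}

func main() {
	share, subDir, err := parseVolumeID("smb-server/share/pvc-1234")
	fmt.Println(share, subDir, err) // smb-server/share pvc-1234 <nil>
}
```

Rejecting malformed IDs here matters because DeleteVolume must be idempotent and must never delete anything outside the provisioned subdirectory.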

@andyzhangx andyzhangx reopened this May 4, 2021
andyzhangx pushed a commit to andyzhangx/csi-driver-smb that referenced this issue May 1, 2022
docs: steps for adding testing against new Kubernetes release