NFS-Client Provisioner work across all namespaces ? #1275
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
If I understand what you're trying to do and you want the pods in the namespaces to share the same directory (not just the NFS server), the solution I described in #1210 (comment) might work for you. You'd need to use TWO provisioners with two different storage classes. One is the stock nfs-client-provisioner; this will create the directory on the NFS server. Then you run a modified nfs-client-provisioner, with the changes I described, that mounts the PVC from the first provisioner.

Even if nothing in what I just said applies to you, maybe I know what's broken with your new provisioner. Do you run it in the namespace using the
BTW, the second provisioner would be able to create new clone volumes in as many namespaces as you want, not just the second one.
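To make the two-class setup described above concrete, here is a minimal sketch; the class names, provisioner names, and parameters are invented for illustration, and the second class assumes a provisioner modified as described in the linked #1210 comment rather than anything that ships with the project:

```yaml
# Class served by the stock nfs-client-provisioner; it creates a fresh
# directory on the NFS server for every claim. All names are hypothetical.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: example.com/nfs-client          # must match the stock deployment's PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
---
# Class served by the second, modified provisioner, which mounts the PVC
# created via the first class and hands the same directory out again.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client-shared
provisioner: example.com/nfs-client-shared   # name of the modified provisioner
```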
This might be a solution for kubernetes-retired#1210 and kubernetes-retired#1275. In combination with an existing NFS claim in namespace X, allow the administrator or the users to create new claims for the same NFS tree in namespaces A, B, C, etc. This change tries as much as possible NOT to disrupt existing setups. A few things still left:

1. Is "alias" the right term? Previous choices I tossed out: clone (easy to mix with real volume cloning), proxy (there's no middle-man NFS server), passthrough (too nebulous).
2. Is NFS_SERVER=--alias the best way to trigger this?
3. Should the auto-detection of the server host:path happen always, even when not in alias mode?
4. Should the parent's ReadOnly field be propagated to the newly provisioned PV?
5. Should this be made more generic so that one deployment can expose N different claims/trees to the other namespaces?

See comment below for more details.
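Because the change above was proposed but never released, the following is only a sketch of how an alias-mode provisioner might be wired up based on that description; the namespace, provisioner name, and claim name are invented, and NFS_SERVER=--alias is simply the trigger suggested in the list above (with the server host:path auto-detected from the mounted parent claim):

```yaml
# Sketch only -- field values are assumptions, not a released feature.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner-alias
  namespace: nfs-system                       # hypothetical admin namespace ("namespace X")
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner-alias
  template:
    metadata:
      labels:
        app: nfs-client-provisioner-alias
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          env:
            - name: PROVISIONER_NAME
              value: example.com/nfs-alias    # hypothetical provisioner name
            - name: NFS_SERVER
              value: "--alias"                # proposed trigger for alias mode
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
      volumes:
        - name: nfs-client-root
          persistentVolumeClaim:
            claimName: parent-nfs-claim       # the existing NFS claim in namespace X
```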
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I came across issue #1210 and was wondering whether this could be possible with nfs-client-provisioner.
I mean, is it possible to have a single namespace with one pod/provisioner per NFS mount point in use, and then just declare a persistent volume claim in another namespace that uses these storageClassNames?
For the time being I have 2 namespaces that mount the same NFS export; both are working fine (each uses the 'default' serviceAccount in its own namespace), but I'd like to share the storageClassName/provisioner between them (is this possible?).
StorageClasses
Pods per namespace xfiles
Pods per namespace ssditto
Persistent volumes
Persistent volume claim for xfiles
Persistent volume claim for ssditto
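The manifests themselves did not survive here, so as a point of reference this is a minimal sketch of the kind of objects those headings refer to, using the xfiles namespace; the provisioner name and size are invented for illustration (note that a StorageClass is cluster-scoped, so a claim in any namespace references it purely by storageClassName):

```yaml
# Illustrative only -- provisioner name and size are placeholders, not the
# reporter's actual values.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-xfiles
provisioner: example.com/nfs-xfiles    # PROVISIONER_NAME of the per-namespace deployment
mountOptions:
  - ro                                 # export is consumed read-only
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: xfiles-data
  namespace: xfiles
spec:
  storageClassName: nfs-xfiles
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Mi                     # size is not enforced by the NFS provisioner
```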
So far so good: pods deployed into either of these namespaces have the NFS dir properly mounted as "read-only", and files can be read by the application.
But if I create a dedicated namespace for nfs-client-provisioner, pods deployed there still have access to the NFS-mounted dir; however, when a persistent volume claim from another namespace uses the nfs-client-provisioner storage class, I get the following error output:
I am using the following config files to test this in the default namespace:
Why am I getting this read-only file system error when using it across namespaces, but not when running nfs-client-provisioner per namespace?
I tried several things without success.
Maybe mounting as read-only does not work across different namespaces?
Thanks!