Is there a way to have efs-provisioner work across all namespaces? #1210
Comments
I have not attempted it, but standard RBAC should allow you to use any resource across namespaces.
I am looking for a similar solution. I have two namespaces, NS1 and NS2. In NS1 I have deployed a PVC and a Pod which writes data to the storage. I know that PVCs are generally namespaced resources and NS2 should not have access to the PVC from NS1, but there needs to be a way to use the same storage folder from different namespaces. Maybe by creating a PVC in each namespace but with a unique identifier so that the …

@martin2176 it should not be a privileges problem. If I deploy a Pod and reference a PVC as a volume, there is no way to tell the volume the namespace, so Kubernetes always uses the namespace of the Pod. In my case I use the "nfs-client-provisioner", not the "aws-efs" one, but the idea should be the same.
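A minimal Pod spec illustrates that last point: the persistentVolumeClaim volume source only takes a claimName, with no namespace field, so the claim is always resolved in the Pod's own namespace. Names below are hypothetical.

```yaml
# Hypothetical Pod in namespace ns2; the claim reference has no namespace
# field, so "shared-data" must exist in ns2 itself.
apiVersion: v1
kind: Pod
metadata:
  name: writer
  namespace: ns2
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: shared-data   # resolved in the Pod's namespace (ns2) only
```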
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I need something similar. I think there's a way to do this with a few small changes, and you might not even need a unique identifier. You create PVC1 in NS1 as before. If you're on AWS, you presumably use efs-provisioner for that. Then you run nfs-client-provisioner in NS1, mounting PVC1. It doesn't do it at the moment, but nfs-client-provisioner can find out the server and the path for PVC1 by looking at /proc/mounts. These would be the changes needed:
In the former case, the server field will be populated automatically with the value discovered from /proc/mounts. In the latter, the provisioner no longer creates a new directory on the server, but reuses the path from /proc/mounts. It should also set the reclaim policy to Retain. (And it could propagate whether the mount is R/W or R/O!) I don't think efs-provisioner will delete your EFS server once the last reference to it goes away, but it will recursively remove the PVC1 directory tree if the claim gets deleted and the reclaim policy is Delete.

The above shouldn't require a lot of code, which I can help with. The real work is mostly a matter of agreeing on how to trigger this mode, how to test it, and what kind of guarantees to offer (or not to offer).
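A rough sketch of the setup described above, assuming the proposed changes land. The NFS_SERVER=--alias trigger and the /proc/mounts auto-detection are proposals from #1318, not current nfs-client-provisioner behaviour, and the provisioner name and service account are placeholders.

```yaml
# Sketch only: nfs-client-provisioner mounting the existing EFS-backed claim
# (pvc1) instead of an NFS server/path; the server and path would be detected
# from /proc/mounts. NFS_SERVER=--alias is the trigger proposed in #1318.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: ns1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner   # assumed to exist with the usual RBAC
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: example.com/nfs-alias         # hypothetical provisioner name
            - name: NFS_SERVER
              value: "--alias"                     # proposed trigger, see #1318
      volumes:
        - name: nfs-client-root
          persistentVolumeClaim:
            claimName: pvc1                        # the existing efs-provisioner claim
```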
PersistentVolumes are cluster-wide, but claims are namespaced, so you can't escape creating one per namespace. The question is then: do you want the namespaces to see the same files (NFS exports) or not?

If yes, then my initial implementation #1318 should work. It does for me. Basically, I create an efs-provisioner PVC once, then mount that in a new nfs-client provisioner, which passes through the NFS details to any new claim it sees. So, for N namespaces, you need N PVCs and two provisioners.

If not, then you'll have to keep using efs-provisioner for all the namespaces (each of them gets a different subdirectory).
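As one possible illustration of the "N PVCs, two provisioners" layout, the pair of claims below shows the once-off efs-provisioner claim plus one per-namespace claim against the passthrough class. Class names, namespaces and sizes are assumptions, not taken from the thread.

```yaml
# Claim handled by efs-provisioner, created once in the "home" namespace.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
  namespace: ns1
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: aws-efs
  resources:
    requests:
      storage: 1Mi
---
# Per-namespace claim handled by the second (nfs-client) provisioner that
# mounts pvc1; every namespace that needs the shared tree gets one of these.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
  namespace: ns2
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: nfs-alias          # class served by the second provisioner
  resources:
    requests:
      storage: 1Mi
```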
This might be a solution for kubernetes-retired#1210 and kubernetes-retired#1275. In combination with an existing NFS claim in namespace X, allow the administrator or the users to create new claims for the same NFS tree in namespaces A, B, C, etc. This change tries as much as possible NOT to disrupt existing setups. A few things still left:

1. Is "alias" the right term? Previous choices I tossed out: clone (easy to mix with real volume cloning), proxy (there's no middle-man NFS server), passthrough (too nebulous).
2. Is NFS_SERVER=--alias the best way to trigger this?
3. Should the auto-detection of the server host:path happen always, even when not in alias mode?
4. Should the parent's ReadOnly field be propagated to the newly provisioned PV?
5. Should this be made more generic so that one deployment can expose N different claims/trees to the other namespaces?

See comment below for more details.
Seems that it only works if you create pods in the same namespace as the PVC. Is there a way that other namespaces can share the same EFS server?
Looks like you can do it if you create a new PVC for each namespace with
storageClassName: "aws-efs"
set, but I was wondering if there is a simpler way that will allow all namespaces by default? Thanks
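For reference, a complete per-namespace claim along those lines might look like the sketch below; the claim name, namespace and requested size are placeholders.

```yaml
# Per-namespace claim against the aws-efs StorageClass; each namespace that
# needs EFS-backed storage gets its own copy of this (names/size are placeholders).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
  namespace: team-a
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: "aws-efs"
  resources:
    requests:
      storage: 1Mi
```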