This repository has been archived by the owner on Oct 21, 2020. It is now read-only.

Is there a way to have efs-provisioner work across all namespaces? #1210

Closed
richstokes opened this issue Aug 16, 2019 · 8 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@richstokes

richstokes commented Aug 16, 2019

It seems that it only works if you create Pods in the same namespace as the PVC. Is there a way for other namespaces to share the same EFS server?

It looks like you can do it if you create a new PVC for each namespace with storageClassName: "aws-efs" set, but I was wondering if there is a simpler way that allows all namespaces by default?
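For reference, the per-namespace claim I mean looks roughly like this (the name, namespace and size are placeholders; EFS effectively ignores the requested size):

```yaml
# One of these per namespace that needs access; only metadata.namespace
# (and optionally the name) changes between copies.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim          # placeholder name
  namespace: team-a        # repeat in each namespace that should get EFS storage
spec:
  accessModes:
    - ReadWriteMany        # EFS/NFS allows shared read-write access
  storageClassName: "aws-efs"
  resources:
    requests:
      storage: 1Mi         # nominal; EFS is not capacity-limited per claim
```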

Thanks

@richstokes richstokes changed the title Is there a way to have efs-provisioner work across all namspaces? Is there a way to have efs-provisioner work across all namespaces? Aug 20, 2019
@martin2176

I have not attempted it, but standard RBAC should allow you to use any resource across namespaces.
In your case, the service account that runs the pod (the default service account in most cases) should be given the admin role in the namespace that has the PVC, via a RoleBinding.
If you don't want to grant the admin role, a more restrictive privilege can be given.
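Roughly, a binding along these lines (all names and namespaces below are placeholders):

```yaml
# Grants the Pod's default ServiceAccount the built-in "admin" ClusterRole,
# scoped to the namespace that holds the PVC.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pvc-access            # placeholder name
  namespace: pvc-namespace    # the namespace that contains the PVC
subjects:
  - kind: ServiceAccount
    name: default
    namespace: pod-namespace  # the namespace the Pod runs in
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin                 # or a narrower Role limited to persistentvolumeclaims
```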

@ThWoywod

ThWoywod commented Oct 17, 2019

I am looking for a similar solution. I have two namespaces NS1 and NS2.

In NS1 I have deployed a PVC and a Pod that writes data to the storage.
In NS2 I want to deploy a Pod that reads that data and processes it.

I know that PVCs are generally namespaced resources and that NS2 should not have access to the PVC from NS1. But there needs to be a way to use the same storage folder from different namespaces, perhaps by creating a PVC in each namespace with a unique identifier so that the nfs-client-provisioner could match these PVCs and have them use the same folder.

@martin2176 it should not be a privileges problem. If I deploy a Pod and reference a PVC as a volume, there is no way to specify the PVC's namespace, so Kubernetes always uses the Pod's own namespace.
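To show what I mean, the volume reference in a Pod spec only carries a claim name (a minimal sketch with placeholder names):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: reader                # placeholder
  namespace: ns2
spec:
  containers:
    - name: app
      image: busybox          # placeholder image
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-claim   # always resolved in the Pod's own namespace;
                              # there is no field to point at a claim in another namespace
```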

In my case I use the "nfs-client-provisioner", not "aws-efs", but the idea should be the same.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 15, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 14, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@therc

therc commented May 9, 2020

I know that PVCs are generally namespaced resources and that NS2 should not have access to the PVC from NS1. But there needs to be a way to use the same storage folder from different namespaces, perhaps by creating a PVC in each namespace with a unique identifier so that the nfs-client-provisioner could match these PVCs and have them use the same folder.

In my case I use the "nfs-client-provisioner", not "aws-efs", but the idea should be the same.

I need something similar. I think there's a way to do this with a few small changes, and you might not even need a unique identifier.

You create PVC1 in NS1 as before. If you're on AWS, you use efs-provisioner for that, presumably. Then you run nfs-client-provisioner in NS1, mounting PVC1.

It doesn't do this at the moment, but nfs-client-provisioner can find out the server and the path for PVC1 by looking at /proc/mounts.

These would be the changes needed:

  1. allow NFS_SERVER to be set to "auto" (or a new AUTO_NFS_SERVER=true)
  2. allow NFS_PATH to be set to "clone" (or a new CLONE_NFS_PATH=true)

In the former case, the server field will be populated automatically with the value discovered from /proc/mounts.

In the latter, the provisioner no longer creates a new directory on the server, but reuses the path from /proc/mounts. It should also set the reclaim policy to Retain. (And it could propagate whether the mount is R/W or R/O!)

I don't think efs-provisioner will delete your EFS server once the last reference to it goes away, but it will recursively remove the PVC1 directory tree if the claim gets deleted and the reclaim policy is Delete or Recycle, so your clones might see data disappear if you're not careful.

The above shouldn't require a lot of code, which I can help with. The real work is mostly a matter of agreeing on how to trigger this mode, how to test it and what kind of guarantees to offer (or not to offer).
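For concreteness, the trigger could end up looking like this in the nfs-client-provisioner Deployment. The "auto"/"clone" values are the proposed switches (they don't exist today), and the rest is a placeholder sketch:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: ns1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      # serviceAccountName and the usual provisioner RBAC omitted for brevity
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          env:
            - name: PROVISIONER_NAME
              value: example.com/nfs      # placeholder provisioner name
            - name: NFS_SERVER
              value: "auto"               # proposed: discover the server from /proc/mounts
            - name: NFS_PATH
              value: "clone"              # proposed: reuse PVC1's path instead of creating a subdirectory
          volumeMounts:
            - name: nfs-root
              mountPath: /persistentvolumes
      volumes:
        - name: nfs-root
          persistentVolumeClaim:
            claimName: pvc1               # the efs-provisioner claim in ns1
```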

@therc

therc commented May 10, 2020

PersistentVolumes are cluster-wide, but claims are namespaced. So you can't escape creating one per namespace.

The question is then: do you want the namespaces to see the same files (NFS exports) or not? If yes, then my initial implementation #1318 should work. It does for me. Basically, I create an efs-provisioner PVC once, then mount that in a new nfs-client-provisioner, which passes through the NFS details to any new claim it sees. So, for N namespaces, you need N PVCs and two provisioners.

If not, then you'll have to keep using efs-provisioner for all the namespaces (each of them gets a different subdirectory).
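For illustration, each of those N claims could look like this; the storage class name is a placeholder for whatever class the second (pass-through) provisioner serves:

```yaml
# One claim per namespace; in the pass-through setup each of them gets handed
# the same NFS server:path that the provisioner discovered from PVC1.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-efs              # placeholder name
  namespace: ns2                # repeat for every namespace that should see the same files
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-alias   # placeholder: the class served by the nfs-client-provisioner
  resources:
    requests:
      storage: 1Mi              # nominal; the share is not capacity-limited per claim
```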

therc added a commit to therc/external-storage that referenced this issue May 10, 2020
This might be a solution for kubernetes-retired#1210 and kubernetes-retired#1275

In combination with an existing NFS claim in namespace X, allow the administrator or the users to create new claims for the same NFS tree in namespaces A, B, C, etc.

This change tries as much as possible NOT to disrupt existing setups.

A few things still left:

1. Is "alias" the right term? Previous choices I tossed out: clone (easy to mix with real volume cloning), proxy (there's no middle-man NFS server), passthrough (too nebulous).
1. Is NFS_SERVER=--alias the best way to trigger this?
1. Should the auto-detection of the server host:path happen always, even when not in alias mode?
1. Should the parent's ReadOnly field be propagated to the newly provisioned PV?
1. Should this be made more generic so that one deployment can expose N different claims/trees to the other namespaces? See comment below for more details.