Alias mode to export a share to other namespaces #47
Conversation
Fixes #44. Although persistent volumes are cluster-wide, claims are namespaced. Sometimes it is desirable to expose the same NFS folder to an arbitrary number of namespaces. See two different issues in the old repo[1]. This PR introduces a new alias mode, in which the provisioner can expose to others an existing NFS claim in namespace X that it mounts. The provisioner then lets the administrator or the users create new claims for the same NFS tree in namespaces A, B, C, etc.

More detailed example (see the manifest sketch below):

1. The admin creates PVC1 in NS1 as before. If on AWS, they could be using efs-provisioner. Or, if it's the NFS subdir external provisioner itself, a new directory will be created, just once. It shouldn't matter which one is used here, as long as it points to some NFS share.
2. They run nfs-client-provisioner in NS1 in alias mode (setting the `NFS_SERVER` variable to `--alias`), as a deployment named something like `nfs-alias-provisioner`, which mounts PVC1 and has its `PROVISIONER_NAME` variable set to `nfs-alias`.
3. They create a new StorageClass, `nfs-alias`, which uses the new `nfs-alias` provisioner.
4. One or more `nfs-alias` claims are created in namespaces A, B, C, etc. These could be created manually or as part of whatever automated namespace provisioning the cluster administrator has set up.
5. Each of these claims will now use the same server and path as PVC1. The provisioner does that by looking up its own mount entry for PVC1 and passing the details through to the new PV.

So, in order to export the same folder to N namespaces, N+1 PVCs and two provisioners will be needed.

Some NFS provisioners might recursively remove the PVC1 directory tree if the claim gets deleted and retainPolicy == Delete|Recycle. Thus, aliases might see data disappear if storage is not set up differently.

This change tries as much as possible NOT to disrupt existing setups. The code is not very complicated. The real work is mostly a matter of agreeing on how to trigger this mode, how to test it, and what kind of guarantees to offer (or not to offer). Questions still left:

1. Is "alias" the right term? Previous choices I tossed out: clone (easy to mix up with real volume cloning), proxy (there's no middle-man NFS server), passthrough (too nebulous).
2. Is NFS_SERVER=--alias the best way to trigger this? I like it because you can't set a real server name by mistake; you are forced into making a choice. And server names can't start with dashes.
3. Should one provisioner be able to handle both regular and alias volumes? If so, this could be done by registering a second provisioner named after the ALIAS_PROVISIONER_NAME variable. It would also need to look up or track the parent PVs for their `server:path`.
4. Should the new volume's reclaim policy be set to Retain?
5. Is it possible to prevent the original directory tree from getting deleted as long as any aliases are still around?
6. Should the auto-detection of the server host:path happen always, even when not in alias mode? It's a convenience feature that saves you from setting NFS_SERVER/NFS_PATH (at the cost of having to mount an existing volume).
7. What if we want a similar behaviour, where we inherit the details from PVC1, but each new claim gets its own subdirectory?
8. Should the parent's ReadOnly field be propagated to the newly provisioned PV?
9. Should this be made more generic, so that one deployment can expose N different claims/trees to the other namespaces? It could work like this: say we have existing claims `data`, `code`, `docs`. We want them available in any number of other namespaces, as allowed by RBAC. We run a foo.com/nfs-alias provisioner that mounts the three claims under /persistentvolumes/ and we use it in storage class `shared-volumes`. Now we create claim `code` in namespace `developer1` under the new class. The provisioner will look up the `server:path` for `/persistentvolumes/code` and stick those in a new volume `developer1-code-BLABLABLA`.

[1] kubernetes-retired/external-storage#1210, kubernetes-retired/external-storage#1275
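To make steps 2 and 3 concrete, here is a minimal sketch of what the alias-mode deployment and class might look like. Only the `NFS_SERVER=--alias` magic value and the `PROVISIONER_NAME` wiring come from this PR; the image, namespace, labels, and claim name are illustrative assumptions.

```yaml
# Hypothetical alias-mode setup; names and image are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-alias-provisioner
  namespace: ns1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-alias-provisioner
  template:
    metadata:
      labels:
        app: nfs-alias-provisioner
    spec:
      containers:
        - name: nfs-alias-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest  # illustrative image
          env:
            - name: PROVISIONER_NAME
              value: nfs-alias
            - name: NFS_SERVER
              value: "--alias"   # magic value proposed by this PR
          volumeMounts:
            - name: nfs-root
              mountPath: /persistentvolumes
      volumes:
        - name: nfs-root
          persistentVolumeClaim:
            claimName: pvc1      # the existing claim being aliased
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-alias
provisioner: nfs-alias
```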
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: therc. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
Welcome @therc!
@@ -119,6 +119,8 @@ spec:
    path: /var/nfs
```

**Alias mode:** use the provisioner in this mode to share the same existing NFS claim with multiple namespaces, without manually propagating the server/path into each namespace's claim. For example, first create a `data-original` claim as normal, through any provisioner such as `example.com/efs-aws` or the `fuseim.pri/ifs` example below. Then, in that claim's namespace, run a new NFS client provisioner that mounts the claim. Set NFS_SERVER to the magic value of `--alias`. Give the new deployment a clearer name, such as `nfs-alias-provisioner`, and set PROVISIONER_NAME to `foo.com/nfs-alias-provisioner`. Then create a StorageClass, `nfs-alias`, with its provisioner set to `foo.com/nfs-alias-provisioner`. Now every new `nfs-alias` claim you create in any namespace will have the same `server:path` as the `data-original` volume.
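For illustration, a consuming claim under this mode might look like the sketch below. The `team-a` namespace, the claim name, and the nominal size are hypothetical; the class name follows the paragraph above.

```yaml
# Illustrative claim in another namespace; every such claim gets the
# same server:path as the original data-original volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
  namespace: team-a
spec:
  storageClassName: nfs-alias
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi   # size is nominal; the NFS tree is shared
```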
The `fuseim.pri/ifs` name will soon be changed to `k8s-sigs.io/nfs-subdir-external-provisioner` (#37).
@@ -129,14 +109,51 @@ func (p *nfsProvisioner) Provision(ctx context.Context, options controller.Provi
 		NFS: &v1.NFSVolumeSource{
 			Server:   p.server,
 			Path:     path,
-			ReadOnly: false,
+			ReadOnly: false, // Pass ReadOnly through if in alias mode?
AFAIK it has no real effect; you can try setting it to true and it will be writable anyway.
// mounted in the provisioner's pod under /persistentvolumes and never
// make a new directory for each volume we are asked to provision.
var alias bool
if server == magicAliasHostname {
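For context, the lookup the description mentions ("looking up its own mount entry for PVC1") could be done by scanning /proc/mounts from inside the pod. A minimal sketch of that idea, not the PR's actual code; `findNFSMount` and the mount-point constant are assumptions:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findNFSMount scans /proc/mounts for the NFS entry mounted at mountPoint
// and returns its server and exported path. Sketch only, not this PR's
// implementation.
func findNFSMount(mountPoint string) (server, path string, err error) {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		return "", "", err
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		// Format: <device> <mountpoint> <fstype> <options> <dump> <pass>
		fields := strings.Fields(scanner.Text())
		if len(fields) < 3 || fields[1] != mountPoint {
			continue
		}
		if !strings.HasPrefix(fields[2], "nfs") {
			continue
		}
		// NFS devices look like server:/exported/path.
		parts := strings.SplitN(fields[0], ":", 2)
		if len(parts) != 2 {
			return "", "", fmt.Errorf("unexpected NFS device %q", fields[0])
		}
		return parts[0], parts[1], nil
	}
	return "", "", fmt.Errorf("no NFS mount found at %s", mountPoint)
}

func main() {
	server, path, err := findNFSMount("/persistentvolumes")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("aliasing %s:%s\n", server, path)
}
```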
IMO it's not a very intuitive approach, and `--alias` sounds like a command-line option. What do you think about another environment variable? Or a StorageClass parameter, so that the same provisioner can be used both with alias and without?
This was something I cooked up quickly last year as a proof of concept. I agree that supporting both modes with a class parameter is better, but if we use that to reference a volume claim rather than a server:path pair, resolving that reference might involve another round trip to the API server with potential failure modes. I'll rework the code and try both kinds of class parameters.
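For concreteness, the two kinds of class parameters under discussion might look something like the sketch below. The parameter names `aliasClaim`, `aliasServer`, and `aliasPath` are hypothetical, not anything the provisioner currently understands.

```yaml
# Option A (hypothetical): reference an existing claim; resolving it
# needs the extra API-server round trip mentioned above.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-alias-by-claim
provisioner: foo.com/nfs-alias-provisioner
parameters:
  aliasClaim: ns1/pvc1
---
# Option B (hypothetical): spell out the server:path pair directly,
# avoiding the lookup at the cost of duplicating the details.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-alias-by-path
provisioner: foo.com/nfs-alias-provisioner
parameters:
  aliasServer: nfs.example.com
  aliasPath: /exports/data
```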
@therc: PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
Rotten issues close after 30d of inactivity. Send feedback to sig-contributor-experience at kubernetes/community.
@fejta-bot: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.