
Alias mode to export a share to other namespaces #47

Closed
wants to merge 1 commit

Conversation

therc

@therc therc commented Feb 2, 2021

Fixes #44. Although persistent volumes are cluster-wide, claims are namespaced. Sometimes it is desirable to expose the same NFS folder to an arbitrary number of namespaces. See two different issues in the old repo¹.

This PR introduces a new alias mode, in which the provisioner mounts an existing NFS claim in namespace X and exposes it to other namespaces: the administrator or users can then create new claims for the same NFS tree in namespaces A, B, C, etc.

More detailed example:

  1. The admin creates PVC1 in NS1 as before. If on AWS, they could be using efs-provisioner. Or, if it's the NFS subdir external provisioner itself, a new directory will be created, just once. It shouldn't matter which one is used here, as long as it points to some NFS share.

  2. They run nfs-client-provisioner in NS1 in alias mode (setting the NFS_SERVER variable to --alias), as a deployment named something like nfs-alias-provisioner, which mounts PVC1 and has its PROVISIONER_NAME variable set to nfs-alias.

  3. They create a new StorageClass, nfs-alias, which uses the new nfs-alias provisioner.

  4. One or more nfs-alias claims are created in namespaces A, B, C, etc. These could be created manually or as part of whatever automated namespace provisioning the cluster administrator has set up.

  5. Each of these claims will now use the same server and path as PVC1. The provisioner does that by looking up its own mount entry for PVC1 and passing the details through to the new PV.

So, in order to export the same folder to N namespaces, N+1 PVCs and two provisioners will be needed.
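
For concreteness, here is a minimal sketch of steps 2-4 as manifests. The image reference, the lowercase `pvc1`/`ns1` names, and the `shared-data` claim in namespace `team-a` are illustrative placeholders, not taken from this PR:

```yaml
# Step 2: alias-mode provisioner in NS1; it mounts the existing PVC1 and
# registers itself under the provisioner name "nfs-alias".
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-alias-provisioner
  namespace: ns1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-alias-provisioner
  template:
    metadata:
      labels:
        app: nfs-alias-provisioner
    spec:
      containers:
        - name: nfs-alias-provisioner
          image: example.com/nfs-subdir-external-provisioner:placeholder  # illustrative image
          env:
            - name: PROVISIONER_NAME
              value: nfs-alias
            - name: NFS_SERVER
              value: "--alias"   # the magic value proposed by this PR
          volumeMounts:
            - name: nfs-root
              mountPath: /persistentvolumes
      volumes:
        - name: nfs-root
          persistentVolumeClaim:
            claimName: pvc1      # the existing claim from step 1
---
# Step 3: StorageClass backed by the alias provisioner.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-alias
provisioner: nfs-alias
---
# Step 4: a claim in another namespace; it ends up with PVC1's server:path (step 5).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
  namespace: team-a
spec:
  storageClassName: nfs-alias
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```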

Some NFS provisioners might recursively remove the PVC1 directory tree if the claim gets deleted and the reclaim policy is Delete or Recycle. Thus, aliases might see their data disappear unless storage is set up to prevent this.

This change tries as much as possible NOT to disrupt existing setups.

The code is not very complicated. The real work is mostly a matter of agreeing on how to trigger this mode, how to test it, and what kind of guarantees to offer (or not to offer). Remaining questions:

  1. Is "alias" the right term? Previous choices I tossed out: clone (easy to mix with real volume cloning), proxy (there's no middle-man NFS server), passthrough (too nebulous).
  2. Is NFS_SERVER=--alias the best way to trigger this? I like it because you can't set a real server name by mistake, you are forced into making a choice. And server names can't start with dashes.
  3. Should one provisioner be able to handle both regular and alias volumes? If so, this could be done by registering a second provisioner named after a new ALIAS_PROVISIONER_NAME variable. It would also need to look up or track the parent PVs for their server:path.
  4. Should the new volume's reclaim policy be set to Retain?
  5. Is it possible to prevent the original directory tree from getting deleted, as long as any aliases are still around?
  6. Should the auto-detection of the server host:path happen always, even when not in alias mode? It's a convenience feature that saves you from setting NFS_SERVER/NFS_PATH (at the cost of having to mount an existing volume).
  7. What if we want a similar behaviour, where we inherit the details from PVC1, but each new claim gets its own subdirectory?
  8. Should the parent's ReadOnly field be propagated to the newly provisioned PV?
  9. Should this be made more generic so that one deployment can expose N different claims/trees to the other namespaces? It could work like this:

Say we have existing claims: data, code, docs. We want them available in any number of other namespaces, as allowed by RBAC. We run a foo.com/nfs-alias provisioner that mounts the three claims under /persistentvolumes/ and we use it in storage class shared-volumes.
Now we create claim code in namespace developer1 under the new class. The provisioner will look up the server:path for /persistentvolumes/code and put it in a new volume developer1-code-BLABLABLA.
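
As a sketch, the developer1 claim in this generic variant might look like the manifest below; the access mode and requested size are illustrative:

```yaml
# Claim "code" in namespace "developer1" under the shared-volumes class.
# The provisioner would resolve /persistentvolumes/code to its server:path
# and copy that into the newly provisioned PV.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: code
  namespace: developer1
spec:
  storageClassName: shared-volumes
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```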

¹ old issues:
kubernetes-retired/external-storage#1210
kubernetes-retired/external-storage#1275

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Feb 2, 2021
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: therc
To complete the pull request process, please assign ashishranjan738 after the PR has been reviewed.
You can assign the PR to them by writing /assign @ashishranjan738 in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot
Contributor

Welcome @therc!

It looks like this is your first PR to kubernetes-sigs/nfs-subdir-external-provisioner 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-sigs/nfs-subdir-external-provisioner has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Feb 2, 2021
@@ -119,6 +119,8 @@ spec:
path: /var/nfs
```

**Alias mode:** use the provisioner in this mode to share the same existing NFS claim with multiple namespaces, without manually propagating the server/path in each namespace's claim. For example, first create a `data-original` claim as normal, through any provisioner such as `example.com/efs-aws` or the `fuseim.pri/ifs` example below. In that same namespace, run a new NFS client provisioner that mounts the claim. Set NFS_SERVER to the magic value of `--alias`. Give the new deployment a clearer name, such as `nfs-alias-provisioner`, and set PROVISIONER_NAME to `foo.com/nfs-alias-provisioner`. Then create a StorageClass `nfs-alias` with its provisioner set to `foo.com/nfs-alias-provisioner`. Now, every new `nfs-alias` claim you create in any namespace will have the same `server:path` as the `data-original` volume.
Contributor

the fuseim.pri/ifs will soon be changed to k8s-sigs.io/nfs-subdir-external-provisioner (#37)

@@ -129,14 +109,51 @@ func (p *nfsProvisioner) Provision(ctx context.Context, options controller.Provi
NFS: &v1.NFSVolumeSource{
Server: p.server,
Path: path,
ReadOnly: false,
ReadOnly: false, // Pass ReadOnly through if in alias mode?
Contributor

afaik it has no real effect; you can set it to true and it will be writable anyway.

// mounted in the provisioner's pod under /persistentvolumes and never
// make a new directory for each volume we are asked to provision.
var alias bool
if server == magicAliasHostname {
Contributor

@yonatankahana yonatankahana Feb 4, 2021

imo it's not a very intuitive way, and --alias sounds like a command-line option.

what do you think about another environment variable?
or a StorageClass parameter, so the same provisioner can be used both with and without alias?

Author

This was something I cooked up quickly last year as a proof of concept. I agree that supporting both modes with a class parameter is better, but if we use that to reference a volume claim rather than a server:path pair, resolving that reference might involve another round trip to the API server with potential failure modes. I'll rework the code and try both kinds of class parameters.
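
For illustration only, a StorageClass-parameter trigger along the lines discussed might look like the sketch below; the parameter names are hypothetical and not part of this PR:

```yaml
# Hypothetical: one provisioner serving both modes, with alias behaviour
# selected per StorageClass by referencing an existing claim.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-alias
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  aliasPvcName: pvc1        # hypothetical parameter
  aliasPvcNamespace: ns1    # hypothetical parameter
```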

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Feb 5, 2021
@k8s-ci-robot
Contributor

@therc: PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@therc therc marked this pull request as draft February 6, 2021 14:49
@k8s-ci-robot k8s-ci-robot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Feb 6, 2021
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 7, 2021
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 6, 2021
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closed this PR.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
