This repository has been archived by the owner on Oct 21, 2020. It is now read-only.

NFS-Client Provisioner work across all namespaces ? #1275

Closed
yacota opened this issue Jan 28, 2020 · 6 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@yacota

yacota commented Jan 28, 2020

I came across issue #1210 and was wondering whether this could be possible with nfs-client-provisioner.

I mean, is it possible to have a single namespace with one provisioner pod per NFS mount point in use, and just declare a persistent volume claim in another namespace that uses these StorageClass names?

For the time being I have two namespaces that mount the same NFS export, and both work fine (each uses the 'default' serviceAccount in its own namespace), but I'd like to share the StorageClass name/provisioner between them (is this possible?).
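To make it concrete, what I have in mind is roughly the claim below. This is only a sketch: "some-other-namespace" is made up, and the class name is taken from the listing that follows. Since StorageClasses are cluster-scoped, the claim should only need the name:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: shared-xfiles-claim
  namespace: some-other-namespace
spec:
  # Cluster-scoped StorageClass, served by a provisioner pod running in a different namespace
  storageClassName: xfiles-nfs-live-client
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 500Mi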

StorageClasses

$  kubectl get storageclass
NAME                      PROVISIONER                    AGE
gp2 (default)             kubernetes.io/aws-ebs          362d
ssditto-nfs-live-client   ssditto-nfs-live-provisioner   3h35m
xfiles-nfs-live-client    xfiles-nfs-live-provisioner    170m

Pods per namespace xfiles

$  kubectl get pods -n xfiles
NAME                                                  READY   STATUS    RESTARTS   AGE
xfiles-669c7bd646-jlhbk                               1/1     Running   0          166m
xfiles-nfs-client-provisioner-live-78c9f59fd5-9jb8z   1/1     Running   0          137m

Pods per namespace ssditto

$  kubectl get pods -n ssditto
NAME                                                  READY   STATUS    RESTARTS   AGE
sditto-69dc99bb7-6fdfx                                1/1     Running   0          26m
sditto-nfs-client-provisioner-live-85484fdddb-rzjlb   1/1     Running   0          3h30m

Persistent volumes

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                             STORAGECLASS   REASON   AGE
pv-ssditto-nfs-client-provisioner-live     10Mi       RWO            Delete           Bound    ssditto/pvc-ssditto-nfs-client-provisioner-live                           3h29m
pv-xfiles-nfs-client-provisioner-live      10Mi       RWO            Delete           Bound    xfiles/pvc-xfiles-nfs-client-provisioner-live                             165m

Persistent volume claim for xfiles

$ kubectl get pvc -n xfiles
NAME                                     STATUS   VOLUME                                  CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-xfiles-nfs-client-provisioner-live   Bound    pv-xfiles-nfs-client-provisioner-live   10Mi       RWO                           167m

Persistent volume claim for ssditto

$ kubectl get pvc -n ssditto
NAME                                     STATUS   VOLUME                                  CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-ssditto-nfs-client-provisioner-live   Bound    pv-ssditto-nfs-client-provisioner-live   10Mi       RWO                           3h33m

So far so good: pods deployed into either of these namespaces have the NFS directory properly mounted as "read-only", and the application can read files from it.

But if I create a dedicated namespace for nfs-client-provisioner, pods deployed there do have access to the NFS-mounted directory; however, when a persistent volume claim in another namespace uses the nfs-client-provisioner StorageClass, I get the following error:

$ kubectl describe pvc nfs-live-claim

Name:          nfs-live-claim
Namespace:     default
StorageClass:  nfs-live-client
Status:        Pending
Volume:
Labels:        <none>
Annotations:   kubectl.kubernetes.io/last-applied-configuration:
                 {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"nfs-live-claim","namespace":"default"},"spec":{"acc...
               volume.beta.kubernetes.io/storage-provisioner: nfs-live-provisioner
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
  Type       Reason                Age              From                                                                                                                  Message
  ----       ------                ----             ----                                                                                                                  -------
  Normal     ExternalProvisioning  4s (x2 over 4s)  persistentvolume-controller                                                                                           waiting for a volume to be created, either by external provisioner "xfiles-nfs-live-provisioner" or manually created by system administrator
  Normal     Provisioning          4s               xfiles-nfs-live-provisioner_xfiles-nfs-client-provisioner-live-78c9f59fd5-9jb8z_ee8623e2-41b2-11ea-b10a-366bb0fc9d95  External provisioner is provisioning volume for claim "default/nfs-live-claim"
  Warning    ProvisioningFailed    4s               xfiles-nfs-live-provisioner_xfiles-nfs-client-provisioner-live-78c9f59fd5-9jb8z_ee8623e2-41b2-11ea-b10a-366bb0fc9d95  failed to provision volume with StorageClass "xfiles-nfs-live-client": unable to create directory to provision new pv: mkdir /persistentvolumes/default-nfs-live-claim-pvc-589c3dc8-41ce-11ea-b7ae-067609729396: **read-only file system**
Mounted By:  test-pod-sleep

I am using the following config files to test this in the default namespace:

kind: Pod
apiVersion: v1
metadata:
  name: test-pod-sleep
spec:
  containers:
  - name: test-pod-sleep
    image: gcr.io/google_containers/busybox:1.24
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "ls -al /mnt/  && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: nfs-live-claim
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-live-claim
spec:
  storageClassName: xfiles-nfs-live-client
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 500Mi

Why am I getting this read-only file system error when trying to use it across namespaces, but not when using nfs-client-provisioner on a per-namespace basis?

I tried several things without success:

  • mounting the NFS export as "rw", which then fails with a permission error instead (the relevant part of the provisioner Deployment is sketched below):
unable to create directory to provision new pv: mkdir /persistentvolumes/default-nfs-live-claim-pvc-173e5766-4130-11ea-93c3-0a2c04cddff2: permission denied
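For reference, this is roughly how the NFS export is wired into my provisioner Deployment, i.e. the part I have been toggling. It is only a sketch based on the stock nfs-client deployment layout: the server and path values are placeholders, and only the provisioner name and mount path match my setup:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: xfiles-nfs-client-provisioner-live
  namespace: xfiles
spec:
  replicas: 1
  selector:
    matchLabels:
      app: xfiles-nfs-client-provisioner-live
  template:
    metadata:
      labels:
        app: xfiles-nfs-client-provisioner-live
    spec:
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              # The failing mkdir from the events above happens under this path
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: xfiles-nfs-live-provisioner
            - name: NFS_SERVER
              value: nfs.example.com        # placeholder
            - name: NFS_PATH
              value: /exports/live          # placeholder
      volumes:
        - name: nfs-client-root
          nfs:
            server: nfs.example.com         # placeholder
            path: /exports/live             # placeholder
            # With readOnly: true provisioning fails with "read-only file system";
            # dropping it gives the "permission denied" above instead, which I guess
            # points at the export permissions on the NFS server side.
            readOnly: true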

Maybe read-only mounts just do not work across different namespaces?

Thanks!

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 27, 2020
@therc

therc commented May 9, 2020

If I understand what you're trying to do, and you want the pods in the namespaces to share the same directory (not just the NFS server), the solution I described in #1210 (comment) might work for you.

You'd need to use TWO provisioners with two different storage classes. One is the stock nfs-client-provisioner, which creates the directory on the NFS server. Then you run a modified nfs-client-provisioner, with the changes I described, that mounts the PVC from the first provisioner.
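Very roughly, and with made-up names, the two classes would look something like this; the second one is served by the patched provisioner, whose Deployment mounts the PVC created through the first class instead of mounting the NFS export directly:

# Hypothetical names; the split into two classes is the only point here.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs-live-client
# Stock nfs-client-provisioner: carves out the per-PVC directory on the NFS server
provisioner: nfs-live-provisioner
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs-live-alias
# Patched nfs-client-provisioner (the #1210 changes): exposes that directory to other namespaces
provisioner: nfs-live-alias-provisioner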

Even if nothing in what I just said applies to you, maybe I know what's broken with your new provisioner. Do you run it in the namespace using the nfs-live-claim claim? That's an R/O claim, but when it runs, nfs-client-provisioner wants to create a new directory at the root of the server for the "new" volume. That should be what's responsible for the "read-only file system" error messages.

@therc

therc commented May 9, 2020

BTW, the second provisioner would be able to create new clone volumes in as many namespaces as you want, not just the second one.

therc added a commit to therc/external-storage that referenced this issue May 10, 2020
This might be a solution for kubernetes-retired#1210 and kubernetes-retired#1275

In combination with an existing NFS claim in namespace X, allow the administrator or the users to create new claims for the same NFS tree in namespaces A, B, C, etc.

This change tries as much as possible NOT to disrupt existing setups.

A few things still left:

1. Is "alias" the right term? Previous choices I tossed out: clone (easy to mix with real volume cloning), proxy (there's no middle-man NFS server), passthrough (too nebulous).
1. Is NFS_SERVER=--alias the best way to trigger this?
1. Should the auto-detection of the server host:path happen always, even when not in alias mode?
1. Should the parent's ReadOnly field be propagated to the newly provisioned PV?
1. Should this be made more generic so that one deployment can expose N different claims/trees to the other namespaces? See comment below for more details.
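To make the trigger concrete, a hypothetical sketch (all names are made up, and the details are still open per the questions above):

# 1) In the patched provisioner's Deployment, alias mode would be selected through the env vars:
#
#        env:
#          - name: PROVISIONER_NAME
#            value: nfs-live-alias-provisioner
#          - name: NFS_SERVER
#            value: "--alias"   # proposed sentinel: alias the claim mounted at /persistentvolumes,
#                               # auto-detecting the server host:path from it
#
# 2) A user in another namespace could then claim the same NFS tree like this:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: shared-nfs-live
  namespace: team-a
spec:
  storageClassName: nfs-live-alias   # class served by the patched provisioner
  accessModes:
    - ReadOnlyMany                   # cf. the open question about propagating the parent's ReadOnly field
  resources:
    requests:
      storage: 10Mi                  # nominal; the alias just re-exposes the existing tree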
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 8, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
