This repository has been archived by the owner on Oct 21, 2020. It is now read-only.

NFS demo does not work with kubernetes 1.6.6 #223

Closed
dcowden opened this issue Jul 10, 2017 · 15 comments
Assignees
wongma7
Labels
area/nfs, lifecycle/rotten

Comments

@dcowden

dcowden commented Jul 10, 2017

I followed the NFS provisioner demo, and I'm having a [probably subtle] problem getting it working. I'm using k8s version 1.6.6:

Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.6", GitCommit:"7fa1c1756d8bc963f1a389f4a6937dc71f08ada2", GitTreeState:"clean", BuildDate:"2017-06-16T18:21:54Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}

All of the pods launch correctly, and the PV and PVC are created as well:

NAME                              READY     STATUS    RESTARTS   AGE
nfs-provisioner-225303794-7cxh7   1/1       Running   0          5m
nfs-test-1nn61                    1/1       Running   0          4m

NAME      STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
nfs       Bound     pvc-524a260e-650b-11e7-b217-0800274d0770   1Mi        RWX           example-nfs    5m

NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM         STORAGECLASS   REASON    AGE
pvc-524a260e-650b-11e7-b217-0800274d0770   1Mi        RWX           Delete          Bound     default/nfs   example-nfs              5m

However, the busybox image is unable to write data into the mounted volume, with error "Invalid argument":

sh: can't create /mnt/index.html: Invalid argument
sh: can't create /mnt/index.html: Invalid argument
sh: can't create /mnt/index.html: Invalid argument
sh: can't create /mnt/index.html: Invalid argument
sh: can't create /mnt/index.html: Invalid argument

Getting into the pod, I can see that the volume is mounted (other, non-relevant mounts removed for clarity):

/ # mount
10.100.228.116:/export/pvc-524a260e-650b-11e7-b217-0800274d0770 on /mnt type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.0.2.15,local_lock=none,addr=10.100.228.116)

But manually trying to do anything on that volume doesn't work:

/ # touch /mnt/foobar
touch: /mnt/foobar: Invalid argument

Curiously, it's also impossible to unmount the volume, even though I'm root. I was going to try to manually re-mount it:

/ # umount /mnt
umount: can't unmount /mnt: Operation not permitted

I have also tried manipulating the rootSquash and gid parameters on the provisioner with no luck. Does anyone have ideas on how this could be going wrong?
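
For reference, here is roughly the StorageClass I was experimenting with (a sketch: example-nfs and example.com/nfs are the names from my deployment, and the gid/rootSquash values shown are just the combinations I tried):

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: example-nfs
provisioner: example.com/nfs
parameters:
  gid: "none"           # also tried a concrete supplemental group, e.g. "1001"
  rootSquash: "false"   # and "true"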

@shadycuz
Contributor

@wongma7 @dcowden I'm using 1.6.6 and I had zero problems. I just followed the steps, which seem unclear at some points, but I think I will just make a PR for it.

Thanks,
Levi

@wongma7
Contributor

wongma7 commented Jul 12, 2017

I couldn't reproduce this issue either locally or via Travis CI. However, I don't doubt that it exists. There are a lot of potential culprits... I'd say (the version of) kubernetes is unlikely to be the culprit here. It's probably a weird interaction between NFS-Ganesha and docker.

What storage driver is docker using (output of docker info | grep Storage)?

What happens if you try to mount the NFS server on the host itself, if possible, e.g. sudo mount $NFS_PROVISIONER_CONTAINER_IP:/export/pvc-524a260e-650b-11e7-b217-0800274d0770 /tmp/asdf? And what if you try to mount it with NFSv4 instead of 4.2, like sudo mount -t nfs4 $NFS_PROVISIONER_CONTAINER_IP:/export/pvc-524a260e-650b-11e7-b217-0800274d0770 /tmp/asdf?

Root squash is no_root_squash by default, which should be fine. It's also normal that you're not allowed to unmount it from within the container; that requires more privileges.

@shadycuz thanks for confirming; and yeah, I wrote everything myself and I'm hardly an experienced writer so please feel free to make clarifications :)

@shadycuz
Contributor

@wongma7 Well, you did a great job. I'm brand new to kubernetes and never messed with PVs until today. I do have some questions, but I will post them when I make the PR.

@dcowden
Author

dcowden commented Jul 12, 2017

Hi guys, thanks. Let me try out a couple of these suggestions. From my reading, my hunch is that it could be an NFSv4 vs. NFSv3 thing, but I wasn't able to prove it.

@r7vme

r7vme commented Aug 24, 2017

This is probably the reason: nfs-ganesha/nfs-ganesha#192

In my case, remounting with vers=4.1 helped, but it's still not clear how to make it work in kubernetes by default.

For one specific PV, setting the annotation volume.beta.kubernetes.io/mount-options: "vers=4.1" works.
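
For example (a sketch: only the relevant metadata is shown; everything else on the PV stays exactly as the provisioner created it):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-524a260e-650b-11e7-b217-0800274d0770   # the PV provisioned earlier in this thread
  annotations:
    volume.beta.kubernetes.io/mount-options: "vers=4.1"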

@wongma7
Contributor

wongma7 commented Aug 24, 2017 via email

@r7vme

r7vme commented Aug 24, 2017

just found it ;) https://github.com/kubernetes-incubator/external-storage/blob/master/nfs/docs/usage.md#parameters

Thanks

@abrenneke

Unfortunately, I could not get either NFS 4.1 or 4.0 working. A write as soon as the container started caused an "Invalid argument" error, and the container (redis) crashed. NFS 3 is working alright (vers=3).

@adamcharnock

It took me a while to figure out the above, but in the end my StorageClass looked like the following:

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: nfs
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: developerapp.com/nfs
parameters:
  mountOptions: "vers=4.1"

It seems there is also a new mountOptions field which has been introduced to StorageClasses recently (rather than as a key under parameters). This confused me for a bit, but hopefully the above will save someone some pain in the future.
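
For anyone on a newer cluster, here is a sketch of the same class using that top-level field instead (note it is a list rather than a single string; I'm assuming the storage.k8s.io/v1 API here, so check which API version and fields your cluster supports):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: developerapp.com/nfs
mountOptions:
  - vers=4.1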

@simmessa

@adamcharnock Thanks a lot man! Your suggestion about using the parameters / mountOptions field saved the day :)

@wongma7 self-assigned this Jun 15, 2018
@agasbzj

agasbzj commented Jun 16, 2018

@adamcharnock You saved my life! mountOptions: "vers=4.1" resolves everything! Thank you so much!

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Apr 24, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label May 24, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
