NFS demo does not work with kubernetes 1.6.6 #223
I couldn't reproduce this issue either locally or via Travis CI. However, I don't doubt that it exists. There are a lot of potential culprits... I'd say the version of Kubernetes is unlikely to be it here. It's probably a weird interaction between NFS-Ganesha and Docker. What storage driver is Docker using? What happens if you try to mount the NFS server on the host itself, if possible? rootSquash is no_root_squash by default, which should be fine. And it's also normal that you're not allowed to unmount it from within the container; that requires more privileges. |
@shadycuz thanks for confirming; and yeah, I wrote everything myself and I'm hardly an experienced writer, so please feel free to make clarifications :) |
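The diagnostics suggested above could be sketched roughly as follows (a sketch only; `<node-ip>` and the `/export` path are placeholders for your cluster, and the host-side mount needs the NFS client utilities installed):

```shell
# Which storage driver is Docker using? (aufs, overlay2, devicemapper, ...)
docker info --format '{{.Driver}}'

# Bypass kubelet entirely: try mounting the provisioner's NFS export
# directly on the host to see whether writes fail there too.
sudo mkdir -p /mnt/nfs-test
sudo mount -t nfs <node-ip>:/export /mnt/nfs-test
sudo touch /mnt/nfs-test/probe
```

If the host-side write fails the same way, the problem sits between NFS-Ganesha and the client rather than in Kubernetes itself.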
@wongma7 Well, you did a great job. I'm brand new to Kubernetes and never messed with PVs until today. I have some questions, but I'll post them when I make a PR. |
Hi guys, thanks; let me try out a couple of these suggestions. From my reading, my hunch is that it could be an NFSv4 vs. NFSv3 thing, but I wasn't able to prove it. |
This is probably the reason: nfs-ganesha/nfs-ganesha#192. In my case, remounting with vers=4.1 helped, but it's still not clear how to make it work in Kubernetes by default. For one specific PV, setting volume.beta.kubernetes.io/mount-options: "vers=4.1" works. |
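On a pre-existing PV, that beta annotation goes under metadata; a minimal sketch (the PV name, server address, and export path here are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-example          # placeholder name
  annotations:
    volume.beta.kubernetes.io/mount-options: "vers=4.1"
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.1            # placeholder server address
    path: /export               # placeholder export path
```

The drawback, as noted above, is that this must be set per PV, which is why the StorageClass-level approach below is preferable for dynamic provisioning.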
Yes, sorry; I've known about this for a bit but neglected to document it. You can try using the mount option storage parameter in the storage class so that all PVs get that set automatically.
|
Just found it ;) https://github.com/kubernetes-incubator/external- Thanks! |
Unfortunately, I could not get either NFS 4.1 or 4.0 working either. A write as soon as the container started caused an "Invalid argument" error. |
It took me a while to figure out the above, but in the end my StorageClass looks like this:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: nfs
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: developerapp.com/nfs
parameters:
  mountOptions: "vers=4.1"
```

It seems there is also a new |
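If the truncated sentence above refers to the dedicated field, newer Kubernetes releases (v1.8 and later, if I recall correctly) accept mount options directly on the StorageClass instead of in the provisioner's parameters map; a sketch reusing the names from the example above:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs
provisioner: developerapp.com/nfs
mountOptions:            # native field, a list rather than a string parameter
  - vers=4.1
```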
@adamcharnock Thanks a lot man! Your suggestion about using the parameters / mountOptions field saved the day :) |
@adamcharnock You saved my life! mountOptions: "vers=4.1" resolves everything! Thank you so much! |
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Stale issues rot after 30d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
I followed the NFS provisioner demo, and I'm having a (probably subtle) problem getting it working. I'm using k8s version 1.6.6:

```
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.6", GitCommit:"7fa1c1756d8bc963f1a389f4a6937dc71f08ada2", GitTreeState:"clean", BuildDate:"2017-06-16T18:21:54Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
```
All of the pods launch correctly, and the PV and PVCs are created as well.
However, the busybox image is unable to write data into the mounted volume, failing with the error "Invalid argument":
Getting into the pod, the volume is mounted (other non-relevant mounts removed for clarity):
but manually trying to do anything on that volume doesn't work:
```
/ # touch /mnt/foobar
touch: /mnt/foobar: Invalid argument
```
Curiously, it's also impossible to unmount the volume, even though I'm root. I was going to try to manually re-mount it:
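Such a remount attempt might look like the following sketch; it needs host-level (or privileged-container) access, `<server>` is a placeholder, and vers=4.1 is the option that later comments in this thread found to work:

```shell
# From the host, since the container lacks the privileges to umount:
sudo umount /mnt
sudo mount -t nfs -o vers=4.1 <server>:/export /mnt

# Retry the write that previously failed with "Invalid argument":
touch /mnt/foobar
```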
I have also tried manipulating the rootSquash and gid parameters on the provisioner, with no luck. Does anyone have ideas about how this could be going wrong?