[BUG] cannot mount nfs shares from inside pods #1109
Update: tried the Ganesha NFS provisioner too, same setup as above... even that fails to create usable NFS shares... my pods are now in a different state (CreateContainerConfigError), and I get this in events...
All the other stuff is the same; the storageclass is always "nfs". This is my HelmRelease:
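As a rough plain-Helm equivalent of that kind of Ganesha provisioner install, using the same "nfs" storageclass name, a sketch might look like this (chart coordinates and values are illustrative, not the author's HelmRelease):

```bash
# Sketch: install the Ganesha-based NFS server/provisioner via Helm,
# exposing it under the "nfs" storageclass name used in this thread
helm repo add nfs-ganesha \
  https://kubernetes-sigs.github.io/nfs-ganesha-server-and-external-provisioner/
helm install nfs-server-provisioner nfs-ganesha/nfs-server-provisioner \
  --set storageClass.name=nfs
```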
I just tried even the Rook NFS provisioner; that too does not work on k3d 5.4.4, with errors:
PVCs are regularly created and bound, but pods cannot mount them and write...
It's not a bug in k3d but a defect of the k3s Docker image. The k3s Docker image is built from scratch with no NFS support (see its Dockerfile). As a result, neither the k3s node container nor the pods inside it can mount NFS. There is a workaround: rebase the k3s image on Alpine and install nfs-utils.

```dockerfile
FROM alpine:latest
RUN set -ex; \
    apk add --no-cache iptables ip6tables nfs-utils; \
    echo 'hosts: files dns' > /etc/nsswitch.conf
COPY --from=rancher/k3s:v1.24.3-k3s1 /bin /opt/k3s/bin
VOLUME /var/lib/kubelet
VOLUME /var/lib/rancher/k3s
VOLUME /var/lib/cni
VOLUME /var/log
ENV PATH="$PATH:/opt/k3s/bin:/opt/k3s/bin/aux"
ENV CRI_CONFIG_FILE="/var/lib/rancher/k3s/agent/etc/crictl.yaml"
ENTRYPOINT ["/opt/k3s/bin/k3s"]
CMD ["agent"]
```

Build it yourself or have a look at mine: maoxuner/k3s (not maintained frequently). I don't know how to patch nfs-utils into the official k3s image. If anyone knows, please tell me.
@pawmaster tried that, but it didn't work for me... hints?
I created a similar image, based on the version I need (1.22), and have the same issues... is something missing in the Dockerfile?
I've run into the same issue before. I tried cleaning up all resources (images, containers, volumes, networks), then creating the cluster. Again and again I repeated it, and finally it succeeded, but I don't know what happened. That's why I'm looking for some way to patch the original image.
@pawmaster I think I fixed it... take a look at my repo: I just left the paths as in the original image (no /opt...), and the image now comes up with no problem... now let's see if NFS works :D try:
@fragolinux It's not good practice to override the Alpine binaries with the original image's (scratch) binaries directly; there may be incompatibilities between the binary files. A better way would be to replace all binaries with Alpine packages, but I can't find packages providing some of the bin files. Anyway, if it works, it's still a good idea. By the way, do you know any method to back up and restore clusters (multiple nodes) created by k3d? I've tried to back up...
a template for creating a Dockerfile that allows using NFS
Hey, I got NFS to work in k3d for GitHub Codespaces based on the info from this thread: https://github.com/jlian/k3d-nfs. It's mostly the same as @marcoaraujojunior's commit marcodearaujo/k3s-docker@914c6f8. Try with:

```bash
export K3D_FIX_CGROUPV2=false
k3d cluster create -i ghcr.io/jlian/k3d-nfs:v1.25.3-k3s1
```
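To check whether RWX volumes actually work after that, a quick smoke test could look like this (a sketch; only the "nfs" storageclass name comes from this thread, the rest is illustrative):

```bash
# Sketch: create an RWX PVC and a pod that writes to it
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: nfs
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: nfs-writer
spec:
  containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "echo ok > /data/ok && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: nfs-test
EOF

# If the pod leaves ContainerCreating and becomes Ready, the mount works
kubectl wait --for=condition=Ready pod/nfs-writer --timeout=120s
```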
@jlian, instead of disabling k3d's entrypoints (there are actually multiple), just add your script to the list by putting it here
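A minimal sketch of that approach, assuming k3d's generated entrypoint picks up scripts named k3d-entrypoint-<something>.sh (the script name, base image tag, and start-nfs.sh contents are assumptions, not confirmed by this thread):

```dockerfile
# Base on an NFS-capable k3s image (k3s-nfs is the hypothetical image
# built from the Dockerfile earlier in this thread)
FROM k3s-nfs:v1.25.3-k3s1
# start-nfs.sh is a hypothetical script that starts rpcbind/rpc.statd;
# dropping it next to k3d's own entrypoint scripts (instead of replacing
# the entrypoint) keeps all of k3d's fixes running
COPY --chmod=0755 start-nfs.sh /bin/k3d-entrypoint-nfs.sh
```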
@iwilltry42 OK thanks, got it to work. Now it just needs... Took me a while to find the entrypoint logs in...
I am not currently experiencing the issues @jlian is experiencing. I have created a repository with the latest images from the 1.25, 1.26, and 1.27 channels, as well as the "stable" channel, at https://github.com/ryan-mcd/k3s-containers. Feel free to use these images.
Wow!!! Thanks!!! I lost 5 hours on my first test trying to create an NFS share for my pods on Synology! +1 to update this; even if k3d's purpose is mainly testing, not having NFS for storage is a strange pain! Any idea how to warn the k3d company about that more directly? Thanks @jlian for the 1.25 image; no 1.26 or later?
@ryan-mcd you got NFS to work in Codespaces without using openrc? It's been a while, but I kind of remember that when I first tried it without openrc it kept not working. Can you show me which part of your Dockerfile makes it work? EDIT: hmm, I tried your image and it didn't work for me, getting...
@dcpc007 there is no company behind k3d. There's SUSE Rancher behind k3s though, which is what's inside k3d, so feel free to open issues/PRs on https://github.com/k3s-io/k3s or ask them via Slack.
I don't use Codespaces; perhaps that's why I didn't have an issue without openrc. In my local environment it worked fine without it, so I didn't include it. I can certainly add it back. Which version were you planning/attempting to use?
What did you do
I initially tested the OpenEBS nfs-provisioner, on top of k3d's default local-path storage class... PVCs were created, but pods could not mount them, saying "not permitted" or "not supported"... I could mount the shares from inside the OpenEBS NFS-sharing pods, even between them (a pod could mount its own shares AND the shares of the other pod, sharing a different PVC)... but NO OTHER pods could mount them; they all remain in ContainerCreating state, and I have these errors in events...
So I tried a different solution: an NFS server Docker container running on my host machine, connected to via the nfs-subdir provisioner, with identical results. So it seems I cannot get an RWX volume on k3d right now, whatever solution I try... tested on both my dev machine (MacBook Pro, Big Sur latest) AND on an Ubuntu 22.04 VM (with, of course, the nfs-common package installed on it).
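For reference, a sketch of that second setup (image name, host IP, and paths are placeholders, not the author's actual values):

```bash
# Run a standalone NFS server on the host; itsthenetwork/nfs-server-alpine
# is one commonly used image (nfsd needs --privileged)
docker run -d --name nfs-server --privileged \
  -p 2049:2049 \
  -v "$PWD/nfs-export:/nfsshare" \
  -e SHARED_DIRECTORY=/nfsshare \
  itsthenetwork/nfs-server-alpine:latest

# Point the nfs-subdir provisioner at it, under the "nfs" storageclass
helm repo add nfs-subdir-external-provisioner \
  https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner \
  nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=192.168.1.10 \
  --set nfs.path=/ \
  --set storageClass.name=nfs
```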
The pod stays in "ContainerCreating" state, and in events I get:
So, let's try from an Ubuntu pod:
Test from the host to see if the share works: it does...
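A sketch of that host-side check (server IP and paths are placeholders):

```bash
# Mount the exported share from the host and try a write
sudo mkdir -p /mnt/nfs-test
sudo mount -t nfs -o vers=4 192.168.1.10:/ /mnt/nfs-test
sudo touch /mnt/nfs-test/hello && ls -l /mnt/nfs-test

# Clean up
sudo umount /mnt/nfs-test
```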
What did you expect to happen
The share should be mountable, to create RWX volumes.
Which OS & Architecture
k3d runtime-info

Which version of k3d
k3d version

Which version of docker
docker version and docker info