mounts using go9p report 0 byte capacity (breaking Persistent Local Volumes) #3794

Closed · 3 tasks done

john-tipper opened this issue Mar 5, 2019 · 9 comments
Labels: area/mount, cause/go9p-limitation, help wanted, kind/bug, lifecycle/frozen, priority/backlog

Comments

john-tipper commented Mar 5, 2019

When using OSX and mounting directories into Minikube, the capacity/size of the mounts is reported as 0 by df:

  • How to replicate the error, including the exact command-lines used.

    mkdir ~/mymount
    minikube start

    minikube mount ~/mymount:/data/mymount &
    minikube ssh
    df /data/mymount

  • The full output of the command that failed

    df /data/mymount/
    Filesystem     1K-blocks  Used Available Use% Mounted on
    192.168.99.1           0     0         0    - /data/mymount

The background is that I'm trying to use Persistent Local Volumes with minikube on macOS, using the static provisioner to create the PVs. The provisioner fails to create them because it sees the capacity of the mounts as 0 (see the sketch at the end of this comment). StackOverflow question: https://stackoverflow.com/questions/54993532/how-to-use-kubernetes-persistent-local-volumes-with-minikube-on-osx

I get the same error if I use:

minikube start --mount --mount-string="~/mymount:/data/mymount"
  • The operating system name and version used

OSX Mojave (10.14.3)
Minikube v0.34.1
Kubernetes v1.13.3
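
For reference, a minimal sketch of the kind of capacity check that trips over this (hypothetical code, not the actual static-provisioner source), assuming the provisioner sizes each discovered mount via statfs(2). Run inside the minikube VM, it prints 0 for the 9p mount because the server never returns filesystem statistics:

    // Hypothetical sketch of a provisioner-style capacity check; /data/mymount
    // is the 9p mount from the repro steps above.
    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        var st syscall.Statfs_t
        if err := syscall.Statfs("/data/mymount", &st); err != nil {
            panic(err)
        }
        // Total capacity in bytes = block count * block size.
        capacity := st.Blocks * uint64(st.Bsize)
        fmt.Printf("capacity: %d bytes\n", capacity) // prints 0 here, so no PV gets created
    }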

tstromberg changed the title from "Minikube mounts on OSX report 0 capacity (breaking Persistent Local Volumes)" to "mounts on macOS report 0 byte capacity (breaking Persistent Local Volumes)" on Mar 5, 2019

tstromberg commented Mar 5, 2019

I can confirm this with the hyperkit and virtualbox drivers, as well as when supplying --9p-version=9p2000.L.

Anyone have an idea how to get disk capacity plumbed through in 9p? Is it possible that 9p just doesn't allow for it?

tstromberg added the kind/bug, os/macos, and area/mount labels on Mar 5, 2019
afbjorklund commented:

Same thing happens on Linux.

192.168.99.1 on /mnt/sda1/data/mount type 9p (rw,relatime,sync,dirsync,dfltuid=1001,dfltgid=1001,access=user,msize=65536,trans=tcp,version=9p2000.u,port=41323)

$ df /data/mount
Filesystem     1K-blocks  Used Available Use% Mounted on
192.168.99.1           0     0         0    - /data/mount


afbjorklund commented Mar 5, 2019

It seems our 9p server simply doesn't implement the needed "statfs" call.

https://github.com/kubernetes/minikube/tree/master/third_party/go9p/

It seems to be a 9p2000.L feature.
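
For anyone picking this up, here is a rough sketch (my own illustration; rstatfs and statfsForExport are made-up names, not the go9p fork's actual types) of the information a 9p2000.L statfs reply carries and how a server could fill it in from the host directory it exports:

    // Hypothetical sketch: the field layout follows the 9p2000.L Rstatfs message.
    // A server would answer Tstatfs by running statfs(2) on the exported host
    // directory and copying the results into the reply.
    package main

    import (
        "fmt"
        "syscall"
    )

    // rstatfs mirrors the fields of a 9p2000.L Rstatfs reply.
    type rstatfs struct {
        Type    uint32 // filesystem type
        Bsize   uint32 // block size in bytes
        Blocks  uint64 // total blocks
        Bfree   uint64 // free blocks
        Bavail  uint64 // free blocks available to unprivileged users
        Files   uint64 // total inodes
        Ffree   uint64 // free inodes
        Fsid    uint64 // filesystem id
        Namelen uint32 // maximum filename length
    }

    // statfsForExport fills the reply from the host filesystem backing the
    // exported directory, so 9p clients see real capacity instead of zeros.
    func statfsForExport(path string) (rstatfs, error) {
        var st syscall.Statfs_t
        if err := syscall.Statfs(path, &st); err != nil {
            return rstatfs{}, err
        }
        return rstatfs{
            Type:    uint32(st.Type),
            Bsize:   uint32(st.Bsize),
            Blocks:  st.Blocks,
            Bfree:   st.Bfree,
            Bavail:  st.Bavail,
            Files:   st.Files,
            Ffree:   st.Ffree,
            Namelen: 255, // NAME_MAX on common filesystems
        }, nil
    }

    func main() {
        r, err := statfsForExport("/")
        if err != nil {
            panic(err)
        }
        fmt.Printf("capacity: %d KiB\n", r.Blocks*uint64(r.Bsize)/1024)
    }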

afbjorklund changed the title from "mounts on macOS report 0 byte capacity (breaking Persistent Local Volumes)" to "mounts using go9p report 0 byte capacity (breaking Persistent Local Volumes)" on Mar 6, 2019
tstromberg added the help wanted and priority/backlog labels on Mar 8, 2019
tstromberg added the cause/go9p-limitation and r/2019q2 labels on May 23, 2019
fejta-bot commented:

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Aug 21, 2019
fejta-bot commented:

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Sep 20, 2019
john-tipper commented:

/remove-lifecycle rotten

k8s-ci-robot removed the lifecycle/rotten label on Sep 21, 2019
john-tipper commented:

/remove-lifecycle stale

tstromberg added the lifecycle/frozen label and removed the r/2019q2 label on Sep 23, 2019
tstromberg commented:

The clearest way to resolve this would be to implement #4324


medyagh commented Aug 12, 2020

Closing this in favor of #4324.
