High HTTP latency when serving files from 9pfs #1370
Comments
As a follow-up, I rebuilt minikube and all my images using the xhyve driver, and the page now takes about 6s to load. From the look of it, it might be a VM driver issue (VirtualBox). The unfortunate thing about all this is that I have a permissions issue with xhyve that prevents me from using it to develop (xhyve has permission problems writing files back to the host through volumes), so I'm very much interested in finding out what the issue is with the VirtualBox driver. Also, I started moving our assets to webpack, which concatenates all our files into one even in development, and the page load went down from 2 minutes to milliseconds. My assumption would be that there is a waterfall of events that starts clogging the pipe when there are 20+ requests in a short amount of time. My knowledge is very limited when it comes to Docker machines and virtualization, so my apologies for not being more helpful :(
I am currently seeing this, even with webpack. My speed is ~5 KB/s, which makes it basically unusable. If there is any info I could provide that might help, just let me know!
The problem seems to lie outside of minikube. Docker uses osxfs, a custom filesystem that tries to bring native container capabilities to OS X. It works fine when communication stays between containers, but things fall apart when trying to communicate with the host. From what I read, it's due to syncing the filesystem between the two. One way to fix it is to use an NFS server to serve files from the host to the guest. Or rsync.
We don't actually use osxfs in minikube. The host folder mount is done with a 9p filesystem. This is how both the xhyve driver and the minikube mount command work. VirtualBox uses vboxsf, which is its own proprietary way to share files between the guest and the host. If you find performance issues with vboxsf, you could try the minikube mount command, rsync, or NFS.
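For reference, a minimal sketch of the minikube mount route (the paths here are hypothetical; adjust them to your project):

```shell
# Mount a host directory into the minikube VM over 9p, then reference the
# VM-side path from a hostPath volume in your pod spec.
minikube mount /Users/me/project:/mnt/project
```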
Having the same issue, also using a 9p mount from minikube with VirtualBox. Ubuntu 16.04.
My guess is that this is caused by poor I/O performance on the guest side of the 9p mount. I tried measuring the time to extract an 80 MB tarball containing lots of small files as a simple test. Here are my findings: on the host machine, in the directory mounted via 9p, extraction takes 0.09s. Running on: Arch Linux, minikube 0.20, VirtualBox 5.1.22. It seems that the 9p mount is really unsuited for any kind of real-life, non-trivial workload. Any I/O-heavy build step run from the guest takes forever to finish (for example: webpack, npm install, composer install).
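A rough way to reproduce that measurement (the archive name and mount path are hypothetical):

```shell
# Time the extraction on the host first, then repeat inside the minikube VM
# (via `minikube ssh`) in the 9p-mounted directory and compare.
cd /mnt/project          # the 9p-mounted directory inside the VM
time tar -xf assets.tar
```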
I'm running into this issue as well. I had extremely poor performance with my PHP app too (HTTP requests taking anywhere between 1 and 2 minutes to get a response). My code was hosted on my Mac and mounted into the pod's container over 9p. As a test, I copied my source code directly inside the pod's container instead of serving it through 9p. I am now getting responses in 800ms. Unfortunately, since this is for local development, I need to see my changes immediately, and using a full copy of my source code instead of serving it through the network is not an option. I'm going to set up an NFS mount and see how the performance compares to 9p. However my tests turn out, in its current state, 9p on minikube is definitely way too slow for any workload.
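One way to run that comparison (the pod name and paths are hypothetical):

```shell
# Copy the source tree straight into the running container, bypassing the
# 9p volume entirely, then time requests against each variant.
kubectl cp ./src my-php-pod:/var/www/app
```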
@huguesalary: I'm very curious to find out how the NFS mount performance compares to 9p. Thanks!
I just retried this week with NFS, and while things are better, it's still taking ~20 seconds to load a page due to I/O constraints.
How can I set up an NFS mount with minikube?
@pothibo thanks for the write-up. So still unusable, unfortunately.
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with an /lifecycle frozen comment. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle rotten
Hi guys! My configuration:
In the deployment configuration, you just add mount options to the persistent volume claim.
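A minimal sketch of that setup, assuming an NFS export on the host (the server address, export path, and mount options below are hypothetical):

```shell
# An NFS-backed PersistentVolume with mount options, plus a claim that binds
# to it; pods then mount the claim instead of a 9p/vboxsf hostPath.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: code-nfs
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  mountOptions:               # tune NFS behavior here
    - nfsvers=3
    - noatime
    - actimeo=1
  nfs:
    server: 192.168.99.1      # host address on the VirtualBox host-only network
    path: /Users/me/project
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: code-nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 5Gi
EOF
```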
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/reopen
@jpswade: You can't reopen an issue/PR unless you authored it or you are assigned to it. In response to this: /reopen
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
This is still an issue. /remove-lifecycle rotten
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this: /close
This is still an issue. /remove-lifecycle rotten
/reopen
@du86796922: You can't reopen an issue/PR unless you authored it or you are a collaborator. In response to this: /reopen
/reopen
@pothibo: Reopened this issue. In response to this: /reopen
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
I'm going to go ahead and freeze this issue so it doesn't keep getting closed. That said, this is an issue with 9p itself and can't really be worked around unless we end up replacing it.
BUG REPORT
I have a website that I'm trying to run through minikube, and while everything works, loading a single page in my host browser takes upwards of 2 minutes. Connections between Pods seem to be normal.
The problem might originate from VirtualBox, but I'm not sure. Here's the minikube config:
I did change the NIC to use the paravirtualized network, but the speed stayed the same.
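For anyone wanting to try the same thing, a sketch of switching the minikube VM's adapters to the paravirtualized (virtio) type with VBoxManage (the NIC indices depend on your setup):

```shell
# Stop the VM, change the adapter type, then start minikube again.
VBoxManage controlvm minikube poweroff
VBoxManage modifyvm minikube --nictype1 virtio --nictype2 virtio
minikube start
```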
I also tried #1353, but it didn't fix it for me. A (poorly representative) screenshot of the network tab in Chrome while loading the page shows the requests stalling.
minikube version: v0.18.0
Environment:
What you expected to happen:
Getting the page load under 600ms would be acceptable.
How to reproduce it (as minimally and precisely as possible):
Start minikube with VirtualBox, run a Rails server, and try to access it from the host. The page needs to have external assets to increase the number of connections going through minikube.
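A minimal sketch of that reproduction (the directory and service port are hypothetical):

```shell
# Start the VM, expose the host source tree over 9p, then time a page load
# from the host once a Rails pod serving /mnt/app is running.
minikube start --vm-driver=virtualbox
minikube mount $PWD:/mnt/app &
curl -o /dev/null -s -w '%{time_total}\n' http://$(minikube ip):30000/
```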
Anything else we need to know:
My setup might not be similar to what others do, and while unlikely, it could be the cause of all my problems. Here's a gist of my Dockerfile and k8s config file.
Notice how the image is "empty" and only loads the Gemfile; then, when the image gets loaded into the pod, a volume from the host is mounted over it. That allows me to develop on my host, in the same folder as all my other projects, while running everything through minikube.
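A sketch of that pattern (all names are hypothetical; the real manifests are in the gist):

```shell
# The image ships little more than the Gemfile; a hostPath volume pointing at
# the 9p/vboxsf mount inside the VM shadows the image's app directory.
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rails-dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rails-dev
  template:
    metadata:
      labels:
        app: rails-dev
    spec:
      containers:
        - name: app
          image: my-rails-image:dev
          workingDir: /app
          command: ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
          volumeMounts:
            - name: src
              mountPath: /app      # host code mounted over the image's path
      volumes:
        - name: src
          hostPath:
            path: /mnt/app         # the 9p mount inside the minikube VM
EOF
```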
Let me know if you need extra information, I'd be glad to help!