Fairly distribute disk I/O #9406
Comments
The initial solve for this was done in #9440; we are currently monitoring.
@aledbf @Furisto is there anything outstanding for cgroup v1 or v2 that you can think of as it pertains to IO limiting?
Would it make sense to document cgroup support here for self-hosted?
@kylos101 IO limiting with cgroup v2 does not really work yet. The best we can say is that if you want IO limiting for self-hosted, you need a system with cgroup v1.
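For readers less familiar with the difference: cgroup v1 throttles I/O through per-device files under the blkio controller, while cgroup v2 uses a single io.max file in the unified hierarchy. Below is a minimal sketch of where such limits would be written; the cgroup path, device numbers, and the 100 MiB/s figure are illustrative and not ws-daemon's actual implementation.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cgroupV2 reports whether the unified (v2) hierarchy is mounted,
// by checking for cgroup.controllers at the cgroup root.
func cgroupV2() bool {
	_, err := os.Stat("/sys/fs/cgroup/cgroup.controllers")
	return err == nil
}

// limitWriteBPS writes a write-bandwidth limit for one block device
// into the given cgroup directory. devMajMin is e.g. "8:0", bps is
// bytes per second. For v1 the directory must be under the blkio
// controller mount; for v2 it is a directory in the unified hierarchy.
func limitWriteBPS(cgroupDir, devMajMin string, bps int64) error {
	if cgroupV2() {
		// cgroup v2: single io.max file, format "MAJ:MIN wbps=<bytes>".
		line := fmt.Sprintf("%s wbps=%d", devMajMin, bps)
		return os.WriteFile(filepath.Join(cgroupDir, "io.max"), []byte(line), 0o644)
	}
	// cgroup v1: per-device throttle file, format "MAJ:MIN <bytes>".
	line := fmt.Sprintf("%s %d", devMajMin, bps)
	return os.WriteFile(filepath.Join(cgroupDir, "blkio.throttle.write_bps_device"), []byte(line), 0o644)
}

func main() {
	// Hypothetical cgroup path and device; adjust for your node.
	err := limitWriteBPS("/sys/fs/cgroup/kubepods/burstable/pod-xyz", "8:0", 100<<20) // 100 MiB/s
	if err != nil {
		fmt.Fprintln(os.Stderr, "setting IO limit:", err)
	}
}
```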
Thanks for the heads up, @Furisto! I've removed this from scheduled work for now, as we've resolved the SaaS issue with #9440. I think we need to consider the business value of cgroup v2. Given that IO limit and CPU limit are both not working as expected for cgroup v2, I wonder if we can do a skateboard of this. CC: @csweichel @atduarte for awareness
Yes, that's right. It's working. |
Bug description
We tried to limit disk I/O here, in ws-daemon, but it is not currently working.
Steps to reproduce
Run a workload that demands fast reads or writes; you'll see speeds in excess of 300 MiB/s. This can starve other workspaces (on the same node) of disk I/O.
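For example, here is a rough way to generate sustained write load from inside a workspace and report the achieved throughput; the file and chunk sizes are arbitrary, and a tool like fio or dd would serve the same purpose.

```go
package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	const chunk = 4 << 20 // 4 MiB per write
	const chunks = 256    // 256 * 4 MiB = 1 GiB in total

	// Write into the current directory so the load hits the workspace disk.
	f, err := os.CreateTemp(".", "iotest-*.bin")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	defer f.Close()

	buf := make([]byte, chunk)
	start := time.Now()
	for i := 0; i < chunks; i++ {
		if _, err := f.Write(buf); err != nil {
			panic(err)
		}
	}
	// Flush to disk so the measurement is not just the page cache.
	if err := f.Sync(); err != nil {
		panic(err)
	}
	elapsed := time.Since(start)
	totalMiB := float64(chunks * chunk / (1 << 20))
	fmt.Printf("wrote %.0f MiB in %s (%.0f MiB/s)\n", totalMiB, elapsed, totalMiB/elapsed.Seconds())
}
```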
Workspace affected
n/a
Expected behavior
Workloads should have their disk I/O limited, to some extent, so that the node and other workspaces are not starved.
Example repository
No response
Anything else?
We are experimenting to see how well this works.
Also, getting the above to work may help explain why our current IO limiter does not work.
Ultimately, though, we need a solution, and it must fit nicely with workspace classes (a rough sketch of what per-class limits could look like follows below).
CC: @aledbf @Furisto @atduarte
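To make the workspace-classes point concrete, per-class I/O limits might look something like the sketch below; the type, field names, class names, and numbers are hypothetical and not Gitpod's actual configuration schema.

```go
package main

import "fmt"

// IOLimits captures hypothetical per-class I/O throttles.
type IOLimits struct {
	WriteBytesPerSecond int64
	ReadBytesPerSecond  int64
}

// classLimits maps a workspace class name to its I/O limits.
// Class names and numbers are made up for illustration.
var classLimits = map[string]IOLimits{
	"default": {WriteBytesPerSecond: 100 << 20, ReadBytesPerSecond: 200 << 20},
	"large":   {WriteBytesPerSecond: 250 << 20, ReadBytesPerSecond: 500 << 20},
}

func main() {
	for class, lim := range classLimits {
		fmt.Printf("%s: write %d MiB/s, read %d MiB/s\n",
			class, lim.WriteBytesPerSecond>>20, lim.ReadBytesPerSecond>>20)
	}
}
```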