[ws-daemon] Fix CPU limit annotation #9479
Conversation
/hold until we figure out why we reverted it last time.
// if we didn't get the max bandwidth, but were throttled last time
// and there's still some bandwidth left to give, let's act as if it had
// never spent any CPU time and assume the workspace will spend its
// entire bandwidth at once.
var burst bool
if totalBandwidth < d.TotalBandwidth && ws.Throttled() {
	limit = d.BurstLimiter.Limit(ws.Usage())
	limiter := d.BurstLimiter
	if w := wsidx[id]; w.BaseLimit > 0 {
shouldn't this be w.BurstLimiter > 0?
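For context, here is a minimal, self-contained Go sketch of the burst decision the snippet above implements. The types and limiter values are simplified stand-ins for the real ws-daemon code, an illustration of the logic rather than the actual implementation:

```go
package main

import "fmt"

// Limiter maps a workspace's current CPU usage to a bandwidth limit.
type Limiter interface {
	Limit(usage int64) int64
}

// fixedLimiter always returns the same limit, regardless of usage.
type fixedLimiter struct{ limit int64 }

func (f fixedLimiter) Limit(usage int64) int64 { return f.limit }

// Workspace records usage and whether it hit its limit last cycle.
type Workspace struct {
	usage     int64
	throttled bool
}

func (w *Workspace) Usage() int64    { return w.usage }
func (w *Workspace) Throttled() bool { return w.throttled }

// distributor hands out CPU bandwidth across workspaces on a node.
type distributor struct {
	TotalBandwidth int64
	DefaultLimiter Limiter
	BurstLimiter   Limiter
}

// limitFor picks the burst limit when the node still has spare bandwidth
// and the workspace was throttled in the previous cycle; otherwise it
// falls back to the default limit.
func (d *distributor) limitFor(ws *Workspace, totalBandwidth int64) (limit int64, burst bool) {
	limit = d.DefaultLimiter.Limit(ws.Usage())
	if totalBandwidth < d.TotalBandwidth && ws.Throttled() {
		limit = d.BurstLimiter.Limit(ws.Usage())
		burst = true
	}
	return limit, burst
}

func main() {
	d := &distributor{
		TotalBandwidth: 12000,
		DefaultLimiter: fixedLimiter{limit: 2000},
		BurstLimiter:   fixedLimiter{limit: 6000},
	}
	// This workspace was throttled last cycle and the node is only using
	// 8000 of its 12000 units, so it is granted the burst limit.
	ws := &Workspace{usage: 1900, throttled: true}
	limit, burst := d.limitFor(ws, 8000)
	fmt.Printf("limit=%d burst=%v\n", limit, burst) // limit=6000 burst=true
}
```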
@utam0k it was reverted because it was breaking CPU limiting for all workspaces. CPU limiting was not working at all with this change; it is not clear why.
@utam0k my recollection is that it did not work with cgroup v2 and gen34.
@utam0k I just updated #9407 slightly, my apologies for being unclear.
The problem was that cpu limiting was not working effectively after this PR was deployed to production. While we observed that CPU limiting was not working, I do not recall if this annotation was set on workspaces. @Furisto do you recall?
More than one workspace was getting 6 cores, and the cgroup settings reflected this, confirming that we were allocating too much.
Yes
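For anyone retracing that observation, here is a small sketch, assuming cgroup v2, of reading a cgroup's cpu.max to confirm the limit that was actually applied. The path and helper are illustrative assumptions, not part of ws-daemon; real workspace cgroup paths depend on the node's layout:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"
)

// readCPUMax parses a cgroup v2 cpu.max file ("<quota> <period>" or
// "max <period>") and returns the limit in cores (0 means unlimited).
func readCPUMax(cgroupDir string) (float64, error) {
	raw, err := os.ReadFile(filepath.Join(cgroupDir, "cpu.max"))
	if err != nil {
		return 0, err
	}
	fields := strings.Fields(string(raw))
	if len(fields) != 2 {
		return 0, fmt.Errorf("unexpected cpu.max content: %q", raw)
	}
	if fields[0] == "max" {
		return 0, nil // no quota set
	}
	quota, err := strconv.ParseFloat(fields[0], 64)
	if err != nil {
		return 0, err
	}
	period, err := strconv.ParseFloat(fields[1], 64)
	if err != nil {
		return 0, err
	}
	return quota / period, nil
}

func main() {
	// Illustrative path; substitute the workspace container's cgroup directory.
	cores, err := readCPUMax("/sys/fs/cgroup")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("cpu limit: %.2f cores\n", cores)
}
```

A workspace unexpectedly reporting 6 cores here, as described above, would confirm the over-allocation on the cgroup side.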
I was able to reproduce this once during my trial and error. However, the dispatcher did not seem to be fully operational at the time, so the recovery may not have been due to reverting the PR but to ws-daemon being restarted around then. I will investigate a little more. However, there are many riddles...
Flipping this to draft, @utam0k, as you are still investigating.
Please close this PR for now while I investigate this branch.
Description
Fix CPU limit annotation.
I annotated a workspace pod manually and confirmed that it worked.
Related Issue(s)
Fixes #9407
How to test
Annotate as follows and confirm that cpu.max of the cgroup is set. Note: you have to enable the CPU limit configuration in ws-daemon.
$ kubectl annotate --overwrite pods $ws-pod gitpod.io/cpuLimit=50000m
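To make the expected effect concrete: the annotation value is a Kubernetes CPU quantity (50000m is 50 cores), and cgroup v2 cpu.max expresses the same limit as a quota/period pair. The sketch below shows one way such a value could be converted; the annotation key matches the command above, but the helper and the 100ms period are illustrative assumptions, not ws-daemon's actual code:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

const cpuLimitAnnotation = "gitpod.io/cpuLimit"

// cpuMaxFromAnnotation converts a Kubernetes CPU quantity (e.g. "50000m")
// into the "<quota> <period>" format of cgroup v2 cpu.max.
func cpuMaxFromAnnotation(annotations map[string]string, periodUS int64) (string, error) {
	v, ok := annotations[cpuLimitAnnotation]
	if !ok || v == "" {
		return "", fmt.Errorf("annotation %s not set", cpuLimitAnnotation)
	}
	q, err := resource.ParseQuantity(v)
	if err != nil {
		return "", fmt.Errorf("cannot parse %q: %w", v, err)
	}
	// MilliValue returns millicores; quota in microseconds = cores * period.
	quota := q.MilliValue() * periodUS / 1000
	return fmt.Sprintf("%d %d", quota, periodUS), nil
}

func main() {
	annotations := map[string]string{cpuLimitAnnotation: "50000m"}
	out, err := cpuMaxFromAnnotation(annotations, 100000)
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // 5000000 100000, i.e. 50 cores
}
```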
Release Notes
Documentation