Limit resources based on workspace class #11374
Conversation
started the job as gitpod-build-fo-class-daemon.1 because the annotations in the pull request description changed
force-pushed from 81a3b15 to 491cd95
@Furisto what are your thoughts about limiting disk IO bandwidth? I ask because on XL (large) nodes we "could" grant the workspaces a higher bandwidth, since there are fewer workspaces and more available bandwidth. For now, I think we should keep this as a 🛹 and continue to impose the same disk IO bandwidth limit as we do now, regardless of the node type. But I am interested in your thoughts. 🤔 😄 Further, even when we go to PVC, we could initially impose the same disk IO bandwidth limit, and later experiment with removing the disk IO bandwidth limit for …
Yes, that is the plan. For the initial implementation I only included CPU, but in the future we can make other limits dependent on the workspace class as well, such as disk IO or egress.
👍
/hold Why? This is not protected by a feature flag yet, and it would be ideal to have one. @Furisto is starting this work and has socialized it with @easyCZ. cc: @sagor999, this is something we'll want to include as part of …
@Furisto as a heads up, there's a conflict with …
force-pushed from 1596b85 to 7dd8bf6
/werft run
👍 started the job as gitpod-build-fo-class-daemon.26
}

func (a annotationLimiter) Limit(wsh *WorkspaceHistory) (Bandwidth, error) {
	value, ok := wsh.LastUpdate.Pod.Annotations[a.Annotation]
Why are we storing the Pod and not only the annotations?
It was just convenient to do, and having the Pod object around can be useful when we want to do limiting based on other attributes besides annotations.
Let's do that in the future. Please try to store only what we need now. The Pod object can be expensive to keep around (annotations, status, env vars, etc.).
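For illustration, a minimal sketch of the leaner shape suggested here, keeping only the annotations instead of the whole Pod. The workspaceUpdate type, the Bandwidth definition, and the parsing are assumptions for this sketch, not the PR's actual code.

package daemon

import (
	"fmt"
	"strconv"
)

// Bandwidth is a hypothetical stand-in for the real type used by the limiter.
type Bandwidth int64

// workspaceUpdate carries only what the limiter needs: the pod annotations.
type workspaceUpdate struct {
	Annotations map[string]string
}

type annotationLimiter struct {
	Annotation string
}

// Limit reads the bandwidth limit from a single annotation, without
// holding on to the full Pod object (status, env vars, etc.).
func (a annotationLimiter) Limit(u workspaceUpdate) (Bandwidth, error) {
	value, ok := u.Annotations[a.Annotation]
	if !ok {
		return 0, fmt.Errorf("annotation %s not found", a.Annotation)
	}
	n, err := strconv.ParseInt(value, 10, 64)
	if err != nil {
		return 0, fmt.Errorf("cannot parse %s: %w", a.Annotation, err)
	}
	return Bandwidth(n), nil
}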
	CPU              string `json:"cpu"`
	Memory           string `json:"memory"`
	EphemeralStorage string `json:"ephemeral-storage"`
	Storage          string `json:"storage,omitempty"`
}

type ResourceLimitConfiguration struct {
	CPU *CpuResourceLimit `json:"cpu"`
Any reason why we don't set omitempty on all fields?
I just kept it consistent with the previous behavior. It is also nice not to have to type out the fields when editing the configmap in e.g. preview environments.
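A quick illustration of what omitempty changes, using string fields like the ones above (the limits type here is purely for demonstration): a zero-value field tagged omitempty disappears from the marshaled output, so leaving the tag off keeps every key visible when hand-editing the configmap.

package main

import (
	"encoding/json"
	"fmt"
)

type limits struct {
	CPU    string `json:"cpu"`              // emitted even when empty
	Memory string `json:"memory,omitempty"` // dropped entirely when empty
}

func main() {
	b, _ := json.Marshal(limits{})
	fmt.Println(string(b)) // prints {"cpu":""}
}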
force-pushed from 050783d to a3d6f4e
/werft run
👍 started the job as gitpod-build-fo-class-daemon.28
/unhold
Description
This PR adds support for CPU limiting based on the workspace class of the workspace. The configuration for each workspace class contains a CPU limiting section which describes the minimal resources the workspace gets and the burst limit. These values are set as annotations on the workspace pod; ws-daemon then uses these annotations to distribute CPU resources.
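As a rough sketch of the flow described above: ws-manager stamps the class limits onto the pod as annotations, and ws-daemon reads them back. The two annotation keys are the ones used in this PR (see "How to test" below); the helper, the cpuLimits type, and the assumption that the values are integer millicores are illustrative, not the actual ws-daemon code.

package main

import (
	"fmt"
	"strconv"
)

const (
	annotationCPUMinLimit   = "gitpod.io/cpuMinLimit"
	annotationCPUBurstLimit = "gitpod.io/cpuBurstLimit"
)

// cpuLimits is a hypothetical container for the per-class CPU budget,
// assumed here to be millicores: the guaranteed share and the burst ceiling.
type cpuLimits struct {
	Min   int64
	Burst int64
}

// cpuLimitsFromAnnotations reads the per-class limits stamped onto the
// workspace pod, falling back to the given defaults when an annotation
// is missing or malformed.
func cpuLimitsFromAnnotations(annotations map[string]string, defaults cpuLimits) cpuLimits {
	out := defaults
	if v, ok := annotations[annotationCPUMinLimit]; ok {
		if n, err := strconv.ParseInt(v, 10, 64); err == nil {
			out.Min = n
		}
	}
	if v, ok := annotations[annotationCPUBurstLimit]; ok {
		if n, err := strconv.ParseInt(v, 10, 64); err == nil {
			out.Burst = n
		}
	}
	return out
}

func main() {
	annotations := map[string]string{
		annotationCPUMinLimit:   "2000",
		annotationCPUBurstLimit: "6000",
	}
	fmt.Printf("%+v\n", cpuLimitsFromAnnotations(annotations, cpuLimits{Min: 1000, Burst: 4000}))
}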
Related Issue(s)
Fixes #10981
How to test
Start a workspace and verify that the workspace pod carries the gitpod.io/cpuMinLimit and gitpod.io/cpuBurstLimit annotations. Then generate CPU load inside the workspace with stress-ng --cpu 4 and observe that usage is limited accordingly.
Release Notes
Note
This relies on a change in the werft files, which is why the main build failed. The custom build with the updated werft files (below) is, however, successful.
Werft options: