workspaces get sluggish as you approach 9GB or the pod memory limit #11156
Comments
Testing on XL workspaces does not reveal sluggishness when approaching the memory limit.
Default workspaces get extremely sluggish if a lot of memory is consumed. The reason is that their cgroup values differ from those of XL workspaces: default workspaces have a finite memory.high threshold, whereas on XL workspaces memory.high is unlimited. Once a workspace gets over the memory.high limit, performance is abysmal, which also explains why we do not see this behavior in XL workspaces. Looking at the memory PSI of the workspace confirms that memory is the culprit:
There is no pressure at all on cpu and io.
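For reference, a minimal sketch of how the cgroup v2 thresholds and pressure stall information (PSI) can be checked from inside a workspace; the cgroup path derivation is an assumption and may differ depending on how the workspace container is set up:

```sh
# Resolve this process's cgroup v2 directory (assumes the unified hierarchy is
# mounted at /sys/fs/cgroup; the path may look different inside a container).
CG="/sys/fs/cgroup$(awk -F: '$1 == "0" {print $3}' /proc/self/cgroup)"

# memory.high is the throttling threshold, memory.max the hard limit.
# "max" means unlimited, which is what XL workspaces report for memory.high.
cat "$CG/memory.high" "$CG/memory.max"

# PSI: rising some/full averages here, while cpu.pressure and io.pressure stay
# near zero, point at memory as the culprit.
cat "$CG/memory.pressure" "$CG/cpu.pressure" "$CG/io.pressure"
```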
This behavior is described here and in more detail here. The relevant parts from the KEP are:
The value of the throttling factor is 0.8 by default and can be influenced through the Kubelet configuration:
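The exact snippet from the original comment was not preserved; as a sketch, the factor described in the KEP corresponds to the memoryThrottlingFactor field of the Kubelet configuration, which only takes effect when the MemoryQoS feature gate is enabled. Per the KEP, memory.high is derived from the container's memory limit scaled by this factor:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# memory.high is only set by the kubelet when the MemoryQoS feature gate is on.
featureGates:
  MemoryQoS: true
# Fraction of the container memory limit at which memory.high is placed.
memoryThrottlingFactor: 0.8
```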
Bug description
When a workspace's memory consumption approaches 9GB (or the pod memory limit), the IDE becomes slow or inaccessible.
Steps to recreate
Run this internal sample program to recreate.
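The internal sample is not linked here; as a hypothetical stand-in, anything that gradually holds memory until the workspace approaches the 9GB mark should trigger the same throttling, for example:

```sh
# Hypothetical reproduction: let a single process buffer ~9GB from /dev/zero.
# Because the input contains no newlines, tail must keep all of it in memory,
# so workspace memory use climbs toward the threshold.
head -c 9G /dev/zero | tail > /dev/null
```

While this runs, the memory.pressure averages from the cgroup sketch above should climb once memory.high is exceeded.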
Workspace affected
No response
Expected behavior
The workspace would ideally not become sluggish.
Example repository
No response
Anything else?
Next steps:
Recreate the behavior and determine the cause. Is it simply exceeding 9GB, regardless of whether the pod limit is 12GB or 16GB? Our XL nodes have a memory limit of 16GB; do they encounter the same trouble at 9GB or 12GB?
Internal Slack reference