[prebuilds] Many prebuilds fail with "OutOfmemory: Pod Node didn't have enough resource: memory" #8594
Here's the number of prebuilds that failed with "OutOfmemory ..." over the last few days (query: count(id) grouped by day):
This is related to #8238. For regular workspaces we retry until we get a node that has enough memory. On the positive side: the k8s fix for it is already in the works and will hopefully be released soon in a patch version of 1.23 🙏
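The retry workaround described above can be sketched roughly as follows. This is a minimal illustration of the idea, not Gitpod's actual scheduling code; the `Node` class, the memory figures, and the `find_node_with_memory` helper are all hypothetical:

```python
import time


class Node:
    """Hypothetical cluster node tracking total and used memory, in MiB."""

    def __init__(self, name, total_mib, used_mib):
        self.name = name
        self.total_mib = total_mib
        self.used_mib = used_mib

    def free_mib(self):
        return self.total_mib - self.used_mib


def find_node_with_memory(nodes, required_mib, retries=5, backoff_s=0.0):
    """Retry until some node has enough free memory, mirroring the
    'retry until we get a node' workaround used for regular workspaces."""
    for _ in range(retries):
        for node in nodes:
            if node.free_mib() >= required_mib:
                return node
        # Wait for other pods to terminate and free up memory, then retry.
        time.sleep(backoff_s)
    raise RuntimeError(
        f"OutOfMemory: no node had {required_mib} MiB free after {retries} retries"
    )
```

The point of the loop is that an `OutOfMemory` admission failure is transient: a node that is full now may have capacity moments later, so retrying (rather than failing the workspace outright) papers over the scheduler race until the real Kubernetes fix lands.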
This is still happening regularly, and actually twice as often since this week.
We hit the same issue today.
The real fix for this has not been merged into Kubernetes yet. What we have is a workaround, though I am not entirely sure whether it works for prebuilds, as we have primarily been testing it with regular workspaces. 🤔
@kylos101 Scheduled the issue after seeing these numbers.
@atduarte the final fix for the issue, in Kubernetes v1.23.6, is scheduled for Tuesday the 19th.
From what I saw on the Kubernetes releases page, they haven't released the fix yet. Do we have any other alternative?
@atduarte the fix is already deployed in gen42.
Since the actual fix was deployed and is in prod, I am going to close this issue now.
Latest numbers related to the issue (gen42 was deployed on the 22nd, and the traffic shift finished on the 23rd):
See also #8592