From d48543b269a4e796a7cc1e4124700e07b1099644 Mon Sep 17 00:00:00 2001
From: Himanshu Sharma <79965161+himanshu-kun@users.noreply.github.com>
Date: Tue, 3 Oct 2023 10:27:04 +0530
Subject: [PATCH] Update cluster-autoscaler/FAQ.md

Co-authored-by: Rishabh Patel <66425093+rishabh-11@users.noreply.github.com>
---
 cluster-autoscaler/FAQ.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/cluster-autoscaler/FAQ.md b/cluster-autoscaler/FAQ.md
index e2d1aa3966a8..739963a1f14b 100644
--- a/cluster-autoscaler/FAQ.md
+++ b/cluster-autoscaler/FAQ.md
@@ -1128,7 +1128,7 @@ Case:
 then autoscaler will early backoff and try to remove the node, but the node removal won't succeed as currently CA is not allowed to perform any scale-down/delete node operation for a rolling update node-grp.
 In the above scenario, CA won't try to scale-up any other node-grp for `podA` as it still calculates `node1` to be a possible candidate to join(`ResourceExhausted` errors are recoverable errors).
-Scale up would still work for any other new pods which can't fit on upcoming `node1`
+Scale-up would still work for any new pods that can't fit on upcoming `node1`
 but can fit on some other node group.
 If you are sure that the capacity won't recover soon, then kindly re-create `podA`. This will allow CA to see it as a new pod and allow scale-up.