From 6b1d4fe1c8da335b1957a7689334949cfeaf9757 Mon Sep 17 00:00:00 2001
From: Himanshu Sharma
Date: Tue, 3 Oct 2023 13:59:35 +0530
Subject: [PATCH] updated doc

---
 cluster-autoscaler/FAQ.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/cluster-autoscaler/FAQ.md b/cluster-autoscaler/FAQ.md
index 050a6e1c4ec4..32312fb3c030 100644
--- a/cluster-autoscaler/FAQ.md
+++ b/cluster-autoscaler/FAQ.md
@@ -1130,7 +1130,7 @@ then autoscaler will early backoff and try to remove the node, but the node remo
 
 In the above scenario, CA won't try to scale-up any other node-grp for `podA` as it still calculates `node1` to be a possible candidate to join(`ResourceExhausted` errors are recoverable errors). Scale-up would still work for any new pods that can't fit on upcoming `node1` but can fit on some other node group.
 
-If you are sure that the capacity won't recover soon, then kindly re-create `podA`. This will allow CA to see it as a new pod and allow scale-up in some other node-grp as `ng-A` would be in backoff.
+The scale-up would stay blocked for such pod(s) for at most `max-node-provision-time`, because after that the node won't be considered an upcoming node.
 
 Refer issue https://github.com/gardener/autoscaler/issues/154 to track changes made for early-backoff enablement
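For context, `max-node-provision-time` referenced in the patch above is a cluster-autoscaler startup flag that bounds how long a node is treated as "upcoming". A minimal sketch of where it might be set, assuming a typical Deployment-based cluster-autoscaler install (container name, image, and the 20m value are illustrative, not from the patch):

```yaml
# Illustrative fragment of a cluster-autoscaler Deployment pod spec.
# Only the flag in the last line relates to the patch; everything else
# is assumed boilerplate for a standard install.
spec:
  containers:
    - name: cluster-autoscaler
      image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.28.0
      command:
        - ./cluster-autoscaler
        - --cloud-provider=gce
        # After this duration, an unready/unregistered node is no longer
        # counted as upcoming, so scale-up for the blocked pod(s) can resume.
        - --max-node-provision-time=20m
```

With a shorter value, the blocked-scale-up window described in the FAQ change shrinks accordingly, at the cost of giving slow-provisioning nodes less time to register.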