Fix bug where a node that becomes ready after 2 mins can be treated as unready #3924
Conversation
/assign @MaciekPytel
} else if stillStarting := isNodeStillStarting(node); stillStarting && node.CreationTimestamp.Time.Add(MaxNodeStartupTime).Before(currentTime) {
	current.LongNotStarted++
} else if stillStarting {
} else if !ready && node.CreationTimestamp.Time.Add(MaxNodeStartupTime).After(currentTime) {
nit: I think with the new logic the order of conditions would be more readable if we handled all unready conditions one after another (like below). The logic is the same and I'm not feeling super strongly about this, it just looks more structured this way.
} else if ready {
current.Ready++
} else if node.CreationTimestamp.Time.Add(MaxNodeStartupTime).After(currentTime) {
current.NotStarted++
} else {
current.Unready++
}
Done
@@ -554,9 +554,7 @@ func (csr *ClusterStateRegistry) updateReadinessStats(currentTime time.Time) {
	current.Registered++
	if deletetaint.HasToBeDeletedTaint(node) {
		current.Deleted++
} else if stillStarting := isNodeStillStarting(node); stillStarting && node.CreationTimestamp.Time.Add(MaxNodeStartupTime).Before(currentTime) {
	current.LongNotStarted++
I think it's better to completely remove LongNotStarted from Readiness if it's unused anyway. It's confusing to keep it in places like the upcoming nodes calculation (I know it doesn't change anything since it cannot take a value other than 0, but it's likely to surprise someone less familiar with clusterstate).
Done
err := clusterstate.UpdateNodes([]*apiv1.Node{ng1_1, ng2_1}, nil, now)
assert.NoError(t, err)
assert.Equal(t, 1, clusterstate.GetClusterReadiness().Unready)
assert.Equal(t, 0, clusterstate.GetClusterReadiness().LongNotStarted)
You should assert NotStarted == 0. A non-zero LongNotStarted wouldn't have any practical implications (other than misreporting in the configmap and, possibly, metrics). A non-zero NotStarted would result in the unready node being treated as upcoming (as discussed offline).
Also, maybe assert the other readiness states too, and Upcoming == 0 (asserting Upcoming == 0 is skirting the definition of a unit test, but it is the most likely negative consequence of a bug in the Readiness calculation, so I think it's worth checking).
Yes, this was a mistake. Fixed.
@@ -768,57 +838,6 @@ func TestUpdateScaleUp(t *testing.T) {
	assert.Nil(t, clusterstate.scaleUpRequests["ng1"])
}
func TestIsNodeStillStarting(t *testing.T) { |
Not really part of this PR, but I noticed that there is no equivalent test for GetReadinessState (which has very similar logic, and you effectively use it as a replacement). Maybe instead of deleting this test, cut/paste it as a test for GetReadinessState (minus the recent/long part)?
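A table-driven test in the style the reviewer suggests could look like the self-contained sketch below. The `nodeCondition` type and `readinessState` helper are hypothetical simplifications standing in for `apiv1.NodeCondition` and the real GetReadinessState, which this sketch does not reproduce exactly:

```go
package main

import (
	"errors"
	"fmt"
)

// nodeCondition is a hypothetical stand-in for apiv1.NodeCondition.
type nodeCondition struct {
	Type   string
	Status string
}

// readinessState is a simplified sketch of the core check: a node is ready
// iff its Ready condition has status True; a missing condition is an error.
func readinessState(conds []nodeCondition) (bool, error) {
	for _, c := range conds {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, errors.New("no Ready condition found")
}

func main() {
	// Table-driven cases, mirroring the structure of TestIsNodeStillStarting.
	cases := []struct {
		desc  string
		conds []nodeCondition
		want  bool
	}{
		{"ready node", []nodeCondition{{"Ready", "True"}}, true},
		{"unready node", []nodeCondition{{"Ready", "False"}}, false},
		{"unknown readiness", []nodeCondition{{"Ready", "Unknown"}}, false},
	}
	for _, tc := range cases {
		got, err := readinessState(tc.conds)
		fmt.Printf("%s: got=%v want=%v err=%v\n", tc.desc, got, tc.want, err)
	}
}
```

Note there is no "recent"/"long" dimension here: readiness is derived purely from conditions, with node age handled separately by the caller.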
Added tests
assert.NoError(t, err)
assert.Equal(t, tc.expectedResult, isReady)
})
t.Run("long "+tc.desc, func(t *testing.T) {
I think that part doesn't make sense, since GetReadinessState() doesn't care about the age of the nodes. That's what I meant by "(minus the recent/long part)" in my previous comment (sorry for not being clearer).
Done
Force-pushed from f9faf1c to 5fed757
I think MaxStatusSettingDelayAfterCreation is no longer used. Please remove it.
}, fakeLogRecorder, newBackoff())
err := clusterstate.UpdateNodes([]*apiv1.Node{ng1_1, ng2_1}, nil, now)
assert.NoError(t, err)
assert.Equal(t, 1, clusterstate.GetClusterReadiness().NotStarted)
nit: Below you assert on both NotStarted and Ready. Maybe do it here too for consistency?
Done
return node
}
t.Run("recent "+tc.desc, func(t *testing.T) {
nit: remove "recent" from the description? It doesn't really apply in this context.
Done
/lgtm
Left a few nits. Feel free to remove the hold after addressing those.
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: MaciekPytel, vivekbagade. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing
Fix bug where a node that becomes ready after 2 mins can be treated as unready. Deprecated LongNotStarted.

In cases where node n1 would:
1) Be created at t=0min
2) Ready condition is true at t=2.5min
3) Not-ready taint is removed at t=3min
the ready node is counted as unready.

Tested cases after fix:
1) The case described above
2) Nodes not starting even after 15 mins are still treated as unready
3) Nodes created long ago that suddenly become unready are counted as unready
/lgtm
Thanks, that was a tricky one.
kubernetes/autoscaler#3924 changed Cluster Autoscaler behavior to mark nodes as unhealthy only if at least 15 minutes have passed since node creation time.
…24-upstream-cluster-autoscaler-release-1.20 Automated cherry pick of #3924: Fix bug where a node that becomes ready after 2 mins can be
…ed to 1.20 in kubernetes#4319 The backport included unit tests using a function that changed signature after 1.20. This was not detected before merging because CI is not running correctly on 1.20.
…ick-of-#3924-upstream-cluster-autoscaler-release-1.20 Automated cherry pick of kubernetes#3924: Fix bug where a node that becomes ready after 2 mins can be
* Fix cluster-autoscaler clusterapi sample manifest. This commit fixes the sample manifest of the cluster-autoscaler clusterapi provider. (cherry picked from commit a5fee21)
* Adding functionality to cordon the node before destroying it. This helps the load balancer remove the node from healthy hosts (ALB does have this support). This won't fix the issue of 502s completely, as the node has to live for some time even after cordoning to serve in-flight requests, but the load balancer can be configured to remove cordoned nodes from the healthy host list. This feature is enabled by the cordon-node-before-terminating flag, with default value false to retain existing behavior.
* Set maxAsgNamesPerDescribe to the new maximum value. While this was previously effectively limited to 50, `DescribeAutoScalingGroups` now supports fetching 100 ASGs per call in all regions, matching what's documented: https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_DescribeAutoScalingGroups.html
  ```
  AutoScalingGroupNames.member.N
  The names of the Auto Scaling groups. By default, you can only specify up to 50 names. You can optionally increase this limit using the MaxRecords parameter.
  MaxRecords
  The maximum number of items to return with this call. The default value is 50 and the maximum value is 100.
  ```
  Doubling this halves API calls on large clusters, which should help to prevent throttling.
* Break out unmarshal from GenerateEC2InstanceTypes. Refactor to allow for optimisation.
* Optimise GenerateEC2InstanceTypes unmarshal memory usage. The pricing JSON for us-east-1 is currently 129MB. Fetching this into memory and parsing it results in a large memory footprint on startup, and can lead to the autoscaler being OOMKilled. Change the ReadAll/Unmarshal logic to a stream decoder to significantly reduce memory use.
* use aws sdk to find region
* update readme
* Update cluster-autoscaler/cloudprovider/aws/README.md (Co-authored-by: Guy Templeton <[email protected]>)
* Merge pull request kubernetes#4274 from kinvolk/imran/cloud-provider-packet-fix: Cloud provider [Packet] fixes
* Fix bug where a node that becomes ready after 2 mins can be treated as unready; deprecated LongNotStarted. In cases where node n1 would: 1) be created at t=0min, 2) have its Ready condition become true at t=2.5min, 3) have its not-ready taint removed at t=3min, the ready node is counted as unready. Tested cases after the fix: 1) the case described above, 2) nodes not starting even after 15 mins are still treated as unready, 3) nodes created long ago that suddenly become unready are counted as unready.
* Improve misleading log (Signed-off-by: Sylvain Rabot <[email protected]>)
* dont proactively decrement azure cache for unregistered nodes
* Cluster Autoscaler: fix unit tests after kubernetes#3924 was backported to 1.20 in kubernetes#4319. The backport included unit tests using a function that changed signature after 1.20. This was not detected before merging because CI is not running correctly on 1.20.
* Cluster Autoscaler: backport Github Actions CI to 1.20 (kubernetes#4366)
* annotate fakeNodes so that cloudprovider implementations can identify them if needed
* move annotations to cloudprovider package
* fix 1.19 test
* remove flaky test that's removed in master
* Cluster Autoscaler 1.20.1
* Make arch-specific releases use separate images instead of tags on the same image. This seems to be the current convention in k8s.
* Cluster Autoscaler: add arch-specific build targets to .gitignore
* CA - AWS - Instance List Update 03-10-21 - 1.20 release branch
* CA - AWS - Instance List Update 29-10-21 - 1.20 release branch
* Cluster-Autoscaler update AWS EC2 instance types with g5, m6 and r6
* CA - AWS Instance List Update - 13/12/21 - 1.20
* Merge pull request kubernetes#4497 from marwanad/add-more-azure-instance-types: add more azure instance types
* Cluster Autoscaler 1.20.2
* Add `--feature-gates` flag to support scale up on volume limits (CSI migration enabled) (Signed-off-by: ialidzhikov <[email protected]>)
* CA - AWS Cloud Provider - 1.20 Static Instance List Update 02-06-2022
* Cluster Autoscaler - 1.20.3 release
* sync_file updates & other changes
* Updating vendor against [email protected]:kubernetes/kubernetes.git:e3de62298a730415c5d2ab72607ef6adadd6304d (e3de622)
* fixed some declaration errors

Co-authored-by: Kubernetes Prow Robot <[email protected]>
Co-authored-by: Hidekazu Nakamura <[email protected]>
Co-authored-by: atul <[email protected]>
Co-authored-by: Benjamin Pineau <[email protected]>
Co-authored-by: Adrian Lai <[email protected]>
Co-authored-by: darkpssngr <[email protected]>
Co-authored-by: Guy Templeton <[email protected]>
Co-authored-by: Vivek Bagade <[email protected]>
Co-authored-by: Sylvain Rabot <[email protected]>
Co-authored-by: Marwan Ahmed <[email protected]>
Co-authored-by: Jakub Tużnik <[email protected]>
Co-authored-by: GuyTempleton <[email protected]>
Co-authored-by: sturman <[email protected]>
Co-authored-by: Maciek Pytel <[email protected]>
Co-authored-by: ialidzhikov <[email protected]>