🐛 Retry Node delete when CCT is locked #9570
Conversation
/test

@killianmuldoon: The following commands are available to trigger optional jobs:

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/area e2e-testing
/test pull-cluster-api-e2e-full-main
Signed-off-by: killianmuldoon <[email protected]>

Force-pushed from 67e3f45 to fed54bf
/test pull-cluster-api-e2e-full-main

1 similar comment

/test pull-cluster-api-e2e-full-main

Looks like it hit one of the other flakes :facepalm:

/test pull-cluster-api-e2e-full-main
Nice finding

/lgtm

LGTM label has been added. Git tree hash: 75d37212322a800107b91c725d0cdf0dd6382c98
Nice catch & fix. I think something still doesn't add up though. The drainNode case should also happen on release-1.5, and the deleteNode case on release-1.5 & release-1.4. Anyway, it makes sense to merge this fix. Please take a closer look at how far we want to backport which part of it.

/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: sbueringer

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
Agreed. This is being tracked in #9522. I think there's some underlying change in the way upgrades are working - maybe related to MachinePools - which is bringing out these underlying errors. I still don't have a good theory as to what it is, though. This PR should solve one of the three underlying causes, but it doesn't address why these issues have just started popping up now.

Sounds good. WDYT about backporting this to release-1.5 & release-1.4?
Will try to use the bot and come back to it tomorrow if there's an issue.

/cherry-pick release-1.5
/cherry-pick release-1.4
@killianmuldoon: new pull request created: #9582

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@killianmuldoon: new pull request created: #9583

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This PR is currently missing an area label, which is used to identify the modified component when generating release notes. Area labels can be added by org members by writing /area <component> in a comment. Please see the labels list for possible areas. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Retry Node deletion when the ClusterCacheTracker is currently locked, so that the operation is retried instead of the error being ignored.

Currently, ignoring this error means the Node can stick around indefinitely in the API server. This seems to be the root cause of an issue where the self-hosted cluster tests fail. I'm still collecting information to understand whether it's common across all the failure cases.
Related to #9522
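
For context, here is a minimal sketch in Go of the pattern described above. It assumes the workload-cluster client is obtained through the ClusterCacheTracker (`remote.ClusterCacheTracker.GetClient`), which returns `remote.ErrClusterLocked` while another worker holds the per-cluster lock. The `MachineReconciler`, `deleteNode`, and `reconcileDelete` names and the 20-second requeue interval are illustrative, not the exact code changed by this PR; the point is that the locked-tracker error is surfaced and the reconcile is requeued, rather than the delete being treated as done.

```go
// Sketch only (not the exact diff from this PR): surface remote.ErrClusterLocked
// so the Machine is requeued and Node deletion is retried once the
// ClusterCacheTracker lock is released, instead of the error being swallowed.
package example

import (
	"context"
	"time"

	"github.com/pkg/errors"
	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/cluster-api/controllers/remote"
)

// MachineReconciler is a stand-in for the real Machine controller.
type MachineReconciler struct {
	Tracker *remote.ClusterCacheTracker
}

// deleteNode deletes the Node in the workload cluster. Errors from the
// ClusterCacheTracker (including ErrClusterLocked) are propagated instead of
// being treated as "nothing to do".
func (r *MachineReconciler) deleteNode(ctx context.Context, cluster *clusterv1.Cluster, name string) error {
	remoteClient, err := r.Tracker.GetClient(ctx, client.ObjectKeyFromObject(cluster))
	if err != nil {
		return errors.Wrap(err, "failed to get client for workload cluster")
	}

	node := &corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: name}}
	if err := remoteClient.Delete(ctx, node); err != nil && !apierrors.IsNotFound(err) {
		return errors.Wrap(err, "failed to delete Node")
	}
	return nil
}

// reconcileDelete requeues when the tracker is locked rather than letting the
// Node stick around indefinitely in the workload cluster's API server.
func (r *MachineReconciler) reconcileDelete(ctx context.Context, cluster *clusterv1.Cluster, nodeName string) (ctrl.Result, error) {
	if err := r.deleteNode(ctx, cluster, nodeName); err != nil {
		if errors.Is(err, remote.ErrClusterLocked) {
			// The tracker's per-cluster lock is held by another worker;
			// retry shortly instead of silently skipping the delete.
			return ctrl.Result{RequeueAfter: 20 * time.Second}, nil
		}
		return ctrl.Result{}, err
	}
	return ctrl.Result{}, nil
}
```

With this shape, a locked ClusterCacheTracker simply delays Node deletion by one requeue interval instead of dropping it, which is what allows the self-hosted upgrade tests to eventually see the old Node removed.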