Ignore detached nodes when doing validate cluster #11349
Conversation
Hi @rajatjindal. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
I'm curious why a detached node should be ignored during validation? The node and underlying instance should still be healthy, so why not consider it for validation?
Those nodes had no pods running on them, and we ended up deleting them manually from the AWS console. I do not understand the detached-node logic 100%; maybe @johngmyers can shed some light here. I think that from the AWS and kops perspective those nodes still existed, but from the k8s perspective the nodes were gone?
If a node is detached then it is on its way out of the cluster. A lack of being in the cluster is not a reason to hold up further disruption to the cluster.
/lgtm
/retest
To add more color: since detached nodes don't contribute to satisfying the IG target size check, the joined-to-cluster check is not necessary to ensure the IG has enough capacity.
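As a rough illustration of the logic being discussed, here is a minimal Go sketch of skipping detached instances during validation. The instance type and the validateGroup helper are invented for illustration and are not the actual kops API; the point is only that a detached instance counts toward neither the readiness checks nor the instance group's target size.

```go
// Hypothetical sketch (not actual kops code): validation ignores instances
// that are marked detached, since they no longer count toward the instance
// group's target size and are on their way out of the cluster.
package main

import "fmt"

// instance is a simplified stand-in for a cloud instance tracked by validation.
type instance struct {
	ID       string
	Detached bool // instance has been detached from its group during a rolling update
	Ready    bool // the corresponding Kubernetes node reports Ready
}

// validateGroup returns the validation failures for one instance group.
// Detached instances are skipped entirely.
func validateGroup(targetSize int, instances []instance) (failures []string) {
	attached := 0
	for _, i := range instances {
		if i.Detached {
			continue // detached: ignore for both capacity and readiness checks
		}
		attached++
		if !i.Ready {
			failures = append(failures, fmt.Sprintf("node %q is not ready", i.ID))
		}
	}
	if attached < targetSize {
		failures = append(failures, fmt.Sprintf("group has %d attached instances, wants %d", attached, targetSize))
	}
	return failures
}

func main() {
	instances := []instance{
		{ID: "i-aaa", Ready: true},
		{ID: "i-bbb", Ready: true},
		{ID: "i-ccc", Detached: true, Ready: false}, // would previously fail validation
	}
	fmt.Println(validateGroup(2, instances)) // prints [] — the detached instance no longer blocks validation
}
```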
Thinking about this further, we'll probably want to also exempt detached nodes from the node readiness and system-node-critical unready pod checks, at least for purposes of rolling update.
Can that be a separate PR, or should I make that change as part of this PR?
It could be separate, but the intent here is "don't let extra sick nodes hold up a rolling update" and there is value in getting all of the "sick node" validation failures taken care of at once. I have been thinking of a tighter integration between cluster validation and rolling update. There was previous work on ignoring "sick node" validation failures for nodes outside of the IG being updated. The next step would be to count sick nodes in the same IG against maxUnavailable, allowing the rolling update to proceed.
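To make the maxUnavailable idea above concrete, here is a small hedged sketch. The node type, Sick field, and remainingDisruptionBudget helper are hypothetical names, not kops code; the sketch only shows the accounting, where sick nodes in the group being updated consume part of the disruption budget instead of blocking the update outright.

```go
// Hypothetical sketch of the idea discussed above: during a rolling update,
// nodes in the instance group being updated that are already "sick" are
// counted against maxUnavailable rather than failing validation.
package main

import "fmt"

type node struct {
	Name string
	Sick bool // failing validation (not ready, unready system-node-critical pods, ...)
}

// remainingDisruptionBudget returns how many additional healthy nodes the
// rolling update may take down right now, given that sick nodes in the same
// group already consume part of the maxUnavailable budget.
func remainingDisruptionBudget(maxUnavailable int, group []node) int {
	sick := 0
	for _, n := range group {
		if n.Sick {
			sick++
		}
	}
	budget := maxUnavailable - sick
	if budget < 0 {
		return 0
	}
	return budget
}

func main() {
	group := []node{
		{Name: "node-1"},
		{Name: "node-2", Sick: true},
		{Name: "node-3"},
	}
	// With maxUnavailable=2 and one sick node, one more healthy node may be disrupted.
	fmt.Println(remainingDisruptionBudget(2, group)) // 1
}
```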
Hi @rifelpet, do you think it's reasonable to move ahead with the PR? If so, could you please help with the next steps? Thanks.
Yes, I agree with the proposed logic now.
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: rifelpet. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/retest
Thank you very much |
Ran into this issue while doing a rolling update on our cluster today. The rolling update was cancelled (for unrelated reasons) before it could complete.