
🐛 Consistent ordering for deletion priority #6300

Merged

Conversation

thedadams
Contributor

@thedadams thedadams commented Mar 11, 2022

If two machines have the same deletion priority, then on subsequent syncReplicas
calls different machines could be chosen for deletion. For example, with the Oldest or
Newest deletion priorities, machines that are already deleting and machines
without a node ref share the same priority, which can lead to more machines than
intended being deleted across different calls to syncReplicas.

That is, if a MachineSet is scaled down by 1, one machine is deleted, but a
second call to syncReplicas could then delete a machine with no node ref instead
of reselecting the machine that is already deleting.

What this PR does / why we need it:
Ensuring a consistent ordering for machines with the same deletion priority guarantees that the correct number of machines is deleted when scaling down a MachineDeployment.
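
A minimal, self-contained sketch of the idea (hypothetical types and priority values, not the actual Cluster API code): when two machines share a deletion priority, fall back to a stable key such as the name, so repeated calls always pick the same machines for deletion.

```go
package main

import (
	"fmt"
	"sort"
)

type machine struct {
	name     string
	priority int // higher means "delete first"
}

// sortForDeletion orders machines from highest to lowest deletion priority,
// breaking ties by name so the ordering is deterministic across calls.
func sortForDeletion(machines []machine) {
	sort.Slice(machines, func(i, j int) bool {
		if machines[i].priority == machines[j].priority {
			return machines[i].name < machines[j].name
		}
		return machines[i].priority > machines[j].priority
	})
}

func main() {
	ms := []machine{{"md-1-b", 2}, {"md-1-a", 2}, {"md-1-c", 1}}
	sortForDeletion(ms)
	fmt.Println(ms) // [{md-1-a 2} {md-1-b 2} {md-1-c 1}]
}
```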

Which issue(s) this PR fixes
Fixes #6299

@linux-foundation-easycla

linux-foundation-easycla bot commented Mar 11, 2022

CLA Signed

The committers listed above are authorized under a signed CLA.

  • ✅ login: thedadams / name: Donnie Adams (7dae429)

@k8s-ci-robot k8s-ci-robot added the cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. label Mar 11, 2022
@k8s-ci-robot
Contributor

Welcome @thedadams!

It looks like this is your first PR to kubernetes-sigs/cluster-api 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-sigs/cluster-api has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Mar 11, 2022
@k8s-ci-robot
Contributor

Hi @thedadams. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Mar 11, 2022
@thedadams thedadams force-pushed the machine-sort-delete-policy branch from aad2560 to 7dae429 Compare March 11, 2022 21:08
@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. and removed cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. labels Mar 11, 2022
@enxebre
Member

enxebre commented Mar 14, 2022

Thanks! Change seems reasonable to me.

That is, if a MachineSet is scaled down by 1, then one machine would get
deleted, but then a second call to syncReplicas could delete a machine with no
node ref instead of the machine that is already deleting.

I'm curious, have you actually seen this happening, i.e. two different machines being returned in different calls in the case of a tie? The change seems reasonable to me, but regardless I'm assuming there's already a hidden determinism preventing this from happening?

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Mar 14, 2022
@thedadams
Contributor Author

Yes, we have seen the behavior I described in the description: scale a deployment up by two and, while those machines are still provisioning, scale down by one. Because machines with a missing node ref and machines that are already deleting have the same priority (we use delete oldest by default), both provisioning machines end up deleted, and another machine gets provisioned once they are gone.
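
A small sketch of that failure mode, using hypothetical names and priorities rather than the real controller types: the same set of machines, listed in a different order on two reconciles and sorted only by priority, can surface a different machine at the front of the deletion list each time.

```go
package main

import (
	"fmt"
	"sort"
)

type machine struct {
	name     string
	priority int // higher means "delete first"
}

// byPriorityOnly sorts a copy of the slice by priority alone, mirroring a
// comparison with no tie-breaker.
func byPriorityOnly(ms []machine) []machine {
	out := append([]machine(nil), ms...)
	sort.Slice(out, func(i, j int) bool { return out[i].priority > out[j].priority })
	return out
}

func main() {
	// Two machines tied at the highest priority (e.g. one already deleting,
	// one still missing its node ref), plus one healthy machine. The same
	// machines arrive in a different order on the second reconcile.
	firstSync := []machine{{"already-deleting", 3}, {"no-node-ref", 3}, {"healthy", 1}}
	secondSync := []machine{{"no-node-ref", 3}, {"already-deleting", 3}, {"healthy", 1}}

	// With no tie-breaker, nothing forces the same machine to the front of
	// both lists, so "delete the first machine" can target a different one.
	fmt.Println(byPriorityOnly(firstSync)[0].name)
	fmt.Println(byPriorityOnly(secondSync)[0].name)
}
```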

@enxebre
Member

enxebre commented Mar 15, 2022

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Mar 15, 2022
Contributor

@killianmuldoon killianmuldoon left a comment


/lgtm

@fabriziopandini
Member

/lgtm
@vincepri PTAL

@thedadams
Contributor Author

@fabriziopandini @vincepri Is there anything I should be doing for this PR to move forward?

@sbueringer
Member

sbueringer commented Mar 21, 2022

I wonder a bit whether we could fix this more holistically by treating machines that are already deleting differently. But as this PR already fixes a known bug:
/lgtm

@fabriziopandini @vincepri Is there anything I should be doing for this PR to move forward?

@thedadams Everything fine, just takes a bit (PTO) :)

@enxebre
Member

enxebre commented Mar 22, 2022

I wonder a bit whether we could fix this more holistically by treating machines that are already deleting differently. But as this PR already fixes a known bug:

I'd be cautious about that, since there are many nuances where we could accidentally change existing API behaviour in unexpected ways. I agree this is the right bugfix with a reduced impact surface, and further discussion can happen separately.

@sbueringer
Member

I wonder a bit whether we could fix this more holistically by treating machines that are already deleting differently. But as this PR already fixes a known bug:

I'd be cautious about that, since there are many nuances where we could accidentally change existing API behaviour in unexpected ways. I agree this is the right bugfix with a reduced impact surface, and further discussion can happen separately.

Yeah, me too. I looked at the surrounding code, and that's why I definitely prefer "just" a bugfix for now.
I was just wondering, in general, whether we could avoid deleting machines which already have deletionTimestamps (I know it's a no-op). In cases where Machines that are currently deleting are not selected for deletion again, you might end up with fewer Machines than you want. But I'm not sure if that's a problem that can really occur.

so tl;dr I definitely prefer just a bugfix for now.
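
For concreteness, here is a purely illustrative sketch of that alternative (not what this PR implements), assuming the v1beta1 Machine type: exclude machines that already carry a deletionTimestamp before choosing deletion candidates. Whether the replica accounting still works out in that case is exactly the open question above.

```go
package deletepolicy

import clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"

// notAlreadyDeleting returns only the machines that do not yet have a
// deletionTimestamp, so machines already being deleted would never be
// selected for deletion again.
func notAlreadyDeleting(machines []*clusterv1.Machine) []*clusterv1.Machine {
	remaining := make([]*clusterv1.Machine, 0, len(machines))
	for _, m := range machines {
		if m.DeletionTimestamp.IsZero() {
			remaining = append(remaining, m)
		}
	}
	return remaining
}
```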

```diff
-	return m.priority(m.machines[j]) < m.priority(m.machines[i]) // high to low
+	priorityI, priorityJ := m.priority(m.machines[i]), m.priority(m.machines[j])
+	if priorityI == priorityJ {
+		return m.machines[i].Name < m.machines[j].Name
```
Member


Add a comment above this line to explain why we're falling back to name?

Contributor Author


I added a comment, as you suggested.
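
Putting the hunk above together, the comparison function would read roughly as follows. This is assembled from the excerpt and assumes the sortableMachines type from the PR's delete policy code; the exact wording of the added comment may differ in the merged change.

```go
func (m sortableMachines) Less(i, j int) bool {
	priorityI, priorityJ := m.priority(m.machines[i]), m.priority(m.machines[j])
	if priorityI == priorityJ {
		// Fall back to the machine name so that machines with the same
		// priority are returned in a consistent order between calls.
		return m.machines[i].Name < m.machines[j].Name
	}
	return priorityJ < priorityI // high to low
}
```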

@thedadams thedadams force-pushed the machine-sort-delete-policy branch from 7dae429 to 0c0db0c Compare March 29, 2022 17:03
@k8s-ci-robot k8s-ci-robot removed the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Mar 29, 2022
@thedadams thedadams requested a review from vincepri March 29, 2022 17:04
@enxebre
Member

enxebre commented Mar 29, 2022

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Mar 29, 2022
Member

@vincepri vincepri left a comment


/approve

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: vincepri

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Mar 30, 2022
@k8s-ci-robot k8s-ci-robot merged commit 2099b30 into kubernetes-sigs:main Mar 30, 2022
@k8s-ci-robot k8s-ci-robot added this to the v1.2 milestone Mar 30, 2022