Add support for FailureDomains to AzureMachinePool #667
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: fiunchinho. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing /approve in a comment.
Welcome @fiunchinho!
Hi @fiunchinho. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
force-pushed from 4380596 to cb9a19f
force-pushed from cb9a19f to 238f3cd
/ok-to-test
Build is failing with:
Not sure how to solve it or what it means. I need directions, please.
@fiunchinho I believe this is informational, to bring attention to the fact that there are breaking changes to the API.
I believe this was the choice since AzureMachineSpec is likely to gain more settings in the VM-specific type. One example: in AzureMachinePool, FailureDomains (AZs) would likely be set on the AMP rather than on the AzureMachineSpec, since they are provider controlled. Another problem would be OSDisk. The name of the OSDisk is not honored in VirtualMachineScaleSets; if you specify a name, the PUT to the VMSS fails. I believe there were other concerns where the template would differ. Should there be some base structure that is shared? Perhaps. Though I would prefer a little copy-paste of data structures over more complex composition to reduce lines of code. My 2¢.
So instead of removing …?
I think FailureDomains should be FailureDomains on the …
Does this make sense to folks?
We definitely want to leverage Scale Set AZ placement rather than defining our own logic for placing individual machines in zones; that was one of the motivations for implementing MachinePools / VMSS in the first place. Should it be of type …? I think the default behavior needs to be to try to spread instances across all the available failure domains (from the …). We also had a discussion in the CAPZ office hours with @richardcase about changing the existing logic and not needing the field in …
@CecileRobertMichon thank you for the correction and further explanation! |
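The placement scheme discussed above could be sketched roughly as follows. This is a minimal Go sketch under assumptions: the field names (`FailureDomains` on the pool spec, `Zones` on the status) and the defaulting helper are illustrative, not the actual CAPZ API.

```go
package main

import "fmt"

// Hypothetical shapes only; the real CAPZ types are more involved.
type AzureMachinePoolSpec struct {
	// FailureDomains (availability zones) the scale set should spread across.
	// Empty means "use every zone available in the region".
	FailureDomains []string
}

type AzureMachinePoolStatus struct {
	// Zones actually applied to the scale set, surfaced for visibility.
	Zones []string
}

// defaultFailureDomains implements the default discussed above: spread over
// all available zones when the user sets none.
func defaultFailureDomains(spec AzureMachinePoolSpec, available []string) []string {
	if len(spec.FailureDomains) > 0 {
		return spec.FailureDomains
	}
	return available
}

func main() {
	// No explicit zones: fall back to everything the region offers.
	fmt.Println(defaultFailureDomains(AzureMachinePoolSpec{}, []string{"1", "2", "3"}))
	// Explicit zones win over the available set.
	fmt.Println(defaultFailureDomains(AzureMachinePoolSpec{FailureDomains: []string{"2"}}, []string{"1", "2", "3"}))
}
```

The point of the sketch is that the zone list lives on the pool, and the VMSS (not CAPZ) does the per-instance placement.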
force-pushed from ade8919 to d66a625
force-pushed from d66a625 to 11dba83
force-pushed from 11dba83 to bd20079
Submitted kubernetes-sigs/cluster-api#3157
Assuming kubernetes-sigs/cluster-api#3157 gets merged, any feedback about the implementation in this PR?
"via Machine Infra Provider Spec": with the above guidance in mind, I would expect the following, pending kubernetes-sigs/cluster-api#3157.
@CecileRobertMichon, what do you think about having Zones on the status for …? Anyone have any other thoughts, or can you fill in any blanks I missed?
There should be a webhook validation on Update() that doesn't allow changes to FailureDomains, if this applies to all providers (do we know of any cases where failure domains would be mutable?). That way the machine pool never goes into a failed state.
That makes sense to me, but we don't do this for Machines currently; should we be consistent and do it for both? Maybe as a follow-up? Keep in mind that AZs aren't supported in every region, so that field won't always be set.
The more I think about this, the less value I think it provides. If the …
This would be great if we could say FailureDomains are immutable for all, but I don't think we can. For example, AWS AutoScale groups can add zones. |
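The Update() check discussed above could be sketched like this. It is a hedged sketch, not the actual CAPZ webhook: the function name is hypothetical, and, as the thread notes, the rule cannot apply blanket to every provider (AWS Auto Scaling groups can add zones), so a real webhook would gate it per provider.

```go
package main

import (
	"errors"
	"fmt"
	"reflect"
)

// validateFailureDomainsUpdate rejects edits to FailureDomains on update so a
// machine pool is never driven into a failed state by a change the provider
// cannot honor. Illustrative only; real admission webhooks receive the full
// old and new objects, not just this one field.
func validateFailureDomainsUpdate(oldFDs, newFDs []string) error {
	if !reflect.DeepEqual(oldFDs, newFDs) {
		return errors.New("spec.failureDomains is immutable")
	}
	return nil
}

func main() {
	// Unchanged zones pass; changed zones are rejected.
	fmt.Println(validateFailureDomainsUpdate([]string{"1", "2"}, []string{"1", "2"}))
	fmt.Println(validateFailureDomainsUpdate([]string{"1"}, []string{"1", "2"}))
}
```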
@fiunchinho would you like to move forward with this PR now that capi v0.3.7 is in? It looks like it's very close |
@fiunchinho: PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fiunchinho: The following tests failed; say /retest to rerun all failed tests.
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
Superseded by #1180
What this PR does / why we need it:
It was not possible to choose the `FailureDomains` when creating a `MachinePool` because it was using a custom `AzureMachineTemplateSpec`. This PR changes the code so that `MachinePool` uses the same `AzureMachineTemplateSpec` as other parts of the code.

Which issue(s) this PR fixes (optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged):
Fixes #663
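The refactor the description outlines can be sketched as below. The type names come from the description; the field layout is illustrative, not the real CAPZ definitions.

```go
package main

import "fmt"

// Per-machine spec shared across the code base. FailureDomain is assumed here
// to be an optional availability-zone setting.
type azureMachineSpec struct {
	VMSize        string
	FailureDomain *string
}

// The shared template type the PR switches MachinePool over to.
type azureMachineTemplateSpec struct {
	Spec azureMachineSpec
}

// After the change, MachinePool embeds the same template as everything else,
// so failure domains become settable at MachinePool creation time.
type azureMachinePoolSpec struct {
	Template azureMachineTemplateSpec
}

func main() {
	zone := "2"
	pool := azureMachinePoolSpec{
		Template: azureMachineTemplateSpec{
			Spec: azureMachineSpec{VMSize: "Standard_D2s_v3", FailureDomain: &zone},
		},
	}
	fmt.Println(*pool.Template.Spec.FailureDomain)
}
```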
Special notes for your reviewer:
Please confirm that if this PR changes any image versions, then that's the sole change this PR makes.
Release note: