MachineSet version changes, upgrading from v1alpha3 to v1alpha4 #5405
Comments
/assign @abhinavnagaraj
I would assume defaulting in the MachineSet leads to an infinite rollout of the MD when the MD tries to roll out a MachineSet without the 'v' prefix.
@abhinavnagaraj I tried the flow you explained in the issue and I have not been able to reproduce the issue.
At the end, the MachineDeployments and the MachineSets were updated to the new API version, and the version on the MachineDeployment and the MachineSet did not have the missing-'v'-prefix problem. If possible, can you please share some more details about the state of the cluster and the YAMLs used before and after the upgrade?
@ykakarap Yes, these steps are correct. Now, if we scale the MachineDeployment with API version |
Note: That the old MachineSet doesn't get deleted is intended behaviour because of `revisionHistoryLimit`. From your description I would guess that the scale triggers the MD defaulting webhook (which runs on create/update), which adds the 'v' prefix. I wonder how adding the same defaulting to the MachineSet logic can fix the issue, as the MS webhook is also only run on create/update.
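For context, the defaulting in question boils down to prepending a 'v' to the version when it is missing. A minimal Go sketch (illustrative only; `normalizeVersion` is a hypothetical name, not the actual CAPI webhook code, which mutates `spec.template.spec.version` in place):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeVersion sketches the 'v'-prefix defaulting discussed in this
// issue: a bare Kubernetes version like "1.21.1" becomes "v1.21.1", while
// an already-prefixed version is left untouched.
func normalizeVersion(version string) string {
	if version != "" && !strings.HasPrefix(version, "v") {
		return "v" + version
	}
	return version
}

func main() {
	fmt.Println(normalizeVersion("1.21.1"))  // prefix added
	fmt.Println(normalizeVersion("v1.21.1")) // unchanged
}
```

Because a webhook runs this logic only on create/update, objects persisted before the upgrade keep the un-prefixed value until something touches them.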
/priority awaiting-more-evidence
@vincepri I think you reopened the issue because of my comment here: #5406 (comment)
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
@fabriziopandini To summarize:
In my opinion the issue can now only occur if someone upgrades only core CAPI, without upgrading the bootstrap or infra provider. In my opinion that's a very uncommon case, so I personally wouldn't try to add handling for it, especially as we didn't get any reports that it is an issue. (The PR which closed this issue added the v-prefix defaulting in the MS webhook, which made it possible for the reconciler to automatically mitigate.)
If we want to mitigate that edge case, we could add code which adds the prefix to the version field during each reconcile (if necessary). I think I would prefer avoiding that if we don't have to.
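A sketch of what such a reconcile-time mitigation could look like. This is hypothetical code for the alternative being argued against here, not anything that was merged; the `MachineSetSpec` stand-in models only the one relevant field:

```go
package main

import (
	"fmt"
	"strings"
)

// MachineSetSpec is a minimal stand-in for the real CAPI type; only the
// version field relevant to this issue is modeled.
type MachineSetSpec struct {
	Version *string
}

// normalizeOnReconcile would run on every reconcile and fix up a
// pre-upgrade, un-prefixed version in place. It returns true if the
// object was mutated and would need to be patched back to the API server.
func normalizeOnReconcile(spec *MachineSetSpec) bool {
	if spec.Version == nil || strings.HasPrefix(*spec.Version, "v") {
		return false
	}
	normalized := "v" + *spec.Version
	spec.Version = &normalized
	return true
}

func main() {
	ver := "1.21.1"
	spec := &MachineSetSpec{Version: &ver}
	mutated := normalizeOnReconcile(spec)
	fmt.Println(mutated, *spec.Version)
}
```

Unlike webhook defaulting, this would also catch objects that were persisted before the upgrade and are never updated again, which is why it covers the edge case, at the cost of extra reconciler logic.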
That's a use case we are not supporting, given that clusterctl ensures that all the providers move up at the same time |
As discussed in the grooming on 18th February, let's close this issue on the basis that upgrading only core CAPI is an unsupported use case.
/close |
@sbueringer: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What steps did you take and what happened:
Launch a cluster in v1alpha3 (capi v0.3.19), with one MachineDeployment containing 2 replicas and with strategyType='RollingUpdate'.
Upgrade the cluster to v1alpha4 (capi v0.4.0), without any changes in the MachineDeployment spec.
This results in the creation of new MachineSets and hence new machines.
What did you expect to happen:
If there is no change in the MachineDeployment spec, there should be no rolling-upgrade of MachineSets.
Anything else you would like to add:
In v1alpha4, MachineDeployment and MachineSet template versions (`spec.template.spec.version`) are expected to be prefixed with a 'v'. A `Default()` function in the MachineDeployment webhook adds this prefix - #4670.
But this is missing in the MachineSet webhook.
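This asymmetry is what causes the rollout: after the upgrade, the MachineDeployment's template version is defaulted to the 'v'-prefixed form, while the existing MachineSet still carries the un-prefixed value, so the controller sees a template diff and creates a new MachineSet. A toy illustration (the `templatesMatch` function stands in for the controller's real semantic deep-equal of the full templates):

```go
package main

import "fmt"

// templatesMatch is a deliberately simplified stand-in for the
// MachineDeployment controller's template comparison: if the versions
// differ, the templates differ, and a new MachineSet (a rollout) follows.
func templatesMatch(mdVersion, msVersion string) bool {
	return mdVersion == msVersion
}

func main() {
	// MD webhook defaulted "1.21.1" to "v1.21.1"; the pre-upgrade
	// MachineSet still has "1.21.1", so the comparison fails even though
	// the intended version never changed.
	fmt.Println(templatesMatch("v1.21.1", "1.21.1")) // false -> rollout
}
```

With the same defaulting applied in the MachineSet webhook, both sides carry the prefixed form and the spurious diff disappears.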
Environment:
- Kubernetes version (use `kubectl version`): v1.21.1
- OS (e.g. from `/etc/os-release`):

/kind bug