Make vmType optional #3099
Comments
It seems we can remove the …
I think we could change the default vmType to "vmss", so that all VM types are supported by default when vmType is not set in the config file.
I see now in the docs that the …
I'm confused about what this new … See also kubernetes-sigs/cluster-api-provider-azure#3000, which frankly at this point I can't even tell if it's a good idea or not.

Edit: actually the link @nilo19 posted explains the behavior better. CAPZ won't have a scenario where all nodes are VMSS Flex or VMSS Uniform, and currently we're always configured for scenario 4 (mixed nodes). But if machinepools are disabled, we fall into scenario 1 (all standalone or Availability Set VMs). In that case, is there any advantage to setting …
… when setting …
@feiskyer do you see how that's confusing from a user's perspective though? Since "vmss" doesn't actually mean "vmss" only, but "standard" means standard only? It would be much more self-explanatory to have a separate value for "all" VM types (or to accept an empty string).
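For illustration, here is a minimal Go sketch of the value semantics being discussed in these comments. The helper below is made up for this example and is not cloud-provider-azure's real code: it treats "standard" as VM-only, "vmss" as today's all-in-one behavior, and shows how an omitted/empty value (or a hypothetical explicit "all") could map to "all VM types enabled".

```go
package main

import "fmt"

// enabledBackends is a made-up helper, not cloud-provider-azure's actual code.
// It maps a vmType value to which Azure API surfaces the provider would use.
func enabledBackends(vmType string) (vmssAPIs, vmAPIs bool) {
	switch vmType {
	case "standard":
		return false, true // standalone / availability-set VMs only
	case "vmss", "", "all": // "" and "all" are the proposals from this thread
		return true, true // today's "all-in-one" behavior: VMSS and standalone VMs
	default:
		panic(fmt.Sprintf("unknown vmType %q", vmType))
	}
}

func main() {
	for _, v := range []string{"standard", "vmss", ""} {
		vmss, vm := enabledBackends(v)
		fmt.Printf("vmType=%q -> VMSS APIs: %t, VM APIs: %t\n", v, vmss, vm)
	}
}
```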
This would be done in v1.28 as #2414 needs to be done first.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
@feiskyer - since 2414 is now merged and we're in 1.28 timeframe, roughly when do you think you can get to this? /remove-lifecycle stale |
@dtzar thanks for reminding me. Yes, this would be in the v1.28 release. |
What would you like to be added:
Historically, "mixed" VM types (both VM and VMSS) in the same cluster were not allowed because Azure Basic load balancers didn't allow it (more context on this in #338).

With Basic LBs soon to be deprecated (see #2414), we should consider making the `vmType` configuration optional. Currently, the way to do this and have "mixed" clusters is to use `vmType: vmss` (as recommended here). This is confusing for users and not intuitive, as it implies the node pools are "vmss" when in fact they may have both VMs and VMSS.

Cloud-provider-azure already has the functionality to "detect" which compute type each node is and use the proper APIs for each, so this would only be a UX change. For users who are concerned about extra API calls they don't need, we could keep the `vmType` configuration as a way to specify that a cluster uses only that VM type. So for example, if my cluster is VMs only, I can set `vmType: vm` to tell cloud-provider-azure to skip all VMSS API calls and assume VM for every node. But if I want to enable all VM types, I would simply leave out `vmType`.

Later, we might want to change the "vmss" `vmType` to actually mean "vmss", i.e. VMSS only, as opposed to the current behavior where "vmss" is all-in-one.
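To make the "detect which compute type each node is" point concrete, below is a minimal, hypothetical Go sketch of routing a node to an API surface based on its Kubernetes provider ID. The function name and the string matching are assumptions for illustration only, not the actual cloud-provider-azure implementation, which is considerably more involved.

```go
package main

import (
	"fmt"
	"strings"
)

// nodeBackend is illustrative only: it picks an API surface for a node from
// its Kubernetes provider ID. VMSS instances have "virtualMachineScaleSets"
// in the ID, while standalone VMs have only "virtualMachines".
func nodeBackend(providerID string) string {
	if strings.Contains(providerID, "/virtualMachineScaleSets/") {
		return "vmss" // route this node through the VMSS APIs
	}
	return "standard" // fall back to the standalone-VM (availability set) APIs
}

func main() {
	ids := []string{
		"azure:///subscriptions/sub/resourceGroups/rg/providers/Microsoft.Compute/virtualMachineScaleSets/pool0/virtualMachines/0",
		"azure:///subscriptions/sub/resourceGroups/rg/providers/Microsoft.Compute/virtualMachines/control-plane-0",
	}
	for _, id := range ids {
		fmt.Printf("%s -> %s\n", id, nodeBackend(id))
	}
}
```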
Why is this needed:
Opened this issue as a result of this Slack thread: https://kubernetes.slack.com/archives/CEX9HENG7/p1673384255536689