Provide opt-in for automatic cluster upgrades #1745
Comments
I'd agree with this. Furthermore, we have seen that Azure will perform the upgrade on a node pool with only one node without scaling up or creating a new pool, resulting in a total loss of service until the node pool is back up and Kubernetes has pulled the images needed to start the pods again.
@jmos5156 every node pool upgrade creates a new node to drain applications onto, as described in the upgrade docs. If the node pool has one node, a second one is created as a buffer; the same applies for any number of nodes. Does your application run in HA with pod disruption budgets? Otherwise, no matter how many new nodes or node pools are created, the drain will evict all pods on the node in parallel.
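For that last point, a PodDisruptionBudget is what keeps a drain from evicting every replica at once. A minimal sketch, assuming a hypothetical Deployment labelled `app: my-app` with at least two replicas and a cluster version where `policy/v1` is available (the names are illustrative, not taken from this thread):

```sh
# Apply a PodDisruptionBudget so that a node drain cannot evict every replica
# at once: the eviction API refuses to take the app below minAvailable, and the
# upgrade only proceeds once pods have been rescheduled onto the surge node.
kubectl apply -f - <<'EOF'
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 1        # keep at least one pod of the app running at all times
  selector:
    matchLabels:
      app: my-app        # must match the pod labels of the Deployment
EOF
```

With a single replica this still means downtime during the drain, so a budget only helps if the workload itself runs more than one pod.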
@Azure/aks-pm issue needs labels
This is directly related to issue #1303, but it does bring in the nuance of how upgrades should roll out across the cluster. Closing this to consolidate with the auto-upgrade feature; please share any further feedback on that issue.
Provide an opt-in for automatic cluster upgrades that handles the following upgrade process for us:
This could be based on #1744, where the impact on the current workload has to be non-existent; otherwise the upgrade would not kick in.
This would be Kubernetes PaaS for me: we don't want to worry about the cluster version and keeping it up to date, as long as our workload is considered unimpacted.
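For reference, the cluster auto-upgrade feature this was consolidated into (#1303) exposes the opt-in as an upgrade channel on the cluster. A minimal sketch, assuming the az CLI flag and channel names as currently documented; the resource group and cluster names are placeholders:

```sh
# Opt the cluster in to automatic upgrades by selecting an upgrade channel.
# Channels such as patch, stable, rapid and node-image (with none to opt out)
# control how aggressively the control plane and nodes are kept up to date.
az aks update \
  --resource-group my-rg \
  --name my-cluster \
  --auto-upgrade-channel stable
```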