Breakage in 3.12.0 for azurerm_kubernetes_cluster and azurerm_kubernetes_cluster_node_pool #17518
Comments
I'm experiencing this issue as well, with the following plan:
When using Spot node pools, this leads to errors like the following:
Azure/AKS#2134 seems to be already released (although the issue is not closed yet :/), so upgrading Spot node pools should be possible.
This is an issue with the Terraform provider; I'm experiencing the same on a cluster that hasn't been modified for some time. Now Terraform wants to "upgrade" the
Just got this issue with azurerm 3.16.0. I never had an issue with this before, and I have done many updates on this cluster.
This is still an issue with the latest provider version (3.18.0).
We are also facing the same issue; we are not able to upgrade the Spot node pool with the current provider.
@Wiston999 I think you're just running a deprecated version of AKS:
It's worth trying to upgrade the control plane and then the node pool to a version available in your region.
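If it helps, the versions AKS currently offers in a region can be listed with the Azure CLI, for example (the region name is only illustrative):

```shell
# List the Kubernetes versions and upgrade paths AKS offers in a region.
az aks get-versions --location westeurope --output table
```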
@hillen You can read my analysis here: #17833 (comment); I'll post further updates there.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
Is there an existing issue for this?
Community Note
Terraform Version
1.2.4
AzureRM Provider Version
3.12.0
Affected Resource(s)/Data Source(s)
azurerm_kubernetes_cluster, azurerm_kubernetes_cluster_node_pool
Terraform Configuration Files
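The original configuration files were not captured here. As a stand-in, the following is a minimal sketch of the kind of configuration involved, assuming a cluster plus a Spot node pool that both pin orchestrator_version to 1.23.5; all names, the location, and VM sizes are illustrative, not taken from the report.

```hcl
# Minimal sketch only; names, location, and sizes are illustrative.
resource "azurerm_kubernetes_cluster" "example" {
  name                = "example-aks"
  location            = "westeurope"
  resource_group_name = "example-rg"
  dns_prefix          = "example"
  kubernetes_version  = "1.23.5"

  default_node_pool {
    name                 = "default"
    vm_size              = "Standard_D2s_v3"
    node_count           = 1
    orchestrator_version = "1.23.5"
  }

  identity {
    type = "SystemAssigned"
  }
}

resource "azurerm_kubernetes_cluster_node_pool" "spot" {
  name                  = "spot"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.example.id
  vm_size               = "Standard_D2s_v3"
  node_count            = 1
  priority              = "Spot"
  eviction_policy       = "Delete"
  spot_max_price        = -1
  orchestrator_version  = "1.23.5"
}
```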
Debug Output/Panic Output
Expected Behaviour
The plan should have shown no changes.
Actual Behaviour
The plan shows azurerm_kubernetes_cluster and all azurerm_kubernetes_cluster_node_pool resources adding the orchestrator_version attribute. The attribute is already set on these resources: I see it in the state file, it is in the GET of the resource from Azure, and it is in the Terraform code. It should not need to be added.
I believe this is breakage from #17084.
Pinning the azurerm provider to 3.11.0 fixes the issue.
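For anyone who needs the workaround, a version pin looks roughly like this (a sketch; adjust the constraint style to taste):

```hcl
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
      # Last version before the orchestrator_version diff appears.
      version = "= 3.11.0"
    }
  }
}
```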
Steps to Reproduce
1. Apply an azurerm_kubernetes_cluster specifying an orchestrator_version of 1.23.5 with azurerm provider version 3.11.0.
2. Update the azurerm provider to 3.12.0.
3. Run a plan; there will be changes to the resources to add the orchestrator_version (see the command sketch below).
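A rough sketch of the commands for steps 2 and 3, assuming the configuration's provider constraint has been bumped to allow 3.12.0:

```shell
# Pull in the newer provider version, then re-plan without changing any resources.
terraform init -upgrade
terraform plan
```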
Important Factoids
No response
References
No response