Terraform replacing AKS nodepool cluster when changing VM count #3835
Comments
I would also like the ability to modify agent pools without having the cluster recreated. All of this can be done via the command line without deleting the cluster (https://docs.microsoft.com/en-us/cli/azure/ext/aks-preview/aks/nodepool?view=azure-cli-latest), so it should be straightforward to modify the provider to do something similar.
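For reference, a minimal sketch of scaling an existing node pool in place with the Azure CLI; the resource group, cluster, and pool names below are hypothetical placeholders:

```sh
# Scale an existing node pool to 4 nodes without recreating the cluster.
# Resource group, cluster, and pool names are placeholders.
az aks nodepool scale \
  --resource-group my-rg \
  --cluster-name my-aks-cluster \
  --name nodepool1 \
  --node-count 4
```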
@titilambert here's another one that we could tackle at the same time.
Modifications to node pools causing the cluster to be destroyed and recreated are definitely a problem with the current version of the azurerm provider, and that problem still needs to be fixed. However, in this case I believe you are running into the same problem I ran into last week: the provider is not sorting the agent_pool_profile blocks from your code before comparing them to the node pools in the current state (which appear to be returned in alphabetical order by name). Two of the four agent_pool_profile blocks in my code were not in alphabetical order by name, and running a plan showed spurious differences as a result.

It seems to me that the provider should be sorting both the elements from the code and the elements from the query of current state in the same way, so they can be compared properly. Should this be considered another aspect of this issue, or should I open a separate issue for it?

The workaround, until this sorting bug is fixed, is to make sure the agent_pool_profile blocks in your code are listed in alphabetical order by name.
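To illustrate the workaround, a sketch with hypothetical names and values, declaring the agent_pool_profile blocks in alphabetical order by name so they line up with the order returned from the API:

```hcl
resource "azurerm_kubernetes_cluster" "example" {
  name                = "example-aks"
  location            = "westeurope"
  resource_group_name = "example-rg"
  dns_prefix          = "exampleaks"

  # Declare agent_pool_profile blocks in alphabetical order by name,
  # matching the order the provider reads them back from the API.
  agent_pool_profile {
    name    = "poolalpha"
    count   = 2
    vm_size = "Standard_DS2_v2"
  }

  agent_pool_profile {
    name    = "poolbeta"
    count   = 3
    vm_size = "Standard_DS2_v2"
  }

  service_principal {
    client_id     = var.client_id
    client_secret = var.client_secret
  }
}
```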
Thanks, Art, for both the heads-up and the fix in #4676.
This has been released in version 1.37.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:

```hcl
provider "azurerm" {
  version = "~> 1.37.0"
}

# ... other configuration ...
```
I am facing the same kind of issue.

Problem statement: Terraform is causing my Kubernetes cluster to be recreated every time I execute the command below. In particular, this command is replacing my .kube/config file. I do not understand how executing it changes the Terraform state.

Provider versions I am using:
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
Community Note
Terraform (and AzureRM Provider) Version
Terraform v0.12.4
provider.azurerm v1.31.0
Affected Resource(s)
azurerm_kubernetes_cluster
Terraform Configuration Files
Expected Behavior
Changing "count" in one of the "agent_pool_profile" and running "terraform apply" should add one more node to cluster.
Actual Behavior
Terraform replaces the whole cluster and adds a new one with the new number of nodes in the given node pool. Judging from the plan output, it also seems to be changing the node pool name.
terraform plan output: