Terraform Version
0.11.10 (also reproduced with 0.11.8)
Affected Resource(s)
kubernetes_service_account
kubernetes_cluster_role_binding
kubernetes_secret
Terraform Configuration Files
Azure infra creation resources:
Kubernetes creation resources:
https://gist.github.com/Urik/70249baa4a18d9b78b65380445fd4871
Expected Behavior
No resources should be recreated or changed (aside from the null_resource ones).
Actual Behavior
Terraform always attempts to recreate the cluster role bindings and one of the secrets, and tries to modify the service account.
terraform plan output: https://gist.github.com/Urik/a46142dca856a5d1df008b33ea94358b
Things to note:
Why does Terraform attempt to change the kubernetes_service_account.helm_service_account resource?
kubernetes_cluster_role_binding.helm_cluster_role_binding has no dependencies at all besides the Kubernetes provider ones. Why does it try to get recreated?
More detailed explanation
I'm deploying an Azure AKS cluster using the Azurerm provider, and then using the Kubernetes provider I create a service account, 2 cluster role bindings, and some secrets. On the first run everything succeeds. However, if I re-run terraform plan/apply without making any changes, Terraform keeps trying to recreate some of the Kubernetes resources; since they already exist, it fails. For the record, I have verified that the resources Terraform attempts to recreate do exist in the tfstate file, and no changes are made to the .tf files or the deployed resources between runs.
What I have
I have 2 modules. One is azure-infra, the other is kubernetes-ground-config.
azure-infra takes care of creating a resource group, an AKS (Azure Kubernetes Service) cluster, and a public IP. It then runs
az aks get-credentials --resource-group=${var.resource_group} --name=${azurerm_kubernetes_cluster.k8s.name} --file=${local.kube_config_path} --subscription=${var.subscription_id} --admin
via a null_resource in order to create a kube config file, and then uses a null_data_source to expose kube_config_path as an output of the module. I then use kube_config_path to configure the Kubernetes provider through its config_path field. Sadly, I cannot use any other method to configure the Kubernetes provider; it has to be config_path.
The reason I use null_data_source is to ensure the kube config path is not output before null_resource.credentials_acquisition has run. Since outputs cannot have depends_on, I make the null_data_source depend on null_resource.credentials_acquisition instead.
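For reference, a minimal sketch of that wiring (Terraform 0.11 syntax; everything besides null_resource.credentials_acquisition, local.kube_config_path, and the command itself is an assumption):

```hcl
# Sketch only: resource, variable, and local names other than those mentioned
# in the issue text are assumed.
resource "null_resource" "credentials_acquisition" {
  provisioner "local-exec" {
    command = "az aks get-credentials --resource-group=${var.resource_group} --name=${azurerm_kubernetes_cluster.k8s.name} --file=${local.kube_config_path} --subscription=${var.subscription_id} --admin"
  }
}

# The data source exists only so the output below effectively depends on the
# null_resource; outputs themselves cannot declare depends_on in 0.11.
data "null_data_source" "kube_config" {
  inputs = {
    path = "${local.kube_config_path}"
  }

  depends_on = ["null_resource.credentials_acquisition"]
}

output "kube_config_path" {
  value = "${data.null_data_source.kube_config.outputs["path"]}"
}
```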
Then I have the kubernetes-ground-config module, which takes the output of the azure-infra module as an input variable, and inside that module I configure the Kubernetes provider with that kube config file.
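Roughly, the module wiring and provider configuration look like this (a sketch; the module source paths and the kube_config_path variable name are assumptions):

```hcl
# Root configuration (sketch; source paths are assumed)
module "azure_infra" {
  source = "./azure-infra"
  # ... resource group, subscription, etc.
}

module "kubernetes_ground_config" {
  source           = "./kubernetes-ground-config"
  kube_config_path = "${module.azure_infra.kube_config_path}"
}

# Inside kubernetes-ground-config
variable "kube_config_path" {}

provider "kubernetes" {
  config_path = "${var.kube_config_path}"
}
```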
This is roughly what kubernetes-ground-config looks like:
https://gist.github.com/Urik/70249baa4a18d9b78b65380445fd4871
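The affected resources are approximately of this shape (the real definitions are in the gist above; the names, namespace, and role ref below are assumed for illustration only):

```hcl
resource "kubernetes_service_account" "helm_service_account" {
  metadata {
    name      = "tiller"       # assumed
    namespace = "kube-system"  # assumed
  }
}

resource "kubernetes_cluster_role_binding" "helm_cluster_role_binding" {
  metadata {
    name = "tiller"  # assumed
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"  # assumed
  }

  subject {
    kind      = "ServiceAccount"
    name      = "tiller"         # assumed
    namespace = "kube-system"
  }
}
```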
THE PLOT THICKENS
Among my tests, I tried creating the AKS cluster separately using Terraform, then running the
az aks get-credentials
command from a separate script, and only THEN using Terraform to create the Kubernetes resources. In that case everything works as expected.
This leads me to believe the problem lies in the interaction between the Azure and Kubernetes providers, because if I take the Azure provider out of the equation, the Kubernetes provider works correctly.