
Kubernetes provider keeps trying to recreate existing resources when working in conjunction with the Azure provider #92

Open
Urik opened this issue Dec 11, 2018 · 0 comments

Urik commented Dec 11, 2018

Terraform Version

0.11.10 (also reproduced with 0.11.8)

Affected Resource(s)

  • kubernetes_secret
  • kubernetes_cluster_role_binding
  • kubernetes_service_account

Terraform Configuration Files

Azure infra creation resources:

resource "null_resource" "login" {
  triggers {
    test = "${uuid()}"
  }

  provisioner "local-exec" {
    command = "az login --service-principal -u ${var.sp_client_id} -p ${var.sp_client_secret} --tenant ${var.tenant_id}"
  }
}

resource "null_resource" "credentials_acquisition" {
  depends_on = ["null_resource.login"]
  triggers {
    test = "${uuid()}"
  }
  provisioner "local-exec" {
    command = "az aks get-credentials --resource-group=${var.resource_group} --name=${azurerm_kubernetes_cluster.k8s.name} --file=${local.kube_config_path} --subscription=${var.subscription_id} --admin"
  }
}

data "null_data_source" "kube_config_path_output" {
  depends_on = ["null_resource.credentials_acquisition"]
  inputs {
    kube_config_path = "${local.kube_config_path}"
  }
}

output "kube_config_path" {
  value = "${data.null_data_source.kube_config_path_output.outputs["kube_config_path"]}"
}

Kubernetes creation resources
https://gist.github.com/Urik/70249baa4a18d9b78b65380445fd4871

Expected Behavior

No resources should be recreated or changed (besides the basic null_resource ones, which are deliberately re-triggered on every run via uuid()).

Actual Behavior

Terraform always attempts to recreate the cluster role bindings and one of the secrets, and tries to modify a service account.
terraform plan output: https://gist.github.com/Urik/a46142dca856a5d1df008b33ea94358b
Things to note:

  1. Why does Terraform attempt to change the kubernetes_service_account.helm_service_account resource?
  2. kubernetes_cluster_role_binding.helm_cluster_role_binding has no dependencies at all besides the Kubernetes provider ones. Why does Terraform try to recreate it?

More detailed explanation

I'm deploying an Azure AKS cluster using the azurerm provider, and then, using the Kubernetes provider, I create a service account, two cluster role bindings, and some secrets. On the first run everything succeeds. However, if I re-run terraform plan/apply without making any changes, Terraform keeps trying to re-create some of the Kubernetes resources; since they already exist, Terraform fails. For the record, I have verified that the resources Terraform attempts to re-create do exist in the tfstate file, and that no changes are made to the .tf files or the deployed resources between runs.

What I have

I have 2 modules. One is azure-infra, the other is kubernetes-ground-config.

azure-infra takes care of creating a resource group, an AKS (Azure Kubernetes Service) cluster, and a public IP. It then runs az aks get-credentials --resource-group=${var.resource_group} --name=${azurerm_kubernetes_cluster.k8s.name} --file=${local.kube_config_path} --subscription=${var.subscription_id} --admin via a null_resource in order to create a kube config file, and then uses a null_data_source to expose kube_config_path as an output of the module. I then use kube_config_path to configure the Kubernetes provider through its config_path field. Sadly, I cannot use any other method to configure the Kubernetes provider; it has to be config_path.
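For reference, the cluster itself is defined roughly like this (a minimal sketch; the location, cluster name, DNS prefix and node pool sizing shown here are placeholders, not my exact config):

# placeholder names and variables below; the real module wires these from its inputs
resource "azurerm_resource_group" "k8s" {
  name     = "${var.resource_group}"
  location = "${var.location}"
}

resource "azurerm_kubernetes_cluster" "k8s" {
  name                = "${var.cluster_name}"
  location            = "${azurerm_resource_group.k8s.location}"
  resource_group_name = "${azurerm_resource_group.k8s.name}"
  dns_prefix          = "${var.dns_prefix}"

  agent_pool_profile {
    name    = "default"
    count   = 2
    vm_size = "Standard_DS2_v2"
    os_type = "Linux"
  }

  service_principal {
    client_id     = "${var.sp_client_id}"
    client_secret = "${var.sp_client_secret}"
  }
}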

The reason I use null_data_source is to ensure I'm not outputting the kube config path before null_resource.credentials_acquisition has run. Since outputs cannot have "depends_on", I make a data source depend on null_resource.credentials_acquisition instead.

Then I have the kubernetes-ground-config module, which takes the output of the azure-infra module as an input variable:

module "kubernetes-ground-config" {
  source = "./kubernetes"
  kube_config_path = "${module.azure-infra.kube_config_path}"
  ...

and inside the kubernetes module, I configure the Kubernetes provider with that kube config file:

provider "kubernetes" {
  load_config_file = true
  config_path = "${var.kube_config_path}"
}
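For completeness, the module also declares that incoming variable. A minimal sketch (the description text here is illustrative, not necessarily what's in the actual module):

variable "kube_config_path" {
  description = "Path to the kube config file produced by the azure-infra module"
}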

This is what most of the kubernetes-ground-config module looks like:
https://gist.github.com/Urik/70249baa4a18d9b78b65380445fd4871
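To give a flavour of what is in that gist, the service account and one of the cluster role bindings look roughly like this (a minimal sketch; the tiller/cluster-admin names are the usual Helm setup and are assumptions, not necessarily my exact values):

resource "kubernetes_service_account" "helm_service_account" {
  metadata {
    # assumed name/namespace for a typical Helm (Tiller) service account
    name      = "tiller"
    namespace = "kube-system"
  }
}

resource "kubernetes_cluster_role_binding" "helm_cluster_role_binding" {
  metadata {
    name = "tiller-cluster-rule" # assumed name
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }

  subject {
    kind      = "ServiceAccount"
    name      = "tiller"
    namespace = "kube-system"
  }
}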

THE PLOT THICKENS

Among my tests, I tried creating the AKS cluster separately with Terraform, then running the az aks get-credentials command from a separate script, and only THEN using Terraform to create the Kubernetes resources; in that case everything works as expected.
This leads me to believe the problem lies in the interaction between the Azure and Kubernetes providers, because if I take the Azure provider out of the equation, the Kubernetes provider works correctly.
