
Attribute change for kube_admin_config and server_app_secret with provider 1.34.0 when running terraform plan #4428

Closed
rudolphjacksonm opened this issue Sep 25, 2019 · 3 comments

Comments

rudolphjacksonm (Contributor) commented Sep 25, 2019

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform (and AzureRM Provider) Version

terraform: v0.12.6
provider: v1.34.0

Affected Resource(s)

  • azurerm_kubernetes_cluster

Terraform Configuration Files

resource "azurerm_kubernetes_cluster" "aks_with_aad_parameters" {
  count               = var.aks_aad_enabled == "true" ? 1 : 0
  name                = var.aks_cluster_name
  resource_group_name = var.aks_rg_name
  location            = var.aks_location
  dns_prefix          = var.aks_dns_prefix
  kubernetes_version  = var.aks_kubernetes_version

  agent_pool_profile {
    name            = var.aks_agentpool_name
    max_pods        = var.aks_max_pods
    count           = var.aks_node_count
    os_disk_size_gb = var.aks_node_os_disk_size_gb
    vm_size         = var.aks_agent_vm_sku
    vnet_subnet_id  = var.aks_subnet_id
  }

  linux_profile {
    admin_username = var.aks_agent_admin_user
    ssh_key {
      key_data = var.aks_public_key_data
    }
  }

  network_profile {
    network_plugin     = var.aks_network_plugin
    network_policy     = var.aks_network_policy
    dns_service_ip     = var.aks_dnsServiceIP
    docker_bridge_cidr = var.aks_dockerBridgeCidr
    service_cidr       = var.aks_serviceCidr
  }

  service_principal {
    client_id     = data.azurerm_key_vault_secret.cluster_sp_id.value
    client_secret = data.azurerm_key_vault_secret.cluster_sp_secret.value
  }

  role_based_access_control {
    enabled = true
    azure_active_directory {
      client_app_id     = var.aks_aad_clientapp_id
      server_app_id     = var.aks_aad_serverapp_id
      server_app_secret = var.aks_aad_serverapp_secret
      tenant_id         = data.azurerm_client_config.current.tenant_id
    }
  }
}
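
For reference, the configuration above compares var.aks_aad_enabled against the string "true", so the matching declaration presumably looks something like the following (the type, default, and description here are assumptions, not taken from the actual module):

variable "aks_aad_enabled" {
  description = "Create the AAD-integrated cluster when set to the string \"true\"."
  type        = string
  default     = "false"
}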

Expected Behavior

We should be able to run terraform plan and apply against an existing cluster with the newest version of the provider without Terraform forcing the cluster to be recreated. This recreation does not occur on 1.33.1, although on that version the kube_admin_config is displayed in the plan output instead of being marked sensitive.

Actual Behavior

terraform plan reports that it will destroy and recreate the AKS cluster, as the server_app_secret will be computed:

-/+ module.aks-cluster.azurerm_kubernetes_cluster.aks_with_aad_parameters (new resource required)
      kube_admin_config_raw:                                                   <sensitive> => <computed> (attribute changed)
      kube_config.#:                                                           "1" => <computed>
      kube_config_raw:                                                         <sensitive> => <computed> (attribute changed)
      role_based_access_control.0.azure_active_directory.0.server_app_secret:  <sensitive> => <sensitive> (attribute changed)
      service_principal.0.client_secret:                                       <sensitive> => <sensitive> (forces new resource)
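
As a stopgap only (this does not address the underlying provider behaviour), a lifecycle block can tell Terraform to ignore the diff on the affected blocks so the cluster is not replaced; which paths are safe to ignore is an assumption you would need to verify for your own cluster:

resource "azurerm_kubernetes_cluster" "aks_with_aad_parameters" {
  # ... arguments as in the configuration above ...

  lifecycle {
    # Suppress the spurious diff on the secret-bearing blocks so the
    # cluster is not destroyed and recreated on every plan.
    ignore_changes = [
      service_principal,
      role_based_access_control,
    ]
  }
}

The trade-off is that legitimate changes to the ignored blocks would also stop being applied until the ignore list is removed again.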

Steps to Reproduce

  1. Create the AKS cluster with provider version 1.33.1 (or start from an existing cluster).
  2. Upgrade the azurerm provider to 1.34.0 (a pinning sketch follows below).
  3. Run terraform plan.
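
To be sure the plan runs against the affected provider release, the version can be pinned explicitly (Terraform 0.12 provider-block syntax; a sketch assuming the provider is configured in the root module):

provider "azurerm" {
  version = "=1.34.0"
}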

References

  • #4356 (duplicate issue; this one was closed in favour of it)

tombuildsstuff (Contributor) commented:

hi @rudolphjacksonm

Thanks for opening this issue :)

Taking a look through, this appears to be a duplicate of #4356 - rather than having multiple issues open tracking the same thing, I'm going to close this issue in favour of that one; would you mind subscribing to #4356 for updates?

Thanks!

rudolphjacksonm (Contributor, Author) commented Sep 25, 2019

Cheers @tombuildsstuff, my colleague just spotted that and I was going to close this issue, but you beat me to it! Thanks for the fast response.

ghost commented Mar 29, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉, please reach out to my human friends 👉 [email protected]. Thanks!

ghost locked and limited conversation to collaborators Mar 29, 2020