
default_node_pool availability_zones changes in "azurerm_kubernetes_cluster" do not apply correctly; the cluster should be recreated #7780

Closed
piraces opened this issue Jul 16, 2020 · 3 comments · Fixed by #8814

Comments

@piraces

piraces commented Jul 16, 2020

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform (and AzureRM Provider) Version

Terraform v0.12.28
provider.azurerm v2.18.0

Affected Resource(s)

  • azurerm_kubernetes_cluster

Terraform Configuration Files

resource "azurerm_kubernetes_cluster" "aks" {
  name                    = local.aks_name
  resource_group_name     = azurerm_resource_group.core.name
  location                = azurerm_resource_group.core.location
  dns_prefix              = "${local.aks_name}-dns"
  kubernetes_version      = var.AKS_VERSION
  sku_tier                = "Free"
  private_cluster_enabled = var.AKS_AS_PRIVATE_CLUSTER

  addon_profile {
    aci_connector_linux {
      enabled = false
    }

    azure_policy {
      enabled = false
    }

    http_application_routing {
      enabled = false
    }

    kube_dashboard {
      enabled = false
    }

    oms_agent {
      enabled = false
    }
  }

  default_node_pool {
    name                 = "default"
    vm_size              = var.AKS_DEFAULT_NODE_POOL_VM_SIZE
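    # Note: this is the argument at issue; changing it on an existing
    # cluster is planned as an in-place update instead of a replacement.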
    availability_zones   = ["1", "2", "3"]
    orchestrator_version = var.AKS_VERSION

    enable_node_public_ip = false
    enable_auto_scaling   = false
    # Only if enable_auto_scaling = false
    node_count = var.AKS_DEFAULT_NODE_POOL_NODE_COUNT
    # Only if enable_auto_scaling = true
    # max_count           = value
    # min_count           = value
    # node_count          = value

    max_pods       = var.AKS_DEFAULT_NODE_POOL_NODE_MAX_PODS
    type           = "VirtualMachineScaleSets"
    vnet_subnet_id = data.azurerm_subnet.aks-subnet.id
  }

  identity {
    type = "SystemAssigned"
  }

  network_profile {
    dns_service_ip     = var.AKS_NETWORK_DNS_SERVICE_IP
    docker_bridge_cidr = "172.17.0.1/16"
    load_balancer_sku  = "Standard"
    network_plugin     = "azure"
    outbound_type      = "loadBalancer"
    service_cidr       = var.AKS_NETWORK_SERVICE_CIDR
  }

  role_based_access_control {
    enabled = true
    azure_active_directory {
      client_app_id     = var.AKS_RBAC_CLIENT_APP_ID
      server_app_id     = var.AKS_RBAC_SERVER_APP_ID
      server_app_secret = var.AKS_RBAC_SERVER_APP_SECRET
      tenant_id         = var.AZURE_TENANT_ID
    }
  }

  tags = local.common_tags
}

Debug Output

  # azurerm_kubernetes_cluster.aks will be updated in-place
  ~ resource "azurerm_kubernetes_cluster" "aks" {
        api_server_authorized_ip_ranges = []
        dns_prefix                      = "***"
        enable_pod_security_policy      = false
        id                              = "/subscriptions/***/resourcegroups/***/providers/Microsoft.ContainerService/managedClusters/***"
        kube_admin_config               = [
            {
                client_certificate     = "***"
                client_key             = "***"
                cluster_ca_certificate = "***"
                host                   = "https://***.privatelink.northeurope.azmk8s.io:443"
                password               = "***"
                username               = "***"
            },
        ]
        kube_admin_config_raw           = (sensitive value)
        kube_config                     = [
            {
                client_certificate     = ""
                client_key             = ""
                cluster_ca_certificate = "***"
                host                   = "https://***.privatelink.northeurope.azmk8s.io:443"
                password               = ""
                username               = "***"
            },
        ]
        kube_config_raw                 = (sensitive value)
        kubelet_identity                = [
            {
                client_id                 = "***"
                object_id                 = "***"
                user_assigned_identity_id = "/subscriptions/***/resourcegroups/***/providers/Microsoft.ManagedIdentity/userAssignedIdentities/***"
            },
        ]
        kubernetes_version              = "1.17.7"
        location                        = "northeurope"
        name                            = "***"
        node_resource_group             = "***"
        private_cluster_enabled         = true
        private_fqdn                    = "***.privatelink.northeurope.azmk8s.io"
        private_link_enabled            = true
        resource_group_name             = "***"
        sku_tier                        = "Free"
        tags                            = {
            "PROJECT"       = "***"
            "IT" = "***"
        }

        addon_profile {
            aci_connector_linux {
                enabled = false
            }

            azure_policy {
                enabled = false
            }

            http_application_routing {
                enabled = false
            }

            kube_dashboard {
                enabled = false
            }

            oms_agent {
                enabled            = false
                oms_agent_identity = []
            }
        }

    ~ default_node_pool {
          ~ availability_zones    = [
              + "1",
              + "2",
              + "3",
            ]
            enable_auto_scaling   = false
            enable_node_public_ip = false
            max_count             = 0
            max_pods              = 30
            min_count             = 0
            name                  = "default"
            node_count            = 2
            node_labels           = {}
            node_taints           = []
            orchestrator_version  = "1.17.7"
            os_disk_size_gb       = 128
            tags                  = {}
            type                  = "VirtualMachineScaleSets"
            vm_size               = "Standard_DS2_v2"
            vnet_subnet_id        = "/subscriptions/***/resourceGroups/***/providers/Microsoft.Network/virtualNetworks/***/subnets/***"
        }

        identity {
            principal_id = "***"
            tenant_id    = "***"
            type         = "SystemAssigned"
        }

        network_profile {
            dns_service_ip     = "192.168.0.10"
            docker_bridge_cidr = "172.17.0.1/16"
            load_balancer_sku  = "Standard"
            network_plugin     = "azure"
            outbound_type      = "loadBalancer"
            service_cidr       = "192.168.0.0/23"

            load_balancer_profile {
                effective_outbound_ips    = [
                    "/subscriptions/***/resourceGroups/***/providers/Microsoft.Network/publicIPAddresses/***",
                ]
                idle_timeout_in_minutes   = 0
                managed_outbound_ip_count = 1
                outbound_ip_address_ids   = []
                outbound_ip_prefix_ids    = []
                outbound_ports_allocated  = 0
            }
        }

        role_based_access_control {
            enabled = true

            azure_active_directory {
                admin_group_object_ids = []
                client_app_id          = "***"
                managed                = false
                server_app_id          = "***"
                server_app_secret      = (sensitive value)
                tenant_id              = "***"
            }
        }

        windows_profile {
            admin_username = "***"
        }
    }

Panic Output

No panic output.

Expected Behavior

I expect the cluster to be recreated, since availability zones cannot be changed without destroying the cluster. Terraform should warn about the recreation and then create a new cluster with the desired availability zones. Azure does not support changing the availability zones of an existing cluster.

See the limitations in Microsoft's Availability Zones for AKS documentation.

Actual Behavior

Terraform detects changes that it believes can be applied to the cluster in place, without recreating it (which is incorrect according to the docs).

Due to this incorrect change detection, Terraform attempts to apply an impossible in-place change, which results either in a timeout or in the changes being reported as applied while nothing actually changes.
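
For illustration, a minimal sketch of the kind of change that triggers the diff above, assuming the cluster was originally created without the availability_zones argument (a hypothetical before/after, not the exact history of this cluster):

resource "azurerm_kubernetes_cluster" "aks" {
  # ... name, location, dns_prefix, etc. as in the configuration above ...

  default_node_pool {
    name       = "default"
    vm_size    = "Standard_DS2_v2"
    node_count = 2

    # Initially absent. Adding this after the cluster exists is planned
    # as an in-place update ("~"), even though Azure cannot change the
    # zones of an existing cluster:
    availability_zones = ["1", "2", "3"]
  }
}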

Steps to Reproduce

  1. Run terraform apply with the configuration file attached above.
  2. Terraform detects the changes shown in the debug output above.
  3. The apply sometimes results in a timeout and sometimes reports the changes as applied (although nothing actually changes); the same changes are detected again on every subsequent terraform apply.

Important Factoids

No important factoids.

References

No references.

@njuCZ
Contributor

njuCZ commented Oct 9, 2020

@piraces thanks for pointing out this issue, I have submitted a PR to make this field ForceNew

@tombuildsstuff tombuildsstuff added this to the v2.36.0 milestone Nov 6, 2020
@ghost

ghost commented Nov 12, 2020

This has been released in version 2.36.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:

provider "azurerm" {
    version = "~> 2.36.0"
}
# ... other configuration ...
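
For reference, on Terraform 0.13 and later the same pin can be expressed through a required_providers block (a sketch using the hashicorp/azurerm source address):

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.36.0"
    }
  }
}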

@ghost

ghost commented Dec 7, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉, please reach out to my human friends 👉 [email protected]. Thanks!

@ghost ghost locked as resolved and limited conversation to collaborators Dec 7, 2020