azurerm_kubernetes_cluster: changing number of nodes in agent_pool_profile replaces cluster instead of scaling node pool #4819

Closed
antonmatsiuk opened this issue Nov 6, 2019 · 2 comments

Comments

antonmatsiuk commented Nov 6, 2019

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform (and AzureRM Provider) Version

Terraform v0.12.4

  • provider.azurerm v1.36.0
  • provider.template v2.1.2

Affected Resource(s)

  • azurerm_kubernetes_cluster

Terraform Configuration Files

resource "azurerm_kubernetes_cluster" "k8s" {
  name                = var.cluster_name
  location            = var.location
  resource_group_name = data.azurerm_resource_group.rg.name
  dns_prefix          = var.dns_prefix

  network_profile {
    network_plugin     = "azure"
    service_cidr       = "10.248.0.0/14"
    dns_service_ip     = "10.248.0.10"
    docker_bridge_cidr = "172.17.0.1/16"
    load_balancer_sku  = "standard"
  }

  linux_profile {
    admin_username = "kubeadmin"

    ssh_key {
      key_data = file(var.ssh_public_key)
    }
  }

  agent_pool_profile {
    name            = "staging"
    type            = "VirtualMachineScaleSets"
    count           = 3
    vm_size         = var.vm_node_size
    os_type         = "Linux"
    os_disk_size_gb = 30
    vnet_subnet_id  = data.azurerm_subnet.k8s_subnet.id
    max_pods        = 250
  }

  agent_pool_profile {
    name            = "prod"
    type            = "VirtualMachineScaleSets"
    count           = 2
    vm_size         = var.vm_node_size
    os_type         = "Linux"
    os_disk_size_gb = 30
    vnet_subnet_id  = data.azurerm_subnet.k8s_subnet.id
    max_pods        = 250
  }

  service_principal {
    client_id     = var.client_id
    client_secret = var.client_secret
  }

  tags = {
    environment = "test"
  }
}
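
On this provider version (1.36), a common interim workaround is to have Terraform ignore drift in the pool blocks and scale the pools outside Terraform (for example via the Azure portal or CLI). A minimal sketch, with the trade-off that intentional pool changes in the configuration are also ignored:

resource "azurerm_kubernetes_cluster" "k8s" {
  # ... same arguments as in the configuration above ...

  # Workaround sketch (assumption, not taken from this issue): ignore changes
  # to the pool blocks so a count edit no longer plans a cluster replacement.
  # Scaling then has to happen outside Terraform, and intentional changes to
  # the pool blocks in this file are ignored as well.
  lifecycle {
    ignore_changes = [agent_pool_profile]
  }
}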

Debug Output

azurerm_kubernetes_cluster.k8s must be replaced

Expected Behavior

When the count in an agent_pool_profile is changed, Terraform should scale the AKS node pool up or down instead of recreating the cluster.

Actual Behavior

Terraform destroys the cluster and creates a new one with the desired number of nodes.

Steps to Reproduce

  1. Change count in one of the agent_pool_profile blocks and run terraform apply
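
For reference, later provider releases (roughly 1.37+ / 2.x) split node pools out of the cluster resource, which is the shape that allows in-place scaling. A sketch only, reusing the variable names from the configuration above rather than the 1.36 schema in this issue:

resource "azurerm_kubernetes_cluster" "k8s" {
  name                = var.cluster_name
  location            = var.location
  resource_group_name = data.azurerm_resource_group.rg.name
  dns_prefix          = var.dns_prefix

  # The cluster keeps a single built-in pool; changing node_count here
  # scales that pool in place.
  default_node_pool {
    name           = "staging"
    type           = "VirtualMachineScaleSets"
    node_count     = 3
    vm_size        = var.vm_node_size
    vnet_subnet_id = data.azurerm_subnet.k8s_subnet.id
    max_pods       = 250
  }

  service_principal {
    client_id     = var.client_id
    client_secret = var.client_secret
  }
}

# Additional pools are separate resources, so node_count changes only
# update the pool and never replace the cluster.
resource "azurerm_kubernetes_cluster_node_pool" "prod" {
  name                  = "prod"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.k8s.id
  vm_size               = var.vm_node_size
  node_count            = 2
  os_disk_size_gb       = 30
  vnet_subnet_id        = data.azurerm_subnet.k8s_subnet.id
  max_pods              = 250
}
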
tombuildsstuff (Contributor) commented

Hi @antonmatsiuk,

Thanks for opening this issue :)

Taking a look through, this appears to be a duplicate of #3835. Rather than having multiple issues open tracking the same thing, I'm going to close this issue in favour of that one; would you mind subscribing to #3835 for updates?

Thanks!

ghost commented Mar 29, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

ghost locked and limited the conversation to collaborators on Mar 29, 2020