Breakage in 3.12.0 for azurerm_kubernetes_cluster and azurerm_kubernetes_cluster_node_pool #17518

Closed
hillen opened this issue Jul 6, 2022 · 9 comments · Fixed by #18130

Comments

@hillen
Contributor

hillen commented Jul 6, 2022

Is there an existing issue for this?

  • I have searched the existing issues

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform Version

1.2.4

AzureRM Provider Version

3.12.0

Affected Resource(s)/Data Source(s)

azurerm_kubernetes_cluster, azurerm_kubernetes_cluster_node_pool

Terraform Configuration Files

resource "azurerm_kubernetes_cluster_node_pool" "general1" {
  name                  = "general1"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.aks.id
  enable_auto_scaling   = true
  enable_node_public_ip = false
  max_count             = 100
  max_pods              = 30
  min_count             = 3
  node_count            = 3
  node_taints           = []
  os_disk_size_gb       = 128
  vm_size               = "Standard_D4s_v4"
  vnet_subnet_id        = var.aks_vnet_subnet_id
  orchestrator_version  = "1.23.5"

  lifecycle {
    ignore_changes = [node_count]
  }
}

Debug Output/Panic Output

The plan shows the addition of the orchestrator_version attribute, even though it is already set.

Expected Behaviour

The plan should have shown no changes.

Actual Behaviour

The plan shows azurerm_kubernetes_cluster and all azurerm_kubernetes_cluster_node_pools adding the orchestrator_version attribute. This attribute has already been set on the resource: I see it in the state file, it is in the GET of the resource from Azure, and it is in the Terraform code. It should not need to be added.

I believe this is breakage from #17084

Pinning the azurerm provider to 3.11.0 fixes the issue.
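For anyone needing the workaround, a minimal sketch of pinning the provider (the hashicorp/azurerm source address is the standard one; adjust to your own configuration):

terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
      # Pin to 3.11.0 until the spurious orchestrator_version diff is fixed
      version = "3.11.0"
    }
  }
}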

Steps to Reproduce

1. Apply an azurerm_kubernetes_cluster with orchestrator_version set to 1.23.5 using azurerm provider version 3.11.0 (a minimal sketch of such a cluster follows this list).
2. Update the azurerm provider to 3.12.0.
3. Run a plan; it will show changes to the resource adding the orchestrator_version.
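As referenced in step 1, a minimal sketch of such a cluster. The name, location, resource group, and node pool sizing are placeholders; only the version values come from the report:

resource "azurerm_kubernetes_cluster" "aks" {
  name                = "example-aks"   # placeholder
  location            = "westeurope"    # placeholder
  resource_group_name = "example-rg"    # placeholder
  dns_prefix          = "example-aks"
  kubernetes_version  = "1.23.5"

  default_node_pool {
    name                 = "default"
    vm_size              = "Standard_D4s_v4"
    node_count           = 3
    orchestrator_version = "1.23.5"
  }

  identity {
    type = "SystemAssigned"
  }
}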

Important Factoids

No response

References

No response

hillen added the bug label Jul 6, 2022
github-actions bot removed the bug label Jul 6, 2022
@zerodayyy

I'm experiencing this issue as well, with the following plan:

Terraform will perform the following actions:

  # module.cluster["redacted"].azurerm_kubernetes_cluster_node_pool.user["apps"] will be updated in-place
  ~ resource "azurerm_kubernetes_cluster_node_pool" "user" {
        id                     = "/subscriptions/redacted/resourceGroups/redacted/providers/Microsoft.ContainerService/managedClusters/redacted/agentPools/apps"
        name                   = "apps"
      + orchestrator_version   = "1.23.5"
        tags                   = {}
        # (25 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

When using Spot node pools, this leads to errors like the following:

│ Error: the Orchestrator Version cannot be updated when using a Spot Node Pool
│ 
│   with module.cluster["redacted"].azurerm_kubernetes_cluster_node_pool.user["apps"],
│   on cluster/main.tf line 135, in resource "azurerm_kubernetes_cluster_node_pool" "user":
│  135: resource "azurerm_kubernetes_cluster_node_pool" "user" {
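For context, a node pool becomes a Spot pool when its priority argument is set to "Spot". A minimal sketch of such a pool (the cluster reference and sizing are placeholders, not taken from the plan above):

resource "azurerm_kubernetes_cluster_node_pool" "apps" {
  name                  = "apps"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.aks.id  # placeholder reference
  vm_size               = "Standard_D4s_v4"
  node_count            = 3
  orchestrator_version  = "1.23.5"

  # Spot-specific arguments; updating orchestrator_version on a pool
  # declared like this triggers the error shown above.
  priority        = "Spot"
  eviction_policy = "Delete"
  spot_max_price  = -1  # cap at the current on-demand price
}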

@yriveiro

yriveiro commented Jul 7, 2022

Azure/AKS#2134 seems to have been released already (although the issue is not closed yet :/), so upgrading Spot node pools should be possible.

@Wiston999

This is an issue with the Terraform provider; I'm experiencing the same on a cluster that hasn't been modified for some time. Now Terraform wants to "upgrade" the orchestrator_version field, giving this weird error:

│ Error: 
│ The Kubernetes/Orchestrator Version "1.22.4" is not available for Node Pool "green".
│ 
│ Please confirm that this version is supported by the Kubernetes Cluster "pre-o2uk-uksouth01-aks"
│ (Resource Group "pre-o2uk-uksouth01-resource-group") - which may need to be upgraded first.
│ 
│ The Kubernetes Cluster is running version "1.22.4".

@uncycler

│ Error: the Orchestrator Version cannot be updated when using a Spot Node Pool

Just got this issue with azurerm 3.16.0. I never had an issue with this before, and I have done many updates on this cluster.

@zerodayyy

This is still an issue with the latest provider version (3.18.0)

@rinoabraham

rinoabraham commented Aug 24, 2022

We are also facing the same issue: we are not able to upgrade the Spot node pool with the current provider.

Error: the Orchestrator Version cannot be updated when using a Spot Node Pool

@weisdd
Contributor

weisdd commented Aug 25, 2022

@Wiston999 I think you're just running a deprecated version of AKS:

$ az aks get-versions --location eastus --output table
KubernetesVersion    Upgrades
-------------------  -----------------------
1.24.3               None available
1.24.0               1.24.3
1.23.8               1.24.0, 1.24.3
1.23.5               1.23.8, 1.24.0, 1.24.3
1.22.11              1.23.5, 1.23.8
1.22.6               1.22.11, 1.23.5, 1.23.8

It's worth trying to upgrade the control plane and then the node pool to a version available in your region.
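A rough sketch of that two-step upgrade in Terraform, with a target version picked from the table above purely as an example (only the changed arguments are shown; apply the cluster change before the node pool change):

# 1. Upgrade the control plane first
resource "azurerm_kubernetes_cluster" "aks" {
  # ... other arguments unchanged ...
  kubernetes_version = "1.22.11"
}

# 2. Then bring the node pool up to the same version
resource "azurerm_kubernetes_cluster_node_pool" "green" {
  # ... other arguments unchanged ...
  orchestrator_version = "1.22.11"
}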

@weisdd
Contributor

weisdd commented Aug 25, 2022

@hillen You can read my analysis here: #17833 (comment); I'll post further updates there.

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked as resolved and limited conversation to collaborators Nov 10, 2022