Extend multiple_node_pools example to test AKS upgrades #501

Open · wants to merge 2 commits into main
113 changes: 113 additions & 0 deletions examples/multiple_node_pools/README.md
# Testing the upgrade scenario

You can use this example to manually test the upgrade scenario.

See existing AKS versions:

```
% az aks get-versions --location centralus
KubernetesVersion Upgrades
------------------- ------------------------
1.28.3 None available
1.28.0 1.28.3
1.27.7 1.28.0, 1.28.3
1.27.3 1.27.7, 1.28.0, 1.28.3
1.26.10 1.27.3, 1.27.7
1.26.6 1.26.10, 1.27.3, 1.27.7
1.25.15 1.26.6, 1.26.10
1.25.11 1.25.15, 1.26.6, 1.26.10
```

In this example we test an upgrade from 1.26.10 to 1.27.7.

## Create the AKS cluster at version 1.26.10

```
terraform init -upgrade
terraform apply -var="kubernetes_version=1.26.10" -var="orchestrator_version=1.26.10"
```

Verify the AKS cluster version:

```
az aks list -o table # check AKS version
az aks get-credentials --resource-group <rg> --name <name>
kubectl version # check api server version
kubectl get nodes # check nodes version
```

In the `az aks list` output, both `KubernetesVersion` and `CurrentKubernetesVersion` will be 1.26.10.
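
If you prefer a narrower view than the full table, here is a small query sketch (assuming the Azure CLI's JMESPath `--query` support; the property names are the camelCase fields returned by `az aks list`):

```
az aks list --query "[].{name:name, control_plane:currentKubernetesVersion, desired:kubernetesVersion}" -o table
```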

## Upgrade the AKS cluster control plane only to version 1.27.7

```
terraform apply -var="kubernetes_version=1.27.7" -var="orchestrator_version=1.26.10"
```

Check the new versions:

```
az aks list -o table # check AKS version
kubectl version # check api server version
kubectl get nodes # check nodes version
```

In the `az aks list` output, both `KubernetesVersion` and `CurrentKubernetesVersion` will be 1.27.7. The control plane is now at 1.27.7, while the nodes remain at 1.26.10 (as shown by `kubectl get nodes`).
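
To see the kubelet version of each node at a glance, a sketch using kubectl custom columns (the field paths come from the core Node object):

```
kubectl get nodes -o custom-columns=NAME:.metadata.name,KUBELET:.status.nodeInfo.kubeletVersion
```

At this stage the worker nodes should still report a v1.26.10 kubelet.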

## Upgrade the AKS cluster node pools to version 1.27.7

```
terraform apply -var="kubernetes_version=1.27.7" -var="orchestrator_version=1.27.7"
```

Check the new versions:

```
az aks list -o table # check AKS version
kubectl version # check api server version
kubectl get nodes # check nodes version
```

In the `az aks list` output, both `KubernetesVersion` and `CurrentKubernetesVersion` will be 1.27.7. The control plane and the nodes are now both at 1.27.7.

## Note on Issue #465

The current implementation does not allow upgrading `var.kubernetes_version` and `var.orchestrator_version` at the same time.

At this point we can test a simultaneous upgrade to 1.28.3:

```
terraform apply -var="kubernetes_version=1.28.3" -var="orchestrator_version=1.28.3"
```

This generates a plan in which the `azurerm_kubernetes_cluster` resource is updated in place, including its default (system) node pool:

```
# module.aks.azurerm_kubernetes_cluster.main will be updated in-place
~ resource "azurerm_kubernetes_cluster" "main" {
id = "/subscriptions/<redacted>/resourceGroups/4c273d71bc7898d6-rg/providers/Microsoft.ContainerService/managedClusters/prefix-4c273d71bc7898d6-aks"
name = "prefix-4c273d71bc7898d6-aks"
tags = {}
# (29 unchanged attributes hidden)

~ default_node_pool {
name = "nodepool"
~ orchestrator_version = "1.27.7" -> "1.28.3"
tags = {}
# (22 unchanged attributes hidden)
}

# (4 unchanged blocks hidden)
}
```

Applying this plan fails with the following error:

```
│ Error: updating Default Node Pool Agent Pool (Subscription: "<redacted>"
│ Resource Group Name: "4c273d71bc7898d6-rg"
│ Managed Cluster Name: "prefix-4c273d71bc7898d6-aks"
│ Agent Pool Name: "nodepool") performing CreateOrUpdate: agentpools.AgentPoolsClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: Code="NodePoolMcVersionIncompatible" Message="Node pool version 1.28.3 and control plane version 1.27.7 are incompatible. Minor version of node pool version 28 is bigger than control plane version 27. For more information, please check https://aka.ms/aks/UpgradeVersionRules"
```
25 changes: 14 additions & 11 deletions examples/multiple_node_pools/main.tf
@@ -34,22 +34,25 @@ resource "azurerm_subnet" "test" {
 locals {
   nodes = {
     for i in range(3) : "worker${i}" => {
-      name           = substr("worker${i}${random_id.prefix.hex}", 0, 8)
-      vm_size        = "Standard_D2s_v3"
-      node_count     = 1
-      vnet_subnet_id = azurerm_subnet.test.id
+      name                 = substr("worker${i}${random_id.prefix.hex}", 0, 8)
+      vm_size              = "Standard_D2s_v3"
+      node_count           = 1
+      vnet_subnet_id       = azurerm_subnet.test.id
+      orchestrator_version = var.orchestrator_version
     }
   }
 }

 module "aks" {
   source = "../.."

-  prefix              = "prefix-${random_id.prefix.hex}"
-  resource_group_name = local.resource_group.name
-  os_disk_size_gb     = 60
-  sku_tier            = "Standard"
-  rbac_aad            = false
-  vnet_subnet_id      = azurerm_subnet.test.id
-  node_pools          = local.nodes
+  prefix               = "prefix-${random_id.prefix.hex}"
+  resource_group_name  = local.resource_group.name
+  os_disk_size_gb      = 60
+  sku_tier             = "Standard"
+  rbac_aad             = false
+  vnet_subnet_id       = azurerm_subnet.test.id
+  node_pools           = local.nodes
+  kubernetes_version   = var.kubernetes_version
+  orchestrator_version = var.orchestrator_version
 }
10 changes: 10 additions & 0 deletions examples/multiple_node_pools/variables.tf
@@ -12,3 +12,13 @@ variable "resource_group_name" {
   type    = string
   default = null
 }
+
+variable "kubernetes_version" {
+  type    = string
+  default = null
+}
+
+variable "orchestrator_version" {
+  type    = string
+  default = null
+}
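
With `kubernetes_version` and `orchestrator_version` exposed as variables, the versions can also be supplied through a vars file instead of repeated `-var` flags; a minimal sketch, assuming a hypothetical `upgrade.tfvars` in the example directory:

```
cat > upgrade.tfvars <<'EOF'
# control plane version first, node pools still on the old version
kubernetes_version   = "1.27.7"
orchestrator_version = "1.26.10"
EOF
terraform apply -var-file="upgrade.tfvars"
```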