r/kubernetes_cluster: supporting conditional updates / introducing default_node_pool
#4898
Co-authored-by: titilambert <[email protected]>
Co-authored-by: djsly <[email protected]>

This commit rebases @titilambert's original commits on top of #4898. In addition it also makes a couple of changes:

- The resource has been renamed `azurerm_kubernetes_cluster_node_pool` to match the latest terminology used in the Portal
- The tests have been updated to use a Default Node Pool block in the AKS resource
- During creation we now check for the presence of the parent Cluster and then confirm that the Default Node Pool is a VirtualMachineScaleSet type
- Removes support for `orchestrator_version` temporarily - since this wants to be added to both the Default Node Pool and this resource at the same time/in the same PR, and wants some more specific tests (which'd be easier to review in a separate PR)
- Matches the name/schema for all fields with the `default_node_pool` block in the AKS resource

I've ended up `git reset HEAD~N` here to work around the merge conflict due to the large changes within the AKS resource in #4898, but this seemed like the most pragmatic way to ship these changes.
Force-pushed from 40224b6 to a153867
Also adding a note about using ignore_changes with auto-scaling
…thod since it's different Create&Update
…the top level object not
Also adding a new test to confirm removing a node pool
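The note about using `ignore_changes` with auto-scaling mentioned in the commits above could look roughly like the following - a minimal sketch, not the exact wording shipped in the docs; the resource name is illustrative and the field names come from the new `default_node_pool` block:

```hcl
resource "azurerm_kubernetes_cluster" "example" {
  # ... other configuration ...

  default_node_pool {
    name                = "default"
    vm_size             = "Standard_DS2_v2"
    enable_auto_scaling = true
    min_count           = 1
    max_count           = 3
  }

  lifecycle {
    # With the auto-scaler enabled, AKS changes the node count outside of
    # Terraform - ignoring it prevents spurious diffs on every plan.
    ignore_changes = [default_node_pool[0].node_count]
  }
}
```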
Force-pushed from a153867 to ae21a2b
default_node_pool
I'll need to defer to your implementation around certain bits of knowledge/logic, but I found quite a few places we should check to confirm that something isn't nil. Also, I called out a few TODOs that might need to be done before this PR is merged, but I'll defer that to you as well.
…d `availabilityZones`
name           = "default"
node_count     = 1
vm_size        = "Standard_DS2_v2"
vnet_subnet_id = azurerm_subnet.test.id
@tombuildsstuff – I was wondering, would it be worth surfacing the fact that the `vnet_subnet_id` on the `default_node_pool` is also the subnet that the entire cluster gets launched into?
az aks create -h | grep vnet-subnet -A1
--vnet-subnet-id : The ID of a subnet in an existing VNet into which to
deploy the cluster.
https://docs.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az-aks-create
FWIW we plan to support assigning a separate subnet per agentpool, but haven't implemented it quite yet. Hence today the subnet property on additional agentpools doesn't do much yet.
Oh interesting. I hadn't even considered that.
I agree with you, surfacing that info is helpful - I just want to make sure it's easily updated when that new functionality is available :)
@jluk just to ensure I've got this right - from testing, IIRC at this time all of the Subnet IDs need to be set to the same value for the meantime, rather than just being defined on the top-level resource?
Correct, today all of the subnet IDs need to be the same.
When creating a new agent pool, if the subnet ID is not provided, the default will be inferred from the primary node pool's subnet ID. So you can either input the same ID across all of them, or not provide a value so it is inferred; any other input will receive an error.
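The behaviour described above could be sketched as follows - a hypothetical example (resource names are illustrative), showing the two valid options for the new `azurerm_kubernetes_cluster_node_pool` resource at the time of this thread:

```hcl
resource "azurerm_kubernetes_cluster_node_pool" "extra" {
  name                  = "extra"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.example.id
  vm_size               = "Standard_DS2_v2"
  node_count            = 1

  # Either set this to the same subnet as the default node pool...
  vnet_subnet_id = azurerm_subnet.example.id
  # ...or omit vnet_subnet_id entirely and the API infers it from the
  # default node pool's subnet. Any other subnet ID is rejected today.
}
```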
@jluk thanks for confirming - will get that updated 👍
PR: #4946
Resolves this comment: #4898 (comment)
This has been released in version 1.37.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:

provider "azurerm" {
  version = "~> 1.37.0"
}

# ... other configuration ...
@tombuildsstuff Should the data source for `azurerm_kubernetes_cluster` have been changed to represent this over `agent_pool_profile` as well? Is there a data source for the individual node pools?

@maarek the Data Source intentionally still uses `agent_pool_profile`.

Looks like @maarek has not submitted an issue; I'll write one up now.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
This PR introduces support for Conditional Updates for the `azurerm_kubernetes_cluster` resource - allowing users to use the `ignore_changes` functionality on the `lifecycle` block.

Since the resource was introduced there's been a breaking change in the Azure Kubernetes Service API which requires that this resource behave differently - and there's another on the way: the `type` for AKS Nodes is changing from `AvailabilitySet` to `VirtualMachineScaleSets` - as tracked in Support planned AKS API default swap to VMSS + SLB (from VMAS + BLB) #4465.

After spending a ton of time trying to reconcile the different AKS PRs (#4676, #4543, #4472 and #4256) this PR supersedes them by:

- Introducing a `default_node_pool` block which supersedes the `agent_pool_profile` block (which can then be removed in 2.0)
- Deferring support for additional node pools to the `azurerm_kubernetes_cluster_node_pool` resource being tracked in Add kubernetes_cluster_agent_pool #4046
- Ensuring the `ignore_changes` block can be used on fields / configuration updates are applied prior to kubernetes updates

Whilst superseding PRs isn't ideal - introducing a replacement block with these changes allows this block to handle conditional updates from the start, rather than introducing a breaking behavioural change/waiting until 2.0.

Once this PR is merged, I'll rebase both #4046 (which adds support for the `azurerm_kubernetes_cluster_node_pool` resource) and #4472 (which supports using Outbound IPs with a Standard Load Balancer) on top of this PR so that all of these changes can be shipped in the same release; and then open a separate PR which updates the examples to use the new `default_node_pool` block.

Fixes #3428
Fixes #3549
Fixes #3835
Fixes #3971
Fixes #4075
Fixes #4465
Fixes #4560
Fixes #4830
Supersedes #4256
Supersedes #4543
Supersedes #4676

Waiting on Azure/azure-rest-api-specs#7746
Waiting on Azure/azure-sdk-for-go#6325 to be merged/released - at which point we'll upgrade the SDK in another PR which should unblock this
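A minimal sketch of the replacement block the PR description talks about - resource names, the `var.*` references and all values here are illustrative (the `service_principal` block was still required on this resource at the time):

```hcl
resource "azurerm_kubernetes_cluster" "example" {
  name                = "example-aks"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  dns_prefix          = "exampleaks"

  # Replaces the deprecated agent_pool_profile block; the schema matches
  # the new azurerm_kubernetes_cluster_node_pool resource.
  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_DS2_v2"
  }

  service_principal {
    client_id     = var.client_id
    client_secret = var.client_secret
  }
}
```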