New Resource: azurerm_kubernetes_cluster_node_pool
#4899
Conversation
azurerm_kubernetes_cluster_node_pool
Co-authored-by: titilambert <[email protected]>
Co-authored-by: djsly <[email protected]>

This commit rebases @titilambert's original commits on top of #4898. In addition it makes a couple of changes:

- The resource has been renamed `azurerm_kubernetes_cluster_node_pool` to match the latest terminology used in the Portal
- The tests have been updated to use a Default Node Pool block in the AKS resource
- During creation we now check for the presence of the parent Cluster and then confirm that its Default Node Pool is a VirtualMachineScaleSet type (see the sketch after this commit message)
- Removes support for `orchestrator_version` temporarily - since this wants to be added to both the Default Node Pool and this resource at the same time/in the same PR, and wants some more specific tests (which'd be easier to review in a separate PR)
- Matches the name/schema for all fields with the `default_node_pool` block in the AKS resource

I've ended up running `git reset HEAD~N` here to work around the merge conflict caused by the large changes to the AKS resource in #4898, but this seemed like the most pragmatic way to ship these changes.
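For context, a minimal sketch of that prerequisite on the AKS resource - assuming the `default_node_pool` schema from #4898; the names, sizes, and credentials below are illustrative placeholders:

```hcl
resource "azurerm_kubernetes_cluster" "example" {
  name                = "example-aks"       # placeholder
  location            = "West Europe"
  resource_group_name = "example-resources" # assumes an existing resource group
  dns_prefix          = "exampleaks"

  # Additional node pools can only be attached when the default
  # pool is backed by a Virtual Machine Scale Set.
  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_DS2_v2"
    type       = "VirtualMachineScaleSets"
  }

  service_principal {
    client_id     = "00000000-0000-0000-0000-000000000000" # placeholder
    client_secret = "placeholder"
  }
}
```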
LGTM 👍
This has been released in version 1.37.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:

```hcl
provider "azurerm" {
  version = "~> 1.37.0"
}

# ... other configuration ...
```
Can someone please provide a code snippet using both azurerm_kubernetes_cluster and azurerm_kubernetes_cluster_node_pool, with a default node pool and an additional node pool? I'm trying to make the switch now, but unfortunately I'm not clear on the example in the Terraform documentation: https://www.terraform.io/docs/providers/azurerm/r/kubernetes_cluster_node_pool.html. It makes no reference to the additional node pool. Is it safe to assume that the parameters passed to azurerm_kubernetes_cluster_node_pool describe the non-default node pool? Thanks!
Yes - only the default node pool is defined within the `azurerm_kubernetes_cluster` resource; any additional node pools are managed through separate `azurerm_kubernetes_cluster_node_pool` resources.
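As a rough illustration, here's a minimal sketch combining both resources - the names, sizes, and service principal credentials are all placeholders, and the shape assumes the 1.37.x schema:

```hcl
resource "azurerm_resource_group" "example" {
  name     = "example-resources"
  location = "West Europe"
}

resource "azurerm_kubernetes_cluster" "example" {
  name                = "example-aks"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  dns_prefix          = "exampleaks"

  # The default node pool is the only pool defined on the cluster itself.
  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_DS2_v2"
    type       = "VirtualMachineScaleSets"
  }

  service_principal {
    client_id     = "00000000-0000-0000-0000-000000000000" # placeholder
    client_secret = "placeholder"
  }
}

# Every additional node pool is its own resource, linked via the cluster ID.
resource "azurerm_kubernetes_cluster_node_pool" "extra" {
  name                  = "extra"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.example.id
  vm_size               = "Standard_DS2_v2"
  node_count            = 2
}
```

So yes: everything passed to `azurerm_kubernetes_cluster_node_pool` describes a non-default pool.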
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
This PR re-opens @titilambert's original work from #4046, which I've rebased on top of #4898 - unfortunately, pushing a rebase caused GitHub to close the PR with no way of reopening it, hence opening this new one.
This introduces a new resource, `azurerm_kubernetes_cluster_node_pool`, which allows for managing Virtual Machine Scale Set Node Pools independently of the AKS Cluster. Fixes #4001
Dependent on #4898 - once that's merged, this'll want to be rebased on top of master, which should make the diff more relevant.
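Because the node pool is its own resource, it can also be attached to a cluster managed elsewhere. A hedged sketch using the `azurerm_kubernetes_cluster` data source - the cluster and resource group names here are hypothetical:

```hcl
data "azurerm_kubernetes_cluster" "existing" {
  name                = "existing-aks" # hypothetical cluster name
  resource_group_name = "existing-rg"  # hypothetical resource group
}

resource "azurerm_kubernetes_cluster_node_pool" "batch" {
  name                  = "batch"
  kubernetes_cluster_id = data.azurerm_kubernetes_cluster.existing.id
  vm_size               = "Standard_F4s_v2"
  node_count            = 3
}
```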