Unclear when to use google_container_cluster or google_container_node_pool #475
I think we can probably resolve this by updating the documentation pages for both these resources to explain that node pools can be defined either inline through the node_pool field on google_container_cluster or as separate google_container_node_pool resources, and that the two approaches shouldn't be mixed.
@paddycarver Just to confirm my understanding: in order to manage node pools with google_container_node_pool, the cluster itself should be defined without node_pool blocks?

My reasoning is that by using google_container_node_pool, node pools can be added, changed, or removed without recreating the cluster. Are there other ways to manage updateable node pools that can be managed externally?
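For comparison, here is a minimal sketch (not from this thread; names, zone, and sizes are placeholders, and it assumes the inline node_pool block accepts the same arguments as the google_container_node_pool resource) of the other approach, where pools are declared inline inside google_container_cluster. Pools defined this way are owned by the cluster resource, so they can't also be managed by separate google_container_node_pool resources.

# Node pools declared inline: the cluster resource owns them, so pools are
# added, changed, or removed by editing this resource rather than a separate one.
resource "google_container_cluster" "inline_example" {
  name = "my-inline-cluster"   # placeholder name
  zone = "us-west1-a"          # placeholder zone

  node_pool {
    name       = "pool-1"
    node_count = 3

    node_config {
      machine_type = "n1-standard-1"
    }
  }
}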
I think I'm finally successful with defining the node pool directly on the cluster. This will create an extra node pool (I don't understand how to avoid the default one).

EDIT: not so sure anymore, I'm giving up on separate google_container_node_pool resources.

EDIT2: well, that prevents me from removing a node pool in the future without recreating the cluster.
I may be in the minority on this, but I do think that in production you should almost always be managing your cluster and your node pools separately, primarily because of @matti's second edit: any change to an inline node pool forces the entire cluster to be destroyed and recreated, so zero-downtime changes aren't possible. That does leave you with that pesky default node pool, though. Terraform is in a tough spot here; I think the fault really lies with GCP's inability to launch a cluster without any node pool (despite the fact that you can delete all of its node pools afterwards). Anyways, I posted it here too, but here's an example of using a null_resource with a local-exec provisioner to delete the default node pool once the cluster is up:
name = "my-cluster"
zone = "us-west1-a"
initial_node_count = 1
}
resource "google_container_node_pool" "pool" {
name = "my-cluster-nodes"
node_count = "3"
zone = "us-west1-a"
cluster = "${google_container_cluster.cluster.name}"
node_config {
machine_type = "n1-standard-1"
}
# Delete the default node pool before spinning this one up
depends_on = ["null_resource.default_cluster_deleter"]
}
resource "null_resource" "default_cluster_deleter" {
provisioner "local-exec" {
command = <<EOF
gcloud container node-pools \
--project my-project \
--quiet \
delete default-pool \
--cluster ${google_container_cluster.cluster.name}
EOF
}
} |
For anyone else who finds this issue, it looks like there is now a remove_default_node_pool argument on the google_container_cluster resource. The following config will create a cluster (with its default node pool removed) together with a separately managed node pool.
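A minimal sketch of this pattern, assuming remove_default_node_pool deletes the default pool right after cluster creation; the resource names, zone, and machine type are placeholders:

# Create the cluster with a throwaway default pool and have the provider
# delete that pool, so every real node pool lives in its own resource.
resource "google_container_cluster" "cluster" {
  name                     = "my-cluster"
  zone                     = "us-west1-a"
  initial_node_count       = 1
  remove_default_node_pool = true
}

resource "google_container_node_pool" "pool" {
  name       = "my-cluster-nodes"
  zone       = "us-west1-a"
  cluster    = "${google_container_cluster.cluster.name}"
  node_count = 3

  node_config {
    machine_type = "n1-standard-1"
  }
}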
Updating node pool properties and adding/deleting node pools in the cluster seems to behave as expected. I think this issue is probably still valid, as it's not really clear from the docs whether this is the preferred method for managing node pools or not.
According to the docs, GKE chooses the master VM’s size based on the initial number of nodes, so if you’re going to have a large cluster, you may want that initial number to be bigger than 1, even though you’re going to delete it!
@michaelbannister This only seems to apply when using the
The google_container_cluster resource has a node_pool field that can be used to define the node pools of the cluster. But there's also a google_container_node_pool resource that can also define node pools in a cluster. There's no guidance on when/how to use these, whether they should be used together, or why they're separated in the first place.