fix!: enable private nodes with specified pod ip range #1514
Conversation
@splichy:
@bharathkkb I have finally found what was wrong with the tests, so it's ready to be merged. BTW, is it expected that the integration tests don't send an email saying which part failed? I had to create a test project under my personal GCP account; hopefully it will not cost me hundreds of $.
Thanks for the PR @splichy
We usually don't do backports unless it is a critical issue. However, reading the API docs, it seems like even if this is not set, it should now default to the cluster config. Are you still able to reproduce the error? https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/projects.locations.clusters.nodePools#nodenetworkconfig
I'm still able to reproduce the error.
It's probably a bug in the Google API, which does not derive the value from cluster.privateClusterConfig.enablePrivateNodes when NodePool.NodeNetworkConfig.podRange is defined and different from the cluster default. There are also other bugs with https://cloud.google.com/kubernetes-engine/docs/how-to/multi-pod-cidr - for example, it states that you can use a subnet smaller than /24 for a node pool override. In practice you can use e.g. /25 as the cluster default, but as soon as you try anything smaller than /24 for a node pool override, you get an error.
@splichy @bharathkkb I'm facing the exact same issue that you mentioned above. How do I fix it?
@sivadeepN you have to set enable_private_nodes on both the cluster and the node_pool (see the sketch below). If you are instead talking about the inability to use a subnet smaller than /24 for a node pool override, there is no solution for that yet - I tried to work it out with GCP support, spent a few days mailing with them, and then gave up. Anyway, you can use a smaller subnet cluster-wide and then add /24 node pool overrides.
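For illustration, here is a minimal sketch of this at the raw provider level (not this module). It assumes google-beta 4.45.0 or newer, where google_container_node_pool supports network_config.enable_private_nodes; all names and CIDR ranges are placeholders:

```hcl
# Sketch only: placeholder names and ranges, assumes google-beta >= 4.45.0.
resource "google_container_cluster" "example" {
  provider   = google-beta
  name       = "example-private"
  location   = "europe-west1"
  network    = "example-vpc"
  subnetwork = "example-subnet"

  ip_allocation_policy {
    cluster_secondary_range_name  = "pods-default" # cluster-wide pod range, may be smaller than /24
    services_secondary_range_name = "services"
  }

  private_cluster_config {
    enable_private_nodes    = true # cluster-level switch
    enable_private_endpoint = false
    master_ipv4_cidr_block  = "172.16.0.0/28"
  }

  remove_default_node_pool = true
  initial_node_count       = 1
}

resource "google_container_node_pool" "override" {
  provider   = google-beta
  name       = "pool-with-pod-range"
  cluster    = google_container_cluster.example.id
  node_count = 1

  network_config {
    pod_range            = "pods-override" # existing /24 secondary range for this pool
    enable_private_nodes = true            # must also be set here when pod_range is used
  }
}
```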
@splichy thank you for your work. When I set it, I get: Error: error creating NodePool: googleapi: Error 400: EnablePrivateNodes must be enabled for private clusters with valid masterIpv4Cidr., badRequest. What I am trying to achieve is a private cluster with mixed node pools (public and private node pools), but it does not seem to work, although according to the documentation it should: https://cloud.google.com/blog/products/containers-kubernetes/understanding-gkes-new-control-plane-connectivity#:~:text=Allow%20toggling%20and%20mixed%2Dmode%20clusters%20with%20public%20and%20private%20node%20pools
I'm able to get around this, and create public node pools with a different pod IP range, when I pin the google-beta version:

```hcl
terraform {
  required_version = ">=0.13"

  provider_meta "google-beta" {
    module_name = "blueprints/terraform/terraform-google-kubernetes-engine:safer-cluster-update-variant/v16.0.1"
  }

  required_providers {
    google-beta = "~> 4.44.1"
  }
}
```

This issue was introduced in
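Presumably this pin works because the node-pool-level enable_private_nodes attribute, and the behavior change that comes with it, only exists from google provider 4.45.0 onward (per the fix description below), so a ~> 4.44.1 constraint keeps the older behavior.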
Fixes #1493
The enable_private_nodes attribute was introduced in google provider 4.45.0, and the Google API now requires enable_private_nodes to be set on both the cluster and the node_pool when pod_ip_range is specified at the node_pool level.
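As a rough illustration of how this module-level fix is meant to be consumed, here is a hedged sketch. The module path, variable names, and ranges below are assumptions based on this repository's private-cluster submodules, not taken from the PR diff:

```hcl
# Sketch only: module path, variable names, and ranges are assumptions.
module "gke" {
  source = "terraform-google-modules/kubernetes-engine/google//modules/beta-private-cluster"

  project_id        = "my-project"
  name              = "example-private"
  region            = "europe-west1"
  network           = "example-vpc"
  subnetwork        = "example-subnet"
  ip_range_pods     = "pods-default" # cluster-wide secondary range
  ip_range_services = "services"

  enable_private_nodes   = true # cluster-level private nodes
  master_ipv4_cidr_block = "172.16.0.0/28"

  node_pools = [
    {
      name      = "pool-with-pod-range"
      pod_range = "pods-override" # node-pool-specific /24 secondary range
    },
  ]
}
```

With the fix, the cluster-level enable_private_nodes setting should also be applied to node pools that specify their own pod range, so the Error 400 described above no longer occurs.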