GKE Cluster with some misconfigurations never reaches READY state but does not provide any error messages #601
Comments
It's worth noting that my cluster will reach the "up to date" state, but it is never ready.
So this eventually turned out to be due to a configuration conflict. I wanted to have public IPs, but I had a config section that contained some of the settings for a private cluster. As a result, the cluster could never reach a ready state. However, I would have expected a more detailed error message, or possibly some issues raised by Config Sync. The config bits I removed were the privateClusterConfig section.
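For illustration, the kind of block in question looks roughly like this on a KCC ContainerCluster; the resource name and values below are placeholders, not the actual config from this issue:

```yaml
# Sketch only: a privateClusterConfig block of this shape, combined with an
# intent to use public IPs, is the kind of conflict described above.
# The name and CIDR are placeholders.
apiVersion: container.cnrm.cloud.google.com/v1beta1
kind: ContainerCluster
metadata:
  name: example-cluster                 # placeholder name
spec:
  location: us-central1
  privateClusterConfig:
    enablePrivateNodes: true            # nodes get internal IPs only
    enablePrivateEndpoint: false
    masterIpv4CidrBlock: 172.16.0.0/28  # placeholder CIDR
```

Dropping that whole block leaves the cluster with public node IPs by default, which matches the intent described above.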
As far as I am concerned, the issue is resolved for me. There is a less urgent follow-up request to improve the error messaging for an issue like this. Let me know if you need more details.
So with that config removed, the cluster became ready with the default pool deleted?
I actually added the default pool back because I was trying to limit the issues associated with it. I was able to create it, but I left the default pool in place. I will likely remove that default pool and try again in a few days. Will report back.
If leaving the default pool up solves the problem, then the root cause probably still needs fixing in KCC.
The cluster also started without the default node pool.
Confirmed with the customer that this is no longer impacting them after removing privateClusterConfig.
Checklist
Bug Description
I am creating a GKE cluster more or less following the provided blueprint. I have the `cnrm.cloud.google.com/remove-default-node-pool` annotation set to "true". My cluster is created without any issues, including the default node pool. Then, after the cluster reaches a ready state, the default node pool is removed. Once the default node pool is removed, my custom node pool is stuck in a pending state and is never added to the cluster, as it is waiting for the cluster to be ready. The node pool and cluster configs are in the YAML snippets section below.
Additional Diagnostic Information
Kubernetes Cluster Version
Config Connector Version
Config Connector Mode
No result was returned from this call
Log Output
Steps to Reproduce
Steps to reproduce the issue
Create a cluster with the remove-default-node-pool annotation set to "true" and a custom node pool, then watch the cluster start up in the GCP Console.
YAML snippets
Nodepool config
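The original node pool snippet is not captured here; a minimal ContainerNodePool of the shape described in the bug report would look roughly like this (names and sizes are placeholders):

```yaml
# Sketch only: a custom node pool that references the cluster, and therefore
# waits for the ContainerCluster to become ready before it is created.
apiVersion: container.cnrm.cloud.google.com/v1beta1
kind: ContainerNodePool
metadata:
  name: example-nodepool          # placeholder name
spec:
  location: us-central1
  clusterRef:
    name: example-cluster         # must match the ContainerCluster's name
  nodeCount: 1
  nodeConfig:
    machineType: e2-standard-4    # placeholder machine type
```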
Cluster Config
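Likewise, the original cluster snippet is not captured; a minimal ContainerCluster carrying the annotation from the bug description would look roughly like this (the config that triggered the issue reportedly also contained a privateClusterConfig block like the one sketched in the comments above, which was later removed):

```yaml
# Sketch only: the annotation asks Config Connector to delete the default
# node pool once the cluster is created; all values are placeholders.
apiVersion: container.cnrm.cloud.google.com/v1beta1
kind: ContainerCluster
metadata:
  name: example-cluster           # placeholder name, referenced by the node pool above
  annotations:
    cnrm.cloud.google.com/remove-default-node-pool: "true"
spec:
  location: us-central1
  initialNodeCount: 1             # the default pool created here is deleted afterwards
```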