Wait until container cluster can be operated on. #2021
Conversation
Hi! I'm the modular magician, I work on Magic Modules. Pull request statuses: No diff detected in terraform-google-conversion. New Pull Requests: I built this PR into one or more new PRs on other repositories, and when those are closed, this PR will also be merged and closed.

Hi! I'm the modular magician, I work on Magic Modules. Pull request statuses: terraform-provider-google-beta already has an open PR. New Pull Requests: I didn't open any new pull requests because of this PR.

No new failures in Container.
Tracked submodules are build/terraform-beta, build/terraform-mapper, build/terraform, build/ansible, build/inspec.
Force-pushed the branch from 5d7f478 to 5d8733f.
    return err
}

if err := waitForContainerClusterReady(config, project, location, clusterName, d.Timeout(schema.TimeoutCreate)); err != nil {
Even though we are passing the create timeout here, when the calling operation is Read, the entire Read request will be killed at the read timeout, including this call, even though the individual call has a longer timeout. In practice this means Read will fail after 2 mins.
I think that's okay, but I'm not sure. Do you think we should do something about that?
nvm - I was in the wrong place in the code when reading this timeout. It's setting the correct timeout.
if err != nil {
    return handleNotFoundError(err, d, fmt.Sprintf("Container Cluster %q", d.Get("name").(string)))
}
if cluster.Status == "ERROR" || cluster.Status == "DEGRADED" {
I think this means that if a cluster gets into one of these states it will be impossible to fix it with Terraform, since plan/apply/destroy will all error on refresh.
Yep! This is sad and also effectively what we did before at https://github.com/GoogleCloudPlatform/magic-modules/pull/2021/files#diff-29c7fa35b303cd012accf212d201bd5dL1155 if the state the cluster was in was terminal.
cool, just ruminating out loud.
Release Note for Downstream PRs (will be copied)