
Wait until container cluster can be operated on. #2021

Merged · 3 commits · Jul 9, 2019

Conversation

nat-henderson
Contributor

Release Note for Downstream PRs (will be copied)

`google_container_cluster` will now wait to act until the cluster can be operated on, respecting timeouts.

@modular-magician
Collaborator

Hi! I'm the modular magician, I work on Magic Modules.
This PR seems not to have generated downstream PRs before, as of b67adb9.

Pull request statuses

No diff detected in terraform-google-conversion.
No diff detected in Ansible.
No diff detected in Inspec.

New Pull Requests

I built this PR into one or more new PRs on other repositories, and when those are closed, this PR will also be merged and closed.
depends: hashicorp/terraform-provider-google-beta#927
depends: hashicorp/terraform-provider-google#3989

@modular-magician
Collaborator

Hi! I'm the modular magician, I work on Magic Modules.
I see that this PR has already had some downstream PRs generated. Any open downstreams are already updated to your most recent commit, 5d7f478.

Pull request statuses

terraform-provider-google-beta already has an open PR.
No diff detected in terraform-google-conversion.
terraform-provider-google already has an open PR.
No diff detected in Ansible.
No diff detected in Inspec.

New Pull Requests

I didn't open any new pull requests because of this PR.

@nat-henderson
Contributor Author

No new failures in Container.

@nat-henderson nat-henderson requested a review from rileykarson July 9, 2019 00:14
nat-henderson and others added 3 commits July 9, 2019 17:45
return err
}

if err := waitForContainerClusterReady(config, project, location, clusterName, d.Timeout(schema.TimeoutCreate)); err != nil {
Contributor

Even though we pass the create timeout here, when the calling operation is Read, the entire Read request will be killed at the read timeout, including this call, even though the individual call has a longer timeout. In practice this means Read will fail after 2 minutes.

Contributor Author

I think that's okay, but I'm not sure. Do you think we should do something about that?

Contributor

Never mind; I was reading this timeout from the wrong place in the code. It's setting the correct timeout.

if err != nil {
return handleNotFoundError(err, d, fmt.Sprintf("Container Cluster %q", d.Get("name").(string)))
}
if cluster.Status == "ERROR" || cluster.Status == "DEGRADED" {
Contributor

I think this means that if a cluster gets into one of these states, it will be impossible to fix it with Terraform, since plan/apply/destroy will all error on refresh.

Member

Yep! This is sad, but it's also effectively what we did before at https://github.com/GoogleCloudPlatform/magic-modules/pull/2021/files#diff-29c7fa35b303cd012accf212d201bd5dL1155 when the state the cluster was in was terminal.

Contributor

Cool, just ruminating out loud.
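The concern in this thread is that the status check runs during refresh, and every Terraform workflow refreshes first. A minimal sketch of that pattern follows; the function name `checkClusterHealth` and the hard-coded cluster name are hypothetical, chosen only to show why a terminal status surfaces as an error in plan, apply, and destroy alike.

```go
package main

import "fmt"

// checkClusterHealth mirrors the refresh-time check under discussion:
// if the cluster reports a terminal status, Read returns an error, and
// because plan, apply, and destroy all refresh state first, each of
// them fails until the cluster is fixed or removed outside Terraform.
func checkClusterHealth(status, statusMessage string) error {
	if status == "ERROR" || status == "DEGRADED" {
		// "my-cluster" stands in for the real resource name here.
		return fmt.Errorf("cluster %q has status %q with message %q",
			"my-cluster", status, statusMessage)
	}
	return nil
}

func main() {
	fmt.Println(checkClusterHealth("RUNNING", ""))
	fmt.Println(checkClusterHealth("DEGRADED", "node pool unhealthy"))
}
```

This is the trade-off the reviewers accept above: failing loudly on a terminal status matches the provider's previous behavior, at the cost of blocking all Terraform operations on a broken cluster.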
