
google_bigtable_instance force replacement of development instance_type #5492

Closed
cynful opened this issue Jan 24, 2020 · 13 comments · Fixed by GoogleCloudPlatform/magic-modules#3057, #5557 or hashicorp/terraform-provider-google-beta#1704

@cynful
Contributor

cynful commented Jan 24, 2020

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
  • Please do not leave +1 or me too comments, they generate extra noise for issue followers and do not help prioritize the request.
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment.
  • If an issue is assigned to the modular-magician user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to hashibot, a community member has claimed the issue already.

Terraform Version

terraform -v
Terraform v0.12.20
+ provider.google v2.17.0
+ provider.google-beta v2.14.0

Affected Resource(s)

  • google_bigtable_instance

Terraform Configuration Files

resource "google_bigtable_instance" "development-instance" {
  name          = "tf-instance"
  instance_type = "DEVELOPMENT"

  cluster {
    cluster_id   = "tf-instance-cluster"
    zone         = "us-central1-b"
    storage_type = "HDD"
  }
}

Debug Output

https://gist.github.com/cynful/442a4274bfe06a295302f7966311bfd1

Panic Output

Expected Behavior

There should be no changes in the plan; this should be a no-op.
Nothing in the resource has changed between creation and the re-plan.

Actual Behavior

The plan does not recognize the stored num_nodes of the DEVELOPMENT instance (which is 1 when the instance is created via the console) and which is left unset in Terraform, as per the documentation.
However, validation then forces an update of num_nodes to at least 3.

Steps to Reproduce

Create a google_bigtable_instance with instance_type = "DEVELOPMENT" and leave the number of nodes unset:

  1. terraform apply
  2. terraform plan
    The plan will propose changes immediately after the apply, which should not happen.

Important Factoids

References

GoogleCloudPlatform/magic-modules#679

@ghost ghost added the bug label Jan 24, 2020
@venkykuberan venkykuberan self-assigned this Jan 24, 2020
@venkykuberan
Contributor

@cynful as long as num_nodes is unset, you don't get any error message about num_nodes.
Can you please confirm that when you change the instance_type from PRODUCTION to DEVELOPMENT, you also remove the num_nodes entry from the config?
The config below created a DEVELOPMENT cluster successfully; for contrast, a sketch of a PRODUCTION config follows it.

resource "google_bigtable_instance" "development-instance" {
  name          = "tf-instance"
  instance_type = "DEVELOPMENT"

  cluster {
    cluster_id   = "tf-instance-cluster"
    zone         = "us-central1-b"
    storage_type = "HDD"
  }
}
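
For contrast, this is roughly what a config looks like before such a switch, while the instance is still PRODUCTION and num_nodes is set explicitly. This is a minimal sketch; the names and node count here are illustrative and not taken from this issue.

resource "google_bigtable_instance" "production-instance" {
  name          = "tf-instance"
  instance_type = "PRODUCTION"

  cluster {
    cluster_id   = "tf-instance-cluster"
    zone         = "us-central1-b"
    storage_type = "HDD"
    num_nodes    = 3 # PRODUCTION clusters require at least 3 nodes
  }
}

When changing such a config to DEVELOPMENT, the num_nodes line in the cluster block is the entry that needs to be removed.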

@cynful
Contributor Author

cynful commented Jan 25, 2020

@venkykuberan I did not re-assign it to PRODUCTION.
I created a DEVELOPMENT instance, and I would like to keep it as is.
However, when you run terraform plan against the same state, you get a plan that suggests you're changing instance types and that the number of nodes needs to be increased.

@ghost ghost removed the waiting-response label Jan 25, 2020
@wyardley

We've done this two different ways; we've tested without num_nodes set.

We see different behavior between provider versions. With 2.17 (where we hadn't changed anything on our end other than updating from Terraform 0.12.19 to 0.12.20, and things had been stable for a while, so maybe an API change on Google's end?), it wanted to change the instance from DEVELOPMENT to PRODUCTION. With 2.20.1, even after we removed the state item and re-imported the instance, it would want to change the number of nodes from 1 -> 0 (maybe related to 2cee193, according to @rileykarson), but then not actually be able to do it:

2020/01/24 11:04:51 [DEBUG] module.bigtable-instance-foo.google_bigtable_instance.this: applying the planned Update change
2020/01/24 11:04:57 [DEBUG] module.bigtable-instance-foo.google_bigtable_instance.this: apply errored, but we're indicating that via the Error pointer rather than returning it: Error updating cluster search for instance foo
2020/01/24 11:04:57 [ERROR] module.bigtable-instance-foo: eval: *terraform.EvalApplyPost, err: Error updating cluster search for instance foo
2020/01/24 11:04:57 [ERROR] module.bigtable-instance-foo: eval: *terraform.EvalSequence, err: Error updating cluster search for instance foo
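
For reference, the state removal and re-import mentioned above would look roughly like this. This is a sketch only: the module address is taken from the log above, the project ID is a placeholder, and the exact import ID formats accepted by the provider version in use should be checked against its documentation.

terraform state rm module.bigtable-instance-foo.google_bigtable_instance.this
terraform import module.bigtable-instance-foo.google_bigtable_instance.this my-project/foo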

@wyardley

creating a new instance (with 2.20.1):

resource "google_bigtable_instance" "development-instance" {
  name          = "tf-instance"
  instance_type = "DEVELOPMENT"

  cluster {
    cluster_id   = "tf-instance-cluster"
    zone         = "us-central1-b"
    storage_type = "HDD"
  }
}
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # google_bigtable_instance.development-instance will be created
  + resource "google_bigtable_instance" "development-instance" {
      + cluster_id    = (known after apply)
      + display_name  = (known after apply)
      + id            = (known after apply)
      + instance_type = "DEVELOPMENT"
      + name          = "tf-instance"
      + num_nodes     = (known after apply)
      + project       = (known after apply)
      + storage_type  = (known after apply)
      + zone          = (known after apply)

      + cluster {
          + cluster_id   = "tf-instance-cluster"
          + storage_type = "HDD"
          + zone         = "us-central1-b"
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.
% tf state show google_bigtable_instance.development-instance
# google_bigtable_instance.development-instance:
resource "google_bigtable_instance" "development-instance" {
    display_name  = "tf-instance"
    id            = "tf-instance"
    instance_type = "DEVELOPMENT"
    name          = "tf-instance"
    project       = "foo"

    cluster {
        cluster_id   = "tf-instance-cluster"
        num_nodes    = 1
        storage_type = "HDD"
        zone         = "us-central1-b"
    }
}

however, the next plan shows:

  # google_bigtable_instance.development-instance will be updated in-place
  ~ resource "google_bigtable_instance" "development-instance" {
        display_name  = "tf-instance"
        id            = "tf-instance"
        instance_type = "DEVELOPMENT"
        name          = "tf-instance"
        project       = "foo"

      ~ cluster {
            cluster_id   = "tf-instance-cluster"
          ~ num_nodes    = 1 -> 0
            storage_type = "HDD"
            zone         = "us-central1-b"
        }
    }
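
Not something proposed in this thread, but while waiting for a provider fix, one possible stopgap for the spurious num_nodes diff is to ignore changes to the cluster block entirely. This is a sketch under that assumption; it also hides real drift in cluster settings, so it should be removed once a fixed provider release is in use.

resource "google_bigtable_instance" "development-instance" {
  name          = "tf-instance"
  instance_type = "DEVELOPMENT"

  cluster {
    cluster_id   = "tf-instance-cluster"
    zone         = "us-central1-b"
    storage_type = "HDD"
  }

  # Assumption: ignoring the whole cluster attribute suppresses the 1 -> 0
  # num_nodes plan, at the cost of also ignoring any intended cluster changes.
  lifecycle {
    ignore_changes = [cluster]
  }
}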

@sonots

sonots commented Jan 28, 2020

same here

@ScubaDrew

ScubaDrew commented Jan 29, 2020

Terraform will perform the following actions:

  # google_bigtable_instance.hddbta-prd-zone will be updated in-place
  ~ resource "google_bigtable_instance" "hddbta-prd-zone" {
        display_name  = "hddbta-prd-zone"
        id            = "hddbta-prd-zone"
        instance_type = "DEVELOPMENT"
        name          = "hddbta-prd-zone"
        project       = "prd"

      ~ cluster {
            cluster_id   = "hddbta-cluster-west1b"
          ~ num_nodes    = 1 -> 0
            storage_type = "HDD"
            zone         = "us-west1-b"
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.
terraform version
Terraform v0.12.20
+ provider.aws v2.46.0
+ provider.google v2.20.1
+ provider.google-beta v2.20.1
+ provider.helm v0.10.4
+ provider.kubernetes v1.10.0
+ provider.local v1.4.0
+ provider.template v2.1.2

This was/is already a Development instance

@fproulx-dfuse

Also coming here for the same reason... an annoying regression.

@danawillow
Contributor

This was indeed caused by an API change, not a provider change. I'm going to work on a fix on our side now so that Terraform still behaves correctly. In the meantime, if you're affected by this, please make sure you've upgraded to the latest version of the Google provider so that you can pick up the fix once it appears in a release.

@ScubaDrew

Wow, thanks for the quick fix, @danawillow! How long will it take for the merged fix to be available to us?

@danawillow
Contributor

If all goes according to plan, it'll be released on Monday as part of 3.7.0.
Some more potential good news: the fix merged cleanly into the 2.X series, and since the resource is effectively unusable without the fix for affected DEVELOPMENT instances, we might do a 2.20.2 release with it as well.

@wyardley

@danawillow it would be really, really appreciated if we can get this into 2.20.2; we reported the issue originally, and won't be jumping to 3.x for a bit longer.

@ScubaDrew

Thank you so, so much, @danawillow; this was really killing us.

Version 2.20.2 Published 5 hours ago

@ghost

ghost commented Mar 28, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost ghost locked and limited conversation to collaborators Mar 28, 2020