google_bigtable_instance force replacement of development instance_type #5492
Comments
@cynful as long num_nodes is unset you don't get any error message about
@venkykuberan I did not re-assign it to PRODUCTION.
We've done this two different ways; we've tested without … We see different behavior with the 2.17 provider (in which case we hadn't changed anything on our end other than updating from Terraform 0.12.19 to 0.12.20, and things had been stable for a while, so maybe an API change on Google's end?). With 2.17, it wanted to change the instance from development to production; with 2.20.1, even if we removed the state item and re-imported the instance (see the command sketch after the plan output below), it would want to change the number of nodes from 1 -> 0 (maybe related to 2cee193, according to @rileykarson), but then not actually be able to do this:
Creating a new instance (with 2.20.1):

resource "google_bigtable_instance" "development-instance" {
  name          = "tf-instance"
  instance_type = "DEVELOPMENT"

  cluster {
    cluster_id   = "tf-instance-cluster"
    zone         = "us-central1-b"
    storage_type = "HDD"
  }
}

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# google_bigtable_instance.development-instance will be created
+ resource "google_bigtable_instance" "development-instance" {
+ cluster_id = (known after apply)
+ display_name = (known after apply)
+ id = (known after apply)
+ instance_type = "DEVELOPMENT"
+ name = "tf-instance"
+ num_nodes = (known after apply)
+ project = (known after apply)
+ storage_type = (known after apply)
+ zone = (known after apply)
+ cluster {
+ cluster_id = "tf-instance-cluster"
+ storage_type = "HDD"
+ zone = "us-central1-b"
}
}
Plan: 1 to add, 0 to change, 0 to destroy.

% tf state show google_bigtable_instance.development-instance
# google_bigtable_instance.development-instance:
resource "google_bigtable_instance" "development-instance" {
display_name = "tf-instance"
id = "tf-instance"
instance_type = "DEVELOPMENT"
name = "tf-instance"
project = "foo"
cluster {
cluster_id = "tf-instance-cluster"
num_nodes = 1
storage_type = "HDD"
zone = "us-central1-b"
}
}

However, the next plan shows:

# google_bigtable_instance.development-instance will be updated in-place
~ resource "google_bigtable_instance" "development-instance" {
display_name = "tf-instance"
id = "tf-instance"
instance_type = "DEVELOPMENT"
name = "tf-instance"
project = "foo"
~ cluster {
cluster_id = "tf-instance-cluster"
~ num_nodes = 1 -> 0
storage_type = "HDD"
zone = "us-central1-b"
}
}
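For reference, a rough sketch of the state-removal and re-import step described above. The resource address matches the config shown; the import ID format is an assumption and may need the fuller projects/PROJECT/instances/NAME form:

# remove the instance from state, then re-import it by name
% terraform state rm google_bigtable_instance.development-instance
% terraform import google_bigtable_instance.development-instance tf-instance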
Same here.
This was/is already a Development instance.
Also coming here for the same reason ... annoying regression.
This was indeed caused by an API change, not a provider change. I'm going to work on a fix on our side now so that Terraform still behaves correctly. In the meantime, if you're affected by this, please make sure you're upgraded to the latest version of the Google provider so that you can pick up the fix once it appears in a release.
Wow, thanks for the quick fix @danawillow! How long will it take for this merged fix to be available to us?
If all goes according to plan, it'll be released on Monday as part of 3.7.0.
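For anyone waiting on that release, a minimal sketch of constraining the provider version once it ships. The constraint value is an assumption based on the 3.7.0 release mentioned above; adjust it if you need a 2.x backport instead (such as the 2.20.2 requested below):

provider "google" {
  # Assumed constraint: track the 3.x release line that contains the fix.
  # Terraform 0.12 still accepts the version argument inside the provider block.
  version = "~> 3.7"
}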
@danawillow it would be really, really appreciated if we could get this into 2.20.2 - we reported the issue originally, and won't be jumping to 3.x for a bit longer.
Thank you so so much @danawillow -- this was really killing us.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
Community Note

If an issue is assigned to the modular-magician user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to hashibot, a community member has claimed the issue already.

Terraform Version
Affected Resource(s)
Terraform Configuration Files
Debug Output
https://gist.github.com/cynful/442a4274bfe06a295302f7966311bfd1
Panic Output
Expected Behavior
There should be no changes in the plan. This should be a no-op.
Nothing in the resource has been changed between creation and re-plan.
Actual Behavior
The plan does not recognize the state of the 'DEVELOPMENT' instance's num_nodes (which on console creation is 1) when it is left unset via Terraform, as per the documentation.
However, validation forces an update of num_nodes to at least 3.
Steps to Reproduce
Create a google_bigtable_instance in DEVELOPMENT, leaving the number of nodes unset (a minimal config sketch follows below).
Run terraform apply.
The next plan will try to make a change after the apply, which should not happen.
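A minimal configuration matching these steps; it mirrors the config posted earlier in the thread, and the name, cluster ID, and zone are just placeholder values:

resource "google_bigtable_instance" "development-instance" {
  name          = "tf-instance"             # placeholder
  instance_type = "DEVELOPMENT"

  cluster {
    cluster_id   = "tf-instance-cluster"    # placeholder
    zone         = "us-central1-b"
    storage_type = "HDD"
    # num_nodes is intentionally left unset, per the steps above
  }
}

Running terraform apply and then terraform plan again is enough to see the unwanted num_nodes change described in Actual Behavior.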
Important Factoids
References
GoogleCloudPlatform/magic-modules#679