Previously null `gcfs_config` forcing replacement with `false` #2085
Comments
Hi @JerkyTreats - Which version of the Google Terraform provider are you using? Context: looks like the […]
Ah, we are running provider 5.44. Google provider 6.2.0 should fix the issue. Closing this; thanks for linking to the upstream issue.
@apeabody - I updated to […]. So I […]
It appears that regardless of apply, plan continuously wants to update this. I'm not sure if this is on the hashicorp/terraform-provider-google side or the terraform-google-modules/terraform-google-kubernetes-engine side. Can you confirm whether this is reproducible on your end?
Hi @JerkyTreats - So the force replacement was fixed by the updated provider, but I suspect your remaining issue might be resolved by the (upcoming) #2093?
I see, yes, that probably fixes it. I had expected a workaround would be to add an explicit […]; #2093 should definitely fix it, though.
@apeabody I'm seeing new behavior with 33.0.2: with `enable_gcfs = false` (which I think is the correct way, based on the module docs, and what I already had working), I'm getting: […]
Maybe this is a result of #2093? If I remove that line temporarily, the plan again wants to do this: […]
Let me know if you want me to file a new issue or need more information.
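For reference, a minimal sketch of the kind of `node_pools` entry under discussion. The module source, version constraint, and all names and values here are assumptions for illustration, not taken from this thread:

```hcl
# Hypothetical module call with an explicit enable_gcfs = false.
# Every name/value below is an assumption for illustration only.
module "gke" {
  source  = "terraform-google-modules/kubernetes-engine/google"
  version = "~> 33.0"

  project_id = "my-project"
  name       = "example-cluster"
  region     = "us-central1"
  network    = "default"
  subnetwork = "default"

  node_pools = [
    {
      name = "default-pool"
      # Set explicitly, rather than omitting the key (which yields null):
      enable_gcfs = false
    },
  ]
}
```

The distinction that matters for this issue is between omitting `enable_gcfs` entirely (null) and setting it to `false` explicitly.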
@apeabody I think I have a potential fix incoming:

```diff
diff --git a/autogen/main/cluster.tf.tmpl b/autogen/main/cluster.tf.tmpl
index 901de66e..0f39c57c 100644
--- a/autogen/main/cluster.tf.tmpl
+++ b/autogen/main/cluster.tf.tmpl
@@ -516,7 +516,7 @@ resource "google_container_cluster" "primary" {
       min_cpu_platform            = lookup(var.node_pools[0], "min_cpu_platform", "")
       enable_confidential_storage = lookup(var.node_pools[0], "enable_confidential_storage", false)
       dynamic "gcfs_config" {
-        for_each = lookup(var.node_pools[0], "enable_gcfs", false) ? [true] : [false]
+        for_each = lookup(var.node_pools[0], "enable_gcfs", null) != null ? [var.node_pools[0].enable_gcfs] : [false]
         content {
           enabled = gcfs_config.value
         }
```
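To spell out what the proposed expression does (my reading of the patch, not stated explicitly in the thread), the three possible states of `enable_gcfs` resolve as follows:

```hcl
# Illustrative resolution of the proposed for_each expression:
#   enable_gcfs absent or null -> lookup(...) == null -> for_each = [false] -> enabled = false
#   enable_gcfs = false        -> lookup(...) != null -> for_each = [false] -> enabled = false
#   enable_gcfs = true         -> lookup(...) != null -> for_each = [true]  -> enabled = true
dynamic "gcfs_config" {
  for_each = lookup(var.node_pools[0], "enable_gcfs", null) != null ? [var.node_pools[0].enable_gcfs] : [false]
  content {
    enabled = gcfs_config.value
  }
}
```

Note that when the key is absent or null this still emits a `gcfs_config` block with `enabled = false` rather than omitting the block entirely, which matches the null-case caveat discussed in the following comments.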
Thanks @wyardley - I just opened this PR (#2095), but happy to use your PR. We should also add […]
Yeah, mine would have done the wrong thing in the null case, I think. Yours looks good, and seems to fix the problem in a quick test. |
FWIW, I'm still getting a permadiff on this after updating to […]. I'd already had […]. I did see some potential problems with GoogleCloudPlatform/magic-modules#11553 in the case where the […]
Thanks @wyardley - When you get a chance, open a new issue (and tag me) specific to the current permadiff, in particular with a snip of your config that reproduces it and the proposed plan. Cheers!
TL;DR
Where previously we had no `gcfs_config` (where I assume it was null), we now have Terraform plans forcing node_pool replacement on `gcfs_config` enabled=false. Docs mention this is optional, so null should equal false?
Image streaming for this node_pool is confirmed Disabled in the Google console.
I'm assuming this is some change with 33.0.0?
Expected behavior
No destructive node pool replacement where `gcfs_config` was previously null and is now explicitly false.
Observed behavior
Destructive node pool replacement.
Terraform Configuration
Terraform Version
Additional information
No response