Google container node pool clobbers duplicated resource and destroys the wrong resource #9402
Comments
@lucasteligioridis this is an interesting use case. Because of the way the GKE & node pool resources are designed, it needs a series of calls in order to fix the problem caused by the initial apply in your case. You were just stopped in the middle of these steps. If you continue…
I think we can do a prefetch on the resource to make sure it doesn't exist during create, before tainting the resource. @edwardmedia https://github.com/GoogleCloudPlatform/magic-modules/pull/4904/files
@ScottSuarez good idea
Thanks for the quick turnaround, team ❤️ 👏🏼
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
Community Note
If an issue is assigned to the `modular-magician` user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to `hashibot`, a community member has claimed the issue already.
Terraform Version
Terraform v0.14.6
Affected Resource(s)
google_container_node_pool
Terraform Configuration Files
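A minimal sketch of the kind of configuration described in this issue; the resource labels, cluster reference, and node settings are assumptions, not the author's actual files. The key detail is that the newly added "temp" pool was first applied while its `name` still collided with the existing pool:

```hcl
# The GKE cluster is assumed to be defined elsewhere as
# google_container_cluster.primary.

# Existing node pool, already created and running.
resource "google_container_node_pool" "pool_1" {
  name       = "pool-1"
  cluster    = google_container_cluster.primary.name
  location   = google_container_cluster.primary.location
  node_count = 3
}

# Newly added "temp" pool; its name still collided with the existing
# pool on the first apply, which is what produced the 409 conflict.
resource "google_container_node_pool" "temp_pool_1" {
  name       = "pool-1"
  cluster    = google_container_cluster.primary.name
  location   = google_container_cluster.primary.location
  node_count = 3
}
```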
Expected Behavior
When creating the new "temp" node pools with a `terraform apply`, the expectation was that there would be a conflict on the name, since the original `pool_1` resource was already created and running. The apply did indeed fail with a conflict error (HTTP 409).
After this, I changed the name within the resource to `temp_pool_1`, so the resource would have looked like the sketch below. At that point, I would have expected the new resources to be created with those settings and to have been left at that.
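A hypothetical sketch of the renamed resource, continuing the assumed configuration above (the resource label and attribute values are illustrative, not taken from the issue):

```hcl
resource "google_container_node_pool" "temp_pool_1" {
  name       = "temp-pool-1" # renamed to avoid the 409 name conflict
  cluster    = google_container_cluster.primary.name
  location   = google_container_cluster.primary.location
  node_count = 3
}
```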
Actual Behavior
The actual behavior was that, after the initial 409 error and the subsequent name change, the next `terraform apply` showed a `diff` against my existing resource when I expected a brand-new resource to be created.

The `id` is the important aspect to look at here. It looks like state was saved during the `terraform apply` that failed with the 409 error, clobbering my existing resource. Terraform now thinks I have completely renamed that resource and will proceed to actually destroy `pool_1` and replace it with `temp_pool_1`, instead of creating the new resources alongside it (see the illustration below). Since I was in development, I went ahead with the apply, and that is exactly what happened.
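A hypothetical illustration of the mismatch described above, with invented addresses and ids (this is not the actual plan output): the failed 409 apply left the new resource's state entry pointing at the already-existing pool, so the later name change is planned as a replacement of that live pool rather than as a fresh create.

```hcl
# Hypothetical illustration only; addresses, names, and ids are invented.
#
# State after the failed 409 apply: the new resource address already holds
# the id of the original, live pool:
#
#   google_container_node_pool.temp_pool_1
#     id   = ".../clusters/primary/nodePools/pool-1"
#
# Configuration after the rename:
#
#   google_container_node_pool.temp_pool_1
#     name = "temp-pool-1"   # changing the name forces replacement
#
# Result: the plan destroys the live "pool-1" and creates "temp-pool-1"
# in its place, instead of adding a second pool alongside it.
```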
After that apply went through and did what I predicted, my original node pool, `pool_1`, was destroyed. Running `terraform apply` immediately afterwards now wants to create my `pool_1` resources again from scratch.

I'm sure there is a race condition, or state is being clobbered where it shouldn't be in this specific case. I'm just glad I caught this in development, and the diff clearly showed there was an issue anyway. But this could definitely trip someone up in production, or unintentionally cause the wrong node pool to be destroyed.
Looking forward to hearing back :)