
Invalid atlas_artifact destroy error #4988

Closed

bensojona opened this issue Feb 4, 2016 · 2 comments · Fixed by #6557

Comments

@bensojona (Contributor)

When running `terraform destroy` on GCE Terraform templates that also contain `atlas_artifact` resources, Terraform reports an error saying it couldn't destroy the artifact(s).

In reality, the artifacts appear to be destroyed successfully: the next destroy plan shows no changes to be made.

This doesn't appear to affect the AWS provider, just the Google Cloud provider.

Relevant output:

Running 0.6.11, but the same behavior was seen on 0.6.10 and 0.6.9.

@bensojona (Contributor, Author)

#4989 might be related

phinze added a commit that referenced this issue May 9, 2016
Wow this one was tricky!

This bug presents itself only when using planfiles, because when doing a
straight `terraform apply` the interpolations are left in place from the
Plan graph walk and paper over the issue. (This detail is what made it
so hard to reproduce initially.)
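
(Concretely, the planfile workflow here is something like `terraform plan -destroy -out=destroy.tfplan` followed by `terraform apply destroy.tfplan`, as opposed to a one-shot `terraform destroy`; the commands are illustrative of the 0.6-era CLI, not taken from the report.)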

Basically, graph nodes for module variables are visited during the apply
walk and attempt to interpolate. During a destroy walk, no attributes
are interpolated from resource nodes, so these interpolations fail.

This scenario is supposed to be handled by the `PruneNoopTransformer` -
in fact it's described as the example use case in the comment above it!

So the bug had to do with the actual behavior of the Noop transformer.
The resource nodes were not properly reporting themselves as Noops
during a destroy, so they were being left in the graph.

This in turn triggered the module variable nodes to see that they had
another node depending on them, so they also reported that they could
not be pruned.

Therefore we had two nodes in the graph that were effectively noops but
were being visited anyway. The module variable nodes were already graph
leaves, which is why this error presented itself as just stray messages
instead of actual failure to destroy.

Fixes #5440
Fixes #5708
Fixes #4988
Fixes #3268
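
The fix, then, is for resource nodes to report themselves correctly as noops during a destroy, so the pruning pass can drop them first and then the now-leaf module variable nodes. Below is a minimal Go sketch of that fixpoint pruning; the `node` type, `prune` function, and node names are illustrative assumptions, not Terraform's actual `PruneNoopTransformer` API:

```go
package main

import "fmt"

// node is a graph vertex that knows whether visiting it would do
// any real work during the current walk. Hypothetical type, not
// Terraform's real graph API.
type node struct {
	noop bool
}

// prune repeatedly removes nodes that are noops and have no live
// dependents, until the graph stabilizes. edges[a] lists the names
// of the nodes that a depends on.
func prune(nodes map[string]*node, edges map[string][]string) {
	for {
		// Count live dependents for every remaining node.
		dependents := map[string]int{}
		for from, tos := range edges {
			if _, ok := nodes[from]; !ok {
				continue // the depending node was already pruned
			}
			for _, to := range tos {
				dependents[to]++
			}
		}
		removed := false
		for name, n := range nodes {
			if n.noop && dependents[name] == 0 {
				delete(nodes, name) // a noop leaf: safe to drop
				removed = true
			}
		}
		if !removed {
			return // fixpoint reached
		}
	}
}

func main() {
	// During a destroy walk both nodes should be noops. With the bug,
	// the resource node reported noop=false, so it stayed in the graph,
	// the variable node kept a dependent, and neither was pruned.
	nodes := map[string]*node{
		"atlas_artifact.mod": {noop: true},
		"var.image":          {noop: true},
	}
	edges := map[string][]string{
		"atlas_artifact.mod": {"var.image"}, // resource depends on the variable
	}
	prune(nodes, edges)
	fmt.Println("remaining nodes:", len(nodes)) // prints 0
}
```

With both nodes reporting themselves as noops, the resource node is pruned on the first pass and the variable node, now free of dependents, on the second.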
cristicalin pushed a commit to cristicalin/terraform that referenced this issue May 24, 2016 (the same commit message as above, with the issue references prefixed hashicorp#).
@ghost commented Aug 16, 2019

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost locked and limited the conversation to collaborators Aug 16, 2019