
Fix destroy-time handling of outputs and local values #17241

Merged (9 commits) on Jan 31, 2018

Conversation

jbardin
Member

@jbardin jbardin commented Jan 30, 2018

Because destroy-time provisioners are not evaluated until the destroy operation, and outputs and locals are "dynamic" in the sense that they can be evaluated on demand, we always need to evaluate them, even during destroy.

Since the destroy-time local and output nodes are no longer a
"destroy" operation, the order of evaluation needs to be reversed. Take
the existing DestroyValueReferenceTransformer and change it to reverse
the outgoing edges, rather than the incoming edges. This makes it so that
any dependencies of a local or output node are destroyed after
evaluation.
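The reversal can be sketched on a toy graph (a hypothetical `Graph` type for illustration, not Terraform's actual `dag` package): each outgoing edge from a value node to one of its dependencies is flipped, so the dependency's destroy node waits on the value's evaluation.

```go
package main

import "fmt"

// Toy directed graph (hypothetical type, not Terraform's dag package) used to
// illustrate the edge reversal performed by DestroyValueReferenceTransformer.
type Graph struct {
	edges map[string]map[string]bool // from -> set of to
}

func NewGraph() *Graph { return &Graph{edges: map[string]map[string]bool{}} }

func (g *Graph) Connect(from, to string) {
	if g.edges[from] == nil {
		g.edges[from] = map[string]bool{}
	}
	g.edges[from][to] = true
}

func (g *Graph) HasEdge(from, to string) bool { return g.edges[from][to] }

// ReverseOutgoing flips every outgoing edge of node n, so anything n pointed
// at now waits on n instead.
func (g *Graph) ReverseOutgoing(n string) {
	for to := range g.edges[n] {
		delete(g.edges[n], to)
		g.Connect(to, n)
	}
}

func main() {
	g := NewGraph()
	// The local originally points at the resource it interpolates; after the
	// reversal, the resource's destroy node depends on the local instead.
	g.Connect("local.addr", "aws_instance.foo (destroy)")
	g.ReverseOutgoing("local.addr")
	fmt.Println(g.HasEdge("aws_instance.foo (destroy)", "local.addr")) // true
}
```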

Previously, outputs were removed during destroy from the same
"Applyable" node type that evaluates them. Now that we need to possibly
both evaluate and remove outputs during an apply, we add a new node -
NodeDestroyableOutput.

This new node is added to the graph by the DestroyOutputTransformer,
which makes the new destroy node depend on all descendants of the output
node. This ensures that the output remains in the state as long as
everything which may interpolate the output still exists.

Also, since outputs and local nodes are always evaluated, interpolation can fail
if they reference a resource from the configuration that isn't in the state. The new PruneUnusedValuesTransformer removes unused value nodes from the graph.
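A minimal sketch of the pruning idea (simplified, hypothetical types, not the real PruneUnusedValuesTransformer): value nodes with no remaining referrers in the graph are dropped, so they are never evaluated against a state that no longer contains the resources they interpolate.

```go
package main

import "fmt"

// Simplified sketch of the pruning idea (hypothetical types, not the real
// PruneUnusedValuesTransformer): drop any local/output node that nothing else
// in the graph references.
type node struct {
	name    string
	isValue bool // true for locals and outputs
}

// pruneUnusedValues keeps resource nodes and any value node with at least one
// referrer in the graph.
func pruneUnusedValues(nodes []node, referrers map[string]int) []node {
	var kept []node
	for _, n := range nodes {
		if n.isValue && referrers[n.name] == 0 {
			continue // unused value node: evaluating it could fail, so prune it
		}
		kept = append(kept, n)
	}
	return kept
}

func main() {
	nodes := []node{
		{"local.used", true},
		{"local.orphan", true}, // references a resource absent from state
		{"aws_instance.foo", false},
	}
	referrers := map[string]int{"local.used": 1}
	for _, n := range pruneUnusedValues(nodes, referrers) {
		fmt.Println(n.name)
	}
	// prints:
	// local.used
	// aws_instance.foo
}
```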

Throughout this PR, a number of unrelated tests failed due to the attempted interpolation of "id" attributes which weren't in the state. Add a failsafe in the interpolation function to fall back on the ID field when the attribute no longer exists. We should plan in the future to only have a single location for a resource ID.
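A rough sketch of the failsafe, using simplified state types for illustration (the actual change lives in the Interpolater's computeResourceVariable): the attributes map is consulted first to preserve previous behavior, with the canonical ID field as a fallback for "id".

```go
package main

import "fmt"

// Sketch of the interpolation failsafe, using simplified state types rather
// than the real Interpolater: look in the attributes map first, then fall
// back to the canonical ID field for "id".
type instanceState struct {
	ID         string
	Attributes map[string]string
}

func resolveField(field string, s *instanceState) (string, bool) {
	if v, ok := s.Attributes[field]; ok {
		return v, true // attribute map wins, preserving previous behavior
	}
	// Failsafe: the "id" attribute can be missing during destroy, but the
	// canonical ID field is still available.
	if field == "id" && s.ID != "" {
		return s.ID, true
	}
	return "", false
}

func main() {
	s := &instanceState{ID: "i-abc123", Attributes: map[string]string{}}
	v, ok := resolveField("id", s)
	fmt.Println(v, ok) // i-abc123 true
}
```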

Fixes #17158, #17181, #16907, #16420, #14450, probably others

Destroy-time provisioners require us to re-evaluate during destroy.

Rather than destroying local values, which doesn't do much since they
aren't persisted to state, we always evaluate them regardless of the
type of apply. Since the destroy-time local node is no longer a
"destroy" operation, the order of evaluation needs to be reversed. Take
the existing DestroyValueReferenceTransformer and change it to reverse
the outgoing edges, rather than the incoming edges. This makes it so that
any dependencies of a local or output node are destroyed after
evaluation.

Having locals evaluated during destroy failed one other test, but that
was the odd case where we need `id` to exist as an attribute as well as
a field.
Add a complex destroy provisioner testcase using locals, outputs and
variables.

Add that pesky "id" attribute to the instance states for interpolation.
Always evaluate outputs during destroy, just like we did for locals.
This breaks existing tests, which we will handle separately.

Don't reverse output/local node evaluation order during destroy, as they
are both being evaluated.
Now that outputs are always evaluated, we still need a way to remove
them from state when they are destroyed.

Previously, outputs were removed during destroy from the same
"Applyable" node type that evaluates them. Now that we need to possibly
both evaluate and remove outputs during an apply, we add a new node -
NodeDestroyableOutput.

This new node is added to the graph by the DestroyOutputTransformer,
which makes the new destroy node depend on all descendants of the output
node. This ensures that the output remains in the state as long as
everything which may interpolate the output still exists.
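The ordering can be sketched with a toy edge map (not Terraform's actual dag package): collect every node reachable downstream of the output, then make the destroy node depend on each of them, so the output is only removed from state once nothing can interpolate it.

```go
package main

import "fmt"

// Sketch of the ordering established by DestroyOutputTransformer (toy edge
// map, not Terraform's dag package): the destroy node must wait on the output
// node and everything reachable downstream of it.
func descendants(dependents map[string][]string, root string) []string {
	var out []string
	seen := map[string]bool{root: true}
	stack := []string{root}
	for len(stack) > 0 {
		n := stack[len(stack)-1]
		stack = stack[:len(stack)-1]
		for _, d := range dependents[n] {
			if !seen[d] {
				seen[d] = true
				out = append(out, d)
				stack = append(stack, d)
			}
		}
	}
	return out
}

func main() {
	// dependents[x] lists the nodes that reference x.
	dependents := map[string][]string{
		"output.ip":  {"module.app"},
		"module.app": {"output.app_url"},
	}
	for _, d := range descendants(dependents, "output.ip") {
		fmt.Printf("output.ip (destroy) depends on %s\n", d)
	}
}
```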
Using destroy provisioners again for edge cases during destroy.
Since outputs and local nodes are always evaluated, if they reference a
resource from the configuration that isn't in the state, the
interpolation could fail.

Prune any local or output values that have no references in the graph.
@jbardin force-pushed the jbardin/destroy-with-locals branch 2 times, most recently from 325b397 to 8492e43 on January 30, 2018 16:35
The id attribute can be missing during the destroy operation.
While the new destroy-time ordering of outputs and locals should prevent
resources from having their id attributes set to an empty string,
there's no reason to error out if we have the canonical ID field
available.

This still interrogates the attributes map first to retain any previous
behavior, but in the future we should settle on a single ID location.
@jbardin force-pushed the jbardin/destroy-with-locals branch from 8492e43 to ca4178b on January 30, 2018 20:46
Contributor

@apparentlymart apparentlymart left a comment


This is, thankfully, what I was expecting based on your internal notes to me over the last few days, so no scary surprises! 😀

I have a few thoughts/questions inline on some details here.

@@ -518,6 +518,16 @@ func (i *Interpolater) computeResourceVariable(
return &v, err
}

// special case for the "id" field which is usually also an attribute
if v.Field == "id" && r.Primary.ID != "" {
Contributor


Is this something that can really happen in practice, or was this more just a result of us tending to be lazy about fully-constructing state in tests?

Not objecting to doing it to be safe in the meantime, but just curious as to what tripped this up and prompted you to add the branch here.

(The new provider protocols and state format no longer have this special ID field, as you know, so an end to this odd special-case is not too far in the future, but of course we'll need to watch out during that change for situations like this where we're doing something special to keep the two aligned.)

Member Author


yeah, it's a destroy-time issue, and we do run into it quite often. I'm actually not sure what order of operations through helper/schema get us in this situation, but as you said it is hopefully going away so this would only be temporary. I added the check now because the extra local/output evaluations during destroy are much more likely to encounter the situation.

},
},
},
return &EvalLocal{
Contributor


This is much nicer!


// NodeDestroyableOutput represents an output that is "destroyable":
// its application will remove the output from the state.
type NodeDestroyableOutput struct {
Contributor


We have a special behavior for outputs under -target that ensures that they stay in the graph as long as any dependency is present. Do you think we need to do the same for destroy here too?

My initial instinct is yes, so that if I run terraform destroy -target=aws_instance.foo we'd still visit an output whose expression is "${aws_instance.foo.id}" and remove it, though it's tricky if the expression is instead "${aws_instance.foo.id} ${aws_instance.bar.id}".

In that multi-dependency case, I suppose maybe it's right that we'd remove it from state anyway, because the expression is no longer resolvable anyway (there would be an error if we tried to eval it). Not totally sure, though... 🤔

BTW, the TargetDownstream function implemented by NodeApplyableOutput is what triggers this special behavior, and so that's what we'd add here if we want to retain the destroy nodes too.

Member Author


I thought this was doing the right thing here, but now you have me questioning whether I understood the TargetDownstream code correctly. Let me run some tests to confirm here.

If we intend to remove all but the root module outputs from the state, it's kind of a moot point, since the targeted destroy is going to at least break the output temporarily until there's another apply to reify the missing resources for interpolation.

I think removing the root output is OK too. If a remote state is looking for it, it's probably better that it appear as missing and require fixing rather than using a known stale value.

Member Author


Descendants, Ancestors, Upstream, Downstream, (Up|Down|In|Out)Edges 🤕

You're right, this was missed in some cases by targeting. GraphNodeTargetDownstream is an easy fix, since the dependency set of the destroy node is just the output and its ancestors.

Similar to NodeApplyableOutput, NodeDestroyableOutputs also need to stay
in the graph if any ancestor nodes are targeted.

Use the same GraphNodeTargetDownstream method to keep them from being
pruned, since they are dependent on the output node and all its
descendants.
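The retention rule can be sketched as follows (hypothetical signature and types, not the real GraphNodeTargetDownstream interface): a destroy-output node asks to remain in a targeted graph whenever any node in its dependency set (the output and all of its descendants) is itself targeted.

```go
package main

import "fmt"

// Sketch of the TargetDownstream idea (hypothetical signature, not the real
// GraphNodeTargetDownstream interface): a destroy-output node stays in a
// targeted graph whenever any of its dependencies is itself targeted.
type destroyOutput struct {
	deps []string // the output node and all of its descendants
}

// TargetDownstream reports whether the node should be retained given the set
// of targeted node names.
func (d *destroyOutput) TargetDownstream(targeted map[string]bool) bool {
	for _, dep := range d.deps {
		if targeted[dep] {
			return true
		}
	}
	return false
}

func main() {
	n := &destroyOutput{deps: []string{"output.ip", "aws_instance.foo"}}
	fmt.Println(n.TargetDownstream(map[string]bool{"aws_instance.foo": true})) // true
	fmt.Println(n.TargetDownstream(map[string]bool{"aws_instance.bar": true})) // false
}
```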
Contributor

@apparentlymart apparentlymart left a comment


This is looking good to me.

One small nit: your latest commit message summary seems to say the opposite of what the commit actually does ("make sure attributes are removed"), which had me confused for a moment. 😀

@jbardin
Member Author

jbardin commented Jan 31, 2018

Well, the commit message tells you what the result is, rather than what it's doing! It's removing the output, by keeping the NodeDestroyableOutput in the graph ;)

@ghost

ghost commented Apr 5, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 5, 2020

Successfully merging this pull request may close these issues.

Wrong parsing of the destroy-time provisioner command during a targeted destroy