
terraform error on destroy when evaluating output referring to uncreated resource #18356

Closed · Opened by mildred on Jun 29, 2018 · 2 comments · Fixed by #24083
Labels: bug, core, v0.11 (Issues (primarily bugs) reported against v0.11 releases)

Comments

mildred (Contributor) commented Jun 29, 2018

Terraform Version

v0.11.7

Terraform Configuration Files

variable "cluster_size" {
}

resource "aws_instance" "coreos" {
  count = "${var.cluster_size}"
  ...
  // for some reason this resource will fail to be created
  ...
  lifecycle {
    create_before_destroy = true
  }
}

// Don't read too much into the odd constructs below; they are not the point of the bug:

resource "random_pet" "cluster_size" {
  keepers = {
    token = "${var.cluster_size}"
  }
}
resource "random_pet" "memory" {
  keepers = {
    token = "${element(aws_instance.coreos.*.id, 0)}"
  }
  length = "${var.cluster_size}"
  prefix = "${random_pet.cluster_size.id}"
  lifecycle {
    ignore_changes = ["prefix", "length"]
  }
}

output "consul_bootstrap_expect" {
  value = "${(random_pet.memory.length == var.cluster_size && random_pet.memory.prefix == random_pet.cluster_size.id) ? "-bootstrap-expect=${random_pet.memory.length}" : ""}"
}

Output

terraform plan && terraform apply was executed, and the aws_instance failed to create (for some obscure AWS-side reason):

[vision-heat-redhold][f71065b2-9743-4331-9c9e-7bbb62491365] 2018/06/26 18:25:20 * aws_instance.coreos.1: Error waiting for instance (i-05c95dfbead0dc9f0) to become ready: Failed to reach target state. Reason: Server.InternalError: Internal error on launch

At that point the tfstate contains random_pet.cluster_size, does not contain random_pet.memory, and does not contain output.consul_bootstrap_expect.

Then we executed terraform refresh because of bug #16473 (terraform shows too many items in outputs when reducing the count of a create_before_destroy resource).

At that point the tfstate contains random_pet.cluster_size, does not contain random_pet.memory, yet contains output.consul_bootstrap_expect. That is illogical, because the output needs random_pet.memory to evaluate; one cannot exist without the other.

Then we ran terraform plan -destroy and terraform apply to destroy all resources. The apply failed with:

* output.consul_bootstrap_expect: Resource 'random_pet.memory' does not have attribute 'length' for variable 'random_pet.memory.length'

which is expected in the sense that random_pet.memory does not exist in the tfstate (but the output does).
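
For reference, a common v0.11 idiom for referencing a resource that may have no instances in state is to splat the attribute and pad the resulting list so that element() always has something to return. This is only a hedged sketch (the guarded output name is made up here, and whether it sidesteps this particular destroy-time error was not tested in this issue):

// Hypothetical guard: random_pet.memory.*.length is a list that may be
// empty; concat() pads it with "" so element() never hard-fails.
output "consul_bootstrap_expect_guarded" {
  value = "${element(concat(random_pet.memory.*.length, list("")), 0)}"
}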

Expected Behavior

Destruction should complete successfully.

Actual Behavior

Destruction failed because Terraform tried to evaluate an output that refers to a non-existent resource. This is perhaps a more problematic instance of #18026 (terraform destroy tries to evaluate outputs that can refer to non-existing resources).

Steps to Reproduce

Difficult to say, because the bug triggers only when the first terraform apply fails and stops before all resources are created. See the sketch below.
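
A minimal sketch of the trigger sequence (hypothetical: null_resource stands in for the failing aws_instance, and note that a failed provisioner leaves the resource tainted rather than absent from state, so this only approximates the reported condition):

// Hypothetical stand-in for the aws_instance that fails to create.
resource "null_resource" "flaky" {
  provisioner "local-exec" {
    command = "exit 1" // forces the first terraform apply to fail
  }
}

output "flaky_id" {
  value = "${null_resource.flaky.id}"
}

// Sequence from the report:
//   terraform apply          (fails before all resources exist)
//   terraform refresh        (writes the output into terraform.tfstate)
//   terraform plan -destroy
//   terraform apply          (errors while evaluating the output)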

Additional Context

All commands were run in automated processes.


voroniys commented

I believe the real issue is even bigger: it affects not only outputs. In my configuration I use locals blocks to work around other bugs in Terraform. All local variable assignments that use interpolations are reported as failed when the referenced resources do not exist.
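
For example, a sketch of that pattern (hypothetical, not taken from the commenter's configuration; it reuses the aws_instance.coreos resource from the issue above):

// Hypothetical local: if aws_instance.coreos was never created, v0.11
// reports the same evaluation failure for this assignment on destroy.
locals {
  first_instance_id = "${element(aws_instance.coreos.*.id, 0)}"
}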

@hashibot added the v0.11 label on Aug 29, 2019
ghost commented Apr 1, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

ghost locked and limited conversation to collaborators on Apr 1, 2020