Terraform destroy fail #8229

Closed
BogdanSorlea opened this issue Aug 16, 2016 · 8 comments

@BogdanSorlea

Version: 0.7

Error creating plan: 1 error(s) occurred:

* variable "consul_servers" is nil, but no error was reported

The only way I was able to get it to finish was to take the relevant "aws_security_group.consul_servers" resource hash from terraform.tfstate.backup (with the ID, description and tags mentioned there, but without any of its underlying dependencies, i.e. the SG rules) and put it back in the corresponding position in the hash inside the terraform.tfstate file. Running terraform destroy afterwards finished completely and successfully.
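
For anyone attempting the same surgery: the entry I copied back sits roughly like the sketch below in the 0.7-era (version 3) state layout. The IDs, serial and attribute values here are placeholders, and fields like "lineage" and "depends_on" are left out, so treat it as a shape reference only, not a paste-ready snippet.

    {
      "version": 3,
      "terraform_version": "0.7.0",
      "serial": 42,
      "modules": [
        {
          "path": ["root"],
          "resources": {
            "aws_security_group.consul_servers": {
              "type": "aws_security_group",
              "primary": {
                "id": "sg-00000000",
                "attributes": {
                  "id": "sg-00000000",
                  "description": "Consul servers",
                  "tags.Name": "consul-servers"
                }
              }
            }
          }
        }
      ]
    }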

I noticed that the same kind of behaviour happens if I issue terraform destroy again after the resources have already been removed. The subsequent run gave me output like:

Error creating plan: 411 error(s) occurred:

* variable "core-db" is nil, but no error was reported
* variable "it-stack-node" is nil, but no error was reported
* variable "it-stack-node" is nil, but no error was reported
* variable "core-db" is nil, but no error was reported
* variable "it-stack-node" is nil, but no error was reported
* variable "it-stack-node" is nil, but no error was reported
[...]

Now it is funny that it failed for only 411 resources, as I have a total of 1004 resources in this run. This suggests it might be failing only for aws_security_group and aws_security_group_rule resources (I don't know the exact number I have, but I know groups plus rules total somewhere over 300).

This should be fixed: I should be able to destroy everything forcefully if needed, not have to do crazy voodoo with the state file.

On a related note, can someone please answer my comment here: #3019 (comment)? We are seeing the same problem with Terraform v0.7, even when we do a terraform run from scratch (no previous state in the state file, or no state file at all), e.g. this error:

* aws_security_group_rule.outbound_all: [WARN] A duplicate Security Group rule was found on (sg-d48dafb3). This may be
a side effect of a now-fixed Terraform issue causing two security groups with
identical attributes but different source_security_group_ids to overwrite each
other in the state. See https://github.com/hashicorp/terraform/pull/2376 for more
information and instructions for recovery. Error message: the specified rule "peer: 0.0.0.0/0, ALL, ALLOW" already exists
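
For context, a configuration along the lines of the sketch below is enough to reproduce that error: two aws_security_group_rule resources that resolve to identical attributes on the same group. The resource and group names here are made up for illustration, not taken from our config.

    resource "aws_security_group" "example" {
      name = "example"
    }

    # Both rules resolve to the same (type, ports, protocol, CIDR) tuple on
    # the same group, so AWS rejects the second one as already existing.
    resource "aws_security_group_rule" "outbound_all" {
      type              = "egress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
      security_group_id = "${aws_security_group.example.id}"
    }

    resource "aws_security_group_rule" "outbound_all_again" {
      type              = "egress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
      security_group_id = "${aws_security_group.example.id}"
    }
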
@chiefy

chiefy commented Aug 23, 2016

Having the same issue with 0.7.1

@BogdanSorlea
Author

On another run that failed this way, I was able to just remove the tfstate file and consider the destroy done, since it looked like none of the resources were around anymore anyway (checked through the AWS Console).
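
If you go this route, it's worth double-checking that nothing in the state is still alive before deleting it. A minimal sketch of that check (the security group ID is a placeholder; terraform state list is available as of 0.7):

    # See what Terraform still thinks it manages.
    terraform state list

    # Spot-check a few of the listed resources against AWS, e.g.:
    aws ec2 describe-security-groups --group-ids sg-00000000

    # Only once everything is confirmed gone, drop the local state,
    # keeping terraform.tfstate.backup around just in case.
    rm terraform.tfstate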

@tomelliff
Contributor

tomelliff commented Aug 25, 2016

We're also seeing this issue (on 0.7.1) when we destroy something that has already been destroyed, on anything with a dependency.

Looking at it, it seems the plan is attempting to build the dependency chain and pull in the data for the dependent resource, but that data comes back as nil because the resource no longer exists. I've not looked through the code properly yet, but I'd say that Terraform should either check whether the resource that depends on the dependency still exists before walking the dependency chain, or catch the nil and then check whether it even needs that dependency. A rough sketch of what I mean is below.
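
To illustrate (this is not Terraform's actual plan code, just a self-contained Go sketch of the "catch the nil" option, with made-up types):

    package main

    import "fmt"

    type Resource struct {
        Name      string
        DependsOn []string
    }

    // planDestroy walks each resource; already-destroyed resources are
    // simply absent from state, which is the nil that currently aborts
    // the plan with "variable ... is nil, but no error was reported".
    func planDestroy(resources []Resource, state map[string]*Resource) {
        for _, r := range resources {
            if state[r.Name] == nil {
                continue // already gone: nothing to destroy, skip its deps too
            }
            for _, dep := range r.DependsOn {
                if state[dep] == nil {
                    continue // dependency already destroyed: don't propagate nil
                }
                fmt.Printf("destroy %s before its dependency %s\n", r.Name, dep)
            }
        }
    }

    func main() {
        resources := []Resource{
            {Name: "aws_instance.app", DependsOn: []string{"aws_security_group.app"}},
        }
        // Everything already destroyed: the plan should be empty, not an error.
        planDestroy(resources, map[string]*Resource{})
    }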

@chiefy

chiefy commented Aug 25, 2016

This is blocking us from destroying our stacks, so it's kind of a big deal. Has anyone come up with a workaround?

@Bowbaq
Contributor

Bowbaq commented Aug 25, 2016

This seems related to #7993

@brikis98
Contributor

I'm getting this issue with Terraform 0.7.2. First, when trying to create a VPC with lots of subnets, routes, gateways, etc., I hit some sort of eventual consistency bug, as reported in #8530:

* aws_route.origin_to_destination.1: Error creating route: RouteAlreadyExists: The route identified by 10.2.0.0/18 already exists.
    status code: 400, request id: 632fc036-ec35-441c-be0e-4616c2ff8067
* aws_route.origin_to_destination.0: Error creating route: RouteAlreadyExists: The route identified by 10.2.0.0/18 already exists.
    status code: 400, request id: 90b06123-714c-4037-b604-5043e7b9a2f9
* aws_route.origin_to_destination.2: Error creating route: RouteAlreadyExists: The route identified by 10.2.0.0/18 already exists.
    status code: 400, request id: ad67ab36-1d39-4e04-a35a-847195eb80e3
* aws_route.origin_to_destination.3: Error creating route: RouteAlreadyExists: The route identified by 10.2.0.0/18 already exists.
    status code: 400, request id: 1db124b2-a7fb-4b2d-a7ec-58ee5c009388
* aws_route.nat.0: Error finding route after creating it: error finding matching route for Route table (rtb-11fd5977) and destination CIDR block (0.0.0.0/0)
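
For what it's worth, the failing resource is shaped roughly like the sketch below: one aws_route with a count, fanning the same destination CIDR out across several route tables. Only the resource name and CIDR come from the output above; the count value and the route-table/peering wiring are guesses for illustration.

    resource "aws_route" "origin_to_destination" {
      count                     = 4
      route_table_id            = "${element(aws_route_table.origin.*.id, count.index)}"
      destination_cidr_block    = "10.2.0.0/18"
      vpc_peering_connection_id = "${aws_vpc_peering_connection.origin_to_destination.id}"
    }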

Then, when I try to do terraform destroy, I hit this bug:

Error creating plan: 4 error(s) occurred:

* variable "private_app_subnets" is nil, but no error was reported
* variable "private_persistence_subnets" is nil, but no error was reported
* variable "public_subnets" is nil, but no error was reported
* variable "private_persistence_subnets" is nil, but no error was reported

I'm now stuck, unable to create my infrastructure fully or destroy the partially created stuff that's sitting in my AWS account.

@mitchellh
Contributor

I'm going to close this as a dup of #7993. I'm trying to reproduce this now...

@ghost

ghost commented Apr 21, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 21, 2020