
0.11.4 Outputs forcing failures during apply (destroy) #17655

Closed
hawksight opened this issue Mar 21, 2018 · 9 comments · Fixed by #17768

@hawksight

hawksight commented Mar 21, 2018

Hello, since upgrading to 0.11.4 I have noticed issues with destroying infrastructure.
Essentially, where I have outputs configured, my apply commands seem to bomb out at the point where Terraform can no longer fulfill the output data.

The issues mainly show up as these three errors:

* module.cluster.output.K8S_INSTANCE_GROUP_URLS: Resource 'data.google_container_cluster.information' does not have attribute 'instance_group_urls' for variable 'data.google_container_cluster.information.instance_group_urls'
* module.network.output.SUB_SL: cannot parse "${var.VPC_SUB_COUNT}" as an integer
* module.network.output.SUB_NAME: cannot parse "${var.VPC_SUB_COUNT}" as an integer

In the cluster module I have a data object to retrieve all the instance URLs once the k8s cluster and node pool(s) have been created. In the errors below you can see that the cluster and node pool have just been destroyed before this error is raised.

In the network module I use a count fed from a variable to create subnets in gcloud.
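
A minimal sketch of that pattern (the variable, resource, and output names match the errors above; the self_link attribute behind SUB_SL is my assumption here, and the real module sets more arguments):

variable "VPC_SUB_COUNT" {}

resource "google_compute_subnetwork" "subnet" {
  count = "${var.VPC_SUB_COUNT}"
  # name, network, region, ip_cidr_range etc. omitted
}

output "SUB_NAME" {
  # splat over the counted resource -- these are the outputs that error on destroy
  value = "${google_compute_subnetwork.subnet.*.name}"
}

output "SUB_SL" {
  value = "${google_compute_subnetwork.subnet.*.self_link}"
}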

The common theme is that the errors all involve outputs. The outputs work fine when creating the infrastructure, but error when tearing it down.

e.g.:

terraform plan -var-file=inputs.tfvars -out plan -destroy
terraform apply plan

Terraform Version

Terraform v0.11.4
+ provider.google v1.7.0

Terraform Configuration Files

I have tried to include the main configuration and the relevant parts of the module code we use, mainly the outputs. These are here

Sorry it's not a full code example; the complete configuration is hard to share in a gist.

Debug Output

I enabled the env var and retried from the initial failure point, i.e. this is just after the failure shown in Actual Behavior below.

➜  development git:(master) ✗ TF_LOG=trace

➜  development git:(master) ✗ terraform plan -var-file=inputs.tfvars -out plan -destroy
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

google_compute_network.vpc: Refreshing state... (ID: vpc-du)
google_compute_subnetwork.subnet: Refreshing state... (ID: us-central1/vpc-du-subnet1)

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  - module.network.google_compute_network.vpc

  - module.network.google_compute_subnetwork.subnet


Plan: 0 to add, 0 to change, 2 to destroy.

------------------------------------------------------------------------

This plan was saved to: plan

To perform exactly these actions, run the following command to apply:
    terraform apply "plan"


➜  development git:(master) ✗ terraform apply plan

Error: Error applying plan:

6 error(s) occurred:

* module.cluster.output.K8S_INSTANCE_GROUP_URLS: variable "information" is nil, but no error was reported
* module.network.output.SUB_SL: cannot parse "${var.VPC_SUB_COUNT}" as an integer
* module.cluster.output.K8S_ZONE: variable "cluster" is nil, but no error was reported
* module.cluster.output.K8S_ENDPOINT: variable "cluster" is nil, but no error was reported
* module.cluster.output.K8S_MASTER_VERSION: variable "cluster" is nil, but no error was reported
* module.network.output.SUB_NAME: cannot parse "${var.VPC_SUB_COUNT}" as an integer

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

Crash Output

No crash. Once I hit the error point I set TF_LOG=DEBUG, but nothing is logged because nothing has run yet.

Expected Behavior

I expected the apply to destroy the resources as per the plan, as it has done in all prior versions (back to 0.10.something).

I would not expect the destroy behaviour to be concerned with outputs.

Actual Behavior

The plan worked as normal; however, midway through the destroy the command failed with the following errors:

➜  development git:(master) ✗ terraform apply plan
module.cluster.google_compute_firewall.ingress: Destroying... (ID: vpc-du-k8s-ingress)
module.cluster.google_container_node_pool.pools: Destroying... (ID: us-central1-a/k8s-du/vpc-du-np-1)
module.cluster.google_compute_firewall.ingress: Still destroying... (ID: vpc-du-k8s-ingress, 10s elapsed)
module.cluster.google_container_node_pool.pools: Still destroying... (ID: us-central1-a/k8s-du/vpc-du-np-1, 10s elapsed)
module.cluster.google_compute_firewall.ingress: Destruction complete after 11s
module.cluster.google_container_node_pool.pools: Still destroying... (ID: us-central1-a/k8s-du/vpc-du-np-1, 20s elapsed)
module.cluster.google_container_node_pool.pools: Still destroying... (ID: us-central1-a/k8s-du/vpc-du-np-1, 30s elapsed)
module.cluster.google_container_node_pool.pools: Still destroying... (ID: us-central1-a/k8s-du/vpc-du-np-1, 40s elapsed)
module.cluster.google_container_node_pool.pools: Still destroying... (ID: us-central1-a/k8s-du/vpc-du-np-1, 50s elapsed)
module.cluster.google_container_node_pool.pools: Still destroying... (ID: us-central1-a/k8s-du/vpc-du-np-1, 1m0s elapsed)
module.cluster.google_container_node_pool.pools: Still destroying... (ID: us-central1-a/k8s-du/vpc-du-np-1, 1m10s elapsed)
module.cluster.google_container_node_pool.pools: Still destroying... (ID: us-central1-a/k8s-du/vpc-du-np-1, 1m20s elapsed)
module.cluster.google_container_node_pool.pools: Still destroying... (ID: us-central1-a/k8s-du/vpc-du-np-1, 1m30s elapsed)
module.cluster.google_container_node_pool.pools: Still destroying... (ID: us-central1-a/k8s-du/vpc-du-np-1, 1m40s elapsed)
module.cluster.google_container_node_pool.pools: Destruction complete after 1m47s
module.cluster.google_container_cluster.cluster: Destroying... (ID: k8s-du)
module.cluster.google_container_cluster.cluster: Still destroying... (ID: k8s-du, 10s elapsed)
module.cluster.google_container_cluster.cluster: Still destroying... (ID: k8s-du, 20s elapsed)
module.cluster.google_container_cluster.cluster: Still destroying... (ID: k8s-du, 30s elapsed)
module.cluster.google_container_cluster.cluster: Still destroying... (ID: k8s-du, 40s elapsed)
module.cluster.google_container_cluster.cluster: Still destroying... (ID: k8s-du, 50s elapsed)
module.cluster.google_container_cluster.cluster: Still destroying... (ID: k8s-du, 1m0s elapsed)
module.cluster.google_container_cluster.cluster: Still destroying... (ID: k8s-du, 1m10s elapsed)
module.cluster.google_container_cluster.cluster: Still destroying... (ID: k8s-du, 1m20s elapsed)
module.cluster.google_container_cluster.cluster: Still destroying... (ID: k8s-du, 1m30s elapsed)
module.cluster.google_container_cluster.cluster: Still destroying... (ID: k8s-du, 1m40s elapsed)
module.cluster.google_container_cluster.cluster: Destruction complete after 1m41s

Error: Error applying plan:

3 error(s) occurred:

* module.cluster.output.K8S_INSTANCE_GROUP_URLS: Resource 'data.google_container_cluster.information' does not have attribute 'instance_group_urls' for variable 'data.google_container_cluster.information.instance_group_urls'
* module.network.output.SUB_SL: cannot parse "${var.VPC_SUB_COUNT}" as an integer
* module.network.output.SUB_NAME: cannot parse "${var.VPC_SUB_COUNT}" as an integer

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

I initially asked around on the Google Cloud Terraform Slack channel regarding the following two errors:

* module.network.output.SUB_SL: cannot parse "${var.VPC_SUB_COUNT}" as an integer
* module.network.output.SUB_NAME: cannot parse "${var.VPC_SUB_COUNT}" as an integer

After recreating the problem, I'm more concerned that Terraform is even looking at outputs when applying a destroy plan.

Steps to Reproduce

  1. Make sure you're on Terraform 0.11.4.
  2. Create a resource using count.
  3. Create an output that lists, say, all the names of the objects created with count. I was creating google_compute_subnetwork via a count, as sketched above.
  4. Create the infra.
  5. Plan a destroy, e.g. terraform plan -var-file=inputs.tfvars -out plan -destroy
  6. Apply said plan and see what errors occur: terraform apply plan

Optionally:

  1. Also create a data lookup object and have an output based on it.

Additional Context

I first noticed this late on Friday (16/03/2018) when trying to pull down a dev environment. I think brew had updated to the latest Terraform that morning.

This only happens on destroy; creation seems to work fine. Given the timing, and the fact that everything works as expected on 0.11.3, I think it is 0.11.4 related.

References

After some discussion with Paddy on Slack, he referred me to some issues/PRs that may have touched or affected the relevant area of code.

Not necessarily the cause, but they could aid the investigation.

Potentially related: #16782

@hawksight
Author

Actually, I just reviewed the changelog and found a way around the issues described above:
https://github.com/hashicorp/terraform/blob/master/CHANGELOG.md#0111-november-30-2017

➜  development git:(master) ✗ export TF_WARN_OUTPUT_ERRORS=1
➜  development git:(master) ✗ terraform plan -var-file=inputs.tfvars -out plan -destroy
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

google_compute_network.vpc: Refreshing state... (ID: vpc-du)
google_compute_subnetwork.subnet: Refreshing state... (ID: us-central1/vpc-du-subnet1)

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  - module.network.google_compute_network.vpc

  - module.network.google_compute_subnetwork.subnet


Plan: 0 to add, 0 to change, 2 to destroy.

------------------------------------------------------------------------

This plan was saved to: plan

To perform exactly these actions, run the following command to apply:
    terraform apply "plan"


➜  development git:(master) ✗ terraform apply plan
module.network.google_compute_subnetwork.subnet: Destroying... (ID: us-central1/vpc-du-subnet1)
module.network.google_compute_subnetwork.subnet: Still destroying... (ID: us-central1/vpc-du-subnet1, 10s elapsed)
module.network.google_compute_subnetwork.subnet: Destruction complete after 18s
module.network.google_compute_network.vpc: Destroying... (ID: vpc-du)
module.network.google_compute_network.vpc: Still destroying... (ID: vpc-du, 10s elapsed)
module.network.google_compute_network.vpc: Still destroying... (ID: vpc-du, 20s elapsed)
module.network.google_compute_network.vpc: Destruction complete after 25s

Apply complete! Resources: 0 added, 0 changed, 2 destroyed.

@jbardin
Member

jbardin commented Mar 21, 2018

Hi @hawksight,

Thanks for filing the issue, and for all the detailed information. This looks very similar to the issue described in #17548, and you might have found one of the cases that PR can't cover.

Can you try re-running your example, but instead of saving the plan, run terraform destroy directly? I think it's probably related to plan serialization, but I'd like to be sure first.
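
For example, with the same var file you used above (no saved plan file):

terraform destroy -var-file=inputs.tfvars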

Thanks!

@hawksight
Author

hawksight commented Mar 21, 2018

I've just given that a try: the count issue doesn't appear, but the output relying on the data source still errors:

➜  development git:(0-11-4-bug) terraform destroy -var-file=inputs.tfvars
google_compute_network.vpc: Refreshing state... (ID: vpc-du)
google_compute_subnetwork.subnet: Refreshing state... (ID: us-central1/vpc-du-subnet1)
google_compute_firewall.ingress: Refreshing state... (ID: vpc-du-k8s-ingress)
google_container_cluster.cluster: Refreshing state... (ID: k8s-du)
google_container_node_pool.pools: Refreshing state... (ID: us-central1-a/k8s-du/vpc-du-np-1)

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  - module.cluster.google_compute_firewall.ingress

  - module.cluster.google_container_cluster.cluster

  - module.cluster.google_container_node_pool.pools

  - module.network.google_compute_network.vpc

  - module.network.google_compute_subnetwork.subnet


Plan: 0 to add, 0 to change, 5 to destroy.

Do you really want to destroy?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

module.cluster.google_compute_firewall.ingress: Destroying... (ID: vpc-du-k8s-ingress)
module.cluster.google_container_node_pool.pools: Destroying... (ID: us-central1-a/k8s-du/vpc-du-np-1)
module.cluster.google_container_node_pool.pools: Still destroying... (ID: us-central1-a/k8s-du/vpc-du-np-1, 10s elapsed)
module.cluster.google_compute_firewall.ingress: Still destroying... (ID: vpc-du-k8s-ingress, 10s elapsed)
module.cluster.google_compute_firewall.ingress: Destruction complete after 11s
module.cluster.google_container_node_pool.pools: Still destroying... (ID: us-central1-a/k8s-du/vpc-du-np-1, 20s elapsed)
module.cluster.google_container_node_pool.pools: Still destroying... (ID: us-central1-a/k8s-du/vpc-du-np-1, 30s elapsed)
module.cluster.google_container_node_pool.pools: Still destroying... (ID: us-central1-a/k8s-du/vpc-du-np-1, 40s elapsed)
module.cluster.google_container_node_pool.pools: Still destroying... (ID: us-central1-a/k8s-du/vpc-du-np-1, 50s elapsed)
module.cluster.google_container_node_pool.pools: Still destroying... (ID: us-central1-a/k8s-du/vpc-du-np-1, 1m0s elapsed)
module.cluster.google_container_node_pool.pools: Still destroying... (ID: us-central1-a/k8s-du/vpc-du-np-1, 1m10s elapsed)
module.cluster.google_container_node_pool.pools: Still destroying... (ID: us-central1-a/k8s-du/vpc-du-np-1, 1m20s elapsed)
module.cluster.google_container_node_pool.pools: Still destroying... (ID: us-central1-a/k8s-du/vpc-du-np-1, 1m30s elapsed)
module.cluster.google_container_node_pool.pools: Destruction complete after 1m36s
module.cluster.google_container_cluster.cluster: Destroying... (ID: k8s-du)
module.cluster.google_container_cluster.cluster: Still destroying... (ID: k8s-du, 10s elapsed)
module.cluster.google_container_cluster.cluster: Still destroying... (ID: k8s-du, 20s elapsed)
module.cluster.google_container_cluster.cluster: Still destroying... (ID: k8s-du, 30s elapsed)
module.cluster.google_container_cluster.cluster: Still destroying... (ID: k8s-du, 40s elapsed)
module.cluster.google_container_cluster.cluster: Still destroying... (ID: k8s-du, 50s elapsed)
module.cluster.google_container_cluster.cluster: Still destroying... (ID: k8s-du, 1m0s elapsed)
module.cluster.google_container_cluster.cluster: Still destroying... (ID: k8s-du, 1m10s elapsed)
module.cluster.google_container_cluster.cluster: Still destroying... (ID: k8s-du, 1m20s elapsed)
module.cluster.google_container_cluster.cluster: Still destroying... (ID: k8s-du, 1m30s elapsed)
module.cluster.google_container_cluster.cluster: Still destroying... (ID: k8s-du, 1m40s elapsed)
module.cluster.google_container_cluster.cluster: Destruction complete after 1m42s
module.network.google_compute_subnetwork.subnet: Destroying... (ID: us-central1/vpc-du-subnet1)
module.network.google_compute_subnetwork.subnet: Still destroying... (ID: us-central1/vpc-du-subnet1, 10s elapsed)
module.network.google_compute_subnetwork.subnet: Destruction complete after 17s
module.network.google_compute_network.vpc: Destroying... (ID: vpc-du)
module.network.google_compute_network.vpc: Still destroying... (ID: vpc-du, 10s elapsed)
module.network.google_compute_network.vpc: Still destroying... (ID: vpc-du, 20s elapsed)
module.network.google_compute_network.vpc: Destruction complete after 25s

Error: Error applying plan:

1 error(s) occurred:

* module.cluster.output.K8S_INSTANCE_GROUP_URLS: Resource 'data.google_container_cluster.information' does not have attribute 'instance_group_urls' for variable 'data.google_container_cluster.information.instance_group_urls'

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

For reference, in my k8s cluster module I have the following data step, which allows me to get all the instance_group_urls (although a recent google-provider plugin release should allow me to change this):

data "google_container_cluster" "information" {
  name = "${google_container_cluster.cluster.name}"
  zone = "${google_container_cluster.cluster.zone}"

  depends_on = ["google_container_node_pool.pools"]
}

The failing output grabs the relevant information from that step. I guess this is something I wouldn't expect to pop up in a destroy, given I no longer care about the output.
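
A rough sketch of that output (the output name and the instance_group_urls attribute are taken from the error message; the exact expression in my module may differ slightly):

output "K8S_INSTANCE_GROUP_URLS" {
  # Forwards the list from the data source; this reference is what fails
  # once the cluster has been destroyed.
  value = "${data.google_container_cluster.information.instance_group_urls}"
}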

@hawksight
Author

hawksight commented Mar 21, 2018

Oh wait, I just realised... Terraform actually did remove everything, as the output suggests; it just popped that error at the bottom, so I assumed it hadn't.

Using TF_WARN_OUTPUT_ERRORS=1 would, I guess, remove the error:

➜  development git:(0-11-4-bug) export TF_WARN_OUTPUT_ERRORS=1

➜  development git:(0-11-4-bug) terraform destroy -var-file=inputs.tfvars
Do you really want to destroy?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes


Destroy complete! Resources: 0 destroyed.

@jbardin
Member

jbardin commented Mar 21, 2018

Thanks for the follow-up. It looks like there are actually two issues here, the other being #17425, though I thought that was taken care of when a full destroy was being run.

@xocasdashdash
Contributor

I can confirm that I'm hitting this issue too with Terraform 0.11.4, and that the TF_WARN_OUTPUT_ERRORS=1 workaround works.

@andrew-best-diaxion

Can confirm I'm seeing this on 0.11.5 on macOS, and that the workaround with TF_WARN_OUTPUT_ERRORS=1 works.

@edrzmr

edrzmr commented Apr 5, 2018

For me, the same happens on 0.11.6, and TF_WARN_OUTPUT_ERRORS=1 really is magic! Before that worked, I did some investigating that may help someone: on tag v0.11.6, applying this patch:

$ git diff
diff --git a/config/config.go b/config/config.go
index 1772fd7e3..afd61212a 100644
--- a/config/config.go
+++ b/config/config.go
@@ -231,10 +231,13 @@ func (r *Resource) Count() (int, error) {
 
        v, err := strconv.ParseInt(count, 0, 0)
        if err != nil {
-               return 0, fmt.Errorf(
-                       "cannot parse %q as an integer",
-                       count,
-               )
+               panic(err)
+               /*
+                       return 0, fmt.Errorf(
+                               "cannot parse %q as an integer",
+                               count,
+                       )
+               */
        }
 
        return int(v), nil

I got this:

$ terraform destroy -target=azurerm_network_security_group.nsg
Acquiring state lock. This may take a few moments...
azurerm_resource_group.vnet: Refreshing state... (ID: /subscriptions/5514de8f-37ea-4238-8242-4944d2a55379/resourceGroups/dev-network)
azurerm_network_security_group.nsg: Refreshing state... (ID: /subscriptions/5514de8f-37ea-4238-8242-...curityGroups/dev-public-security-group)

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  - module.public-network.module.bastion.azurerm_network_interface.vm

  - module.public-network.module.bastion.azurerm_virtual_machine.vm-linux

  - module.public-network.module.subnet.azurerm_subnet.subnet

  - module.public-network.module.subnet.module.nsg.azurerm_network_security_group.nsg


Plan: 0 to add, 0 to change, 4 to destroy.

Do you really want to destroy?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

panic: strconv.ParseInt: parsing "${var.nb_instances}": invalid syntax

goroutine 2020 [running]:
github.com/hashicorp/terraform/config.(*Resource).Count(0xc420418dc0, 0xc4206427e0, 0x1c, 0x0)
	/home/drrzmr/devel/workspace.go/src/github.com/hashicorp/terraform/config/config.go:234 +0x15d
github.com/hashicorp/terraform/terraform.(*Interpolater).resourceCountMax(0xc42022bfc0, 0xc420356a00, 0xc420418dc0, 0xc420418f00, 0xc420418dc0, 0x0, 0x0)
	/home/drrzmr/devel/workspace.go/src/github.com/hashicorp/terraform/terraform/interpolate.go:797 +0x535
github.com/hashicorp/terraform/terraform.(*Interpolater).computeResourceMultiVariable(0xc42022bfc0, 0xc420c29528, 0xc420418f00, 0x0, 0x0, 0x0)
	/home/drrzmr/devel/workspace.go/src/github.com/hashicorp/terraform/terraform/interpolate.go:625 +0x15e
github.com/hashicorp/terraform/terraform.(*Interpolater).valueResourceVar(0xc42022bfc0, 0xc420c29528, 0xc420045622, 0x21, 0xc420418f00, 0xc42074cea0, 0x0, 0x300000002)
	/home/drrzmr/devel/workspace.go/src/github.com/hashicorp/terraform/terraform/interpolate.go:250 +0x28d
github.com/hashicorp/terraform/terraform.(*Interpolater).Values(0xc42022bfc0, 0xc420c29528, 0xc42022c840, 0x49b77e, 0xc4200bc0a0, 0xc4206a6000)
	/home/drrzmr/devel/workspace.go/src/github.com/hashicorp/terraform/terraform/interpolate.go:86 +0x7d0
github.com/hashicorp/terraform/terraform.(*BuiltinEvalContext).Interpolate(0xc4200b16c0, 0xc420275ab0, 0x0, 0xc000, 0xc420c295c8, 0x429184)
	/home/drrzmr/devel/workspace.go/src/github.com/hashicorp/terraform/terraform/eval_context_builtin.go:243 +0xc8
github.com/hashicorp/terraform/terraform.(*EvalWriteOutput).Eval(0xc42074cd80, 0x2ba13c0, 0xc4200b16c0, 0x0, 0x0, 0x0, 0x0)
	/home/drrzmr/devel/workspace.go/src/github.com/hashicorp/terraform/terraform/eval_output.go:52 +0x79
github.com/hashicorp/terraform/terraform.EvalRaw(0x2b7c5e0, 0xc42074cd80, 0x2ba13c0, 0xc4200b16c0, 0x42, 0x0, 0x0, 0x42)
	/home/drrzmr/devel/workspace.go/src/github.com/hashicorp/terraform/terraform/eval.go:53 +0x156
github.com/hashicorp/terraform/terraform.(*EvalOpFilter).Eval(0xc42074cde0, 0x2ba13c0, 0xc4200b16c0, 0x2, 0x2, 0xc4206424c0, 0x1b)
	/home/drrzmr/devel/workspace.go/src/github.com/hashicorp/terraform/terraform/eval_filter_operation.go:37 +0x4c
github.com/hashicorp/terraform/terraform.EvalRaw(0x2b7c320, 0xc42074cde0, 0x2ba13c0, 0xc4200b16c0, 0x0, 0x0, 0x0, 0x0)
	/home/drrzmr/devel/workspace.go/src/github.com/hashicorp/terraform/terraform/eval.go:53 +0x156
github.com/hashicorp/terraform/terraform.(*EvalSequence).Eval(0xc4204f6ba0, 0x2ba13c0, 0xc4200b16c0, 0x2, 0x2, 0xc420642480, 0x1b)
	/home/drrzmr/devel/workspace.go/src/github.com/hashicorp/terraform/terraform/eval_sequence.go:14 +0x7a
github.com/hashicorp/terraform/terraform.EvalRaw(0x2b7c440, 0xc4204f6ba0, 0x2ba13c0, 0xc4200b16c0, 0x22fd1a0, 0x44d9e47, 0x2075e20, 0xc42022eef0)
	/home/drrzmr/devel/workspace.go/src/github.com/hashicorp/terraform/terraform/eval.go:53 +0x156
github.com/hashicorp/terraform/terraform.Eval(0x2b7c440, 0xc4204f6ba0, 0x2ba13c0, 0xc4200b16c0, 0xc4204f6ba0, 0x2b7c440, 0xc4204f6ba0, 0xc42048eaa0)
	/home/drrzmr/devel/workspace.go/src/github.com/hashicorp/terraform/terraform/eval.go:34 +0x4d
github.com/hashicorp/terraform/terraform.(*Graph).walk.func1(0x26c6420, 0xc4205c8c40, 0x0, 0x0)
	/home/drrzmr/devel/workspace.go/src/github.com/hashicorp/terraform/terraform/graph.go:126 +0xc26
github.com/hashicorp/terraform/dag.(*Walker).walkVertex(0xc420274e00, 0x26c6420, 0xc4205c8c40, 0xc42022a540)
	/home/drrzmr/devel/workspace.go/src/github.com/hashicorp/terraform/dag/walk.go:387 +0x3a0
created by github.com/hashicorp/terraform/dag.(*Walker).Update
	/home/drrzmr/devel/workspace.go/src/github.com/hashicorp/terraform/dag/walk.go:310 +0x1248



!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!

Terraform crashed! This is always indicative of a bug within Terraform.
A crash log has been placed at "crash.log" relative to your current
working directory. It would be immensely helpful if you could please
report the crash with Terraform[1] so that we can fix this.

When reporting bugs, please include your terraform version. That
information is available on the first line of crash.log. You can also
get it by running 'terraform --version' on the command line.

[1]: https://github.com/hashicorp/terraform/issues

!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!

I guess that Terraform just tries to parse the count value as an integer without first evaluating the interpolation expression.
