
Unnecessary template_file recreation when no template variables have changed #8871

Closed
iandelahorne opened this issue Sep 15, 2016 · 5 comments

@iandelahorne (Contributor)

Hi there, I'm seeing an issue where a data.template_file thinks its output has changed and the resource will be recreated on subsequent Terraform runs, despite no variable inputs to the template_file resource changing.

In our case, we're using the template output as the name of an S3 bucket. Since aws_s3_bucket uses data.template_file.aws_config_bucket_name.rendered as the bucket name, and Terraform believes the template resource has changed, Terraform wants to destroy and recreate the bucket even though the name is exactly the same.

This only happens if template variables are interpolated, i.e.:

data "template_file" "aws_config_bucket_name" {
    template = "aws_config-${var.region}-4711"
}

will not force recreation, but

data "template_file" "aws_config_bucket_name" {
    template = "aws_config-${region}-4711"
    vars {
        region = "${var.region}"
    }
}

will force recreation.

Terraform Version

0.7.3

Affected Resource(s)

  • data.template_file
  • aws_s3_bucket

Terraform Configuration Files

variable "region" {
    default = "us-west-2"
}

variable "account_id" {
    default = "4711"
}

data "template_file" "aws_config_bucket_name" {
    template = "${purpose}-${region}-${account_id}"

    vars {
        region = "${var.region}"
        account_id = "${var.account_id}"
        purpose = "aws-config"
    }
}

resource "aws_s3_bucket" "aws_config" {
    bucket = "${data.template_file.aws_config_bucket_name.rendered}"
    acl = "private"
}

Expected Behavior

Subsequent plans/applies after the first apply should not recreate the bucket.

Actual Behavior

Terraform plan output:

$ terraform plan -out plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but
will not be persisted to local or remote state storage.

aws_s3_bucket.aws_config: Refreshing state... (ID: aws-config-us-west-2-4711)

The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.

Your plan was also saved to the path below. Call the "apply" subcommand
with this plan file and Terraform will exactly execute this execution
plan.

Path: plan

-/+ aws_s3_bucket.aws_config
    acceleration_status: "" => "<computed>"
    acl:                 "private" => "private"
    arn:                 "arn:aws:s3:::aws-config-us-west-2-4711" => "<computed>"
    bucket:              "aws-config-us-west-2-4711" => "${data.template_file.aws_config_bucket_name.rendered}" (forces new resource)
    force_destroy:       "false" => "false"
    hosted_zone_id:      "Z3BJ6K6RIION7M" => "<computed>"
    policy:              "" => "<computed>"
    region:              "us-west-2" => "<computed>"
    request_payer:       "BucketOwner" => "<computed>"
    website_domain:      "" => "<computed>"
    website_endpoint:    "" => "<computed>"

<= data.template_file.aws_config_bucket_name
    rendered:        "<computed>"
    template:        "${purpose}-${region}-${account_id}"
    vars.%:          "3"
    vars.account_id: "4711"
    vars.purpose:    "aws-config"
    vars.region:     "us-west-2"

Terraform apply output:

$ terraform apply
aws_s3_bucket.aws_config: Refreshing state... (ID: aws-config-us-west-2-4711)
data.template_file.aws_config_bucket_name: Refreshing state...
aws_s3_bucket.aws_config: Destroying...
aws_s3_bucket.aws_config: Destruction complete
aws_s3_bucket.aws_config: Creating...
acceleration_status: "" => ""
acl: "" => "private"
arn: "" => ""
bucket: "" => "aws-config-us-west-2-4711"
force_destroy: "" => "false"
hosted_zone_id: "" => ""
policy: "" => ""
region: "" => ""
request_payer: "" => ""
website_domain: "" => ""
website_endpoint: "" => ""
aws_s3_bucket.aws_config: Creation complete

Apply complete! Resources: 1 added, 0 changed, 1 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the terraform show command.

State path: terraform.tfstate


Plan: 1 to add, 0 to change, 1 to destroy.

Steps to Reproduce

  1. terraform apply - will create the bucket
  2. terraform apply - will destroy and recreate the bucket
@apparentlymart (Contributor)

Hi @lflux. Sorry for the issue here.

The bug here is actually not what it first appears to be: what you're hitting is a known issue with Terraform's validation of interpolations. In your template_file data block you have the template specified as "${purpose}-${region}-${account_id}", which means Terraform thinks those variables are supposed to be resolved by the core interpolation mechanism. However, what you want here is for these variables to be resolved by the template engine (which happens to use the same interpolation syntax).

So the fix is to escape those variables, ensuring that Terraform core won't try to deal with them itself:

data "template_file" "aws_config_bucket_name" {
    template = "$${purpose}-$${region}-$${account_id}"

    vars {
        region = "${var.region}"
        account_id = "${var.account_id}"
        purpose = "aws-config"
    }
}

The correct behavior for your original config would be for Terraform to emit an error saying that purpose is not a valid interpolation expression for core Terraform; instead, Terraform allows it but treats it as if it were "computed", which causes the data source to defer its processing until apply time.

If you add the extra dollar signs to escape the variables, as in the example above, you should find that the template gets rendered during the "refresh" phase of terraform plan, so Terraform will know that the value hasn't changed and thus won't try to recreate your S3 bucket.
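To make the two resolution stages concrete, here is a minimal sketch; the resource name escaped_example and the output block are illustrative additions, not from the original report:

```hcl
# Stage 1 (Terraform core, during plan/refresh):
#   "$${region}" in the template is unescaped to the literal text
#   "${region}", and "${var.region}" in vars is resolved from the
#   "region" variable.
# Stage 2 (template engine):
#   "${region}" is looked up in vars and substituted.
data "template_file" "escaped_example" {
  template = "aws-config-$${region}-4711"

  vars {
    region = "${var.region}"
  }
}

output "escaped_example_rendered" {
  value = "${data.template_file.escaped_example.rendered}"
}
```

With var.region = "us-west-2" this should render "aws-config-us-west-2-4711", and because vars contains no unknown values the data source can be read during refresh, so subsequent plans show no change.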

@apparentlymart (Contributor)

I'm going to close this issue to consolidate the discussion of this bug into #8077, but feel free to reopen it if my suggested config fix doesn't work, since that would indicate a new bug.

@iandelahorne (Contributor, Author)

Thanks. This is a regression from how inline template_files worked in 0.6; I'll double up the dollar signs for now until #8077 is fixed.

@apparentlymart (Contributor)

@lflux specifically it is an effect of it being a data source rather than a resource. Data sources have a different lifecycle where Terraform will deal with them very early on (during refresh) if the configuration doesn't contain any unknown variables, so you're triggering an unintended consequence of that behavior in conjunction with the validation bug.

This was actually always a core bug, even in 0.6... it just wasn't observable in this precise way until we added a new branch of behavior for computed attributes.

Note that even once #8077 is fixed, it will still be required to double the dollar signs here. That issue is about making Terraform produce a suitable error message when you have only a single dollar sign.
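The distinction can be summarized in a small sketch (the resource name and variable value are assumed for illustration):

```hcl
# Single dollar: core Terraform tries to resolve it. "region" alone is
# not a valid core expression, so this is what #8077 should eventually
# reject with an error (today it is silently treated as "computed"):
#
#   template = "${region}-4711"
#
# Double dollar: escaped past core Terraform and resolved instead by
# the template engine, using the vars map:
data "template_file" "double_dollar_example" {
  template = "$${region}-4711"

  vars {
    region = "us-west-2"
  }
}
```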

@ghost commented Apr 22, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 22, 2020