
Support for dynamic blocks and meta-arguments #24188

Open
ausfestivus opened this issue Feb 23, 2020 · 65 comments

@ausfestivus commented Feb 23, 2020

Afternoon,

Feature request to allow the dynamic blocks capability to work with resource meta-arguments.

Current Terraform Version

0.12.20+

Use-cases

The use case I'm trying to implement is a simple one.
I would like to add a lifecycle meta-argument to a resource when our var.ENVIRONMENT == "prod", i.e. stop the pipeline from destroying prod resources.

Attempted Solutions

  # Here we set a lifecycle block to include `prevent_destroy=true` when `var.ENVIRONMENT=prod`.
  dynamic "lifecycle" {
    for_each = var.ENVIRONMENT == "prod" ? [{}] : []

    content {
      prevent_destroy = true
    }
  }

Result of the above is:

Error: Unsupported block type

  on main.tf line 25, in resource "azurerm_resource_group" "rgnamegoeshere":
  25:   dynamic "lifecycle" {

Blocks of type "lifecycle" are not expected here.

Proposal

Support meta-arguments for use with dynamic blocks. I'm sure it's really easy to do. jk.

References

Similar request in Terraform Core discussion: https://discuss.hashicorp.com/t/dynamic-lifecycle-ignore-changes/4579/4

@tomasbackman commented Mar 4, 2020

Another use case would be if we sometimes want to ignore a field, like master_password or similar.

Another way of solving both these use cases (I guess?) would be to allow variables in lifecycle blocks, something like:

locals { 
  destroy = var.ENVIRONMENT == "prod" ? true : false
}
lifecycle { 
  ignore_changes = var.list_with_changes_to_ignore
  prevent_destroy = local.destroy
}

It would be very useful in any case.

@janosmiko

+1 for any of these. It would be really useful if we could manipulate lifecycle rules via variables or dynamic blocks.

@tomaszsek

+1 for this enhancement. In my case I want to support two different major provider versions. In the old one there is a field which is required, but in the newest it doesn't exist.

@davi5e commented Mar 17, 2020

In my case, I'm creating a GKE module that could use a release channel or not, so in one scenario I need to ignore both min_master_version and node_version, while when release_channel == "UNSPECIFIED" I do not want to ignore them...

It would look something like:

data "google_container_engine_versions" "location" {
  location       = "southamerica-east1"
  project        = "leviatan-prod"
  version_prefix = var.kubernetes_channel != "UNSPECIFIED" ? var.kubernetes_version : ""
}

resource "google_container_cluster" "cluster" {
  provider = google-beta

  # [...]

  min_master_version = var.kubernetes_channel != "UNSPECIFIED" ? var.kubernetes_version : data.google_container_engine_versions.location.latest_master_version
  node_version       = var.kubernetes_channel != "UNSPECIFIED" ? var.kubernetes_version : data.google_container_engine_versions.location.latest_master_version

  release_channel {
    channel = var.kubernetes_channel
  }

  lifecycle {
    ignore_changes = [
      var.kubernetes_channel != "UNSPECIFIED" ? min_master_version : null,
      var.kubernetes_channel != "UNSPECIFIED" ? node_version : null,
    ]
  }
}

By the way, it seems that using both the channel and the image versions yields some computed nulls in the resource's code, but this is not a problem (in case the current channel versions are used) nor part of the scope of the discussion...

Anyhow, these are the errors:

Error: Invalid expression

  on main.tf line XXX, in resource "google_container_cluster" "cluster":
 XXX:       var.kubernetes_channel != "UNSPECIFIED" ? min_master_version : null,

A single static variable reference is required: only attribute access and
indexing with constant keys. No calculations, function calls, template
expressions, etc are allowed here.


Error: Invalid expression

  on main.tf line YYY, in resource "google_container_cluster" "cluster":
 YYY:       var.kubernetes_channel != "UNSPECIFIED" ? node_version : null,

A single static variable reference is required: only attribute access and
indexing with constant keys. No calculations, function calls, template
expressions, etc are allowed here.

@timorkal

+1

4 similar comments
@thorvats

+1

@jwlogemann

+1

@jkrivas commented Apr 14, 2020

+1

@stencore-repo

+1

@ArtemTrofimushkin

+1

@alex-sitnikov

Any chance this will be implemented? We want to introduce something like a blame step in CI/CD to re-tag only changed resources with various info from the build, and the boilerplate that needs to be included with each and every resource in lifecycle.ignore_changes is obnoxious.

@Nuru commented Jun 18, 2020

@danieldreier It would be a significant added benefit even if the block were limited to evaluating expressions that do not depend on any state or resources, such as directly set variables and functions of them. This would still allow the expression to be evaluated very early in processing, while at the same time allowing option flags.
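
As a rough illustration of that restriction (hypothetical syntax, not supported today; the variable and resource names are only examples):

locals {
  is_prod = var.environment == "prod"
}

lifecycle {
  # Allowed under the proposed restriction: depends only on input variables,
  # so it can be resolved before any state or resource data is needed.
  prevent_destroy = local.is_prod

  # Would still be disallowed: depends on another resource's attributes.
  # prevent_destroy = length(aws_s3_bucket.logs.id) > 0
}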

Note to other people reading this: please do not add "+1" comments. Instead, click on the thumbs up icon at the bottom of the issue statement.

@tmccombs (Contributor)

Another use case: I have a module for a Lambda function. Most of the time, I want to ignore changes to the actual code of the Lambda, because that is managed outside of Terraform. But in a few cases, Terraform should manage the code as well, so I don't want to ignore changes.

I also tried doing something like:

  dynamic "lifecycle" {
    for_each = var.manage_code ? [] : [1]

    content {
      ignore_changes = [
        filename,
        runtime,
        handler,
        source_code_hash,
      ]
    }
  }

But then I get an error that Blocks of type "lifecycle" are not expected here. And of course modules don't support lifecycle blocks either....

The only way I can find to do this is to repeat all of the configuration with the only change being the lifecycle.
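
For reference, a minimal sketch of that duplicate-resource workaround for the Lambda module case (the inputs other than var.manage_code are assumed module variables; note that flipping var.manage_code changes the resource address, so it forces a destroy/recreate unless the state is moved manually):

resource "aws_lambda_function" "managed" {
  count = var.manage_code ? 1 : 0

  function_name = var.function_name
  role          = var.role_arn
  filename      = var.filename
  handler       = var.handler
  runtime       = var.runtime
}

resource "aws_lambda_function" "unmanaged" {
  count = var.manage_code ? 0 : 1

  function_name = var.function_name
  role          = var.role_arn
  filename      = var.filename
  handler       = var.handler
  runtime       = var.runtime

  # Everything above is identical; only this block differs.
  lifecycle {
    ignore_changes = [filename, runtime, handler, source_code_hash]
  }
}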

@Skyfallz commented Jun 24, 2020

Hi,

Seems like this is just a fresh version of this issue (both dynamic blocks or variable interpolation would do the trick for most of us, I guess).

@apparentlymart
The lack of this functionality is a real problem: we can't easily secure our production resources (we obviously do not want some of them to be destroyed) while keeping flexibility for our non-production environments (if I tell Terraform not to delete anything, how is the CI/CD supposed to clean up old environments?). This is a real production issue, definitely not an improvement or feature request. It makes the whole lifecycle system flawed, and it should be considered a bug in it. I just can't understand why HashiCorp can't even give us a clear answer on this. It's been 5 years.

Terraform should definitely allow this, and we need to know when this could land.

And PLEASE people, do not add "+1" and noise on this issue. That's what closed the previous ticket, and that's why we never got any response.

@Dmitry1987

@Skyfallz some feature requests might never be implemented; we have to accept that and move on with the workarounds, I believe. One suggestion for everyone who struggles with this is to put an easy-to-sed placeholder in that location (like LCYCLE_REPLACEME_LIST) and run an 's/find/replace/g' every time before running terraform (if it's a CI/CD pipeline and the modified tf files get discarded in that build job anyway).
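
For example, a minimal sketch of that placeholder approach (the resource, the substituted attribute list, and the exact sed invocation are illustrative only; the file is not valid HCL until the substitution has run):

resource "aws_db_instance" "example" {
  # ...

  lifecycle {
    # The CI/CD pipeline substitutes the placeholder before terraform runs, e.g.:
    #   sed -i 's/LCYCLE_REPLACEME_LIST/password, engine_version/g' main.tf
    ignore_changes = [LCYCLE_REPLACEME_LIST]
  }
}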

@tmccombs (Contributor) commented Jul 1, 2020

@Dmitry1987, besides the fact that that is an incredibly awkward workaround, it doesn't solve the problem if the lifecycle is in a module that is used multiple times in the same workspace with different lifecycle requirements. The only workarounds I know of are to duplicate all of the config, give up on HCL and use some other tool to generate Terraform JSON files (which I would probably have to build myself, since I don't know of a well-established tool to do this), or use something other than Terraform altogether.

@Skyfallz commented Jul 1, 2020

@Dmitry1987 @tmccombs this workaround is not that awkward (no more awkward than the fact that we can't do it natively in TF anyway), especially if you only do it in your 'destroy' step in CI/CD (that way lifecycle blocks are still present on apply).
But for sure, this is not pretty, and we should definitely have a clean solution instead. I'm working on a Terraform wrapper to handle this use case (and some others, like #17599), I'll share it when it's done.

@tmccombs (Contributor) commented Jul 1, 2020

@Skyfallz how would that workaround work for the ignore_changes example I gave above?

@alex-sitnikov

@tmccombs we are currently investigating the use of Pulumi instead of Terraform, since it seems not to have these awkward issues and is much more succinct in terms of representation. Basically, instead of writing wrappers around Terraform, you can use JS/C#/Python to describe your infra.

@Dmitry1987

I won't argue if it seems awkward to some :)
but that's one possible way to do it that I can think of (rendering all TF in JSON might be better or worse, depending on the size of the infra and how frequently changes are made).
Never saw Pulumi before; thanks @revolly, will check this out.

@Dmitry1987

Oh well, the first Pulumi example reminds me of using the vanilla SDK of a cloud provider, so it's probably a better comparison against SDKs rather than Terraform (which is easier to use than an SDK because it's declarative and keeps state).

import pulumi
from pulumi_azure import core, storage

# Create an Azure Resource Group
resource_group = core.ResourceGroup("resource_group")

# Create an Azure resource (Storage Account)
account = storage.Account("storage",
    resource_group_name=resource_group.name,
    account_tier='Standard',
    account_replication_type='LRS',
    tags={"Environment": "Dev"})

# Export the connection string for the storage account
pulumi.export('connection_string', account.primary_connection_string)

I wonder what a large infra looks like; probably similar to raw SDK code (like boto3 in Python if someone does AWS in boto).

@alex-sitnikov

@Dmitry1987 Pulumi really feels like a next-level Terraform (they even use its modules and have a utility to convert tf => pulumi). It keeps state as well, and yeah, it's a bit deceptive, because it seems like Pulumi is defining actions to be performed rather than describing desired state, but that is actually not the case. It's doing the same thing as tf but in an imperative way; underneath it all it's very similar to the Terraform concept, which is to build a resource graph and then create it via the underlying provider. You can actually take a look at the comparison to tf here.

@Dmitry1987 commented Jul 2, 2020 via email

@janschumann (Contributor)

@Dmitry1987 @revolly To say that Pulumi is next level to Terraform is just wrong by definition. As the Pulumi comparison page says: Terraform is declarative (HCL) and Pulumi is programmatic ("any language"). So these two approaches are completely different and therefore cannot be compared at all (just like apples and pears).

That said, you probably could compare Terraform with Puppet and Pulumi with Chef, all of which I have used in various projects. My experience with the programmatic approach is that the resulting code needs much more maintenance in the long run as it evolves, especially in the DevOps age, where all the developers care for infrastructure as well. So what Pulumi and the like promote as an advantage (being able to do everything you want) quickly turns into a maintenance nightmare.

What I often perceived, when I found myself stuck using the declarative approach - saying "I would like to code this thing" - was that there was a flaw in the overall architecture I had created. So that's the maintenance effort in the declarative world: keeping the architecture up to date, which means constantly improving it while keeping the code readable to everyone!

@sithmal commented Sep 17, 2020

+1

@aditki commented Sep 29, 2020

Another use case is ignoring the load balancer target group changes that CodeDeploy makes, which we usually ignore; having this support would let us keep managing changes made to the load balancer itself and to the target groups.
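
Something like the following would cover that (hypothetical, not valid today; the variable name and the ignored attribute path are illustrative):

resource "aws_lb_listener" "this" {
  # ...

  lifecycle {
    # Ignore only the CodeDeploy-managed default action (with its weights),
    # keep managing everything else on the listener.
    ignore_changes = var.codedeploy_manages_weights ? [default_action] : []
  }
}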

@BastienLouvel

+1

1 similar comment
@alvaro-fdez

+1

@Mearman commented Dec 7, 2021

Oh yeah not at all. When I implement my Terragrunt, I'm going to just generate it.

@diannading

+1, I really need this feature.

@mrsiejas commented Jan 24, 2022

Not a perfect solution but I'm about to test this: #22544 (comment)

A big potential problem with this workaround is that changing the variable will always cause resource destruction. For those who are brave enough to give it a try, here's also how to consolidate output using locals:

variable "enable_delete_protection" {
  type        = bool
  default     = true
  description = "Set resource protection of important non-recoverable resources"
}

resource "aws_prometheus_workspace" "amp_workspace" {
  count = var.enable_delete_protection ? 0 : 1
  alias = "${var.names_prefix}-workspace"
}

resource "aws_prometheus_workspace" "amp_workspace_protected" {
  count = var.enable_delete_protection ? 1 : 0
  alias = "${var.names_prefix}-workspace"

  lifecycle {
    prevent_destroy = true
  }
}

locals {
  prometheus_endpoint = (var.enable_delete_protection ? aws_prometheus_workspace.amp_workspace_protected : aws_prometheus_workspace.amp_workspace)[0].prometheus_endpoint
  ...
  remoteWrite = [{
  url = "${local.prometheus_endpoint}api/v1/remote_write"
  ...
  

@sjudkins

(Quoting @mrsiejas's workaround above.)

I am trying to get this workaround to work, but I get an error:

An argument named "alias" is not expected here.

Also, introducing the "count" attribute means other references have to be changed to prevent the following:

│ Because azurerm_resource_group.main has "count" set, its attributes must be accessed on specific instances.

│ For example, to correlate with indices of a referring resource, use:
│ azurerm_resource_group.main[count.index]

Is there a workaround for the "alias" and "count" issues when using this workaround to support "dynamically" destroying or not? (Or, is there another "workaround" until this feature is implemented?)

Thank you

@mrsiejas

(Quoting @sjudkins's questions above.)

alias is an argument of the aws_prometheus_workspace resource - it was used only as an example. According to the Terraform docs, https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/resource_group doesn't have an alias argument, so you have to remove it.

I think the second error is self-explanatory: since you introduce count on your resource, you have to access it at a specific index when referencing it. You can use azurerm_resource_group.main[0]. I suggest reading the docs on the Terraform ternary operator.
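
As a small sketch of consolidating those references (assuming the same protected/unprotected pair pattern from the earlier example; the azurerm_resource_group.main_protected name is hypothetical):

locals {
  # Exactly one of the two count-gated resources exists, so concatenate the
  # splats and take the single element.
  resource_group_name = one(concat(
    azurerm_resource_group.main[*].name,
    azurerm_resource_group.main_protected[*].name,
  ))
}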

@IrmantasMarozas commented May 12, 2022

Another use case is handling the logic "if aws_appautoscaling_scheduled_action is enabled, then ignore changes to min_capacity/max_capacity in aws_appautoscaling_target":

# We ignore capacity changes since it's managed by scheduled scaling
  dynamic "lifecycle" {
    for_each = try(var.task_scaling_config.min_capacity.night, 0) > 0 ? [""] : []
    content {
      ignore_changes = [
        min_capacity,
        max_capacity
      ]
    }
  }

@rawpixel-vincent commented Jul 30, 2022

We would need to be able to set a condition on ignore_changes for desired_count, for the use case where autoscaling is enabled in some environments while other environments are automatically turned off.
So for the environments with autoscaling enabled we don't want to update the desired count, but for the other environments it would be handy, so the tasks are started upon deployment without needing an extra step to start them.

e.g.

ignore_changes = var.withAutoscaling ? [desired_count] : []

@leptitchriss commented Dec 5, 2022

Another potential use-case:

In AWS, you may want to provision an EC2 instance with the latest stable AMI available (for a particular set of filters) and prevent the recreation of the instance on subsequent runs (if a newer AMI is available). At some point, however, you may want to update the instance with a newer AMI (without changing Terraform code) - so you may want to control/toggle this behavior with a variable (e.g. update_ami):

resource "aws_instance" "this" {
  (...)
  lifecycle {
    ignore_changes = var.update_ami ? [] : ["ami"]
  }

@DavidGamba

The aws provider doesn't allow increasing the min_size of an autoscaler if the current desired_size is less than the min_size.
So I would like to be able to write:
lifecycle {
  ignore_changes = var.min_size <= scaling_config.0.desired_size ? [scaling_config.0.desired_size] : []
}

@Keimille commented Jan 6, 2023

Here is a use case for me. I want to be able to ignore the weighted value of the default actions of a listener, because the weight is dynamically changed with each deploy by the CodeDeploy BLUE/GREEN deployment type. Because the weight switches, deploying any changes potentially involves having to update the weight each time.

 dynamic "default_action" {
    for_each = (lookup(each.value, "type", "") == "" || lookup(each.value, "type", "") == "forward") ? [1] : []
    content {
      ignore_changes = ["weight"]
      type = "forward"
      forward {
        stickiness {
          duration = 3600
          enabled  = false
        }
        target_group {
          arn    = aws_lb_target_group.lb_https_tgs[each.key].arn
          weight = 100
        }
        target_group {
          arn    = aws_lb_target_group.lb_http_test_tgs.arn 
          weight = 0
        }
      }
    }
    ```

@raxod502-plaid

Terraform core team - if a contributor were to create a pull request implementing this feature, updating documentation/tests/etc., would you review and merge it? I just want to check before spending time on implementation, since this has not always been the case.

@crw (Contributor) commented Jan 31, 2023

@raxod502-plaid Per CONTRIBUTING.md, the solution would first need to be discussed with the core team. Given the length and number of participants on this issue, I do not think discussing in this thread would be particularly fruitful. The best thought I have at the moment would be to open a draft pull request where the proposal for a solution and subsequent discussion could take place. The draft PR would not need a fully coded solution or any code, it would just be a placeholder for the proposal to be discussed.

@raxod502-plaid

Okay, filed: #32608

@raxod502-plaid

Per discussion in the linked PR, the official response is that

there is nothing that can be done externally, and there is no option for the issue to be resolved until the core team works on it.

As I have written previously, I find this outcome somewhat disappointing, but unfortunately as an external contributor I have done everything I can to help.

@andrianjardan

And the PR was automatically closed due to lack of activity. Any chance of getting this back to life?

@denisp13 commented Apr 8, 2023

I was surprised to find such a long and old thread for such a simple issue.... +1

@Tomasz-Kluczkowski

Hopefully someone monitors the number of thumbs-up this gets and will make lifecycle dynamic, pretty please?
As it stands now, we would need two definitions of the same resource, one with ignore_changes and one without, because it cannot be based on a simple boolean value... quite frustrating, since the resource is otherwise defined identically.

@edgan commented Jun 13, 2023

I could really use this feature today.

@bekahmark12

+1, my use case is to be able to dynamically set ignore_changes from a variable list.

@rawpixel-vincent commented Jun 15, 2023

It would be great to get an update from the terraform team since it seems the community is not able to help on this (#24188 (comment))


There are probably enough use cases and enough interest shown for this feature to lock any new comments that are not from the Terraform team,
so we can still subscribe to this issue in case there are interesting updates.

@hashicorp hashicorp locked and limited conversation to collaborators Jun 20, 2023
@crw (Contributor) commented Jun 20, 2023

Discussed with the team and there is consensus that we have enough input to understand the requests made in this thread.
