
for_each attribute for creating multiple resources based on a map #17179

Closed
mirogta opened this issue Jan 24, 2018 · 74 comments · Fixed by #21922

@mirogta

mirogta commented Jan 24, 2018

Hi,

We are missing better support for loops based on keys rather than on indexes.

Below is an example of the problem we currently have and would like Terraform to address:

  • We have a list of Azure NSG (Network Security Group) rules defined in a hash. E.g.
locals {
  rules = {
    rdp_from_onprem = {
      priority         = 100
      protocol         = "TCP"
      destination_port = "3389"
      source_address   = "10.0.0.0/8"
    }

    winrm_from_onprem = {
      priority         = 110
      destination_port = "5985-5986"
      source_address   = "10.0.0.0/8"
    }

    dynatrace_security_gateway = {
      priority         = 120
      destination_port = "9999"
    }
  }
}
  • This allows us to keep the Terraform resource definition DRY and use a loop to create all the rules:
resource "azurerm_network_security_rule" "allow-in" {
  count                       = "${length(keys(local.rules))}"
  name                        = "allow-${element(keys(local.rules), count.index)}-in"
  direction                   = "Inbound"
  access                      = "Allow"
  priority                    = "${lookup(local.rules[element(keys(local.rules), count.index)], "priority")}"
  protocol                    = "${lookup(local.rules[element(keys(local.rules), count.index)], "protocol", "*")}"
  source_port_range           = "*"
  destination_port_range      = "${lookup(local.rules[element(keys(local.rules), count.index)], "destination_port", "*")}"
  source_address_prefix       = "${lookup(local.rules[element(keys(local.rules), count.index)], "source_address", "*")}"
  destination_address_prefix  = "${lookup(local.rules[element(keys(local.rules), count.index)], "destination_address", "*")}"
  resource_group_name         = "${azurerm_resource_group.resource_group.name}"
  network_security_group_name = "${azurerm_network_security_group.nsg.name}"
}
  • So far, so good. However, since the resources and their state are uniquely identified by index and not by name, we can't simply change the rules later.
    • We can add new rules only at the end of the hash.
    • We can remove rules only from the end of the hash.
    • We can modify the rules, as long as their position in the hash doesn't change.
    • But we can never remove any other rule or change its position in the hash. This is very restrictive and effectively means we had to stop using this approach and define all the rules as individual azurerm_network_security_rule resources.

As you can guess, if we e.g. remove the first item from the hash, Terraform would not see that as a removal of the first resource (index 0), but rather removal of the last resource (index 2) and a related unexpected change of all the other resources (old index 1 becomes new index 0, old index 2 becomes new index 1).

Unfortunately this can also cause the Azure provider to fail, because it may get into a conflict where an actual resource (old index 1) still exists in Azure, but Terraform now tries to modify another actual resource (old index 0) to have the same properties, which is not possible (e.g. NSG priority and port have to be unique).

I've shown an example with 3 rules, but in reality we can have 50 rules and the tf file is 5x longer and more difficult to manage with individual resources compared to using a hash.

We would like to use hashes in Terraform in such a way that the position of an element inside a hash doesn't matter. That's why many other languages provide two ways of looping: by index (e.g. for i = 0; i < list.length; i++) and by key (foreach key in list).

I'm sure that smart guys like you can figure out how to make this work in Terraform.

Thanks

@apparentlymart
Contributor

apparentlymart commented Jan 27, 2018

Hi @mirogta! Thanks for this feature request, and your detailed use-case.

This is definitely a request that has come up before, though it seems like it's only previously been discussed within the comments of other issues, so this issue seems like a good anchor for talking about our plans here, and updating as we make progress.

The current design sketch we have is a new for_each argument that can be used as an alternative to count, taking either a list or a map as its value:

# NOT YET IMPLEMENTED; some details may change before implementation

resource "azurerm_network_security_rule" "allow-in" {
  for_each                    = "${local.rules}"
  name                        = "allow-${each.key}-in"
  direction                   = "Inbound"
  access                      = "Allow"
  priority                    = "${each.value.priority}"
  protocol                    = "${lookup(each.value, "protocol", "*")}"
  source_port_range           = "*"
  destination_port_range      = "${lookup(each.value, "destination_port", "*")}"
  source_address_prefix       = "${lookup(each.value, "source_address", "*")}"
  destination_address_prefix  = "${lookup(each.value, "destination_address", "*")}"
  resource_group_name         = "${azurerm_resource_group.resource_group.name}"
  network_security_group_name = "${azurerm_network_security_group.nsg.name}"
}

The primary benefit of this, as you correctly suggested, is that if the for_each collection is a map then we will use the map keys to correlate configuration instances with state instances when planning updates, and thus avoid the problem you've encountered with adding or removing items in the map.

If a user provides a list to for_each then it'll behave in the same way as count -- correlating by index -- but will still provide the more convenient each.key and each.value accessors to interpolate from the collection elements, reducing the visual noise of all the element(..., count.index) expressions that result when multiplying a resource using count over the length of a list.
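For illustration, here is a list-based sketch under the same not-yet-implemented design (the resource type and variable names are hypothetical, chosen only to show the shape):

```hcl
# NOT YET IMPLEMENTED; some details may change before implementation
variable "subnet_cidrs" {
  default = ["10.0.1.0/24", "10.0.2.0/24"]
}

resource "aws_subnet" "example" {
  for_each = "${var.subnet_cidrs}"

  vpc_id     = "${aws_vpc.main.id}"
  cidr_block = "${each.value}" # each.key would be the list index here
}
```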

We are currently focused on some more general work to improve the configuration language's handling of collection types, which is a prerequisite for this for_each feature. After that, we'll start designing and prototyping this feature in more detail.

I'm going to update the summary of this issue so that it's more specific about our currently-planned approach, since that should help us find it again to post updates when we have them.

Thanks again for this feature request!

@slawekm

slawekm commented Feb 21, 2018

@apparentlymart: For the sake of readability - could this be implemented as a block directive instead?

Per following example:

target_groups = {
  "http" = {
    port        = 80
    description = "HTTP port"
  }
  "https" = {
    port        = 443
    description = "HTTPS port"
  }
}

resource "aws_lb_target_group" "example" {
  name = "example-${it.key}"
  port = "${it.values["port"]}"
  ...

  iterator {
    on "${var.target_groups}"
  }
}

or with list as input:

repositories = [ "repoA", "repoB"]

resource "aws_ecr_repository" "myrepos" {
  name = "${it.value}" 

  iterator {
    on "${var.repositories}""
  }
}

Where ${it.key} could be the list index in this case. Ideally, the current position in the loop should be exposed to the user via ${it.index} too.

@apparentlymart
Contributor

Hi @slawekm,

Unfortunately HCL syntax doesn't work quite like that, so your nested on argument would need to include an equals sign:

  iterator {
    on = "${var.repositories}"
  }

Given that we expect for_each to become the main case, and count to be more of an edge case, we chose a count-like terse syntax here so that this usage would not create too much "visual noise" in configurations.

In practice today, lots of users have the pattern of specifying the count attribute first and separating it from the others by a blank line so it stands out more from the other "normal" attributes, and so I'd expect that in practice people would use a similar pattern with for_each (even though I didn't illustrate that in my example above, due to adapting the example from the original issue comment):

# NOT YET IMPLEMENTED; some details may change before implementation

resource "azurerm_network_security_rule" "allow-in" {
  for_each = local.rules

  name                        = "allow-${each.key}-in"
  direction                   = "Inbound"
  access                      = "Allow"
  priority                    = each.value.priority
  protocol                    = lookup(each.value, "protocol", "*")
  source_port_range           = "*"
  destination_port_range      = lookup(each.value, "destination_port", "*")
  source_address_prefix       = lookup(each.value, "source_address", "*")
  destination_address_prefix  = lookup(each.value, "destination_address", "*")
  resource_group_name         = azurerm_resource_group.resource_group.name
  network_security_group_name = azurerm_network_security_group.nsg.name
}

The above also illustrates a capability of the new configuration parser where it's no longer required to use "${ and }" to delimit standalone expressions, since expressions can now be specified directly.
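For example, a minimal before/after sketch of the same attribute:

```hcl
# 0.11-style: standalone expression wrapped in interpolation markers
priority = "${each.value.priority}"

# New parser: the expression can be written directly
priority = each.value.priority
```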

Our current configuration work will also include an overhaul of the configuration-related documentation on the website, which should include more "opinionated" best practices than are currently given (since most of the documentation was written before best practices emerged), so we can explicitly recommend the above usage and make sure all of our examples follow it.

@slawekm

slawekm commented Feb 22, 2018

Hi @apparentlymart

First of all, thanks for a detailed response.

on argument would need to include an equals sign

A product of c&p typo, my bad.

Given that we expect for_each to become the main case, and count be more of an edge-case, we chose a count-like terse syntax here so that this usage would not create too much "visual noise" in configurations.

Right.

Erm, I guess the main reason I've asked for this is that I'm subconsciously looking for a simpler way to define dynamic resources.

A configurable iterator could also perform basic operations on input data, such as grouping or filtering on values, and for_each looked like a good opportunity to create a foundation block for this.

This would make "feeding" dictionaries into resource blocks much easier.

And with that in mind, turning for_each into a block would probably make more sense. Sorry for not being clear enough.

@apparentlymart
Contributor

apparentlymart commented Feb 22, 2018

Hi @slawekm,

Thanks for the additional information about your use-case.

The new configuration language interpreter has a feature called "for expressions" that I think will meet your use-case here. For example:

  # Not yet implemented and may change before release
  for_each = {
    for x in aws_subnet.main:
    x.id => x # Use subnet id as each.key
    if x.tags["Access"] == "public"
  }

This for construct can be used anywhere a map or list is expected, and can iterate over lists and maps.
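For example, the list form of a for expression (again, not yet implemented at that time; a sketch following the same syntax) would produce a list rather than a map:

```hcl
# List form: a list of the ids of only the public subnets
public_subnet_ids = [
  for x in aws_subnet.main : x.id
  if x.tags["Access"] == "public"
]
```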

@codyja

codyja commented Feb 23, 2018

Whoa, I'd love to see this. Very exciting

@joshuabaird

Definitely exciting. Has any progress been made on the new config language interpreter that includes for_each?

@lorengordon
Contributor

Really hope this happens soon. To get around this type of problem, I've resorted to templating the .tf files with Jinja to create individual resources rather than use count. Here's the general idea: https://github.com/Crapworks/terratools/tree/master/terratemplate

@fcoelho

fcoelho commented Mar 28, 2018

Will the for_each attribute be available for use in modules too?

@jay-stillman

Is there any indication of when these will be rolled out?

@virtualbubble

virtualbubble commented Apr 14, 2018

I had the same requirements for the NSGs and used this solution, which lets me change ports (since the rules are grouped) and deletes all previous rules on an update. This is just an example.

nsg_rules.tf.json (separate global variables file for NSG rules)

{
  "output": {
    "22-80-8080-443_all": {
      "description": "inbound_allow_SRC:*_Dest:*_Ports:22,80,8080,443",
      "value": "22_inbound_allow_tcp_*_*,80_inbound_allow_tcp_*_*,8080_inbound_allow_tcp_*_*,443_inbound_allow_tcp_*_*"
    },
    "443_all": {
      "description": "inbound_allow_SRC:*_Dest:*_Ports:443",
      "value":"443_inbound_allow_tcp_*_*"
    },
    "3389-22_wirelesetwork": {
      "description": "inbound_allow_SRC:63.200.10.5_Dest:*_Ports:3389,22",
      "value":"3389_inbound_allow_tcp_63.200.10.5_*,22_inbound_allow_tcp_63.200.10.5_*"
    },
    "3389-22_wirelesetwork_jumpbox1": {
      "description": "inbound_allow_SRC:63.200.10.5_Dest:10.1.0.1_Ports:3389,22",
      "value":"3389_inbound_allow_tcp_63.200.10.5_10.1.0.1,22_inbound_allow_tcp_63.200.10.5_10.1.0.1"
    }
  }
}

main.tf

module "nsg_rules" {
  source = "./core/nsg_rules"  //source to global nsg rules
}

module "DomainController1" {
  source               = "./core/.compute"
  location             = "${var.location}"
  resource_group_name  = "${var.resource_group_name}"
  vm_hostname          = "${var.vm_hostname}"
  custimagerecgroup    = "${var.custom_imagerecgroup}"
  customimage          = "${module.variables.2016-datacenter-latest}"
  vm_size              = "${var.vm_size["medium"]}"
  vnet_subnet_id       = "${data.azurerm_subnet.networks.id}"
  nsg_ports            = "${module.variables.3389-22_wirelessnetwork}" //REFERENCE THE RULE FROM THE JSON
}

core module code

resource "azurerm_network_security_group" "vm" {
  count               = "${var.nsg_required == "true" ? 1 : 0}"
  name                = "${var.vm_hostname}-${coalesce(var.remote_port,module.os.calculated_remote_port)}-nsg"
  location            = "${azurerm_resource_group.vm.location}"
  resource_group_name = "${azurerm_resource_group.vm.name}"
}

We then split the rules by manipulating the string values from the JSON.

Split by comma to get the count of rules:

"22_inbound_allow_tcp__
80_inbound_allow_tcp__
8080_inbound_allow_tcp__
443_inbound_allow_tcp__

Then split on the underscores to get, in order: port, direction, action, protocol, source and destination.
Note in the code below we have a * to accept from any.

resource "azurerm_network_security_rule" "vmnsg" {
  count                       = "${length(split(",", var.nsg_ports))}"
  name                        = "${element(split(",", var.nsg_ports), count.index)}"
  priority                    = "10${count.index}"
  direction                   = "${element(split(",", var.nsg_direction), count.index)}"
  access                      = "${element(split(",", var.nsg_access), count.index)}"
  protocol                    = "${element(split(",", var.nsg_protocol), count.index)}"
  destination_port_range      = "${element(split("_", element(split(",", var.nsg_ports), count.index)),0)}"
  source_port_range           = "*"
  source_address_prefix       = "${element(split("_", element(split(",", var.nsg_ports), 0)),4)}"
  destination_address_prefix  = "${element(split(",", var.nsg_dest_add), count.index)}"
  resource_group_name         = "${azurerm_resource_group.vm.name}"
  network_security_group_name = "${azurerm_network_security_group.vm.name}"
}

@stipx

stipx commented Apr 30, 2018

I also think the count feature is not really the best way to create multiple resources.
The name will always be resource[0], resource[1], etc., but it would be much better to have resource[key1], resource[keySomething]. I stumbled upon this when creating multiple VPN tunnels for AWS; I tried it with maps, and of course the keys get sorted alphabetically.
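To make the difference concrete, a sketch (hypothetical resource arguments and map keys) of how the instance addresses would compare:

```hcl
# With count, instances are identified by position:
#   aws_vpn_connection.tunnel[0]
#   aws_vpn_connection.tunnel[1]
#
# With for_each over a map, they would be identified by key:
#   aws_vpn_connection.tunnel["office-a"]
#   aws_vpn_connection.tunnel["office-b"]
resource "aws_vpn_connection" "tunnel" {
  for_each = "${var.tunnels}" # hypothetical map of tunnel settings

  vpn_gateway_id      = "${each.value.vpn_gateway_id}"
  customer_gateway_id = "${each.value.customer_gateway_id}"
  type                = "ipsec.1"
}
```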

Is there any progress with the for_each?

@jaloren

jaloren commented Apr 30, 2018

How would this design handle sub-resources? Would there be a way to create a resource conditionally? For example, would I be able to say: given condition X in one iteration, skip creating this resource?

@RishikeshDarandale

@apparentlymart, will there be a migration guide/document provided for moving from count to for_each? This is basically to take advantage of for_each in existing logic where count is used over a list or map.

@psalaberria002

Is this currently being worked on? Is there any expected delivery date?

@lorengordon
Contributor

Looks like it's getting close: https://www.hashicorp.com/blog/terraform-0-12-preview

@bentterp

Recent activity in the PR, so I'm hopeful.
This is very much something that would make our work easier and our implementations much more robust.
Fingers crossed!

@ViggyNash

I wish there were some documentation to indicate to users that count should not be used dynamically. I've had to refactor a chunk of my code, and throw out another project that would have relied on it, after discovering the count issue.

@lorengordon
Contributor

@ViggyNash As with so many docs, the explanation is there, it just needs to be read carefully and probably doesn't make a lot of sense until it bites you. :/

The count meta-argument accepts expressions in its value, similar to the resource-type-specific arguments for a resource. However, Terraform must interpret the count argument before any actions are taken from remote resources, and so (unlike the resource-type-specific arguments) the count expressions may not refer to any resource attributes that are not known until after a configuration is applied, such as a unique id generated by the remote API when an object is created.

Note that the separate resource instances created by count are still identified by their index, and not by the string values in the given list. This means that if an element is removed from the middle of the list, all of the indexed instances after it will see their subnet_id values change, which will cause more remote object changes than were probably intended. The practice of generating multiple instances from lists should be used sparingly, and with due care given to what will happen if the list is changed later.

https://www.terraform.io/docs/configuration/resources.html#count-multiple-resource-instances
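A minimal reproduction of that index-shifting behaviour (a sketch using null_resource so it can be tried without a cloud account):

```hcl
variable "names" {
  # Removing "b" shifts "c" from index 2 to index 1, so Terraform plans
  # changes for instances whose configuration did not actually change.
  default = ["a", "b", "c"]
}

resource "null_resource" "example" {
  count = "${length(var.names)}"

  triggers = {
    name = "${element(var.names, count.index)}"
  }
}
```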

@tmccombs
Contributor

Is there anything I could do to help get this feature added or the PR merged?

@pselle
Contributor

pselle commented Jul 16, 2019

@tmccombs Thank you for the kind offer!

We are indeed prioritizing this for a near-term release of Terraform, at which point I'd say the best way to help is to use it and help us find the bugs that will inevitably show up in the first pass of the feature. (You could do this now by building the active PR; it would help me most if you comment on the PR itself with anything you find.)

@bentterp

for_each fails for me when using a local variable map; more details in the PR.

@ghost

ghost commented Jul 24, 2019

I have a question regarding "count" vs "for". I am experiencing all the issues discussed here, too. But my design was planned to have something like

module "zookeepers" {
  source      = "../modules/virtual-machine"
  vm_count    = var.zk_count
  datadisks = [
    {
      id   = 0
      type = "Standard_LRS"
      size = "128"
    },
    {
      id   = 1
      type = "Standard_LRS"
      size = "128"
    },
  ]
 }

which means I would need to maintain a two-dimensional array of count and length(var.datadisks). Playing with DIV and MOD tricks doesn't work, since disks would get reshuffled on a count change. Any chance to get this done with the new approach?

@simonbirtles

simonbirtles commented Jul 24, 2019

Using TF 0.12.4, this is how I am dealing with dynamic disks on a vsphere_virtual_machine without relying on indexing with count.

The vsphere_virtual_machine machine resource has the following disk configuration:

  # Data disks #'s 1-...
  # disk.tag and disk.value available on each loop
  dynamic "disk" {
    for_each = [for data_disk in var.disks : {
      disk_unit_number = data_disk.unit_number
      disk_label       = format("disk%d", data_disk.unit_number)
      disk_size        = data_disk.size
      }
      if data_disk.unit_number != 0
    ]

    content {
      unit_number      = disk.value.disk_unit_number
      label            = disk.value.disk_label
      size             = disk.value.disk_size
      eagerly_scrub    = false
      thin_provisioned = true
      keep_on_remove   = false
    }
  }

The variables come from a file specific to the virtual machine similar to what you have above:

"module": [
    {
      "vm_id...": {
      "source": "......",
      "disks": [
            {
                "unit_number": 0,
                "size": "60"
            },
            {
                "unit_number": 1,
                "size": "120"
            },
            {
                "unit_number": 2,
                "size": "100"
            },
            {
                "unit_number": 3,
                "size": "100"
            },
            {
                "unit_number": 4,
                "size": "100"
            }
            ],

Note: I exclude disk 0 as it's part of our image, but we keep the disk in the server spec file for completeness, hence skipping disk 0 in the dynamic loop.

@ghost

ghost commented Jul 24, 2019

Thanks @simonbirtles, I get the idea.
Would you create the VMs based on that script, too? I just wonder how to convert the machine list entries into resource commands...

@simonbirtles

Hi @desixma ,

If I understand correctly I will briefly describe our workflow....

We have a workflow that starts with a file per resource (i.e. VM), which is our own DDL spec containing all the details required for a virtual machine (VM specifics, configuration and software). Each file is converted using a j2 template into a Terraform config file per resource/VM, which contains the module config with variables as I showed in the second snippet. These files (one per VM), plus basic main/var/output.tf files, make up the full Terraform configuration.

The files above import a generic module for a VMware virtual machine; the first snippet is part of that generic VMware VM resource, which is imported by each Terraform config file (one file per VM).

From what I see, I think your approach is fine; you would need to have separate modules for zookeeper1, zookeeper2, ... of course, and add the second snippet I provided into your module ../modules/virtual-machine, with some adjustment of var names etc.

TF does not yet support dynamic resources, so I create a module per VM and import the generic module, which I would say is the usual approach. Once the feature discussed here is available we can "hopefully" move away from separate modules.

Hope that helps.

@ghost

ghost commented Jul 24, 2019

ah I see. Thanks for the clarification

@cyrus-mc

cyrus-mc commented Aug 3, 2019

How do you access all the resources when using for_each?

For example I have the following:

# create required policies
resource "aws_iam_policy" "main" {
  for_each = var.policies

  name = each.key
  path = lookup(each.value, "path", "/")

  policy = templatefile("${var.policies_dir}/${lookup(each.value, "template", each.key)}.tmpl",
                        lookup(each.value, "vars", {}))
}

Prior to this I just used count on a list and then could perform the following:

  /* create a map with key = policy name and value being the policy arn */
  policy_name_arn_mapping = zipmap(aws_iam_policy.main.*.name, aws_iam_policy.main.*.arn)

With for_each I get

Error: Unsupported attribute

  on ../variables.tf line 11, in locals:
  11:   policy_name_arn_mapping = zipmap(aws_iam_policy.main[*].name, aws_iam_policy.main[*].arn)

This object does not have an attribute named "name".


Error: Unsupported attribute

  on ../variables.tf line 11, in locals:
  11:   policy_name_arn_mapping = zipmap(aws_iam_policy.main[*].name, aws_iam_policy.main[*].arn)

This object does not have an attribute named "arn".

@tmccombs
Contributor

tmccombs commented Aug 5, 2019

Unfortunately, HCL's splat syntax only supports lists (perhaps that should be changed in the upstream HCL project?).

You can still access all of them by using the values function to convert the values to a list. For example:

/* create a map with key = policy name and value being the policy arn */
policy_name_arn_mapping = zipmap(values(aws_iam_policy.main)[*].name, values(aws_iam_policy.main)[*].arn)

Although in this case it might be better just to use a for comprehension, such as:

{for name, policy in aws_iam_policy.main: name => policy.arn}
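For example (a sketch), that comprehension can be assigned directly in a locals block:

```hcl
locals {
  /* map of policy name => policy arn, derived from the for_each instances */
  policy_name_arn_mapping = { for name, policy in aws_iam_policy.main : name => policy.arn }
}
```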

@timota

timota commented Aug 13, 2019

How can we dynamically create such a map, so it can be passed to for_each within a resource block?

  disk_by_inst = {
    first-1 = {
      type = "gp2"
      size = 10
    },
    first-2 = {
      type = "io2"
      size = 20
    },
    second-1 = {
      type = "ght"
      size = 30
    },
    second-2 = {
      type = "gp2"
      size = 40
    }
  }

input params:

variable "instances_id" {
  default = ["first", "second", "third"]
}

variable "disks" {
  type        = list(map(string))
  default     = []
}

where

disks = [
  {
    size = 50
  },
  {
    iops = 100
    size = 10
  }
]

I tried different methods, but no luck. The closest is:

  sum = {
    for s in var.instances_id :
    "${s}-vol-${local.counter + 1}" => { for k, v in var.disks : k => v }
  }

which gives this output

out = {
  "first-vol-1" = {
    "0" = {
      "size" = "50"
    }
    "1" = {
      "iops" = "100"
      "size" = "10"
    }
  }
  "second-vol-1" = {
    "0" = {
      "size" = "50"
    }
    "1" = {
      "iops" = "100"
      "size" = "10"
    }
  }
  "third-vol-1" = {
    "0" = {
      "size" = "50"
    }
    "1" = {
      "iops" = "100"
      "size" = "10"
    }
  }
}

@OJFord
Contributor

OJFord commented Aug 13, 2019

@timota It's not very clear to me what your desired output is, but if it's that:

"<nth>-vol-<i>" = {
  "0" = {...}
  "1" = {...}
}

should instead have only the 0 or 1 block corresponding to i - 1, then you need to change:

{ for k, v in var.disks : k => v }

to:

var.disks[local.counter]

(but that local.counter's not going to change in the loop, so you're only going to get *-vol-1.)

@timota

timota commented Aug 13, 2019

Ah, sorry.

My goal is to get this map:

{
	<inst-01a-vol-01> = {...}
	<inst-01a-vol-02> = {...}
	<inst-02a-vol-01> = {...}
	<inst-02a-vol-02> = {...}
}

so I can use the keys (inst-xxx-vol-xxx) in for_each on a resource to create named resources.

@OJFord
Contributor

OJFord commented Aug 13, 2019

@timota I think you want something like:

{
  for instdisk in setproduct(var.instance_ids, var.disks)
  : "inst-${index(var.instance_ids, instdisk[0])}-vol-${index(var.disks, instdisk[1])}"
  => instdisk[1]
}

(but note I haven't tested it!)

This is quite a good example of where an 'enumerate' function would be helpful (#21940).

@timota

timota commented Aug 13, 2019

cool, thanks. will test and let you know

@timota

timota commented Aug 13, 2019

it works like a charm

Many thanks.

@Hatsou

Hatsou commented Aug 13, 2019

Hi, I'm currently trying to create data disks dynamically for some virtual machines. For example, I want to create 2 VMs with 3 data disks each,
so the expected result for the data disk names is:
DD01-VM-1
DD02-VM-1
DD03-VM-1
DD01-VM-2
DD02-VM-2
DD03-VM-2
I use the following code :

Creation of data disks

resource "azurerm_managed_disk" "MyDataDiskVm" {
  count                = "${var.nb_data_disk * var.nb_vms}"
  name = "${format("DD%02d", (count.index % var.nb_data_disk) +1)}-VM-${var.vm_name_suffix}${format("%d", floor((count.index / var.nb_data_disk) +1))}"
  location             = "${var.location}"
  resource_group_name  = "${var.resource_group_name}"
  storage_account_type = "Standard_LRS"
  disk_size_gb         = "${var.data_disk_size_gb}"
  create_option        = "Empty"
  depends_on = ["azurerm_virtual_machine.MyVms"]
}

Attach the created data disk to virtual machine

resource "azurerm_virtual_machine_data_disk_attachment" "MyDataDiskVmAttach" {
  count              = "${var.nb_data_disk * var.nb_vms}"
  managed_disk_id    = "${azurerm_managed_disk.MyDataDiskVm.*.id[count.index]}"
  virtual_machine_id = "${azurerm_virtual_machine.MyVms.*.id[ceil((count.index +1) * 1.0 / var.nb_data_disk) -1]}"
  lun                = "${count.index % var.nb_data_disk}"
  caching            = "ReadWrite"
  create_option      = "Attach"
  depends_on         = ["azurerm_managed_disk.MyDataDiskVm"]
}

Everything works fine: data disks are created with the right names and correctly attached to the VMs, but when I re-run "apply", Terraform wants to change the id of the data disks and therefore destroys and recreates them.

-/+ azurerm_managed_disk.MyDataDiskVm[0] (new resource required)

Im using Terraform v0.11.11.

Do you know where the error may come from, or is it possible to dynamically create data disks with a "for each" in the azurerm provider?

Thx for your feedback

@neelam-007

Hi Guys,

I need to dynamically generate an entire resource block based on a map. So, in one file, I need something like the following, repeated for each.value:

variable "my_map" {
  type = "map"
  default = {
    key1 = "key1val",
    key2 = "key2val"
  }
}
# do the following for each entry in my_map
resource "vault_policy" "${each.key}-admin" {
  name="${each.value}-admin"
  path "ab/cd/ef/${each.value}" {
    capabilities = ["list"]
  }
  path "gh/ij/kl/${each.value}" {
    capabilities = ["list", "read"]
 }
} 

How can I achieve this with for_each? So far, what I've tried is not working. Note that I've successfully generated one single resource block with a bunch of path definitions based on a map, but I don't know exactly what the syntax should look like for generating repeated resource blocks.

Thanks for feedback/help.

@pselle
Contributor

pselle commented Aug 13, 2019

Hi friends! While I appreciate seeing folks help each other, please use the community forums to ask questions, and help future people who are asking similar questions. Thank you!

@sanzen193

Hi All,

I am using a for_each to assign a new network to each VM that I create

data "vsphere_network" "network1" {
  count         = var.VMcount
  name          = "${var.network_name1}-${format("%02d", count.index + var.start_index)}"
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

data "vsphere_virtual_machine" "template" {
  name          = var.disk_template
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

=========================================================

FOLDER, TFVM RESOURCES ETC.

=========================================================

resource "vsphere_folder" "chefinfra" {
  datacenter_id = data.vsphere_datacenter.datacenter.id
  path          = var.vmfolder
  type          = var.vsphere_folder_type
}

resource "vsphere_virtual_machine" "tfvm" {
  for_each = { for net in data.vsphere_network.network1 : net.id => net }

  datastore_id     = data.vsphere_datastore.datastore.id
  resource_pool_id = data.vsphere_resource_pool.pool.id
  #count = var.VMcount
  name                       = "${var.vmname}${format("%02d", each.name + var.start_index)}"
  annotation                 = var.tfvm_annotation
  folder                     = vsphere_folder.chefinfra.path
  hv_mode                    = var.hv_mode
  nested_hv_enabled          = var.nested_hv_enabled
  num_cpus                   = var.cpu
  num_cores_per_socket       = var.cpu
  cpu_hot_add_enabled        = true
  cpu_hot_remove_enabled     = true
  memory                     = var.memory
  memory_hot_add_enabled     = true
  guest_id                   = var.guest_id
  scsi_type                  = data.vsphere_virtual_machine.template.scsi_type
  wait_for_guest_net_timeout = var.guest_net_timeout
}

Unfortunately, since I am using v0.12.6 with for_each, I cannot use count.index to dynamically name my VMs. What is the alternative for creating VM names dynamically while in a for_each? The VM names need to increment by 1. Thanks in advance.

@hashicorp hashicorp locked as resolved and limited conversation to collaborators Aug 13, 2019