
for_each on google_compute_instance with map of boot disk : impact others #4286

Closed
maxolodrou opened this issue Aug 18, 2019 · 7 comments
maxolodrou commented Aug 18, 2019

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
  • If an issue is assigned to the "modular-magician" user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to "hashibot", a community member has claimed the issue already.

Terraform Version

Terraform v0.12.6

  • provider.external v1.2.0
  • provider.google v2.13.0

Affected Resource(s)

google_compute_instance

Terraform Configuration Files

############## LOCALS ###########
locals {
  datetime = replace(replace(replace(replace(timestamp(), "-", ""),"T",""),":",""),"Z","")
  # Image
  image_list = compact(
    concat(
      [for vm,params in var.vms : lookup(lookup(lookup(params, "boot"),"disk_source",{}), "image_family", "")]
    )
  )
  link_image_list = data.google_compute_image.images.*.self_link
  final_image_map = zipmap(local.image_list,local.link_image_list)
}
######## Data source ################
data "google_compute_subnetwork" "subnetwork"{
  name= "gce-sub-01"
  region = var.region
  project = var.host_project_id
}
data "google_compute_image" "images" {
  count = length(local.image_list)
  project = "gce-uefi-images"
  family  = element(local.image_list,count.index)
}
#####  resources #####
resource "google_compute_disk" "boot_disk" {
  for_each = var.vms
  name    = "${each.key}-boot-${local.datetime}"
  project = var.project_id
  type    = lookup(lookup(each.value, "boot"),"disk_type")
  size    = lookup(lookup(each.value, "boot"),"disk_size",16)
  zone    = lookup(each.value, "zone", var.zone)
  image   = lookup(lookup(lookup(each.value, "boot"),"disk_source",{}),"source","image") == "image" ? lookup(local.final_image_map,lookup(lookup(lookup(each.value, "boot"),"disk_source",{}),"image_family"),"") : ""
  lifecycle {
    ignore_changes  = [
      "name",
    ]
    prevent_destroy = false
    #create_before_destroy = true
  }
}
resource "google_compute_instance" "instance" {
  for_each = var.vms
  project      = var.project_id
  name         = each.key
  zone         = lookup(each.value,"zone",var.zone)
  machine_type = lookup(each.value,"machine_type")
  boot_disk {
    source      = lookup(google_compute_disk.boot_disk, each.key).self_link
    device_name = "${each.key}-boot"
    auto_delete = "false"
  }
  depends_on = ["google_compute_disk.boot_disk"]
  network_interface {
    subnetwork = data.google_compute_subnetwork.subnetwork.self_link       
  }
  scheduling {
    on_host_maintenance = "MIGRATE"
    automatic_restart   = "true"
  }
  allow_stopping_for_update = "true"
  lifecycle {
    #ignore_changes  = [boot_disk["source"],boot_disk["initialize_params"],boot_disk[""],]
  }
}

#### Variables #####
variable "project_id" {
  description = "ID of the project "
  type        = "string"
}

variable "host_project_id" {
  description = "ID of the host project "
  type        = "string"
}

variable "network_name" {
  description = "Name of the network"
  type        = "string"
}

variable "region" {
  description = "Region where new resources are created"
}

variable "zone" {
  description = "Zone where new resources are created"
}

variable "vms" {
  description = "Vms to create"
}
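As a side note (unrelated to the bug), the nested replace() chain in locals.datetime can be written more directly with Terraform 0.12's formatdate() function; a sketch, assuming the intent is a compact UTC timestamp:

```hcl
locals {
  # Equivalent to replace(replace(replace(replace(timestamp(), "-", ""), "T", ""), ":", ""), "Z", "")
  # timestamp() returns RFC 3339 UTC, e.g. "2019-08-18T15:49:12Z" becomes "20190818154912"
  datetime = formatdate("YYYYMMDDhhmmss", timestamp())
}
```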

Terraform variables

terraform.tfvars :

project_id = "my_project_id"
host_project_id = "my_host_project_id"
region = "europe-west1"
zone = "europe-west1-d"
vms = {
    vmname1 = {                                         
       machine_type   = "n1-standard-1"                                          
        boot = {                                        
            disk_type = "pd-standard"                   
            disk_size = "50"                            
            disk_source = {                             
                source = "image"       
                image_family = "centos-7"
            }
        }
        service_account = "gce-svcaccount@my_project_id.iam.gserviceaccount.com"
        api_permissions = [                              
            "logging-write", 
            "monitoring-write", 
            "storage-rw", 
            "service-management", 
            "service-control"
        ]                   
    }
    vmname2 = {                                         
       machine_type   = "n1-standard-1"                                          
        boot = {                                        
            disk_type = "pd-standard"                   
            disk_size = "50"                            
            disk_source = {                             
                source = "image"       
                image_family = "windows-2019"
            }
        }
        service_account = "gce-svcaccount@my_project_id.iam.gserviceaccount.com"
        api_permissions = [                              
            "logging-write", 
            "monitoring-write", 
            "storage-rw", 
            "service-management", 
            "service-control"
        ]                   
    }
}

Debug Output

https://gist.github.com/maxolodrou/e2ebafbbd879a5382ab40d09112526f4

Expected Behavior

When changing something on the vmname2 boot disk, only the vmname2 configuration should change, not vmname1.

Actual Behavior

Changing the image for vmname2 also impacts vmname1:

    .....
    # google_compute_instance.instance["vmname2"] must be replaced  ==> NORMAL
    .....
    # google_compute_disk.boot_disk["vmname2"] must be replaced  ==> NORMAL
    .....
    # google_compute_instance.instance["vmname1"] must be replaced ==> NOT NORMAL
    -/+ resource "google_compute_instance" "instance" {
    .....
      ~ boot_disk {
            auto_delete                = false
            device_name                = "vmname1-boot"
          + disk_encryption_key_sha256 = (known after apply)
          + kms_key_self_link          = (known after apply)
          ~ source                     = "https://www.googleapis.com/compute/v1/projects/my_project_id/zones/europe-west1-d/disks/vmname1-boot-20190818154912" -> (known after apply) # forces replacement
    .......

Steps to Reproduce

  1. terraform apply
  2. change image_family in tfvars for vmname2 (for example : "windows-2019" --> "windows-2016")
  3. terraform apply
@ghost ghost added the bug label Aug 18, 2019
Chupaka (Contributor) commented Aug 19, 2019

Looks like the problem is in "source = lookup(google_compute_disk.boot_disk, each.key).self_link" — your vmname1 ends up with the boot disk of vmname2.
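(For reference, with for_each the disk can also be indexed by key directly instead of via lookup(); a sketch of the equivalent reference:)

```hcl
boot_disk {
  # Indexing the for_each resource by map key is equivalent to
  # lookup(google_compute_disk.boot_disk, each.key)
  source      = google_compute_disk.boot_disk[each.key].self_link
  device_name = "${each.key}-boot"
  auto_delete = false
}
```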

maxolodrou (Author) commented

Hi Chupaka, no. Sorry, maybe the current "Actual Behavior" is not detailed enough; I'll update it. I confirm the change only affects the vmname2 disk. After apply, both vmname1 and vmname2 are replaced (shutdown, remove, recreate, start), but vmname1 is rebuilt with exactly the same disk (the vmname1 disk itself is not rebuilt), while vmname2 is rebuilt with its new disk. So the strange behavior is that vmname1 is rebuilt even though its disk did not change. It looks like when Terraform detects an update in the map of boot disks (also created with for_each), all instances are replaced, even those whose linked disk did not change.

slevenick (Collaborator) commented

I suspect that the timestamp in the disk name is causing the diff: name = "${each.key}-boot-${local.datetime}".

The field that forces recreation in your example is the source field, which presumably changed because the disk name was recalculated when the timestamp changed:

    ~ source = "https://www.googleapis.com/compute/v1/projects/my_project_id/zones/europe-west1-d/disks/vmname1-boot-20190818154912" -> (known after apply) # forces replacement

If you remove the timestamp and attempt to reproduce, do you still see vmname1 being recreated?
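A reduced sketch of the disk resource with the timestamp removed, usable to isolate the diff (the image argument is omitted for brevity):

```hcl
resource "google_compute_disk" "boot_disk" {
  for_each = var.vms
  # Static name: nothing here is recomputed on each plan.
  name    = "${each.key}-boot"
  project = var.project_id
  type    = lookup(lookup(each.value, "boot"), "disk_type")
  size    = lookup(lookup(each.value, "boot"), "disk_size", 16)
  zone    = lookup(each.value, "zone", var.zone)
}
```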

maxolodrou (Author) commented

Hi slevenick,

This is not caused by the timestamp, because I ignore changes to the disk name:

    lifecycle {
      ignore_changes = [
        "name",
      ]
    }

FYI, I have already tested removing the timestamp and I see the same behavior. Did you see the debug output? It contains this line: [WARN] Provider "google" produced an invalid plan for go....

I suspect the provider (or Terraform core) has trouble using a string index like ["vmname2-boot"] in the state file (because I use the for_each feature) rather than an int index like [1] (used with the count feature).

@ghost ghost removed the waiting-response label Aug 22, 2019
@slevenick slevenick assigned tysen and unassigned slevenick Aug 22, 2019
maxolodrou (Author) commented

@apparentlymart, could you take a look at this issue please? It seems you are the master of for_each in Terraform :)


tysen commented Sep 3, 2019

This is an upstream issue so I'm closing this one in favor of the issue linked above. We'll reopen if it turns out otherwise.

@tysen tysen closed this as completed Sep 3, 2019

ghost commented Oct 4, 2019

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost ghost locked and limited conversation to collaborators Oct 4, 2019