
Attaching an additional disk to a template containing a boot disk fails with google_compute_instance_from_template #2122

Closed
bmoyet opened this issue Sep 27, 2018 · 5 comments · Fixed by GoogleCloudPlatform/magic-modules#1077
Labels: bug, forward/review, service/compute-instances

Comments

@bmoyet

bmoyet commented Sep 27, 2018

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
  • If an issue is assigned to the "modular-magician" user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to "hashibot", a community member has claimed the issue already.

Terraform Version

Terraform v0.11.8

  • provider.google v1.18.0

Affected Resource(s)

  • google_compute_instance_from_template

Terraform Configuration Files

terraform {
	required_version = "= 0.11.8"
}

provider "google" {
  credentials = xxx
  project     = xxx
  region      = xxx
}

variable "zone" {
  default = "europe-west1-d"
}
variable "instance_group_name" {}
variable "replica_set" {}
variable "disk_size" {
  default = "500"
}
variable "instance_count" {
  default = "2"
}
variable "arbiter_count" {
  default = "1"
}

data "google_compute_image" "mongo34debian8" {
  family    = "mongodebian8"
}

data "google_compute_image" "blank" {
  family    = "blank"
}

resource "google_compute_instance_template" "rsmember" {
  name          = "mongo-prod-mongo34"
  description   = "A template to create a mongo replica set member"

  labels  = {
    environment = "research"
  }

  machine_type    = "n1-standard-8"

  disk {
    source_image    = "${data.google_compute_image.mongo34debian8.self_link}"
    auto_delete     = true
    boot            = true
    disk_type       = "pd-ssd"
    disk_size_gb    = 10
  }

  metadata {
    rs    = "${var.replica_set}"
  }

  metadata_startup_script = "/home/bastien/startup-script.sh"
   
  network_interface {
    network = "default"
  }
}

resource "google_compute_disk" "data" {
  count     = "${var.instance_count}"

  zone      = "${var.zone}"
  name      = "${var.instance_group_name}-sv${count.index}-data"
  size      = "${var.disk_size}"
  type      = "pd-ssd"
}

resource "google_compute_instance_from_template" "member" {
  count           = "${var.instance_count}"

  name            = "${var.instance_group_name}-sv${count.index}"
  zone            = "${var.zone}"
  description     = "A replica set member"

  source_instance_template = "${google_compute_instance_template.rsmember.self_link}"

  // Override fields from instance template
  
  attached_disk {
    source      = "${google_compute_disk.data.*.self_link[count.index]}"
  }
}

resource "google_compute_disk" "data-arb" {
  count     = "${var.arbiter_count}"

  zone      = "${var.zone}"
  name      = "${var.instance_group_name}-arb${count.index}-data"
  size      = 10
  type      = "pd-ssd"
}

resource "google_compute_instance_from_template" "member-arb" {
  count           = "${var.arbiter_count}"

  name            = "${var.instance_group_name}-sv${count.index}"
  zone            = "europe-west1-d"
  description     = "A replica set arbiter"

  source_instance_template = "${google_compute_instance_template.rsmember.self_link}"

  // Override fields from instance template
  
  attached_disk {
    source      = "${google_compute_disk.data-arb.*.self_link[count.index]}"
  }
}

Debug Output

Trace output: https://gist.github.com/bmoyet/8a9c65ade4ed8b9c69569a0e8af6511d
Debug output, if you prefer: https://gist.github.com/bmoyet/f08ff914750af09e86c730718f171c52

Expected Behavior

Terraform should have created a Google instance template containing a boot disk with the specified image. Then, when creating the actual instance from it, it should have used that boot disk and attached the additional disk.

Maybe I'm using google_compute_instance_from_template wrong. I tried several other setups, including specifying "boot_disk" inside it, since it is a google_compute_instance property and the docs say any google_compute_instance property can be used in google_compute_instance_from_template, but to no avail.
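
Roughly, the kind of boot_disk override I tried (a sketch, not the exact configuration):

resource "google_compute_instance_from_template" "member" {
  count = "${var.instance_count}"

  name = "${var.instance_group_name}-sv${count.index}"
  zone = "${var.zone}"

  source_instance_template = "${google_compute_instance_template.rsmember.self_link}"

  // Repeating the boot disk from the template, hoping it ends up first in the disk list
  boot_disk {
    initialize_params {
      image = "${data.google_compute_image.mongo34debian8.self_link}"
    }
  }

  attached_disk {
    source = "${google_compute_disk.data.*.self_link[count.index]}"
  }
}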

Actual Behavior

Apply fails with the error:

* google_compute_instance_from_template.member: Error creating instance: googleapi: Error 400: Invalid value for field 'resource.disks[0]': ''. Boot disk must be the first disk attached to the instance., invalid

In Google Cloud, the template is created with the correct boot disk attached, and the data disk created with google_compute_disk exists as well.

Steps to Reproduce

  1. terraform plan -var 'instance_group_name=mongo-cluster-shd04' -var 'replica_set=rs04' -var 'instance_count=1' -out deployRs
  2. terraform apply "deployRs"

Thank you for your help

@ghost ghost added the bug label Sep 27, 2018
@paddycarver
Contributor

Does using the compute_attached_disk resource work? I have a hunch we're hitting a race condition here, where we're trying to attach the disk before we finish setting up the instance from the template.
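
Something along these lines, adapted from your config (an untested sketch, not verified against this exact setup):

resource "google_compute_instance_from_template" "member" {
  count = "${var.instance_count}"

  name        = "${var.instance_group_name}-sv${count.index}"
  zone        = "${var.zone}"
  description = "A replica set member"

  source_instance_template = "${google_compute_instance_template.rsmember.self_link}"

  // No attached_disk override here; the boot disk comes from the template.
}

// Attach each data disk after the instance exists, rather than at creation time.
resource "google_compute_attached_disk" "member_data" {
  count = "${var.instance_count}"

  disk     = "${google_compute_disk.data.*.self_link[count.index]}"
  instance = "${google_compute_instance_from_template.member.*.self_link[count.index]}"
}

If I remember right, the google_compute_attached_disk docs also suggest adding lifecycle { ignore_changes = ["attached_disk"] } on the instance so the two resources don't fight over which disks are attached.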

@bmoyet
Author

bmoyet commented Sep 28, 2018

I hadn't seen that resource. I can't try it right now, but I'll check it out as soon as possible.

@bmoyet
Author

bmoyet commented Oct 2, 2018

Hello,

I was able to try this workaround today, and it works as expected: the instance starts with the additional disk attached. So your hunch seems correct. I'll use this workaround in the meantime.

Thanks

@danawillow
Contributor

I don't think this is a race condition, but a bug in the way we handle disks.

The compute instances API expects a single list of disks, where the first one must be the boot disk. In Terraform we decided to split that into separate boot_disk, attached_disk, and scratch_disk blocks. When we actually make the API request, we put the boot disk, then all attached disks, then all scratch disks into a list and send it off to the API.

In the compute_instance_from_template resource, we create the instance the same way as in the instance resource. Since we allow specifying only a subset of the disks, the list we send to the API in this example contains just the attached disk, which is the bug. Instead, we should read the instance template, get its list of disks, and append this one onto the end. Then, in the read function, we should read both the instance and the template, subtract out any disks that appear in both lists, and store the difference in state.

But yes, as Paddy said, the compute_attached_disk resource works great as a workaround until this bug is fixed.

@ghost

ghost commented Jan 18, 2019

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost ghost locked and limited conversation to collaborators Jan 18, 2019
@github-actions github-actions bot added forward/review In review; remove label to forward service/compute-instances labels Jan 15, 2025