
google_compute_instance disks not controlled by terraform forces a new resource #12715

Closed
JorritSalverda opened this issue Mar 15, 2017 · 8 comments
Labels
bug, provider/google-cloud, waiting-response

Comments

JorritSalverda commented Mar 15, 2017

When a google_compute_instance has disks attached by something other than Terraform - in our case the kubelet attaches a disk and mounts it directly into a container - Terraform wants to recreate the resource on every run, even though we're fine with that disk not being controlled by Terraform.

Disabling refresh prevents Terraform from recreating the resource, but that's not ideal.
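(Presumably this means running plan/apply with refresh disabled, e.g. terraform plan -refresh=false: the extra attached disk never gets read back into state, so no diff is shown, but any other drift is hidden as well.)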

Terraform Version

Terraform v0.8.8

Affected Resource(s)

  • google_compute_instance

Terraform Configuration Files

resource "google_compute_disk" "disk_mounted_by_kubelet" {
    name = "disk-mounted-by-kubelet"
    zone = "europe-west1-c"
    size = "1"
}

data "template_file" "kubernetes_proxy_container_manifest" {
    template = "${file("bootstrap/kubernetes-proxy-container-manifest.yml")}"
}

resource "google_compute_instance" "kubernetes_proxy" {
    name = "kubernetes-proxy"
    machine_type = "g1-small"
    zone = "europe-west1-c"

    // boot disk
    disk {
        image = "https://www.googleapis.com/compute/v1/projects/google-containers/global/images/container-vm-v20160217"
        auto_delete = "true"
    }

    network_interface {
        subnetwork = "europe-west1-c"
        access_config {
            // ephemeral
        }
    }

    metadata {
        google-container-manifest = "${data.template_file.kubernetes_proxy_container_manifest.rendered}"
    }

    depends_on = ["google_compute_disk.disk_mounted_by_kubelet"]
}

Expected Behavior

Terraform should ignore the disk attached outside of its control.

disk.#: "1" => "1" (forces new resource)

Actual Behavior

Terraform plan actually shows this:

disk.#: "2" => "1" (forces new resource)

@etiennetremel

A temporary (and not recommended) workaround is to use lifecycle ignore_changes:

   lifecycle {
       ignore_changes = ["disk"]
   }
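
For reference, a minimal sketch of where that block sits in the resource from the original report; note that this ignores all drift in the disk list, including changes to the Terraform-managed boot disk:

resource "google_compute_instance" "kubernetes_proxy" {
    // ... name, machine_type, zone, disk, network_interface, metadata as above ...

    lifecycle {
        ignore_changes = ["disk"]
    }
}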

@paddycarver (Contributor)

This comes back to the idea of Terraform owning your infrastructure state, unfortunately. And while it is totally reasonable to want Kubernetes to be able to mount disks for you, we're bending Terraform a little here. Which isn't to say that this won't get fixed or even that it's a bad idea or wrong to do this, just know that we're operating slightly outside Terraform's bread and butter, and so things may not end up being as nice as if we were operating within Terraform's wheelhouse, where it owns the resources it's managing and considers any outside change to be in-need-of-correction.

The core problem I'm struggling with here is "how is Terraform supposed to distinguish between disks Kubernetes might mount and other disks that could be added through the console or gcloud or any other tool?" Basically, how should Terraform distinguish between "this is fine" and drift that it needs to correct?

If what you want is for it to not care which disks are attached, lifecycle.ignore_changes = ["disk"] is indeed the correct way around it, but that obviously comes with some danger: Terraform will basically wash its hands of things, assuming someone else will manage those disks. I have thoughts on alternatives, but I'm mostly just curious to hear what workflow people are actually expecting here. Can anyone help me out with some examples of what you hope/imagine would happen, and how you would imagine you'd tell Terraform which disks it shouldn't worry about re: whether they're attached or not?

@paddycarver added the waiting-response label Apr 6, 2017

catsby commented Apr 27, 2017

There hasn't been much activity here for a while and this is expected behavior for Terraform, so I'm going to close this for now.

@catsby closed this as completed Apr 27, 2017
@paddycarver (Contributor)

Chiming in quickly here: this should be mitigated (at least in part) by the work @danawillow did/is doing in #13443 and #14018. That will at least get rid of the destroy/create cycle part.
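
For anyone landing here later, here is a rough sketch of what the split configuration might look like once that work is released, assuming the boot_disk/attached_disk shape those changes introduce (attribute names are illustrative and may differ from the final schema):

resource "google_compute_instance" "kubernetes_proxy" {
    name         = "kubernetes-proxy"
    machine_type = "g1-small"
    zone         = "europe-west1-c"

    // Boot disk declared explicitly instead of through the generic disk list.
    boot_disk {
        auto_delete = true
        initialize_params {
            image = "https://www.googleapis.com/compute/v1/projects/google-containers/global/images/container-vm-v20160217"
        }
    }

    // Disks attached outside Terraform (e.g. by the kubelet) are simply not
    // declared here, so they should no longer show up as a change to a single
    // disk.# count that forces a new resource.

    network_interface {
        subnetwork = "europe-west1-c"
        access_config {
            // ephemeral
        }
    }
}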

@maikzumstrull

It seems working around the issue with ignore_changes is no longer possible, as there is an extra hardcoded check:
https://github.com/terraform-providers/terraform-provider-google/blob/master/google/resource_compute_instance.go#L852-L854

Can this be reopened?

(Our long-term fix, btw, will be that Terraform no longer owns the instances, only managed instance groups. But for now, we still have some Kubernetes minions TF owns directly, and it hates Kubernetes Volumes.)
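
(For comparison, a minimal sketch of that kind of setup, where Terraform owns an instance template and a managed instance group instead of the instances themselves; the resource names and target size here are illustrative, and attributes may vary between provider versions:)

resource "google_compute_instance_template" "kubernetes_minion" {
    name_prefix  = "kubernetes-minion-"
    machine_type = "g1-small"

    disk {
        source_image = "https://www.googleapis.com/compute/v1/projects/google-containers/global/images/container-vm-v20160217"
        auto_delete  = true
        boot         = true
    }

    network_interface {
        subnetwork = "europe-west1-c"
        access_config {
            // ephemeral
        }
    }

    // Templates are immutable, so create the replacement before destroying the old one.
    lifecycle {
        create_before_destroy = true
    }
}

resource "google_compute_instance_group_manager" "kubernetes_minions" {
    name               = "kubernetes-minions"
    zone               = "europe-west1-c"
    base_instance_name = "kubernetes-minion"
    instance_template  = "${google_compute_instance_template.kubernetes_minion.self_link}"
    target_size        = 2
}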

@maikzumstrull

FWIW, it appears simply deleting that check makes everything okay. I'm not sure why it is there; the following code (which sorts disks between those created with a disk entry and those otherwise attached) seems to be fine either way?

@paddycarver (Contributor)

Hey @maikzumstrull! I think @danawillow is doing a lot of work on instance disks right now. See hashicorp/terraform-provider-google#122 and hashicorp/terraform-provider-google#123. I think we're aiming to get those merged and released shortly.

However, a lot of the people maintaining the Google provider won't be watching this repo for issues; if you want to continue discussing it, I'd highly suggest starting a new issue in the terraform-providers/terraform-provider-google repo. Sorry for the inconvenience!


ghost commented Apr 8, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost locked and limited conversation to collaborators Apr 8, 2020