google_compute_instance disks not controlled by terraform forces a new resource #12715
A bad and temporary workaround is to use a `lifecycle` block with `ignore_changes` (not recommended):
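Sketched against the Terraform 0.8-era syntax used in this issue (resource name and field values are hypothetical):

```hcl
# Hypothetical sketch, not recommended per the comment above:
# ignore all changes to the disk list so that disks attached
# outside Terraform (e.g. by kubelet) do not force recreation.
resource "google_compute_instance" "k8s_minion" {
  name         = "k8s-minion-1"
  machine_type = "n1-standard-1"
  zone         = "us-central1-a"

  disk {
    image = "debian-8"
  }

  network_interface {
    network = "default"
  }

  lifecycle {
    ignore_changes = ["disk"]
  }
}
```

Note this silences *all* disk drift, including changes you might actually want Terraform to correct.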
This comes back to the idea of Terraform owning your infrastructure state, unfortunately. While it is entirely reasonable to want Kubernetes to be able to mount disks for you, we're bending Terraform a little here. That isn't to say this won't get fixed, or even that it's a bad idea or wrong to do, just know that we're operating slightly outside Terraform's bread and butter. Things may not end up being as nice as when we operate within Terraform's wheelhouse, where it owns the resources it's managing and considers any outside change to be in need of correction.

The core problem I'm struggling with here is: how is Terraform supposed to distinguish between disks Kubernetes might mount and other disks that could be added through the console, `gcloud`, or any other tool? Basically, how should Terraform distinguish between "this is fine" and drift that it needs to correct? If what you want is for it not to care which disks are attached, …
There hasn't been much activity here for a while and this is expected behavior for Terraform, so I'm going to close this for now.
Chiming in quickly here: this should be mitigated (at least in part) by the work @danawillow did/is doing in #13443 and #14018. That will at least get rid of the destroy/create cycle.
It seems that working around the issue with `ignore_changes` is no longer possible, as there is now an extra hardcoded check. Can this be reopened? (Our long-term fix, by the way, will be that Terraform no longer owns the instances, only managed instance groups. But for now we still have some Kubernetes minions Terraform owns directly, and it hates Kubernetes volumes.)
FWIW, it appears that simply deleting that check makes everything okay. I'm not sure why it is there; the following code (that sorts disks between those created with a …
Hey @maikzumstrull! I think @danawillow is doing a lot of work on instance disks right now; see hashicorp/terraform-provider-google#122 and hashicorp/terraform-provider-google#123. We're aiming to get those merged and released shortly. However, many of the people maintaining the Google provider won't be watching this repo for issues, so if you want to continue the discussion, I'd highly suggest opening a new issue in the terraform-providers/terraform-provider-google repo. Sorry for the inconvenience!
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
When a google_compute_instance has disks that are mounted by something other than Terraform (in our case, kubelet mounts a disk directly into a container), Terraform wants to recreate the resource every time, even though we're fine with the mounting of this disk not being controlled by Terraform.
Disabling refresh (e.g. `terraform plan -refresh=false`) prevents Terraform from recreating the resource, but that's not ideal.
Terraform Version
Terraform v0.8.8
Affected Resource(s)
- google_compute_instance
Terraform Configuration Files
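The reporter's configuration was not captured in this copy of the issue; a minimal sketch of the kind of resource involved (all names and values hypothetical):

```hcl
# Hypothetical sketch of the affected resource: Terraform manages
# the boot disk, while kubelet later attaches additional persistent
# disks to this instance outside of Terraform's control.
resource "google_compute_instance" "k8s_minion" {
  name         = "k8s-minion-1"
  machine_type = "n1-standard-1"
  zone         = "us-central1-a"

  disk {
    image = "debian-8"
  }

  network_interface {
    network = "default"
  }
}
```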
Expected Behavior
Terraform should ignore the disk mounted outside of its control:

```
disk.#: "1" => "1"
```
Actual Behavior
Terraform plan actually shows this:

```
disk.#: "2" => "1" (forces new resource)
```