Unable to update existing virtual machine with managed disks #734
Comments
I think this is something I ran into a while back. It's an issue with the "FromImage" OS disk: it wasn't deleted, and Terraform doesn't understand how to reattach it. AFAIK it's not actually Terraform's fault; the Azure API itself lacks a "FromImageOrAttachIfExisting" option.
I think @jzampieron means this issue: #243. Change your code from
into something like
This should work as long as you use OS images from the marketplace. If you want to use self-made OS images you'll have to wait for #565.
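(The before/after snippets referenced in this comment are not shown here. Judging from the rest of the thread, where the leftover "FromImage" OS disk is what blocks re-creation, the suggested change is presumably to let the provider delete the OS disk when the VM is destroyed. The following is a rough sketch of that idea, not the original snippet; resource and disk names are placeholders.)

```hcl
resource "azurerm_virtual_machine" "example" {
  # ... name, location, resource_group_name, network_interface_ids, vm_size ...

  # Assumed workaround: delete the managed OS disk along with the VM, so the
  # replacement VM can be created from the marketplace image again instead of
  # colliding with an orphaned disk.
  delete_os_disk_on_termination = true

  storage_os_disk {
    name              = "example-osdisk"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }
}
```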
This is a more general issue that we have run into as well. When you create any managed disks inline with the VM definition, you can destroy the VM while leaving the disks behind, and Terraform can't handle it because the disk state was lost along with the VM definition. I suspect, as mentioned above, that defining the disks outside of the VM will fix the problem, but we didn't do that as it was excessively verbose with no apparent benefit until #795 is implemented. Seeing #565 makes me suspect we have no workable solution here. Even with a method to attach varying counts of disks, it seems like this method of managed disk handling needs to be fixed, removed, or heavily documented so that other people don't find themselves in this frustrating situation.
@DonEstefan's workaround for this issue works for me, thanks! I had tried something similar but I'm not quite sure what I was doing wrong previously.
I am unable to reproduce this issue in the current latest build. This issue is fixed as part of #813, which was released in 1.1.2 on Feb 15. As part of PR 813, we no longer recreate the VM when attaching data disks. If you still face this issue, please reopen it with sample HCL scripts.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
Terraform Version
Terraform v0.11.2
Affected Resource(s)
- azurerm_virtual_machine
- azurerm_managed_disk
Terraform Configuration Files
I have an Azure VM resource that's fairly similar to the documented example:
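(The original configuration is not preserved here; the following is a minimal sketch resembling the provider's "with managed disks" documented example. All names, sizes, and referenced resources are placeholders.)

```hcl
resource "azurerm_virtual_machine" "example" {
  name                  = "example-vm"
  location              = "${azurerm_resource_group.example.location}"
  resource_group_name   = "${azurerm_resource_group.example.name}"
  network_interface_ids = ["${azurerm_network_interface.example.id}"]
  vm_size               = "Standard_DS1_v2"

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "16.04-LTS"
    version   = "latest"
  }

  # Managed OS disk created from the marketplace image.
  storage_os_disk {
    name              = "example-osdisk"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  # Managed data disk created inline with the VM.
  storage_data_disk {
    name              = "example-datadisk-0"
    managed_disk_type = "Standard_LRS"
    create_option     = "Empty"
    lun               = 0
    disk_size_gb      = "64"
  }

  os_profile {
    computer_name  = "example"
    admin_username = "adminuser"
    admin_password = "Password1234!"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }
}
```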
Some time later, I want to add another data disk, so I add a resource for my new disk:
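(The added resource is not shown here either; presumably it was something along these lines, with the name and size as placeholders.)

```hcl
resource "azurerm_managed_disk" "extra" {
  name                 = "example-datadisk-1"
  location             = "${azurerm_resource_group.example.location}"
  resource_group_name  = "${azurerm_resource_group.example.name}"
  storage_account_type = "Standard_LRS"
  create_option        = "Empty"
  disk_size_gb         = "64"
}
```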
And add a block like this within the azurerm_virtual_machine block:
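(Again the snippet is not preserved; presumably a storage_data_disk block inside the azurerm_virtual_machine resource that attaches the new disk, roughly:)

```hcl
  storage_data_disk {
    name            = "${azurerm_managed_disk.extra.name}"
    managed_disk_id = "${azurerm_managed_disk.extra.id}"
    create_option   = "Attach"
    # lun is a placeholder; it must not clash with existing data disks.
    lun             = 1
    disk_size_gb    = "${azurerm_managed_disk.extra.disk_size_gb}"
  }
```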
Things look good when I plan (unchanged attributes snipped out):
However, when I execute the plan, things get a little weird. The new data disk is created and the original VM destroyed as expected. However, it hits an error trying to create the VM again:
Which seems to indicate I should change the `create_option` of `storage_os_disk` to `Attach` instead of `FromImage`, which sort of makes sense, since the disk already exists and I want it attached to a new VM.

But when `create_option` is set to `Attach`, even after `plan` tells me the new configuration is OK, there's another error when I try to run the plan:

Why do I need to specify this? Where do I even get this? Terraform handled creating and attaching the disk for me, so why do I need to be this explicit to make a simple change on an unrelated property of the VM? Can I even specify this?
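(For reference, a `storage_os_disk` block using `Attach` would look roughly like the sketch below. The disk ID is a placeholder; where to get it is exactly the question above.)

```hcl
  storage_os_disk {
    name            = "example-osdisk"
    os_type         = "Linux"
    caching         = "ReadWrite"
    create_option   = "Attach"
    # Placeholder: the full Azure resource ID of the existing managed OS disk.
    managed_disk_id = "/subscriptions/.../disks/example-osdisk"
    # managed_disk_type has to be dropped in this mode (see below).
  }
```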
If I try to add `vhd_uri` or `managed_disk_id`, I also need to delete `managed_disk_type = "Standard_LRS"`. I tried adding a managed disk data source, which is documented as providing a `source_uri` attribute, but when I tried to use it I get:

Since it's a managed disk, maybe that's expected? Either way, it's unclear.
In the example for the azurerm_managed_disk above, use of the `id` attribute is shown but the attribute isn't documented. I'm assuming this is the same as the `managed_disk_id` param the VM's `storage_os_disk` section wants. When I use it, there's another error, again not caught by planning:

I don't understand this either; it already deleted the machine and I want to create a new one, not change any properties.
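(For reference, the combination being attempted at this point is presumably along the lines of the sketch below, using the azurerm_managed_disk data source to look up the existing OS disk and feeding its `id` into `storage_os_disk`. Disk and resource-group names are placeholders.)

```hcl
data "azurerm_managed_disk" "existing_os" {
  name                = "example-osdisk"
  resource_group_name = "${azurerm_resource_group.example.name}"
}

resource "azurerm_virtual_machine" "example" {
  # ... other VM arguments unchanged ...

  storage_os_disk {
    name            = "example-osdisk"
    os_type         = "Linux"
    create_option   = "Attach"
    managed_disk_id = "${data.azurerm_managed_disk.existing_os.id}"
  }
}
```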
At this point I'm completely stuck. Terraform / azurerm seems unable to recreate the machine from existing disks. The only way I've been able to get back in a good state is to:
- add `os_type = "Linux"` under `storage_os_disk`
- remove the `storage_image_reference` block under the VM
- remove the `os_profile` block under the VM

Am I missing something?
Debug Output
Plan apply: https://gist.github.com/dpedu2/4acc2dccd5d907053c823d9c891158ba
Expected Behavior
I should be able to create and modify Azure virtual machines
Actual Behavior
Updating Azure virtual machines requires manual intervention and drastic alteration of my Terraform config.
Steps to Reproduce
See above
Important Factoids
I don't think anything I'm doing is particularly unique; I'm literally taking the "with managed disks" example VM resource, and, after initial creation, adding an additional virtual disk.