azurerm - Terraform wants to recreate VM when attaching existing managed data disk #85
Comments
Is there any update on that? In my opinion the simplest solution would be to have … Or would this currently not be working due to Azure restrictions?
It's really important to fix! I hope it will be done ASAP!
👋 hey @marratj Thanks for opening this issue :) I've taken a look into this issue and unfortunately this appears to be a limitation at Azure's end, where it's not possible to update the ordering of disks through the API; attempting to change either the … As such, I've raised an issue about this on the Azure SDK for Go repository to find out how this should be achieved, since I believe it should be possible (the portal allows for this to be done), and will update when I've heard back. Thanks!
@tombuildsstuff I noticed that the issue you opened at the Azure SDK repo was closed almost a month ago and that you removed your assignment from this issue 10 days ago. So does that mean this is now fixed? I could not find a related PR, but maybe I'm just overlooking something here... And if it's not yet fixed, is there something in the works already?
@svanharmelen as I read this, it's not an upstream bug, but the operations have to be ordered in a certain manner, which @tombuildsstuff (or whoever picks up the bug) could then use to make this work correctly. We would sure like this fixed too, as it's hitting our site at present as well!
@tombuildsstuff This is not an upstream bug. I tested with the latest Go SDK to see whether the VM gets reprovisioned when attaching a new disk, and none of the tools (ARM templates, the Azure CLI, ARM PowerShell, or the SDK) show this behaviour. It comes down to how Terraform handles the new data disk. As @marratj said, using ForceNew: false instead of true when changing the managed_disk_id of storage_data_disk may fix the issue, though it may not be the cleanest solution. This is really important, it is affecting us badly, and your help is highly appreciated.
Sounds good! Thanks for the update @tombuildsstuff!
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
This issue was originally opened by @marratj as hashicorp/terraform#14268. It was migrated here as part of the provider split. The original body of the issue is below.
Terraform Version
0.9.4
Affected Resource(s)
azurerm_virtual_machine
azurerm_managed_disk
Terraform Configuration Files
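The original configuration was not carried over in the migration. A minimal sketch of the kind of configuration involved, assuming illustrative resource names and values (the resource group, network interface, and VM details are placeholders, not the original reporter's setup):

```hcl
# Hypothetical example: an existing managed disk that should be attached
# to an already-provisioned VM without recreating it.
resource "azurerm_managed_disk" "data" {
  name                 = "example-datadisk"
  location             = "${azurerm_resource_group.example.location}"
  resource_group_name  = "${azurerm_resource_group.example.name}"
  storage_account_type = "Standard_LRS"
  create_option        = "Empty"
  disk_size_gb         = 64
}

resource "azurerm_virtual_machine" "example" {
  name                  = "example-vm"
  location              = "${azurerm_resource_group.example.location}"
  resource_group_name   = "${azurerm_resource_group.example.name}"
  network_interface_ids = ["${azurerm_network_interface.example.id}"]
  vm_size               = "Standard_DS1_v2"

  # OS disk, image reference and os_profile omitted for brevity.

  # Adding this block to attach the existing managed disk is what
  # produces a forced recreation of the VM in the plan.
  storage_data_disk {
    name            = "${azurerm_managed_disk.data.name}"
    managed_disk_id = "${azurerm_managed_disk.data.id}"
    create_option   = "Attach"
    lun             = 0
    disk_size_gb    = "${azurerm_managed_disk.data.disk_size_gb}"
  }
}
```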
Debug Output
Panic Output
Expected Behavior
Existing Managed disk gets attached to VM without recreating the VM.
Actual Behavior
The VM gets destroyed and recreated from scratch.
When adding a managed disk with the "Empty" option, however, it works as expected: the VM just gets reconfigured, not recreated.
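For contrast, a sketch of the inline "Empty" variant mentioned above (placed inside the azurerm_virtual_machine resource, values illustrative), which updates the VM in place rather than recreating it:

```hcl
# Hypothetical inline data disk; with create_option = "Empty" the provider
# adds the disk via an in-place VM update instead of forcing a new resource.
storage_data_disk {
  name              = "example-empty-disk"
  managed_disk_type = "Standard_LRS"
  create_option     = "Empty"
  lun               = 1
  disk_size_gb      = 32
}
```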
Steps to Reproduce
Please list the steps required to reproduce the issue, for example:
1. `terraform plan`
2. `terraform apply`
Important Factoids
References