Fix working with orphaned devices #1005
Conversation
Link to see the current commit author: https://github.com/terraform-providers/terraform-provider-vsphere/commit/d943fe089a9ef60af52e532cb0edd0baaf9fd131.patch @Fodoj Or look into fixing your git user.email config 😅 to get proper credit for your commits 🏆
Looks good to me, many thanks for taking the time to find the root cause and create a fix.
@jetersen let me redo the commit with the proper identity
@jetersen should be fine now. :-)
@Fodoj thank you so much for fixing this issue. Can someone merge this PR, please?
Looks good. Thanks for taking the time to track this down and write the fix.
A release would be much appreciated :)
Hi @Fodoj We had the same issue as in #875 with Terraform v0.12.17 + terraform-provider-vsphere_v1.13.0_x4 + K8s v1.16.4, but it only happens occasionally. Normally I run "terraform plan -state=tfstate.xxx" to refresh the infrastructure when additional dynamic vSphere volumes are attached to the node, and nothing in the tfstate file changes. So my confusion is: with disks ignored in lifecycle, why and when does this orphaned disk info get saved into the tfstate file? Please check the remaining logs:
@pkqsun even though Terraform ignores changes to disks, they are still saved in the state file, at least in this particular provider
Thanks for your info.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
Fixes this bug: #875
The problem is that if a VM had an orphaned disk saved in the state file and the same VM gains another orphaned disk after that, Terraform will add the new disk with the same orphaned_disk_0 label. This breaks the run, because label names must be unique according to virtual_machine_disk_subresource.go.
The fix detects orphaned disks among the already existing resources and skips them, so that they are re-added to the state file as orphaned disks again; this makes the provider iterate over the complete list of orphaned disks rather than only the newly discovered ones, as sketched below.
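For illustration, here is a minimal, self-contained sketch of the relabeling idea described above, assuming a hypothetical disk type and helper names; this is not the provider's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

type disk struct {
	Label string
	Key   int
}

// relabelOrphans rebuilds the orphaned-disk labels across the complete set:
// orphaned entries carried over from state plus newly discovered ones.
// Dropping the stale labels and renumbering from zero guarantees uniqueness,
// which is what the label collision described above violated.
func relabelOrphans(fromState, discovered []disk) []disk {
	var orphans []disk
	for _, d := range fromState {
		// Detect previously saved orphaned disks by their label prefix and
		// carry them into the renumbering pass instead of keeping old labels.
		if strings.HasPrefix(d.Label, "orphaned_disk_") {
			orphans = append(orphans, d)
		}
	}
	orphans = append(orphans, discovered...)
	for i := range orphans {
		orphans[i].Label = fmt.Sprintf("orphaned_disk_%d", i)
	}
	return orphans
}

func main() {
	state := []disk{{Label: "orphaned_disk_0", Key: 2001}}
	found := []disk{{Label: "", Key: 2002}} // a second orphaned disk appears
	for _, d := range relabelOrphans(state, found) {
		fmt.Printf("%s (key %d)\n", d.Label, d.Key)
	}
	// Prints:
	// orphaned_disk_0 (key 2001)
	// orphaned_disk_1 (key 2002)
}
```

Without the carry-over step, only the newly found disk would be labeled, producing a second orphaned_disk_0 and tripping the uniqueness check.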