Disk is created twice after a successful apply. #832
If we run terraform apply again after the first apply, it reports that the disk should be removed. Yet another apply says that both sides match, and another run again reports that the disk will be removed. This is unpredictable: sometimes one way, sometimes the other. When we confirm the removal, terraform apply runs, but the hard drive is still there. This is really strange. We have terraform projects in Azure Cloud and in Hetzner Cloud, and we have never experienced anything like this.
$ terraform version
We clone from a template; the disk is the OS disk.
Read my post from yesterday again. Not the clearest one ;-) To sum it up: Terraform, or the Terraform Proxmox provider, randomly makes mistakes when comparing state even though nothing in the code has changed. At one apply, the output is: Acquiring state lock. This may take a few moments...
At the very next apply it is:
I am also seeing a similar issue. I have a VM with 2 extra disks, and when I do a plan after creating it, I get this:
After doing a plan later with no changes, I see a new disk change, and if I do an apply, a new disk is added that replaces the last one defined in the resource block. The previously used disk is then left as an unused disk, leaving me unable to use that mountpoint/disk.
I can usually work around the issue by removing the local …
It is a workaround that just prevents terraform from triggering changes each time you run it. BTW, there is a PR to fix this, but it is still open: #794
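For reference, one common way to get this kind of "stop triggering changes" behavior in plain Terraform (not necessarily what the commenter did) is a `lifecycle` block that ignores drift on the disk attribute. A minimal sketch, assuming the Telmate `proxmox_vm_qemu` resource discussed in this thread; the names and values are placeholders:

```hcl
resource "proxmox_vm_qemu" "vm" {
  name        = "example-vm"               # placeholder
  target_node = "pve"                       # placeholder
  clone       = "ubuntu-cloudinit-template" # placeholder

  # Tell Terraform to ignore any diff reported for the disk blocks,
  # so a spurious reordering no longer triggers changes on every run.
  lifecycle {
    ignore_changes = [disk]
  }
}
```

The obvious trade-off: real, intentional disk changes are ignored too, so this only masks the spurious diffs rather than fixing them.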
Yes, this helps. Cool. In my case, it is the OS disk, which comes from the template and is not changed in terraform.
Another thing that can be tested is to use a dynamic block, so we can change the sequence to a set that will not be reordered; see the sketch below.
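As a sketch of that idea: iterate `for_each` over a map (or set) keyed by a stable name, so ordering is irrelevant. The variable name and values here are hypothetical; only the `type`/`storage`/`size` disk attributes come from this thread:

```hcl
variable "extra_disks" {
  # Keyed by a stable name so that ordering never matters.
  type = map(object({
    type    = string
    storage = string
    size    = string
  }))
  default = {
    data0 = { type = "scsi", storage = "local-lvm", size = "10G" }
  }
}

resource "proxmox_vm_qemu" "vm" {
  # ... other VM settings omitted ...

  # One disk block is generated per map entry.
  dynamic "disk" {
    for_each = var.extra_disks
    content {
      type    = disk.value.type
      storage = disk.value.storage
      size    = disk.value.size
    }
  }
}
```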
I think we can confirm this has gotten much worse by upgrading to 2.9.14. I tried understanding the logic, but the main branch has quite some changes compared to 2.9.14.
This issue is stale because it has been open for 60 days with no activity. Please update the provider to the latest version and, if the issue persists, provide full configuration and debug logs.
This issue was closed because it has been inactive for 5 days since being marked as stale.
This issue breaks the proxmox integration completely. It's unusable and a very frustrating experience.
Same here.
See also discussions in #832
I create a VM from a template with cloud-init, using Telmate 2.9.14 as the Terraform provider. In the disk section I give a type, storage, and size. After the apply, the VM is created with the correct disk. If I then run terraform plan again, it says that the disk will be created with the given specs. If I apply that, an extra disk with the same specs is created. This happens with every single VM I try to deploy.
In 2.9.11 I don't have this.
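For context, a minimal configuration of the kind described might look like the following. The resource name, node, template name, and concrete values are placeholders; only the `disk` attributes (`type`, `storage`, `size`) come from the report:

```hcl
resource "proxmox_vm_qemu" "clone" {
  name        = "example-vm"               # placeholder
  target_node = "pve"                       # placeholder
  clone       = "ubuntu-cloudinit-template" # placeholder
  os_type     = "cloud-init"

  # The disk section from the report: only type, storage, and size are set.
  disk {
    type    = "scsi"
    storage = "local-lvm"
    size    = "20G"
  }
}
```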