Overhaul qemu disks #794

Closed

Conversation

Tinyblargon
Collaborator

This change fully re-implements the way QEMU disks are handled, bringing it more in line with how Proxmox handles QEMU disks, as discussed in #778 and Telmate/proxmox-api-go#187.

  • feature has been implemented.
  • documentation has been updated.
  • examples have been updated.

Sadly, this pull request undoes #767, as wwn is not yet supported in https://github.com/Telmate/proxmox-api-go.

@mleone87 I did some manual testing and I think it would all work. But would it make sense to create a release candidate and notify the users who were experiencing the issues?

@rogerfachini

rogerfachini commented Jun 20, 2023

Hi! Unless I'm missing something, the new config structure for disks will not allow for dynamic blocks that can create N disks in a loop. With the previous disk block, which could be specified multiple times, an arbitrary number of disks could easily be created with:

dynamic "disk" {
    for_each = var.vm_disks
    content {
     ...
    }
  }

In the above code, the number of disks created would be determined by the number of elements in the var.vm_disks variable.
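
For illustration, a fuller version of such a dynamic block under the previous schema might look like the sketch below. The disk attributes shown (type, storage, size) are assumptions about the old disk block, included only to make the example concrete:

variable "vm_disks" {
  type = list(object({
    storage = string
    size    = string
  }))
  default = [
    { storage = "local-lvm", size = "32G" },
    { storage = "local-lvm", size = "64G" },
  ]
}

resource "proxmox_vm_qemu" "example" {
  # ... other VM settings ...

  dynamic "disk" {
    for_each = var.vm_disks
    content {
      type    = "scsi"             # assumed attribute of the old disk block
      storage = disk.value.storage # assumed attribute of the old disk block
      size    = disk.value.size    # assumed attribute of the old disk block
    }
  }
}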

In the new config structure, since each disk_0, disk_1... block can only be specified once, and the label portion of a dynamic block does not support template variables, there doesn't appear to be a clean way to allow for dynamic configuration of the disks attached to a VM. The only possible approach with the existing config would be creating a series of dynamic blocks for each permutation of ide/virtio/sata/scsi, disk_0...diskN, and cdrom/disk/passthrough blocks.

In terms of dynamic configuration, this would be massively simplified if each inner block could have the cdrom/disk/passthrough blocks specified multiple times, with a property of the inner block that specifies the number, instead of the current disk_0 format.

In the current proposed config format, a dynamic config would have to look something like:

locals {
  vm_disks = {
    "0" = { size = "512G", number = 0 }
    "1" = { size = "256G", number = 1 }
    "2" = { size = "256G", number = 2 }
  }
}

disks {
  scsi {
    disk_0 {
      dynamic "disk" {
        for_each = { for k, v in local.vm_disks : k => v if v.number == 0 }
        content {
          size = disk.value.size
        }
      }
    }
    disk_1 {
      dynamic "disk" {
        for_each = { for k, v in local.vm_disks : k => v if v.number == 1 }
        content {
          size = disk.value.size
        }
      }
    }
    disk_2 {
      dynamic "disk" {
        for_each = { for k, v in local.vm_disks : k => v if v.number == 2 }
        content {
          size = disk.value.size
        }
      }
    }
  }
}
// this would have to continue for every possible disk_n for a fully dynamic configuration

If the disk_n layer of the nested config were removed, multiple disk/cdrom/passthrough blocks could be specified at the lower level, and the disk number were passed in as a property, then it would be possible to do the above in a single dynamic block:

locals {
  vm_disks = {
    "0" = { size = "512G", number = 0 }
    "1" = { size = "256G", number = 1 }
    "2" = { size = "256G", number = 2 }
  }
}

disks {
  scsi {
    dynamic "disk" {
      for_each = local.vm_disks
      content {
        size   = disk.value.size
        number = disk.value.number
      }
    }
  }
}

@mleone87
Collaborator

nice point @rogerfachini

@Tinyblargon
Collaborator Author

@rogerfachini If I understand you correctly, the ide, sata, scsi, and virtio blocks would each have cdrom, disk, and passthrough as an unordered list.

The downside of this is that validation during the planning phase would not be possible. Instead, the error would only be caught during the execution phase.
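
As a hypothetical illustration of that trade-off: with an unordered list of disk blocks, a configuration that assigns the same slot number twice could not be rejected by the schema at plan time and would only fail once the change is applied against the Proxmox API:

disks {
  scsi {
    disk {
      size   = 8
      number = 0
    }
    disk {
      size   = 32
      number = 0 # duplicate slot number; a plain list/set schema cannot flag this during planning
    }
  }
}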

@Tinyblargon
Collaborator Author

So I've tried reworking the schema to:

disks {
  scsi {
    disk {
      size = 8
      number = 0
    }
    disk {
      size = 32
      number = 1
    }
  }
}

But when disk 0 gets removed, Terraform wants to recreate the existing disks.
The planning stage looks as follows:

~ disks {
  ~ scsi {
    - disk {
      - size = 8
      - number = 0
      }
    - disk {
      - size = 32
      - number = 1
      }
    + disk {
      + size = 32
      + number = 1
      }
    }
  }

So far I've tried TypeSet (unordered list) and TypeList (ordered list).
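
My reading of that plan (an assumption about the SDK's diffing behavior, not something stated in its docs): with a list-backed schema the blocks are tracked by position, so removing the first disk block shifts the remaining one from index 1 to index 0, and Terraform treats that as removing both old elements and creating a "new" one, rather than keeping an unchanged disk. Roughly:

# Before: two disk blocks, stored as list elements 0 and 1
disks {
  scsi {
    disk {        # element 0
      size   = 8
      number = 0
    }
    disk {        # element 1
      size   = 32
      number = 1
    }
  }
}

# After removing the first block, the remaining disk shifts to element 0,
# so the plan removes both old elements and creates a "new" one.
disks {
  scsi {
    disk {        # now element 0
      size   = 32
      number = 1
    }
  }
}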

@sgabenov

sgabenov commented Sep 6, 2023

Hey, any updates on this issue? Looking forward to getting this PR merged as soon as possible.

@Tinyblargon
Collaborator Author

Currently, I don't know how to implement the schema suggested by @rogerfachini. As far as I understand from the Terraform documentation, we would need to do the state management ourselves. For now, this PR is blocked by the decision on which schema needs to be implemented.

@den-patrakeev

Hi!

I really want to see this PR merged.
We use the provider to manage VMs with 3-4 disks. The provider breaks its state, and in practice it can only be used for the initial VM deployment. Its behavior is not predictable. Because of this, we have lost production VMs several times.

I would be very grateful if this PR makes it into master.

@elbars

elbars commented Nov 13, 2023

What about this feature?

@mleone87
Collaborator

mleone87 commented Dec 6, 2023

@Tinyblargon ready to merge?

@Tinyblargon
Collaborator Author

@mleone87 I'll move forward with my original implementation, although it won't be compatible with dynamic blocks.

It's going to take me a bit of time, though, to get it in sync with the rest of the project.

@mleone87
Collaborator

@Tinyblargon If I can help let me know

@Tinyblargon
Collaborator Author

@mleone87 I've started working on this again in a new branch. Yesterday, I tried to merge master back into it, but the number of merge conflicts is astounding.

hestiahacker pushed a commit to hestiahacker/terraform-provider-proxmox that referenced this pull request Jan 4, 2024
These changes were pulled from Tinyblargon's branch which was out of sync
with the Telmate master branch. I merely dealt with the merge conflicts so
we could re-submit a new merge request that can be applied cleanly.

Ref: Telmate#794
mleone87 added a commit that referenced this pull request Jan 14, 2024
* feat: re-implement the way qemu disks are handled

These changes were pulled from Tinyblargon's branch which was out of sync
with the Telmate master branch. I merely dealt with the merge conflicts so
we could re-submit a new merge request that can be applied cleanly.

Ref: #794

* fix: no functional change, just making github CI happy

* Update proxmox-api-go dependency

* Apply patches

* fix: typos

* fix: panic when `disks` is empty

* docs: change disks property names

* chore: update dependencies

* Add debug logging

---------

Co-authored-by: hestia <[email protected]>
Co-authored-by: mleone87 <[email protected]>
@Tinyblargon
Collaborator Author

Superseded by #892

spettinichi pushed a commit to spettinichi/terraform-provider-proxmox that referenced this pull request Jan 26, 2024