Overhaul qemu disks #794
Conversation
The proxmox library can now handle the updating of disks all by itself, simplifying the logic needed here.
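As a rough illustration of that division of labour, the provider would only translate the Terraform schema into the library's disk configuration and hand it over; the create/resize/delete decisions live in the library. The Go sketch below uses hypothetical stand-in names (`QemuDisk`, `DiskConfig`, `ApplyDiskConfig`) and is not the real proxmox-api-go API:

```go
package example

// Everything below is a hypothetical stand-in to illustrate the division
// of labour described above; QemuDisk, DiskConfig and ApplyDiskConfig are
// NOT the real proxmox-api-go API.

type QemuDisk struct {
	Number int    // which slot (scsi0, scsi1, ...) the disk occupies
	Size   string // e.g. "32G"
}

type DiskConfig struct {
	Disks []QemuDisk
}

// ApplyDiskConfig stands in for the library call that now owns the
// create/resize/delete logic for disks.
func ApplyDiskConfig(cfg DiskConfig) error {
	// The library diffs the desired disks against the VM's current state
	// and issues the necessary Proxmox API calls itself.
	return nil
}

// The provider only builds the desired state and hands it over.
func updateDisks(schemaDisks []map[string]interface{}) error {
	cfg := DiskConfig{}
	for _, d := range schemaDisks {
		cfg.Disks = append(cfg.Disks, QemuDisk{
			Number: d["number"].(int),
			Size:   d["size"].(string),
		})
	}
	return ApplyDiskConfig(cfg)
}
```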
Hi! Unless I'm missing something, the new config structure for disks will not allow for dynamic `disk` blocks:

```hcl
dynamic "disk" {
  for_each = var.vm_disks
  content {
    ...
  }
}
```

In the above code, the number of disks created would be determined by the number of elements in `var.vm_disks`. In the new config structure, since each `disk_N` block is individually named, the number of disks cannot vary with the input. In terms of dynamic configuration, this would be massively simplified if each inner block could have the disk number as an attribute instead.

In the current proposed config format, a dynamic config would have to look something like:

```hcl
locals {
vm_disks = {
"0" = {size = "512G", number = 0}
"1" = {size = "256G", number = 1}
"2" = {size = "256G", number = 2}
}
}
disks {
  scsi {
    disk_0 {
      dynamic "disk" {
        for_each = { for k, v in local.vm_disks : k => v if v.number == 0 }
        content {
          size = disk.value.size
        }
      }
    }
    disk_1 {
      dynamic "disk" {
        for_each = { for k, v in local.vm_disks : k => v if v.number == 1 }
        content {
          size = disk.value.size
        }
      }
    }
    disk_2 {
      dynamic "disk" {
        for_each = { for k, v in local.vm_disks : k => v if v.number == 2 }
        content {
          size = disk.value.size
        }
      }
    }
    // this would have to continue for every possible disk_n for a fully dynamic configuration
  }
}
```

If the disk number were an attribute of the `disk` block instead, the whole thing could collapse to a single dynamic block:

```hcl
locals {
vm_disks = {
"0" = {size = "512G", number = 0}
"1" = {size = "256G", number = 1}
"2" = {size = "256G", number = 2}
}
}
disks {
  scsi {
    dynamic "disk" {
      for_each = local.vm_disks
      content {
        size   = disk.value.size
        number = disk.value.number
      }
    }
  }
}
```
Nice point @rogerfachini
@rogerfachini If I understand you correctly, then the `disk` block would become repeatable, with a `number` attribute identifying which disk each block configures? The downside of this is that validation during the planning phase would not be possible; instead, the error would only be caught during the execution phase.
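For illustration, such a schema could look roughly like this with the Terraform plugin SDK. This is a sketch under assumptions (the attribute set is reduced to `number` and `size`), not the provider's actual code:

```go
package example

import (
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// Sketch only: a repeatable "disk" block carrying its own disk number,
// instead of one named disk_0/disk_1/... block per slot. The attribute
// names are assumptions taken from the examples in this thread.
func diskSchema() *schema.Schema {
	return &schema.Schema{
		Type:     schema.TypeList,
		Optional: true,
		Elem: &schema.Resource{
			Schema: map[string]*schema.Schema{
				"number": {
					Type:     schema.TypeInt,
					Required: true, // which scsi slot this disk occupies
				},
				"size": {
					Type:     schema.TypeString,
					Required: true,
				},
			},
		},
	}
}
```

Note that nothing here stops two blocks from claiming the same `number` at plan time, which is exactly the validation gap described above.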
So I've tried reworking the schema to:

```hcl
disks {
  scsi {
    disk {
      size   = 8
      number = 0
    }
    disk {
      size   = 32
      number = 1
    }
  }
}
```

But when disk 0 gets removed, it wants to recreate the existing disks:

```diff
~ disks {
    ~ scsi {
        - disk {
            - size   = 8
            - number = 0
          }
        - disk {
            - size   = 32
            - number = 1
          }
        + disk {
            + size   = 32
            + number = 1
          }
      }
  }
```

So far I've tried
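One conceivable direction for this (a sketch under assumptions, not necessarily what was attempted in the PR) is `schema.TypeSet` with a hash keyed on the disk number, so element identity follows `number` rather than list position:

```go
package example

import (
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// Sketch only: identify set elements by their "number" attribute, so
// removing disk 0 does not shift (and thereby recreate) disk 1.
// Attribute names are assumptions taken from the examples above.
func diskSetSchema() *schema.Schema {
	return &schema.Schema{
		Type:     schema.TypeSet,
		Optional: true,
		// Hash only the disk number: two blocks describe the "same" disk
		// when their numbers match, the intent being that a size change
		// diffs under the same key instead of as remove-plus-create.
		Set: func(v interface{}) int {
			return v.(map[string]interface{})["number"].(int)
		},
		Elem: &schema.Resource{
			Schema: map[string]*schema.Schema{
				"number": {Type: schema.TypeInt, Required: true},
				"size":   {Type: schema.TypeString, Required: true},
			},
		},
	}
}
```

Whether the SDK's set semantics would actually avoid the remove-plus-create plan shown above would still need to be verified against real plans.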
Hey, any updates on this issue? Looking forward to getting this PR merged as soon as possible.
Currently, I don't know how to implement the schema suggested by @rogerfachini. As far as I understand from the Terraform documentation, we would need to do the state management ourselves. For now, this PR is blocked by the decision on which schema needs to be implemented.
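To make "doing the state management ourselves" concrete, here is a hedged sketch (assuming the list-plus-`number` schema discussed above; none of these names come from the provider) of a `CustomizeDiff` hook that matches disks by `number` during planning:

```go
package example

import (
	"context"
	"fmt"
	"log"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// Sketch only: match disks by "number" across old and new config so the
// plan can reason about per-disk changes instead of list positions.
func diskCustomizeDiff(ctx context.Context, d *schema.ResourceDiff, meta interface{}) error {
	oldRaw, newRaw := d.GetChange("disk")

	byNumber := func(raw interface{}) map[int]map[string]interface{} {
		out := map[int]map[string]interface{}{}
		for _, e := range raw.([]interface{}) {
			m := e.(map[string]interface{})
			out[m["number"].(int)] = m
		}
		return out
	}
	oldDisks, newDisks := byNumber(oldRaw), byNumber(newRaw)

	// Example plan-time check: duplicate disk numbers collapse into one
	// map entry, something the schema alone cannot detect.
	if len(newDisks) != len(newRaw.([]interface{})) {
		return fmt.Errorf("duplicate disk number in configuration")
	}

	// Comparing oldDisks and newDisks by key would tell the apply step
	// which disks to create, resize, or delete, independent of position.
	log.Printf("[DEBUG] disk plan: %d before, %d after", len(oldDisks), len(newDisks))
	return nil
}
```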
Hi! I really want to see this PR merged. I would be very grateful if this PR gets into master.
What about this feature?
@Tinyblargon ready to merge?
@mleone87 I'll move forward with my original implementation, although it won't be compatible with dynamic `disk` blocks. It's going to take me a bit of time, though, to get it in sync with the rest of the project.
@Tinyblargon If I can help, let me know.
@mleone87 I've started working on this again in a new branch. Yesterday, I tried to merge master back into it, but the amount of merge conflicts is astounding.
These changes were pulled from Tinyblargon's branch, which was out of sync with the Telmate master branch. I merely dealt with the merge conflicts so we could re-submit a new merge request that can be applied cleanly. Ref: Telmate#794
* feat: re-implement the way qemu disks are handled

  These changes were pulled from Tinyblargon's branch, which was out of sync with the Telmate master branch. I merely dealt with the merge conflicts so we could re-submit a new merge request that can be applied cleanly. Ref: #794
* fix: no functional change, just making github CI happy
* Update proxmox-api-go dependency
* Apply patches
* fix: typos
* fix: panic when `disks` is empty
* docs: change disks property names
* chore: update dependencies
* Add debug logging

Co-authored-by: hestia <[email protected]>
Co-authored-by: mleone87 <[email protected]>
Superseded by #892
This change fully re-implements the way qemu disks are handled to be more in accordance with how Proxmox handles qemu disks, as discussed in #778 and Telmate/proxmox-api-go#187.
Sadly this pull undoes #767, as the `wwn` attribute is not yet supported in https://github.com/Telmate/proxmox-api-go.

@mleone87 I did some manual testing and I think it would all work. But would it be an idea to create a release candidate and notify the users who were experiencing the issues?