
"unused disk" problems with rc4 et rc6 #1205

Open
TKinslayer opened this issue Dec 17, 2024 · 2 comments
Labels: issue/not-a-bug (The reported issue is the intended behavior or the problem is not inside this project), resource/qemu (Issue or PR related to Qemu resource)

Comments


TKinslayer commented Dec 17, 2024

All of my Terraform scripts used to work, but... they don't seem to anymore. I can actually see where the problem is, but not how to fix it...

Here is the problem:

  • I have a cloud-init template that has its cloud-init drive on ide0, and a scsi0 carrying disk-0 as the VM disk.
  • In my Terraform script (shown below), I clone the VM and set an ide0 drive for cloud-init and a scsi0 with disk-0 for the VM disk.
  • This used to work... but now, when the script changes the size of the disk (from 10G to 8G), it also moves scsi0 from disk-0 to disk-1 (creating a new disk).
  • And in the Proxmox VM's hardware, it says there is a disk-0 that is unused...
  • And when the VM tries to boot, it says the disk is unbootable. (Screenshots below.)
  1. Cloud-init template hardware: [screenshot: Cloud-init hardware]
  2. VM hardware BEFORE the disk resize, with disk-0 on the SCSI drive: [screenshot: VM hardware before disk resize]
  3. VM hardware AFTER the resize, which created a new disk and left the former one unused: [screenshot: VM hardware after disk resize]
  4. Error message when the VM tries to boot: [screenshot: unbootable disk]

And I know the problem is that switch from disk-0 to disk-1, because when I manually (through the GUI) detach disk-1 and re-attach disk-0 inside the VM, it works!

But... I don't want to do that manually on every VM...
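In theory I could automate that re-attach instead of clicking through the GUI. A rough, untested sketch using a local-exec provisioner to run qm over SSH (the host, VM id, and volume name vm-1021-disk-0 are guesses based on my setup; check the VM's hardware tab first):

resource "null_resource" "reattach_disk0" {
  # Hypothetical workaround sketch: re-attach the orphaned volume as scsi0
  # and make it the boot disk again. Assumes the unused volume is named
  # vm-1021-disk-0 on Local_ZFS.
  provisioner "local-exec" {
    command = "ssh root@pve1 'qm set 1021 --scsi0 Local_ZFS:vm-1021-disk-0 --boot order=scsi0'"
  }
}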

I am on Proxmox 8.2.7 and used both "3.0.1-rc4" and "3.0.1-rc6" of the provider.

Here is the Terraform script I used with "3.0.1-rc4":

resource "proxmox_vm_qemu" "test" {
    target_node = "pve${count.index + 1}"
    vmid = "102${count.index + 1}"
    name = "HA-${count.index + 1}"
    tags = "high_availability,maximal_protection"

    count = 3
    onboot = true
    vm_state = "started"
    
    clone = "Cloud-init-Generic"
    full_clone  = true

    agent = 1

    os_type = "cloud-init"
    cores = 2
    sockets = 1
    numa = false
    vcpus = 0
    cpu = "host"
    memory = 4096

    kvm = true 
    serial {
        id = 0
        type = "socket"
    }

    scsihw   = "virtio-scsi-pci"
    bootdisk = "scsi0"

    disks {
        ide {
            ide0 {
                cloudinit {
                    storage = "Local_ZFS"
                }
            }
        }
        scsi {
            scsi0 {
                disk {
                    size = "8192M"
                    storage = "Local_ZFS"
                    emulatessd = true 
                }
            }
        }
    }
    network {
        model = "virtio"
        bridge = "vmbr3"
        tag = 40
    }

    ipconfig0 = "ip=10.66.40.${13 + count.index}/24,gw=10.66.40.1"

    timeouts {
        create = "2h"
        update = "2h"
        delete = "1h"
    }
    agent_timeout = 1200

    ciupgrade = true

    nameserver = "10.66.1.1"

    sshkeys = <<EOF
        XXXX-keys
    EOF
}

Here is the Terraform script I used with "3.0.1-rc6":

resource "proxmox_vm_qemu" "test" {
  name         = "HA-Test"
  target_node  = "pve1"
  vmid         = 1021
  clone        = "Cloud-init-Generic"
  full_clone   = true
  agent        = 1
  os_type      = "cloud-init"
  cores        = 2
  sockets      = 1
  memory       = 8192
  scsihw       = "virtio-scsi-pci"
  bootdisk     = "scsi0" 
  onboot       = true
  vm_state     = "started"
  hastate      = "enabled"

  disk {
    type    = "cloudinit"
    storage = "Local_ZFS"
    slot    = "ide0"
  }

  disk {
    type      = "disk"
    storage   = "Local_ZFS"
    size      = "8G"
    emulatessd = true
    slot      = "scsi0"
  }

  network {
    id     = 0
    model  = "virtio"
    bridge = "vmbr3"
    tag    = 40
  }

  ipconfig0 = "ip=10.66.40.250/24,gw=10.66.40.1"
  nameserver = "10.66.1.1"
  sshkeys = <<EOF
    XXXX-keys
  EOF

  timeouts {
    create = "2h"
    update = "2h"
    delete = "1h"
  }
  agent_timeout = 1200
  ciupgrade = true
}

With rc6, I had some problems with the disk IDs: the provider told me I couldn't set them to 0 and that they had to be assigned dynamically, so I had to remove them.

What could be wrong?

@TKinslayer TKinslayer changed the title from "unused disk" problems with all templates to "unused disk" problems with rc4 and rc6 on Dec 17, 2024
@Tinyblargon Tinyblargon self-assigned this Dec 17, 2024
Tinyblargon (Collaborator) commented Dec 17, 2024

@TKinslayer It's not possible to resize a disk to be smaller; shrinking a disk recreates it. Setting scsi0 to at least 10G would prevent this recreation.
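In other words, keeping the Terraform size at or above the template's disk size avoids the recreate. A minimal sketch in the rc6 flat-disk syntax from the config above (slot and storage taken from your script; only size changes):

disk {
  type       = "disk"
  slot       = "scsi0"
  storage    = "Local_ZFS"
  size       = "10G"  # at least the template's 10G disk; anything smaller recreates the disk
  emulatessd = true
}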

@Tinyblargon Tinyblargon added the issue/not-a-bug and resource/qemu labels on Dec 17, 2024
TKinslayer (Author) commented Dec 18, 2024

Ahhhh...
Kinda looks totally logical, in fact, now that I think about it.
I feel dumb now... ;-(

@Tinyblargon - Thanks a bunch for your quick answer!
