
[3.0.1-rc1] docs out of date #932

Closed
Tinyblargon opened this issue Feb 5, 2024 · 33 comments · Fixed by #948

@Tinyblargon
Collaborator

Tinyblargon commented Feb 5, 2024

The docs and examples of the cloud-init disk are out of date.
Also check that the new disks schema is used in the examples.

@Tinyblargon Tinyblargon self-assigned this Feb 5, 2024
@Tinyblargon Tinyblargon changed the title [3.0.1-rc1] cloud-init docs out of date [3.0.1-rc1] docs out of date Feb 5, 2024
@electropolis

electropolis commented Feb 5, 2024

Here, where Resource/proxmox_cloud_init_disk is documented, the example box shows a resource proxmox_vm_qemu with this setup:

  // Define a disk block with media type cdrom which references the generated cloud-init disk
  disk {
    type    = "scsi"
    media   = "cdrom"
    storage = local.iso_storage_pool
    volume  = proxmox_cloud_init_disk.ci.id
    size    = proxmox_cloud_init_disk.ci.size
  }

which is not the actual setup. Guides/cloud_init also looks a bit strange.
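For context, the working configs later in this thread no longer declare the cloud-init CD-ROM as a disk block at all. A minimal sketch of the replacement, untested here; the storage name and resource name are placeholders:

```hcl
# Sketch only: in 3.0.1-rc1 the cloud-init CD-ROM is not a disk {} block.
# The provider attaches it when cloudinit_cdrom_storage names a storage pool.
resource "proxmox_vm_qemu" "example" {
  # ... other VM settings ...
  os_type                 = "cloud-init"
  cloudinit_cdrom_storage = "local-lvm" # placeholder storage name
}
```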

@jjblack604

jjblack604 commented Feb 10, 2024

Making a note on how to get cloud-init and the new disk format to work, in case anyone was wondering, because the fixes are currently scattered across multiple posts.

terraform {
  required_providers {
    proxmox = {
      source = "telmate/proxmox"
      version = "3.0.1-rc1"
    }
  }
}
provider "proxmox" {
  pm_api_url = "https://${var.proxmox_host}:8006/api2/json"
  pm_tls_insecure = true
}
resource "proxmox_vm_qemu" "terraform" {
  count = 1
  name = var.vm_name
  target_node = var.proxmox_host
  clone = var.template_name
  onboot = var.onboot
  agent = 1
  cloudinit_cdrom_storage = var.diskstorage
  cores = var.corecount
  sockets = 1
  cpu = "host"
  memory = var.memorycount
  scsihw = "virtio-scsi-pci"
  bootdisk = "scsi0"
  disks {
    scsi {
      scsi0 {
        disk {
          size = var.disksize
          storage = var.diskstorage
          emulatessd = true
          iothread = false
          discard = true
          backup = true
          replicate = true
        }
      }
    }
  }
  network {
    model = "virtio"
    bridge = "vmbr10000"
    tag = "10"
  }
  network {
    model = "virtio"
    bridge = "vmbr10000"
  }
  lifecycle {
    ignore_changes = [
      network,
    ]
  }
#set ip dhcp or static
  ipconfig0 = var.ipvar
#sshkeys set using variables.
  sshkeys = <<EOF
  ${var.ssh_key}
  EOF
}

Note I use vars as I have a prompting shell script to make VMs; it's not perfect, but for my typical dev deployments it's fine.

Appreciate the hard work everyone does on this project, but docs need to be updated on a per-feature basis, not after the fact. No one wants to dig through source with trial and error; that is not good practice.

@knuurr

knuurr commented Feb 10, 2024

@jjblack604 thanks for the snippet. I just want to ask because it's still not obvious to me: assuming I have a cloud-init template in YAML format, how does one use it with the current version of this provider, especially together with options such as ciuser and cipassword? I need that for some customizations, such as packages.

ciuser = "ci-user"
cipassword = "password"
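A sketch of how those inline cloud-init options would sit alongside the rc1 attributes discussed in this thread; this is untested, the attribute names are taken from the comments here, and all values are placeholders:

```hcl
# Sketch only, not a verified 3.0.1-rc1 config: inline cloud-init options
# (ciuser, cipassword, ipconfig0, sshkeys) combined with the new
# cloudinit_cdrom_storage attribute. All values are placeholders.
resource "proxmox_vm_qemu" "ci_example" {
  name        = "ci-example"
  target_node = "pve"
  clone       = "ubuntu-2204-cloudinit"

  os_type                 = "cloud-init"
  cloudinit_cdrom_storage = "local-lvm"

  ciuser     = "ci-user"
  cipassword = "password"
  ipconfig0  = "ip=dhcp"
  sshkeys    = "ssh-ed25519 AAAA... user@host" # placeholder key
}
```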

@electropolis

(quotes jjblack604's config from above; at that time it still used os_type = "cloud-init" and did not have cloudinit_cdrom_storage)

This setup doesn't work either. Where do you have cloudinit_cdrom_storage ?

@jjblack604

jjblack604 commented Feb 10, 2024


This setup doesn't work either. Where do you have cloudinit_cdrom_storage ?

Shoot, sorry, I had copied the wrong template; see my post above again, I edited it. @electropolis

Please note that some of these options are custom configs for me of course, but the bulk is there; I use vars pulled from vars.tf for a few items.

@electropolis

electropolis commented Feb 10, 2024


Shoot sorry I had copied the wrong template, see the post above again of mine I edited it. @electropolis

Mine looks almost the same (the whole structure is the same) and it still doesn't work. It eliminates the cloudinit ide2 CD-ROM and adds a blank one, on Proxmox 8.1.3:

resource "proxmox_vm_qemu" "cloudinit" {
  depends_on  = [null_resource.cloud_init_config_files]
  for_each    = var.hosts
  target_node = var.pve_node
  name        = each.value.hostname
  clone       = each.value.template
  memory      = each.value.ram * 1024
  cores       = each.value.cpu
  os_type     = "cloud-init"
  sockets     = 1
  scsihw      = "virtio-scsi-pci"
  agent       = 1
  disks {
    scsi {
      scsi0 {
        disk {
          size       = each.value.hdd
          storage    = var.pve_pool
          emulatessd = true
        }
      }
    }
  }
  network {
    macaddr = each.value.macaddr
    model   = "virtio"
    bridge  = each.value.bridge
  }
  # ipconfig0 = "ip=${each.value.ip}/24,gw=${join(".", slice(split(".", each.value.ip), 0, 3))}.254"
  # nameserver              = "10.0.1.30 1.1.1.1"
  # searchdomain            = "${var.pve_node}.sonic hw.sonic"
  cicustom                = "user=local:snippets/user-data_vm-${each.key}.yaml,network=local:snippets/network-config_vm-${each.key}.yaml"
  cloudinit_cdrom_storage = "local-lvm"
}

I'm using cicustom based on snippets with cloudinit YAML files.

@jjblack604


Mine looks almost the same but the whole structure is the same and still doesn't work. It eliminates cloudinit ide2 CD-ROM and add blank one on Proxmox 8.1.3

(electropolis's config, quoted from above)

I'm using cicustom based on snippets with cloudinit YAML files.

Hmm, I'm actually on 8.1.4 now, not sure if that plays into anything, but I'm also not using the cicustom bit you have.

My cloud-init drive is added as ide3, and there is a blank CD-ROM as ide2, which is where cloud-init used to be for me:

[screenshot: VM hardware with a blank CD-ROM on ide2 and the cloud-init drive on ide3]

@jjblack604

@jjblack604 thanks for the snippet. I just want to ask because it's still not obvious to me: assuming I have a cloud-init template in YAML format, how does one use it with the current version of this provider, especially together with options such as ciuser and cipassword? I need that for some customizations, such as packages.

ciuser = "ci-user"
cipassword = "password"

I'm currently not using this myself, so I'll have to take a look; I have some things set on my cloneable templates.

@electropolis

electropolis commented Feb 10, 2024

@jjblack604 And what does your template look like? Because that's the starting point of all this. Mine has an ide2 cloudinit drive.

@jjblack604

@jjblack604 And what does your template look like? Because that's the starting point of all this. Mine has an ide2 cloudinit drive.

vars.tf:

variable "proxmox_host" {
       default = "sprocket"
}
variable "template_name" {
       default = "ubuntu-2204-cloudinit"
}
variable "vm_name" {
       default = "test"
}
variable "onboot" {
       default = "true"
}
variable "corecount" {
       default = "2"
}
variable "memorycount" {
       default = "2048"
}
variable "diskstorage" {
       default = "nvme-hl-storage"
}
variable "disksize" {
       default = "10"
}
variable "ipvar" {
       default = "ip=dhcp"
}
variable "ssh_key" {
       default = "ssh-ed25519 xxxxxxxxxxx"
}

main.tf

terraform {
  required_providers {
    proxmox = {
      source = "telmate/proxmox"
      version = "3.0.1-rc1"
    }
  }
}
provider "proxmox" {
  pm_api_url = "https://${var.proxmox_host}:8006/api2/json"
  pm_tls_insecure = true
}
resource "proxmox_vm_qemu" "terraform" {
  count = 1
  name = var.vm_name
  target_node = var.proxmox_host
  clone = var.template_name
  onboot = var.onboot
  agent = 1
  cloudinit_cdrom_storage = var.diskstorage
  os_type = "cloud-init"
  cores = var.corecount
  sockets = 1
  cpu = "host"
  memory = var.memorycount
  scsihw = "virtio-scsi-pci"
  bootdisk = "scsi0"
  disks {
    scsi {
      scsi0 {
        disk {
          size = var.disksize
          storage = var.diskstorage
          emulatessd = true
          iothread = false
          discard = true
          backup = true
          replicate = true
        }
      }
    }
  }
  network {
    model = "virtio"
    bridge = "vmbr10000"
    tag = "10"
  }
  network {
    model = "virtio"
    bridge = "vmbr10000"
  }
  lifecycle {
    ignore_changes = [
      network,
    ]
  }
#set ip dhcp or static
  ipconfig0 = var.ipvar
#sshkeys set using variables.
  sshkeys = <<EOF
  ${var.ssh_key}
  EOF
}

@jjblack604

(quoting knuurr's ciuser/cipassword question above)

#407

The third post down has an example, and that seems like it would work if I were to implement it; it looks like you would use the cicustom option to pass a config. I don't use this myself, but it looks like @electropolis does.

For package management I use Ansible to manage VM installs after they are spun up.

Terraform to deploy, Ansible to manage.

@knuurr

knuurr commented Feb 10, 2024


So there's no way to do this using inline methods? The proxmox_cloud_init_disk resource from the wiki has this user_data section, which in my case would be ideal, but as already discussed, the wiki example is outdated and I don't know how to use it with the current version.

  user_data = <<EOT
#cloud-config
users:
  - default
ssh_authorized_keys:
  - ssh-rsa AAAAB3N......
EOT

@jjblack604


So no way to do this using inline methods? proxmox_cloud_init_disk resource form wiki has this user_data seciton which for me and my case, would be ideal - but as already discussed, wiki example is outdated and I don't know how to use it with current version.


Hmm, yeah, again not sure as I don't use it. I essentially just use sshkeys to inject a single key, then Ansible to add multiple users and SSH keys afterwards. Maybe someone else who uses it the way you do can chime in with an example.
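For anyone wanting the snippet-based route instead of inline user_data: electropolis's cicustom approach (shown in full later in this thread) amounts to uploading the YAML to the node's snippets storage and pointing cicustom at it. A minimal sketch, assuming a user-data.yaml already exists under /var/lib/vz/snippets on the "local" storage of the target node:

```hcl
# Sketch only: cicustom pointing at a snippet already uploaded to the node.
resource "proxmox_vm_qemu" "ci_snippet_example" {
  # ... clone, disks, network, etc. ...

  os_type = "cloud-init"
  # "local" storage, snippets content type; the file must already exist at
  # /var/lib/vz/snippets/user-data.yaml on the target node.
  cicustom                = "user=local:snippets/user-data.yaml"
  cloudinit_cdrom_storage = "local-lvm" # placeholder storage name
}
```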

@jjblack604

I also find it odd that we are force-attaching a cdrom disk to ide2 with no way to disable that option. It's fine that cloud-init now uses ide3 instead of the previous ide2, but can we have a flag available to disable this? It's fine if it defaults to true; there's just no need for it in every setup.

@electropolis

(quotes jjblack604's vars.tf and main.tf from above)

I was asking about the template you are cloning from. I have that process set up in Ansible, with qm commands and a cloudinit disk.

@jjblack604

I was asking about the template you are cloning from. I have that process set up in Ansible, with qm commands and a cloudinit disk.

Ah, you mean the images? I have a few images set up to be cloned from.

[screenshot: list of cloneable template VMs]

What would you like to know about them?

I think I had built them off https://austinsnerdythings.com/2021/08/30/how-to-create-a-proxmox-ubuntu-cloud-init-image/ at some point.

@electropolis


Details of those templates: do they have cloudinit? It's this one: sudo qm set 9000 --ide2 local-zfs:cloudinit. Those templates can have cloudinit already set up; mine have. But I'm changing them to snippets.

@jjblack604


In the sense of whether the templates that can be cloned have cloud-init set up: yes, with a basic configuration. I have customized the images somewhat to apply some admin-type settings, but I do not actively modify those templated images, as the overhead for that is too high.

@electropolis


Ok, so this is the case. Can you show the exact hardware details of the template, how you have the cloudinit drive set up? Because mine looks like this:

[screenshot: template hardware with a cloudinit drive on ide2]

Which is the standard scenario when preparing a template with cloudinit on ide2. And those templates don't work: every time, during the terraform provisioning process, the cloudinit drive disappears and ide2 gets taken by a CD-ROM without any source. That's why I'm asking: if you have a proper configuration, please share it so that everyone can use it, because this is all about finally arriving at a working configuration. As we already know, this version of the provider has some issues with that ide2. The setup I had working on Proxmox 7 isn't working on 8.1.3, so what modification have you made? What changes compared to the standard configuration? Because it's not about the terraform code, as we both showed.

@JamborJan

I tried to get my head around all the info, and I was not able to solve my need: writing a cloud-init config and passing it via terraform to Proxmox to spin up a cloned VM. Some help would be much appreciated.

@jjblack604

jjblack604 commented Feb 11, 2024


ide2 is now reserved for the CD-ROM; ide3 will be used for the cloud-init image, according to the docs in the repo (only on the terraform side, not for the cloneable VMs).

These are VMs sitting available that I clone from:

[screenshot: list of cloneable template VMs]

Their basic cloud-init config:

[screenshot: template cloud-init configuration]

and an image I spun up using terraform:

[screenshot: VM hardware with the cloud-init drive on ide3]

See ide3 above?

If you're still getting ide2 vanishing, it means your configuration is wrong, assuming you're using release candidate 3.0.1-rc1.

You either have some extra config or have not added:

cloudinit_cdrom_storage = "local-lvm" # just an example, but this is what should make it provision on ide3; are you sure you have this?
os_type = "cloud-init"

With cloudinit_cdrom_storage set, cloud-init will provision to ide3.

in the source:

Schema: map[string]*schema.Schema{
    "ide": {
        Type:     schema.TypeList,
        Optional: true,
        MaxItems: 1,
        Elem: &schema.Resource{
            Schema: map[string]*schema.Schema{
                "ide0": schema_Ide("ide0"),
                "ide1": schema_Ide("ide1"),
                // ide2 reserved for cdrom
                // ide3 reserved for cloudinit
            },
        },
    },

Can you paste your template again, a fresh one you are using?

@electropolis

electropolis commented Feb 11, 2024

The template screenshot I already showed you. You're asking for the tf code; I showed that too, but let me paste it again in full, with all the other resources:

resource "proxmox_vm_qemu" "cloudinit" {
  depends_on  = [null_resource.cloud_init_config_files]
  for_each    = var.hosts
  target_node = var.pve_node
  name        = each.value.hostname
  clone       = each.value.template
  memory      = each.value.ram * 1024
  cores       = each.value.cpu
  os_type     = "cloud-init"
  sockets     = 1
  scsihw      = "virtio-scsi-pci"
  agent       = 1
  disks {
    scsi {
      scsi0 {
        disk {
          size       = each.value.hdd
          storage    = var.pve_pool
          emulatessd = true
        }
      }
    }
  }
  network {
    macaddr = each.value.macaddr
    model   = "virtio"
    bridge  = each.value.bridge
  }
  # ipconfig0 = "ip=${each.value.ip}/24,gw=${join(".", slice(split(".", each.value.ip), 0, 3))}.254"
  # nameserver              = "10.0.1.30 1.1.1.1"
  # searchdomain            = "${var.pve_node}.sonic hw.sonic"
  cicustom                = "user=local:snippets/user-data_vm-${each.key}.yaml,network=local:snippets/network-config_vm-${each.key}.yaml"
  cloudinit_cdrom_storage = "local-lvm"
}

data "template_file" "user_data" {
  for_each = var.hosts
  template = file("${path.module}/../cloud-init/user_data.cfg")
  vars = {
    hostname = each.value.hostname
    fqdn     = "${each.value.hostname}.sonic.net.pl"
  }
}

data "template_file" "network-config" {
  for_each = var.hosts
  template = file("${path.module}/../cloud-init/network-config.cfg")
  vars = {
    dns_search  = jsonencode(var.dns_search)
    dns_servers = jsonencode(var.dns_servers)
    dhcp        = var.dhcp
    ip_address  = "${each.value.ip}/24"
    gateway     = "${join(".", slice(split(".", each.value.ip), 0, 3))}.254"
    hostname    = each.value.hostname
  }
}
locals {
  vm_user_data      = values(data.template_file.user_data)
  vm_network-config = values(data.template_file.network-config)
  hosts             = values(var.hosts)
}

resource "local_file" "cloud_init_user_data_file" {
  count    = length(local.vm_user_data)
  content  = local.vm_user_data[count.index].rendered
  filename = "${path.module}/../cloud-init/user_data_${local.hosts[count.index].hostname}.cfg"
}

resource "local_file" "cloud_init_network-config_file" {
  count    = length(local.vm_network-config)
  content  = local.vm_network-config[count.index].rendered
  filename = "${path.module}/../cloud-init/network-config_${local.hosts[count.index].hostname}.cfg"
}
resource "null_resource" "cloud_init_config_files" {
  count = length(local_file.cloud_init_user_data_file)
  connection {
    type  = "ssh"
    host  = var.pve_ip
    agent = true
  }
  provisioner "file" {
    source      = local_file.cloud_init_user_data_file[count.index].filename
    destination = "/var/lib/vz/snippets/user-data_vm-${local.hosts[count.index].hostname}.yaml"
  }
}
resource "null_resource" "cloud_init_network-config_files" {
  count = length(local_file.cloud_init_network-config_file)
  connection {
    type  = "ssh"
    host  = var.pve_ip
    agent = true
  }
  provisioner "file" {
    source      = local_file.cloud_init_network-config_file[count.index].filename
    destination = "/var/lib/vz/snippets/network-config_vm-${local.hosts[count.index].hostname}.yaml"
  }
}

I do have everything set up correctly because it worked earlier. The problem is that the cloud-init drive disappears, so if my config is wrong, what is the new one? Yours is the same one that also works on PVE 7, so what changed? I don't see a thing. And just to be clear: ide2 doesn't vanish for good. The cloud-init ide2 from the template vanishes when the virtual machine spins up. The VM is provisioned and at that step it has cloud-init on ide2, as in the template, but then ide2 changes to none,cdrom and an ide3 with cloud-init never appears.

And that case is described here: #901
So I don't know how you got your setup working

@jjblack604

@electropolis

Hmm you do have a ton of structural differences in your template in terms of how you provision compared to mine and truthfully it is far too difficult to know exactly what is causing it because of that. I am also running 8.1.4 and you are at 8.1.3, maybe that's the difference? I don't think I'm going to be much help trying to fix your template, you're likely going to have to start removing features until you find a point that works so you can isolate the issue. I don't rely on calling file configs like you do within the template for cloud-init, I would start there. It may have worked before, but there are definite syntax changes in 3.0.1-rc1 and you're going to have to go through the source to see what they are and how that impacts not only your terraform templates, but all your cloud-init file imports.

@JamborJan

JamborJan commented Feb 11, 2024

Hi @jjblack604, I reviewed a lot of issues now and didn't find an end-to-end example yet, and it seems the documentation is outdated right now. Would it be possible for you to provide an end-to-end example of how you set up a VM? Starting from how you generate a VM template (bash script, or step by step with qm create and qm set), the terraform code for how you create the cloud-init part, and maybe how you add additional software to the created VM. Are you using ansible? From my point of view the screenshots don't help that much, as they only show the result, not the input which successfully created that result.

I have tested a lot. The only way I was able to get the terraform provider to successfully clone and spin up a template is by adding a hard-coded cloud-init script to the initial VM template. But then all my VMs have the same users and the same software; ciuser and sshkeys are no longer processed in the terraform code.

Let me know whether it would be beneficial to post all my steps in detail. I would be very happy to help improve the docs so that we have the working examples documented end to end in one place and not spread across issues and Discord.

Thank you again for all your patience and help.

@electropolis

@electropolis

Hmm you do have a ton of structural differences in your template in terms of how you provision compared to mine… (full comment quoted above)

But that ton is irrelevant. Just focus on the resource "proxmox_vm_qemu" "cloudinit"; the rest is just for parsing variables and preparing YAML snippets. You don't use any of it, because you use ciuser and the other parameters available in Proxmox, while mine uses cicustom, which is why the other pieces exist: most of the resources below are for rendering and uploading those YAML snippets. The local and null_resources are, like I said, irrelevant. My setup works except for that damn cloud-init drive. That's why I asked about your config. The API calls are always the same in both our scenarios.

@jjblack604

@electropolis
Hmm you do have a ton of structural differences in your template in terms of how you provision compared to mine… (quoted above)

But that ton is irrelevant. Just focus on the resource "proxmox_vm_qemu" "cloudinit"… (quoted above)

My point is: remove everything that does not match my template for now, for testing purposes only; I am trying to help you isolate where the issue is. My templates work and yours do not, so there is a possibility those extra pieces are causing issues.

Aside from that I see:

```
  depends_on  = [null_resource.cloud_init_config_files]
  for_each    = var.hosts
```

which is the only other difference that I do not use; I'm not looping over hosts.

You are specifying this yes?

```
terraform {
  required_providers {
    proxmox = {
      source  = "telmate/proxmox"
      version = "3.0.1-rc1"
    }
  }
}
```

@jjblack604

Hi @jjblack604, I reviewed a lot of issues now and didn't find an end-to-end example yet… (full comment quoted above)

I had used these two guides initially:
(Step 1)
https://austinsnerdythings.com/2021/08/30/how-to-create-a-proxmox-ubuntu-cloud-init-image/
(Step 2)
https://austinsnerdythings.com/2021/09/01/how-to-deploy-vms-in-proxmox-with-terraform/

So full credit to the author Austin there.

I have tested a lot. The only way I was able to get the terraform provider to successfully clone and spin up a template is by adding a hard-coded cloud-init script to the initial VM template… (quoted above)

While it is possible to add custom packages and extra ssh keys etc with Terraform, I personally opt to use Ansible for this. So I also have a single user configured with SSH via terraform - which is my user since I only use proxmox for deving. Then I have several dozen ansible roles I use to manage things like server configuration, users, other things depending on what the VM or server is going to be doing.

I am not a contributor to this project, just a user like yourself. It sounds like you are on the right track, if you were opting to use ansible you could configure extra users and install packages after initial terraform setup.

While I don't have the time to create a guide at the moment, found this quick vid about deploying ssh keys with ansible if you are unfamiliar. There would also be lots of content available out there for things like packages etc.
https://www.youtube.com/watch?v=543_km6ymX0

@electropolis

electropolis commented Feb 11, 2024

(quoting @jjblack604's comment above in full, ending with the question: "You are specifying this yes?")

Because best practice is to use for_each instead of count, which by default adds your resources to a list with a changeable index. If you have, for example, 3 servers, they receive indexes 0, 1, 2. When you remove the second server (index 1), the one with index 2 will be destroyed and created again so that it takes the freed index 1. That's why count is generally avoided and only used in specific scenarios. So yes, I do have that var.hosts defined here

in tfvars file

hosts = {
  "srv-app-1" = {
    hostname = "srv-app-1"
    macaddr  = "00:1E:67:01:10:01"
    ip       = "10.0.10.1"
    bridge   = "skynet"
    cpu      = 4
    hdd      = "15"
    ram      = 8
    template = "debian-12-amd64-cloudinit-template"
  }
}
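The count-vs-for_each behaviour described above can be sketched like this (hedged: the variable shape and node name are illustrative placeholders, not taken from the actual setup):

```
# count tracks instances by position: after removing the middle entry of a
# list, the last instance shifts index and gets destroyed/recreated.
# for_each tracks instances by map key, so removing one host leaves the
# remaining instances' addresses unchanged.

variable "hosts" {
  type = map(object({
    hostname = string
  }))
}

resource "proxmox_vm_qemu" "by_key" {
  for_each    = var.hosts      # address: proxmox_vm_qemu.by_key["srv-app-1"]
  target_node = "pve"          # placeholder node name
  name        = each.value.hostname
}
```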

I can try to disable those other things, but it will basically break my script and I will have to set different variables. First of all I will try to remove the cloud-init disk from the template by preparing a new one. In another issue here someone says he has no cloud-init attached to the template at all; he attaches it when cloning using terraform.
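A minimal sketch of that template-without-cloudinit approach, assuming the drive is attached at clone time via cloudinit_cdrom_storage (names are placeholders, and whether rc1 handles this correctly is exactly what this issue is tracking):

```
resource "proxmox_vm_qemu" "from_bare_template" {
  name        = "srv-app-1"
  target_node = "pve"                      # placeholder
  clone       = "debian-12-bare-template"  # template with NO cloud-init drive
  os_type     = "cloud-init"

  # The provider is expected to create and attach the cloud-init CD-ROM
  # itself on this storage (reportedly landing on ide3 in 3.0.1-rc1,
  # not ide2 as under older releases).
  cloudinit_cdrom_storage = "local-lvm"

  cicustom = "user=local:snippets/user-data_vm-srv-app-1.yaml"
}
```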

btw

The only way I was able to get the terraform provider to successfully clone and spin up a template is by adding a hard-coded cloud-init script… (quoted above)

And it's fine to have all the VMs with the same initial user and the same basic software; the rest can be handled by Ansible. Cloud-init is the pre-configuration step that provides, among the things above, a dedicated ansible user with a key that can be used after installation for the post-configuration done by Ansible.

@electropolis

electropolis commented Feb 12, 2024

@jjblack604 it would be quicker if you could change your code to use cicustom by manually adding cloud-init snippets. I'm trying it in different steps:

  1. Template without cloud-init -> got a CD-ROM on ide2
  2. Template with cloud-init, BUT on ide3 (not on ide2 as it was when working with Proxmox v7)

In all cases I received only a CD-ROM. :/ By the way, I checked my Proxmox version and I use 8.1.4.
And I want to remind you that we are probably just checking your config, which luckily works. Luckily, because the official statement is that there is an issue with cloud-init (#922)
and it is in the fixing phase. I can't say how your code is working, but many people are showing that this isn't working and that it is a bug, not a wrong config on the operator's side.

{"data":{"/":{"VM.Config.Disk":1,"SDN.Use":1,"VM.Migrate":1,"Datastore.Audit":1,"VM.Config.CPU":1,"VM.Config.CDROM":1,"VM.Clone":1,"Sys.Audit":1,"VM.Config.HWType":1,"Pool.Allocate":1,"VM.Config.Memory":1,"VM.Allocate":1,"VM.PowerMgmt":1,"User.Modify":1,"VM.Config.Cloudinit":1,"VM.Config.Options":1,"Datastore.AllocateSpace":1,"VM.Config.Network":1,"Sys.Modify":1,"VM.Monitor":1,"VM.Audit":1,"Sys.Console":1}}}: timestamp=2024-02-12T13:52:40.185+0100
2024-02-12T13:52:40.187+0100 [DEBUG] Connection established. Handshaking for user root
2024-02-12T13:52:40.187+0100 [DEBUG] Connection established. Handshaking for user root
2024-02-12T13:52:40.261+0100 [DEBUG] Telling SSH config to forward to agent
2024-02-12T13:52:40.261+0100 [DEBUG] Setting up a session to request agent forwarding
2024-02-12T13:52:40.269+0100 [DEBUG] Telling SSH config to forward to agent
2024-02-12T13:52:40.269+0100 [DEBUG] Setting up a session to request agent forwarding
2024-02-12T13:52:40.312+0100 [INFO]  agent forwarding enabled
2024-02-12T13:52:40.312+0100 [DEBUG] starting ssh KeepAlives
2024-02-12T13:52:40.313+0100 [DEBUG] opening new ssh session
2024-02-12T13:52:40.316+0100 [DEBUG] Starting remote scp process:  'scp' -vt /var/lib/vz/snippets
2024-02-12T13:52:40.317+0100 [INFO]  agent forwarding enabled
2024-02-12T13:52:40.317+0100 [DEBUG] starting ssh KeepAlives
2024-02-12T13:52:40.317+0100 [DEBUG] opening new ssh session
2024-02-12T13:52:40.320+0100 [DEBUG] Started SCP session, beginning transfers...
2024-02-12T13:52:40.320+0100 [DEBUG] Beginning file upload...
2024-02-12T13:52:40.321+0100 [DEBUG] Starting remote scp process:  'scp' -vt /var/lib/vz/snippets
2024-02-12T13:52:40.327+0100 [DEBUG] Started SCP session, beginning transfers...
2024-02-12T13:52:40.327+0100 [DEBUG] Beginning file upload...
2024-02-12T13:52:40.327+0100 [DEBUG] SCP session complete, closing stdin pipe.
2024-02-12T13:52:40.327+0100 [DEBUG] Waiting for SSH session to complete.
2024-02-12T13:52:40.332+0100 [DEBUG] SCP session complete, closing stdin pipe.
2024-02-12T13:52:40.332+0100 [ERROR] scp stderr: "Sink: C0644 1915 user-data_vm-srv-app-1.yaml\nscp: debug1: fd 0 clearing O_NONBLOCK\r\n"
2024-02-12T13:52:40.332+0100 [DEBUG] Waiting for SSH session to complete.
null_resource.cloud_init_config_files[0]: Creation complete after 0s [id=6011708041045336489]
2024-02-12T13:52:40.337+0100 [ERROR] scp stderr: "Sink: C0644 307 network-config_vm-srv-app-1.yaml\nscp: debug1: fd 0 clearing O_NONBLOCK\r\n"
2024-02-12T13:52:40.337+0100 [DEBUG] State storage *statemgr.Filesystem declined to persist a state snapshot
null_resource.cloud_init_network-config_files[0]: Creation complete after 0s [id=6828990818097151280]
2024-02-12T13:52:40.342+0100 [DEBUG] State storage *statemgr.Filesystem declined to persist a state snapshot
2024-02-12T13:52:40.343+0100 [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = error reading from server: EOF"
2024-02-12T13:52:40.343+0100 [DEBUG] provider: plugin process exited: path=.terraform/providers/registry.terraform.io/hashicorp/null/3.2.2/darwin_arm64/terraform-provider-null_v3.2.2_x5 pid=41864
2024-02-12T13:52:40.343+0100 [DEBUG] provider: plugin exited
2024-02-12T13:52:40.421+0100 [WARN]  Provider "registry.terraform.io/telmate/proxmox" produced an invalid plan for proxmox_vm_qemu.cloudinit["srv-app-1"], but we are tolerating it because it is using the legacy plugin SDK.
    The following problems may be the cause of any confusing errors from downstream operations:
      - .additional_wait: planned value cty.NumberIntVal(5) for a non-computed attribute
      - .onboot: planned value cty.False for a non-computed attribute
      - .balloon: planned value cty.NumberIntVal(0) for a non-computed attribute
      - .define_connection_info: planned value cty.True for a non-computed attribute
      - .vlan: planned value cty.NumberIntVal(-1) for a non-computed attribute
      - .vm_state: planned value cty.StringVal("running") for a non-computed attribute
      - .bios: planned value cty.StringVal("seabios") for a non-computed attribute
      - .clone_wait: planned value cty.NumberIntVal(10) for a non-computed attribute
      - .vcpus: planned value cty.NumberIntVal(0) for a non-computed attribute
      - .tablet: planned value cty.True for a non-computed attribute
      - .force_create: planned value cty.False for a non-computed attribute
      - .hotplug: planned value cty.StringVal("network,disk,usb") for a non-computed attribute
      - .kvm: planned value cty.True for a non-computed attribute
      - .cpu: planned value cty.StringVal("host") for a non-computed attribute
      - .guest_agent_ready_timeout: planned value cty.NumberIntVal(100) for a non-computed attribute
      - .preprovision: planned value cty.True for a non-computed attribute
      - .automatic_reboot: planned value cty.True for a non-computed attribute
      - .oncreate: planned value cty.False for a non-computed attribute
      - .full_clone: planned value cty.True for a non-computed attribute
      - .disks[0].scsi[0].scsi0[0].disk[0].format: planned value cty.StringVal("raw") for a non-computed attribute
      - .disks[0].scsi[0].scsi0[0].disk[0].iops_r_burst_length: planned value cty.NumberIntVal(0) for a non-computed attribute
      - .disks[0].scsi[0].scsi0[0].disk[0].iops_r_concurrent: planned value cty.NumberIntVal(0) for a non-computed attribute
      - .disks[0].scsi[0].scsi0[0].disk[0].iops_wr_burst: planned value cty.NumberIntVal(0) for a non-computed attribute
      - .disks[0].scsi[0].scsi0[0].disk[0].iops_wr_concurrent: planned value cty.NumberIntVal(0) for a non-computed attribute
      - .disks[0].scsi[0].scsi0[0].disk[0].mbps_wr_burst: planned value cty.NumberIntVal(0) for a non-computed attribute
      - .disks[0].scsi[0].scsi0[0].disk[0].mbps_r_concurrent: planned value cty.NumberIntVal(0) for a non-computed attribute
      - .disks[0].scsi[0].scsi0[0].disk[0].iops_r_burst: planned value cty.NumberIntVal(0) for a non-computed attribute
      - .disks[0].scsi[0].scsi0[0].disk[0].mbps_wr_concurrent: planned value cty.NumberIntVal(0) for a non-computed attribute
      - .disks[0].scsi[0].scsi0[0].disk[0].backup: planned value cty.True for a non-computed attribute
      - .disks[0].scsi[0].scsi0[0].disk[0].iops_wr_burst_length: planned value cty.NumberIntVal(0) for a non-computed attribute
      - .disks[0].scsi[0].scsi0[0].disk[0].mbps_r_burst: planned value cty.NumberIntVal(0) for a non-computed attribute
      - .smbios: attribute representing nested block must not be unknown itself; set nested attribute values to unknown instead
      - .network[0].tag: planned value cty.NumberIntVal(-1) for a non-computed attribute
      - .network[0].firewall: planned value cty.False for a non-computed attribute
      - .network[0].link_down: planned value cty.False for a non-computed attribute
proxmox_vm_qemu.cloudinit["srv-app-1"]: Creating...
2024-02-12T13:52:40.421+0100 [INFO]  Starting apply for proxmox_vm_qemu.cloudinit["srv-app-1"]

in case the debug output gets your attention; what worries me are those "invalid plan" warnings for proxmox seen in the log

Screen.Recording.2024-02-12.at.13.53.12.mov

Here is the clip that shows exactly the behaviour

I was working based on this tutorial: https://yetiops.net/posts/proxmox-terraform-cloudinit-saltstack-prometheus/#creating-a-template

I've just added additional config for network in the data.template_file

@jjblack604

jjblack604 commented Feb 12, 2024

(quoting @electropolis's comment above in full, including the debug log, the screen recording, and an attached screenshot)

I don't do the above, terraform should download the correct version when it applies the config if it is stated in main.tf.

I also install the qemu-guest-agent into my images beforehand:
https://austinsnerdythings.com/2021/08/30/how-to-create-a-proxmox-ubuntu-cloud-init-image/

The cloud-init drive is supposed to be on ide3 now, NOT ide2 as I posted above; that change is in the source code for 3.0.1-rc1.

https://drive.google.com/file/d/1xOCIgTY-RMXOrqSESbE9mgtrNm-BiAis/view?usp=drive_link
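For what it's worth, later 3.0.x documentation also describes declaring the cloud-init drive explicitly inside the new disks schema. A hedged sketch (storage and template names are placeholders, and whether rc1 already honours this nested cloudinit block is part of what this issue is sorting out):

```
resource "proxmox_vm_qemu" "example" {
  name        = "ci-test"
  target_node = "pve"   # placeholder
  clone       = "debian-12-amd64-cloudinit-template"
  os_type     = "cloud-init"

  disks {
    ide {
      ide3 {
        cloudinit {
          storage = "local-lvm"   # storage with "Disk image" content
        }
      }
    }
    scsi {
      scsi0 {
        disk {
          size    = "15"
          storage = "local-lvm"
        }
      }
    }
  }
}
```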

it would be quicker if you could change the code use cicustom by manually adding cloudinit snippets. I'm trying to do it with different steps

I will not be attempting to re-create and fix your code. The reference to #922 looks like it was addressed by installing the qemu agent? I install this into my custom images, BEFORE cloud-init: https://austinsnerdythings.com/2021/08/30/how-to-create-a-proxmox-ubuntu-cloud-init-image/

I don't have more time to spend on this. I hope you are able to resolve the issue. I have provided a working configuration example and all links for how I have setup my cloneable vms are in the posts above as well as video proof of it working.

@electropolis

electropolis commented Feb 12, 2024

I don't do the above, terraform should download the correct version when it applies the config if it is stated in main.tf.

You mean the screenshot? Forget it, I'm not talking about the terraform and provider installation. I'm talking about main.tf and how it is set up. For a better description of the code, here it is: https://github.com/sonic-networks/terraform/

I also install the qemu-guest-agent PRIOR, into my images
https://austinsnerdythings.com/2021/08/30/how-to-create-a-proxmox-ubuntu-cloud-init-image/

And this is why I use cloud-init.
Terraform is for provisioning the VM.
Cloud-init sets up the remaining things that are supposed to be immutable, like qemu-guest-agent.
The IaC approach is to avoid manual intervention as much as possible. You install it manually; I want it done by cloud-init, automatically, without any manual interference. Here is the installation process: https://github.com/sonic-networks/terraform/blob/master/proxmox/cloud-init/user_data.cfg#L28

I don't do any template modification after creation, and I don't run any additional scripts that probably do the same thing I'm doing with cloud-init. The template is prepared by Ansible scripts, according to the official Proxmox documentation for cloud-init. The Ansible code is here.

So basically

  1. Ansible prepares the templates
  2. Terraform provisions the VM with the cloud-init snippet (that doesn't work)
  3. Cloud-init performs the deferred installation on the VM to make sure the config is immutable (unchangeable)

I have provided a working configuration example and all links for how I have setup my cloneable vms are in the posts above as well as video proof of it working.

That's a workaround without using snippets, so it's not a working configuration; it's a workaround. A working configuration is one where the previously created cloud-init doesn't disappear, and the newly created VM ends up with the cloud-init that was set here via cloudinit_cdrom_storage = "local-lvm", not ignored.

I don't know why you keep referring to the agent. I don't have a problem with the agent. As soon as cloud-init works, I will have the agent.

The template-creation tutorials are all the same. Austin is using methods deprecated in PVE 7; I'm using the ones from the official Proxmox documentation. But that's irrelevant, since it gives the same result. When it comes to Terraform, you don't use cicustom, and that's the point. I don't want to refactor my code to be like yours, which lacks that functionality, and then revert it step by step to the previous state to see where the issue is. That would take ages and wouldn't solve anything. Btw, they have already found through debugging that there is some problem with the API calls, so I'm waiting for someone who has a workaround for cloud-init based on snippets.

I just forgot where cloudinit_cdrom_storage = should point to, because my cloud-init lies on the local-lvm storage, which has Disk image content.
[screenshot: local-lvm storage configured with Disk image content]

but the snippets can only be on the local storage that has Snippets content, and this is where Terraform puts those yaml files
[screenshot: local storage configured with Snippets content, holding the generated yaml files]

So I don't remember which storage it should be. You guys don't hit this because you are not using snippets.
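If I understand the split correctly, the two settings point at two different storages; a sketch using the storage names from the screenshots above (both names are assumptions about this particular setup):

```hcl
  # snippets must live on a storage with Snippets content (here: "local")
  cicustom                = "user=local:snippets/user_data.cfg"
  # the generated cloud-init CD-ROM goes on a storage that can hold images (here: "local-lvm")
  cloudinit_cdrom_storage = "local-lvm"
```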

@hestiahacker
Contributor

hestiahacker commented Feb 26, 2024

Back to the topic of original post, I see that examples/pxe_example.tf has already been updated. I updated examples/cloudinit_example.tf and had to remove the ssd because emulating an SSD doesn't make sense for a virtio disk. I also updated the docs to have examples that use disks instead of disk and pointed to the example file from the cloud_init guide.

Merge request has been submitted and I believe that should address the concerns this issue was created to address.
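For anyone following along, the updated example roughly takes this shape with the new disks syntax (sizes and storage names are placeholders); note there is no ssd flag on the virtio disk:

```hcl
  disks {
    ide {
      ide3 {
        cloudinit {
          storage = "local-lvm"
        }
      }
    }
    virtio {
      virtio0 {
        disk {
          # no "ssd" option here: SSD emulation doesn't apply to virtio disks
          size    = 10
          storage = "local-lvm"
        }
      }
    }
  }
```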

hestiahacker pushed a commit to hestiahacker/terraform-provider-proxmox that referenced this issue Feb 26, 2024
@electropolis

Back to the topic of original post, I see that examples/pxe_example.tf has already been updated. I updated examples/cloudinit_example.tf and had to remove the ssd because emulating an SSD doesn't make sense for a virtio disk. I also updated the docs to have examples that use disks instead of disk and pointed to the example file from the cloud_init guide.

Merge request has been submitted and I believe that should address the concerns this issue was created to address.

The boot order is already set when creating the template
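In provider terms, that template-level boot order corresponds to something like the following (device names are assumptions; the string uses the Proxmox order= format):

```hcl
  # boot from the OS disk first, then fall back to the cloud-init CD-ROM
  boot = "order=scsi0;ide3"
```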
