
3.0.1-rc6 creating a proxmox_cloud_init_disk returns Error 500 #1196

Open · kagehisa opened this issue Dec 6, 2024 · 4 comments

kagehisa commented Dec 6, 2024

I'm trying to reproduce the example code for creating a cloud-init disk resource from the docs.
My cluster uses a cephfs storage, and so far there have been no issues with it.
tofu validate says my code is valid, but when I apply it I get the following output (log level DEBUG):

proxmox_cloud_init_disk.ci["node1"]: Creating...
2024-12-06T10:27:57.965+0100 [INFO]  Starting apply for proxmox_cloud_init_disk.ci["node1"]
2024-12-06T10:27:57.965+0100 [DEBUG] proxmox_cloud_init_disk.ci["node1"]: applying the planned Create change
2024-12-06T10:27:57.972+0100 [ERROR] provider.terraform-provider-proxmox_v3.0.1-rc6: Response contains error diagnostic: diagnostic_detail="" diagnostic_severity=ERROR tf_proto_version=5.6 tf_req_id=6c8a4db8-febb-76a9-96d7-c307333b8c48 @caller=github.com/hashicorp/[email protected]/tfprotov5/internal/diag/diagnostics.go:58 @module=sdk.proto tf_rpc=ApplyResourceChange diagnostic_summary="500 can't upload to storage type 'rbd'" tf_provider_addr=registry.terraform.io/telmate/proxmox tf_resource_type=proxmox_cloud_init_disk timestamp="2024-12-06T10:27:57.972+0100"
2024-12-06T10:27:57.972+0100 [DEBUG] State storage *statemgr.Filesystem declined to persist a state snapshot
2024-12-06T10:27:57.972+0100 [ERROR] vertex "proxmox_cloud_init_disk.ci[\"node1\"]" error: 500 can't upload to storage type 'rbd'
╷
│ Error: 500 can't upload to storage type 'rbd'
│ 
│   with proxmox_cloud_init_disk.ci["node1"],
│   on main.tf line 15, in resource "proxmox_cloud_init_disk" "ci":
│   15: resource "proxmox_cloud_init_disk" "ci" {
│ 
╵
2024-12-06T10:27:58.008+0100 [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = error reading from server: EOF"
2024-12-06T10:27:58.009+0100 [DEBUG] provider: plugin process exited: path=.terraform/providers/registry.opentofu.org/telmate/proxmox/3.0.1-rc6/linux_amd64/terraform-provider-proxmox_v3.0.1-rc6 pid=17413
2024-12-06T10:27:58.009+0100 [DEBUG] provider: plugin exited

My code is similar to the one in the docs except that I use "for_each".

resource "proxmox_cloud_init_disk" "ci" {
  for_each = var.vms
  name      = each.value.name
  pve_node  = "SRV01"
  storage   = "ceph-storage"

  meta_data = yamlencode({
    instance_id    = "iid-local01"
    local-hostname = each.value.name
  })

  user_data = <<-EOT
  disable_root: false
  preserve_hostname: false
  manage_etc_hosts: false
  hostname: ${each.value.name}
  fqdn: ${each.value.name}.domain.de
  ssh_pwauth: true
  users:
    - name: automation
      groups:
       - users
      homedir: /home/automation
      lock_passwd: false
      hashed_passwd: ${data.vault_kv_secret_v2.ci_userpass.data["password_hash"]}
      type: text
      shell: /bin/bash
      sudo: ALL=(ALL) NOPASSWD:ALL
  ntp:
    ntp_client: auto
    enabled: true
  EOT

  network_config = yamlencode({
    version = 1
    config = [{
      type = "physical"
      name = "eth0"
      subnets = [{
        type            = "static"
        address         = each.value.ip_address
        netmask         = each.value.netmask
        gateway         = each.value.gateway
      }]
      },{
      type = "nameserver"
      address = [
        "10.48.98.21"
      ]
      search = [
        "domain.de"
      ]
    }]
  })
}


resource "proxmox_vm_qemu" "ci_test" {
  for_each = var.vms
  target_node = "SRV01"
  full_clone = true
  os_type = "cloud-init"
  numa = true
  bios = "ovmf"
  scsihw = "virtio-scsi-pci"
  bootdisk = "scsi0"
  agent = 1
  tags = "tofu_test"
  hastate = "started"
  hagroup = "CLU03"
  clone = each.value.template
  name = each.value.name
  cores = each.value.cores
  sockets = each.value.sockets
  memory = each.value.memory

  disks {
    scsi {
      scsi0 {
        disk {
          storage = "ceph-storage"
          size = each.value.disk_space
        }
      }
      scsi2 {
        cdrom {
          iso = proxmox_cloud_init_disk.ci[lower(each.value.name)].id
        }
      }
    }
  }

  network {
    id = 0
    bridge = "vmbr3020"
    model  = "virtio"
  }
}

Did I forget something or is there something I have to enable for our storage pool?

maksimsamt (Contributor) commented:

@kagehisa,

My code is similar to the one in the docs except that I use "for_each".

Can you try to do this without "for_each" and only for a single host?
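
For example, a stripped-down single-host version of your resource, something like this (untested, just your own values hard-coded):

resource "proxmox_cloud_init_disk" "ci_single" {
  name     = "node1"
  pve_node = "SRV01"
  storage  = "ceph-storage"

  meta_data = yamlencode({
    instance_id    = "iid-local01"
    local-hostname = "node1"
  })
}

If this single resource still fails with the same 500, then "for_each" is not the problem.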

kagehisa (Author) commented Dec 6, 2024

Hi, yes, I tried that as well. Same error.
But I found this entry in the Proxmox support forum and the following wiki page (see "Storage Features") that tell me I cannot store an ISO image on a shared Ceph storage.

Is there some other way to attach a custom CI ISO to a proxmox_vm_qemu resource without putting it on the shared storage beforehand? Something like creating it on a local volume and attaching it as a cloud-init disk? Somehow disks of type cloudinit seem to be exempt from that rbd storage rule (are they even stored as ISO images?). I just want to avoid having a second shared volume just for ISO images when a disk of type cloud-init seems to be usable on a storage of type rbd.
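
What I had in mind is roughly this (untested sketch, "local" just standing in for whatever dir-based storage exists on the node):

resource "proxmox_cloud_init_disk" "ci" {
  for_each = var.vms
  name     = each.value.name
  pve_node = "SRV01"
  storage  = "local"   # a dir-type storage on the node instead of the rbd pool
  # meta_data / user_data / network_config as in my config above
}

and then attaching the resulting ISO to the VM via the cdrom block as before.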

Tinyblargon (Collaborator) commented Dec 12, 2024

@kagehisa as far as I understand, you would have to set up another shared storage. Ceph has block and file storage, but they are separate storages. The ISO gets treated as a file, while the cloud-init disk (although technically just an ISO) gets created as a virtual disk.

(Not too well versed with Ceph, so I might be mixing up the block and the file storage.)
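
In Terraform terms it would look something like this (untested, "cephfs-iso" being a placeholder for a CephFS/file-based storage that allows the iso content type), with the actual VM disk staying on the rbd pool:

resource "proxmox_cloud_init_disk" "ci" {
  name     = "node1"
  pve_node = "SRV01"
  storage  = "cephfs-iso"   # file storage that can hold the generated ISO
  # meta_data / user_data / network_config unchanged
}

resource "proxmox_vm_qemu" "ci_test" {
  # ... rest of the VM config unchanged ...
  disks {
    scsi {
      scsi0 {
        disk {
          storage = "ceph-storage"   # rbd (block) storage for the VM disk
          size    = "20G"            # placeholder
        }
      }
      scsi2 {
        cdrom {
          iso = proxmox_cloud_init_disk.ci.id
        }
      }
    }
  }
}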

kagehisa (Author) commented:

@Tinyblargon thanks for the clarification. The same seems to be true for using cicustom, if I understand the examples correctly: the snippets would need to be on a shared storage, and additionally a cloud-init disk resource is needed for the VM. My first thought was that the cloud-init disk resource gets "filled" with the cicustom information and the snippets are then no longer needed, but that doesn't seem to be the case.
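
If I read the cicustom examples right, it would be something along these lines (untested; "cephfs-snippets" being a shared storage with the snippets content type enabled, and user.yaml uploaded there beforehand):

resource "proxmox_vm_qemu" "ci_test" {
  # ... rest as in my config above ...
  os_type  = "cloud-init"
  cicustom = "user=cephfs-snippets:snippets/user.yaml"
}

So the snippet file itself still has to live on a storage every node can reach, which is exactly what I was hoping to avoid.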
