
Terraform Destroy not working #1116

Open
HafizAhmadJamil opened this issue Sep 29, 2024 · 2 comments
Labels
issue/confirmed (reviewed and confirmed, or accepted to be implemented), resource/qemu (issue or PR related to the Qemu resource), type/bug

Comments

@HafizAhmadJamil

Unable to destroy the VM.

Here is my config.

provider.tf

root@ST:~/debian_iac# cat provider.tf 
variable "pm_api_url" {
  type = string
}

terraform {
  required_providers {
    proxmox = {
      source = "telmate/proxmox"
      version = "3.0.1-rc4"
    }
  }
}

provider "proxmox" {
  pm_api_url = var.pm_api_url
}
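
No credentials appear in the provider block above. If you would rather not supply them interactively or via environment variables, the Telmate provider also accepts API-token arguments; a sketch (the `terraform@pam!mytoken` token ID is a placeholder, not from the original report):

```hcl
provider "proxmox" {
  pm_api_url = var.pm_api_url

  # Placeholder token; create your own under
  # Datacenter -> Permissions -> API Tokens in the Proxmox UI.
  pm_api_token_id     = "terraform@pam!mytoken"
  pm_api_token_secret = var.pm_api_token_secret
}
```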

main.tf

root@ST:~/debian_iac# cat main.tf 
variable "cloudinit_template_name" {
    type = string 
}

variable "proxmox_node" {
    type = string
}

variable "ssh_key" {
  type = string 
  sensitive = true
}

resource "proxmox_vm_qemu" "k8s-1" {
  count = 1
  name = "k8s-1${count.index + 1}"
  target_node = var.proxmox_node
  clone = var.cloudinit_template_name
  agent = 1
  os_type = "cloud-init"
  cores = 4
  sockets = 1
  cpu = "host"
  memory = 4096
  scsihw = "virtio-scsi-pci"
  bootdisk = "scsi0"

  disk {
    slot = "scsi0"
    size = "40G"
    type = "disk"
    storage = "local-lvm"
  }

  network {
    model = "virtio"
    bridge = "V10"
  }
  
  lifecycle {
    ignore_changes = [
      network,
    ]
  }

  ipconfig0 = "ip=10.250.10.20${count.index + 1}/24,gw=10.250.10.1"
  
  sshkeys = <<EOF
  ${var.ssh_key}
  EOF

}
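
One side note on the config above: the plain `<<EOF` heredoc passes the leading indentation of `${var.ssh_key}` through to cloud-init. A minimal alternative (a sketch, unrelated to the destroy bug itself) is HCL's indented heredoc `<<-EOF`, which strips the common leading whitespace:

```hcl
sshkeys = <<-EOF
  ${var.ssh_key}
  EOF
```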
root@ST:~/debian_iac# 
root@ST:~/debian_iac# terraform apply 

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # proxmox_vm_qemu.k8s-1[0] will be created
  + resource "proxmox_vm_qemu" "k8s-1" {
      + additional_wait        = 5
      + agent                  = 1
      + automatic_reboot       = true
      + balloon                = 0
      + bios                   = "seabios"
      + boot                   = (known after apply)
      + bootdisk               = "scsi0"
      + ciupgrade              = false
      + clone                  = "debian-11-cloudinit-template"
      + clone_wait             = 10
      + cores                  = 4
      + cpu                    = "host"
      + default_ipv4_address   = (known after apply)
      + default_ipv6_address   = (known after apply)
      + define_connection_info = true
      + force_create           = false
      + full_clone             = true
      + hotplug                = "network,disk,usb"
      + id                     = (known after apply)
      + ipconfig0              = "ip=10.250.10.201/24,gw=10.250.10.1"
      + kvm                    = true
      + linked_vmid            = (known after apply)
      + memory                 = 4096
      + name                   = "k8s-11"
      + onboot                 = false
      + os_type                = "cloud-init"
      + protection             = false
      + reboot_required        = (known after apply)
      + scsihw                 = "virtio-scsi-pci"
      + skip_ipv4              = false
      + skip_ipv6              = false
      + sockets                = 1
      + ssh_host               = (known after apply)
      + ssh_port               = (known after apply)
      + sshkeys                = (sensitive value)
      + tablet                 = true
      + tags                   = (known after apply)
      + target_node            = "ST"
      + unused_disk            = (known after apply)
      + vcpus                  = 0
      + vm_state               = "running"
      + vmid                   = (known after apply)

      + disk {
          + backup               = true
          + format               = "raw"
          + id                   = (known after apply)
          + iops_r_burst         = 0
          + iops_r_burst_length  = 0
          + iops_r_concurrent    = 0
          + iops_wr_burst        = 0
          + iops_wr_burst_length = 0
          + iops_wr_concurrent   = 0
          + linked_disk_id       = (known after apply)
          + mbps_r_burst         = 0
          + mbps_r_concurrent    = 0
          + mbps_wr_burst        = 0
          + mbps_wr_concurrent   = 0
          + passthrough          = false
          + size                 = "40G"
          + slot                 = "scsi0"
          + storage              = "local-lvm"
          + type                 = "disk"
        }

      + network {
          + bridge    = "V10"
          + firewall  = false
          + link_down = false
          + macaddr   = (known after apply)
          + model     = "virtio"
          + queues    = (known after apply)
          + rate      = (known after apply)
          + tag       = -1
        }

      + smbios (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

proxmox_vm_qemu.k8s-1[0]: Creating...
proxmox_vm_qemu.k8s-1[0]: Still creating... [10s elapsed]
proxmox_vm_qemu.k8s-1[0]: Still creating... [20s elapsed]
proxmox_vm_qemu.k8s-1[0]: Creation complete after 24s [id=ST/qemu/102]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
root@ST:~/debian_iac# terraform destroy 
proxmox_vm_qemu.k8s-1[0]: Refreshing state... [id=ST/qemu/102]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  # proxmox_vm_qemu.k8s-1[0] will be destroyed
  - resource "proxmox_vm_qemu" "k8s-1" {
      - additional_wait        = 5 -> null
      - agent                  = 1 -> null
      - automatic_reboot       = true -> null
      - balloon                = 0 -> null
      - bios                   = "seabios" -> null
      - boot                   = "c" -> null
      - bootdisk               = "scsi0" -> null
      - ciupgrade              = false -> null
      - clone                  = "debian-11-cloudinit-template" -> null
      - clone_wait             = 10 -> null
      - cores                  = 4 -> null
      - cpu                    = "host" -> null
      - default_ipv4_address   = "10.250.10.201" -> null
      - define_connection_info = true -> null
      - force_create           = false -> null
      - full_clone             = true -> null
      - hotplug                = "network,disk,usb" -> null
      - id                     = "ST/qemu/102" -> null
      - ipconfig0              = "ip=10.250.10.201/24,gw=10.250.10.1" -> null
      - kvm                    = true -> null
      - linked_vmid            = 0 -> null
      - memory                 = 4096 -> null
      - name                   = "k8s-11" -> null
      - numa                   = false -> null
      - onboot                 = false -> null
      - os_type                = "cloud-init" -> null
      - protection             = false -> null
      - qemu_os                = "other" -> null
      - reboot_required        = false -> null
      - scsihw                 = "virtio-scsi-pci" -> null
      - skip_ipv4              = false -> null
      - skip_ipv6              = false -> null
      - sockets                = 1 -> null
      - ssh_host               = "10.250.10.201" -> null
      - ssh_port               = "22" -> null
      - sshkeys                = (sensitive value) -> null
      - tablet                 = true -> null
        tags                   = null
      - target_node            = "ST" -> null
      - unused_disk            = [] -> null
      - vcpus                  = 0 -> null
      - vm_state               = "running" -> null
        # (10 unchanged attributes hidden)

      - disk {
          - backup               = true -> null
          - discard              = false -> null
          - emulatessd           = false -> null
          - format               = "raw" -> null
          - id                   = 0 -> null
          - iops_r_burst         = 0 -> null
          - iops_r_burst_length  = 0 -> null
          - iops_r_concurrent    = 0 -> null
          - iops_wr_burst        = 0 -> null
          - iops_wr_burst_length = 0 -> null
          - iops_wr_concurrent   = 0 -> null
          - iothread             = false -> null
          - linked_disk_id       = -1 -> null
          - mbps_r_burst         = 0 -> null
          - mbps_r_concurrent    = 0 -> null
          - mbps_wr_burst        = 0 -> null
          - mbps_wr_concurrent   = 0 -> null
          - passthrough          = false -> null
          - readonly             = false -> null
          - replicate            = false -> null
          - size                 = "40G" -> null
          - slot                 = "scsi0" -> null
          - storage              = "local-lvm" -> null
          - type                 = "disk" -> null
            # (6 unchanged attributes hidden)
        }

      - network {
          - bridge    = "V10" -> null
          - firewall  = false -> null
          - link_down = false -> null
          - macaddr   = "FE:B7:FD:8A:E5:76" -> null
          - model     = "virtio" -> null
          - mtu       = 0 -> null
          - queues    = 0 -> null
          - rate      = 0 -> null
          - tag       = -1 -> null
        }

      - serial {
          - id   = 0 -> null
          - type = "socket" -> null
        }

      - smbios {
          - uuid         = "04d42755-6f21-4af0-a899-c2e48eecd726" -> null
            # (6 unchanged attributes hidden)
        }
    }

Plan: 0 to add, 0 to change, 1 to destroy.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

proxmox_vm_qemu.k8s-1[0]: Destroying... [id=ST/qemu/102]
proxmox_vm_qemu.k8s-1[0]: Still destroying... [id=ST/qemu/102, 10s elapsed]
proxmox_vm_qemu.k8s-1[0]: Still destroying... [id=ST/qemu/102, 20s elapsed]
[... the same status line repeats every ~10s ...]
proxmox_vm_qemu.k8s-1[0]: Still destroying... [id=ST/qemu/102, 6m11s elapsed]
@Tinyblargon
Collaborator

@HafizAhmadJamil the issue you are experiencing is probably related to #1106: currently the provider waits forever on the QEMU network config because it incorrectly detects whether the VM supports cloud-init.
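
Until a fixed release lands, one stopgap some users reach for with agent-related hangs is disabling the guest-agent wait. This is a guess on my part, not a confirmed workaround for this particular cloud-init detection bug:

```hcl
resource "proxmox_vm_qemu" "k8s-1" {
  # ... rest of the configuration unchanged ...

  # With agent = 0 the provider skips waiting on the QEMU guest agent,
  # which can avoid open-ended waits during create/destroy. Tradeoff:
  # the provider no longer reports guest IPs (default_ipv4_address).
  agent = 0
}
```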

Would you be able to test with #1120?

@Tinyblargon
Collaborator

@HafizAhmadJamil could you test with the latest release?

@Tinyblargon added the issue/confirmed and resource/qemu labels on Nov 27, 2024