Duplicate disks created #887

Open
vilhelmprytz opened this issue Dec 22, 2023 · 47 comments
Labels: issue/investigate, resource/qemu

Comments

@vilhelmprytz
Contributor

I've had the same issues with Proxmox 8 as a lot of people (like #882 and #863), but when I build this project locally from master and use that, both of those problems are solved. However, a new problem arises: the provider is supposed to increase the size of my 2G base image disk, but instead it seems to just detach that disk and create a new, empty one (with the correct size). The new disk is of course not bootable, and the terraform apply command stalls.


Any idea if this is a known problem? It didn't seem to me to be a duplicate of any other currently open issue.

Proxmox: 8.1.3
Provider: local build of latest commit, a8675d3

@everythings-gonna-be-alright

everythings-gonna-be-alright commented Dec 22, 2023

The cause is that the pxapi.NewConfigQemuFromApi function now returns disks in the Disks field of the ConfigQemu struct, while the provider code still expects them in QemuDisks. The Disks field (*QemuStorages) differs in format from QemuDisks (QemuDevices).

https://github.com/Telmate/terraform-provider-proxmox/blob/a8675d3967710bab4ac08fad9dbc05eed3ae2c58/proxmox/resource_vm_qemu.go#L1148C2-L1148C2
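
For readers unfamiliar with the two layouts, a minimal Go sketch of the shape mismatch follows. The types are simplified, illustrative stand-ins for the real structs in Telmate/proxmox-api-go (which cover more buses, slots, and disk options); only the overall shape is the point.

package main

import "fmt"

// Old layout: a loosely typed map keyed by device index,
// e.g. QemuDisks[0]["size"] = "20G".
type QemuDevices map[int]map[string]interface{}

// New layout: strongly typed, nested per bus and per slot. These are
// simplified stand-ins; the real structs also cover ide/sata/scsi,
// more slots, and many more disk options.
type QemuVirtIODisk struct {
	Size    uint
	Storage string
}

type QemuVirtIOStorage struct {
	Disk *QemuVirtIODisk
}

type QemuVirtIODisks struct {
	Disk_0 *QemuVirtIOStorage
}

type QemuStorages struct {
	VirtIO *QemuVirtIODisks
}

type ConfigQemu struct {
	QemuDisks QemuDevices   // legacy field the provider still reads
	Disks     *QemuStorages // field the library now populates
}

func main() {
	cfg := ConfigQemu{
		Disks: &QemuStorages{
			VirtIO: &QemuVirtIODisks{
				Disk_0: &QemuVirtIOStorage{
					Disk: &QemuVirtIODisk{Size: 20, Storage: "local-zfs"},
				},
			},
		},
	}

	// The provider still looks in the legacy field, finds nothing, and
	// concludes the disk it was asked to resize does not exist yet.
	fmt.Println("legacy QemuDisks:", cfg.QemuDisks)              // map[]
	fmt.Println("new Disks slot:", cfg.Disks.VirtIO.Disk_0.Disk) // &{20 local-zfs}
}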

@vilhelmprytz
Contributor Author

So have you found a solution to this problem? I'm guessing it's something that needs to be fixed with the provider, and not the .tf configuration I've used.

@everythings-gonna-be-alright

Unfortunately, it isn't easy for me (insufficient knowledge of Go).
As far as I can see, a significant part of the disk logic needs to be rewritten.

@djonko

djonko commented Dec 24, 2023

Same problem on Proxmox 8.1.3. Thanks in advance.

@vilhelmprytz
Contributor Author

@mleone87 Anything new regarding this?

@ihatethecloud

Having the same problem, but with TheGameProfi/proxmox.
It seems that defining file and volume fixes it, but you need to know the disk ID beforehand...

  disk {
    size    = "16G"
    type    = "scsi"
    storage = "vg_nvme"
    ssd     = 1
    file    = "vm-108-disk-0"
    volume  = "vg_nvme:vm-108-disk-0"
  }

Not sure how to work around this.

@andrei-matei

Having the same issue myself. For now, the provider seems to be unusable.

@hestiahacker
Contributor

I was able to reproduce this issue in Proxmox 7.4-17 as well, so it doesn't seem to be related to the version of Proxmox.

I don't think the issue is where https://github.com//issues/887#issuecomment-1868043544 pointed, because that line hasn't changed in 3 years.

I tried to find the exact commit where this stopped working so we could get a better idea of where the problem was introduced. Unfortunately, there are a lot of commits where the code doesn't compile, which made bisecting difficult. Here's the best I could do:

I spot-checked the 24 commits in between and none of them compiled except a8675d3, but that one failed because my VM name came from var.fqdn and dots were not allowed in that version (this was fixed in commit e51e787, which is where I was able to reproduce this issue).

My two guesses for where the problem was introduced are:

I. commit 4a602733bbb5b767eeb79e1b27cf98665c904bb4, specifically the block starting on line 1066, which adds an additional disk. That change was merged into the default branch in #732, or
II. commit d02bb5dba45bdb8e65ed58fccaa885fd6432e6fa, specifically the code block starting on line 2202, which transforms a list of qemu devices (including disk drives). This was merged into the default branch in #866.

Those guesses are listed in the order my gut tells me the issue lies. It's also possible that the change was in one of the libraries the provider uses; there are a ton of them, and a lot were updated between the last commit where I could confirm the issue was absent and the first where I could reproduce it.

I was actually trying to investigate issue #704 when I came across this one, so I'm going to go back to identifying the root cause of that issue. Hopefully my notes here will help someone get to the bottom of this one before another release is cut.

@riverar
Contributor

riverar commented Jan 1, 2024

Noticed my recent EFI changes were referenced here (#732). Happy to help debug as well.

@riverar

This comment was marked as outdated.

@hestiahacker
Contributor

I am able to reproduce the issue with the small example below using a provider compiled from commit e51e787 (the latest as of right now):

# Define the usual provider things
terraform {
  required_providers {
    proxmox = {
      source = "registry.example.com/telmate/proxmox"
      version = ">=1.0.0"
      #source = "Telmate/proxmox"
      #version = "=2.9.11"
    }
  }
  required_version = ">= 0.14"
}

resource "proxmox_vm_qemu" "server" {
  name              = "test.example.com"
  target_node       = "ra"
  clone             = "debian-12"
  full_clone        = true
  os_type           = "cloud-init"
  cores             = 1
  sockets           = "1"
  cpu               = "host"
  memory            = 512
  scsihw            = "virtio-scsi-pci"
  bootdisk          = "virtio0"
  disk {
    size            = "20G"
    type            = "virtio"
    cache           = "writeback"
    storage         = "local-zfs"
  }
  network {
    model           = "virtio"
    bridge          = "vmbr0"
  }

  # Cloud Init Settings
  # Reference: https://pve.proxmox.com/wiki/Cloud-Init_Support
  ipconfig0 = "ip=192.168.22.222/22,gw=192.168.22.1"
  nameserver = "192.168.23.100 192.168.22.100"
  sshkeys = file("${path.root}/test.pub")
}

I am testing against Proxmox 7.4-17. That created test.example.com with two disks: disk-0 (unused) and disk-1 (virtio0). It attempts to boot but just boot-loops because it can't find any bootable media.


@riverar
Contributor

riverar commented Jan 2, 2024

Thanks! Can reproduce here now.

@riverar
Contributor

riverar commented Jan 2, 2024

Looks like this broke due to an upstream qemu disks overhaul (Telmate/proxmox-api-go#255), as @everythings-gonna-be-alright suggested (tagging @Tinyblargon); it changed the ConfigQemu.QemuDisks behavior. (I suspect the "deprecated" label is in error and the field is really just gone/obsolete now.)

This may be expected churn for the master branch, so not faulting anyone here. We just need to do the work to bring the terraform provider back in alignment.

@Tinyblargon
Collaborator

@riverar #794 was supposed to change this behavior when the functionality was changed in the upstream library. Due to some setbacks, this kept getting delayed.

@riverar
Contributor

riverar commented Jan 2, 2024

@Tinyblargon Oh nice, I missed that PR. Thanks!

@pescobar

pescobar commented Jan 3, 2024

We are also hitting this problem with proxmox 8.1.3 and provider TheGameProfi/proxmox version 2.9.15

@pescobar

pescobar commented Jan 3, 2024

While debugging this issue I realized that once I create a new qemu VM and run terraform state show, there is no disk section in the VM's state, which is why the provider tries to add the disk again.

It's weird that during the initial creation of the VM the disk is properly created, in the right storage pool and with the right size defined in the terraform code, but it's just not added to the terraform state.
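
For background on why a value missing from state triggers re-creation, here is a toy Go model of Terraform's refresh/diff cycle. It is purely illustrative and not the provider's actual code; the attribute names and the read/plan helpers are hypothetical.

package main

import "fmt"

// Toy model: the provider's Read must copy every attribute of the real
// resource into state; anything it skips reads back as empty and shows
// up as a diff on the next plan.
type state map[string]string

func read(apiDisks string, writeDisksBack bool) state {
	s := state{"name": "test.example.com"}
	if writeDisksBack {
		s["disks"] = apiDisks // what a correct Read does
	}
	return s
}

func plan(desired, actual state) {
	changed := false
	for k, v := range desired {
		if actual[k] != v {
			fmt.Printf("  ~ %s: %q -> %q\n", k, actual[k], v)
			changed = true
		}
	}
	if !changed {
		fmt.Println("  no changes")
	}
}

func main() {
	desired := state{"name": "test.example.com", "disks": "virtio0=20G"}

	fmt.Println("Read that skips disks (the behavior described above):")
	plan(desired, read("virtio0=20G", false))

	fmt.Println("Read that writes disks back to state:")
	plan(desired, read("virtio0=20G", true))
}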

@hestiahacker
Contributor

I can confirm that the code that @Tinyblargon wrote does fix this problem. I tested using the same small example that I posted earlier this week.

I've submitted a merge request with Tinyblargon's changes that can be applied cleanly to the HEAD of the default branch here in this repo. I'm pretty sure I've resolved all the merge conflicts correctly, and I have tested it to make sure it fixes this issue, but if anyone else would be willing and able to give it a review, I'd appreciate having a second pair of eyes on this.

And if anyone wants to just check out the code, compile it, and verify that it fixes the issue, the code can be found here: https://github.com/hestiahacker/terraform-provider-proxmox/tree/overhaul-qemu-disks Having someone else reproduce my results would be good for avoiding that "works on my box" problem. 🙂

@hestiahacker
Contributor

I've also tested an updated terraform file which uses the new syntax for configuring multiple disks. This avoids a deprecation warning from being printed, which makes me happy. My updated terraform file is below.

# Define the usual provider things
terraform {
  required_providers {
    proxmox = {
      source = "registry.example.com/telmate/proxmox"
      version = ">=1.0.0"
      #source = "Telmate/proxmox"
      #version = "=2.9.11"
    }
  }
  required_version = ">= 0.14"
}

resource "proxmox_vm_qemu" "server" {
  name              = "test.example.com"
  target_node       = "ra"
  clone             = "debian-12"
  full_clone        = true
  os_type           = "cloud-init"
  cores             = 1
  sockets           = "1"
  cpu               = "host"
  memory            = 512
  scsihw            = "virtio-scsi-pci"
  bootdisk          = "virtio0"
  disks {
    virtio {
      virtio0 {
        disk {
          size            = 20
          cache           = "writeback"
          storage         = "local-zfs"
        }
      }
    }
  }
  network {
    model           = "virtio"
    bridge          = "vmbr0"
  }

  # Cloud Init Settings
  # Reference: https://pve.proxmox.com/wiki/Cloud-Init_Support
  ipconfig0 = "ip=192.168.22.222/22,gw=192.168.22.1"
  nameserver = "192.168.23.100 192.168.22.100"
  sshkeys = file("${path.root}/test.pub")
}

@victormongi

@hestiahacker what should we do? Do we have to wait until a new version is released?

@hestiahacker
Contributor

I compiled the latest code and have been using it; that seems to have fixed both this issue and #704. If you need a solution now, I'd suggest you take this route.

There are instructions for compiling from source, but if you are compiling it on a Debian-based machine, it'd look something like this:

git clone https://github.com/hestiahacker/terraform-provider-proxmox
cd terraform-provider-proxmox
git checkout overhaul-qemu-disks
sudo apt install -y ansible make
ansible-galaxy install gantsign.ansible-role-golang
ansible-playbook go.yml
. /etc/profile.d/golang.sh
make

At that point the new provider should be in the ./bin directory. If you aren't compiling it on your deployment machine, you'll need to copy the executable into a particular directory on the deployer. Here are the commands from the aforementioned installation guide:

PLUGIN_ARCH=linux_amd64
mkdir -p ~/.terraform.d/plugins/registry.example.com/telmate/proxmox/1.0.0/${PLUGIN_ARCH}
cp bin/terraform-provider-proxmox ~/.terraform.d/plugins/registry.example.com/telmate/proxmox/1.0.0/${PLUGIN_ARCH}/

The source path of the cp command will change if you are compiling on a different machine, but that should be easy enough. The last step is to tell Terraform to use this new provider. That means updating your terraform to look like this:

terraform {
  required_providers {
    proxmox = {
      source  = "telmate/proxmox"
      version = ">=1.0.0"
    }
  }
  required_version = ">= 0.14"
}

That should work, but if you ever need to recompile it, terraform will complain about the checksum having changed. To deal with that you can either manually remove the offending checksum from the .terraform.lock.hcl file with a text editor, or what I personally do is just delete that lock file and regenerate it like so:

rm .terraform.lock.hcl && terraform get -update && terraform init -upgrade && terraform version

Also, be aware that this is the latest code, which hasn't even been merged into this repo yet, and a fair amount has changed. So there's some risk of bugs causing you problems. I'd suggest you test the code in your environment and with your configuration even more than you would a new official release. I've tested it in my environment, but if you're using different features than me, you could hit some code path with a bug I didn't run into.

If you are in an environment that has a low risk tolerance and you can't test this out, I have two suggestions:

  1. Get a test environment! 😱
  2. Wait for the official release 😁

@kw149

kw149 commented Jan 5, 2024

@hestiahacker thanks so much for the instructions; I was able to follow them (which is saying something).
I'm not a developer and have only started using proxmox / terraform in the last few weeks.
I'm just "testing" this all out in my lab, so I have nothing to lose if this all goes wrong.

Previously, terraform would create a duplicate scsi disk as well as the virtio one; furthermore, each time you applied changes it would create another duplicate.
For example, making a network device change would result in another additional disk.

Anyway, moving on.
I've followed your instructions and tested it out, but I have a new error which I've not seen before:

panic: interface conversion: interface {} is string, not float64

goroutine 52 [running]:
github.com/Telmate/proxmox-api-go/proxmox.NewConfigQemuFromApi(0xc0003e8718, 0xc9d509?)
        github.com/Telmate/[email protected]/proxmox/config_qemu.go:584 +0x4605
github.com/Telmate/terraform-provider-proxmox/proxmox.resourceVmQemuCreate(0xc0003be300, {0xb66f60?, 0xc0003d2e60})
        github.com/Telmate/terraform-provider-proxmox/proxmox/resource_vm_qemu.go:972 +0x2c4d
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).create(0xdd7840?, {0xdd7840?, 0xc0002f6570?}, 0xd?, {0xb66f60?, 0xc0003d2e60?})
        github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:695 +0x178
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).Apply(0xc0000e2ee0, {0xdd7840, 0xc0002f6570}, 0xc00037cc30, 0xc0003bf080, {0xb66f60, 0xc0003d2e60})
        github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:837 +0xa85
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ApplyResourceChange(0xc0002e1ef0, {0xdd7840?, 0xc0002f6450?}, 0xc0001ef810)
        github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/grpc_provider.go:1021 +0xe8d
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ApplyResourceChange(0xc0001a0500, {0xdd7840?, 0xc000525b60?}, 0xc00043fdc0)
        github.com/hashicorp/[email protected]/tfprotov5/tf5server/server.go:818 +0x574
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ApplyResourceChange_Handler({0xc6bc20?, 0xc0001a0500}, {0xdd7840, 0xc000525b60}, 0xc00043fd50, 0x0)
        github.com/hashicorp/[email protected]/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:385 +0x170
google.golang.org/grpc.(*Server).processUnaryRPC(0xc00019a1e0, {0xddb420, 0xc0003144e0}, 0xc000515680, 0xc0002edce0, 0x128f7a0, 0x0)
        google.golang.org/[email protected]/server.go:1336 +0xd23
google.golang.org/grpc.(*Server).handleStream(0xc00019a1e0, {0xddb420, 0xc0003144e0}, 0xc000515680, 0x0)
        google.golang.org/[email protected]/server.go:1704 +0xa2f
google.golang.org/grpc.(*Server).serveStreams.func1.2()
        google.golang.org/[email protected]/server.go:965 +0x98
created by google.golang.org/grpc.(*Server).serveStreams.func1
        google.golang.org/[email protected]/server.go:963 +0x28a

Error: The terraform-provider-proxmox_v2.9.14 plugin crashed!

This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.

I've not changed any of the .tf files, so either I have a misconfiguration or, as you mentioned somewhere, there is potential for errors.
Thanks again for your efforts.
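
A note on the crash above: the trace shows proxmox-api-go's NewConfigQemuFromApi panicking on an unchecked type assertion while decoding the VM config returned by the API. The sketch below is illustrative only (the vmParams map and the "size" key are hypothetical stand-ins, not the library's actual code); it shows why the assertion panics and the type-switch form that avoids it.

package main

import "fmt"

func main() {
	// Values decoded from the API's JSON land in interface{}. If a field
	// arrives as a string where the code expects a JSON number, an
	// unchecked assertion panics exactly like the trace above:
	vmParams := map[string]interface{}{"size": "16G"} // hypothetical field

	// size := vmParams["size"].(float64) // panic: interface {} is string, not float64

	// A type switch (or the comma-ok form) handles both encodings safely.
	switch v := vmParams["size"].(type) {
	case float64:
		fmt.Printf("numeric size: %v\n", v)
	case string:
		fmt.Printf("string size: %q\n", v)
	default:
		fmt.Println("unexpected type for size")
	}
}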

@kw149

kw149 commented Jan 6, 2024

Okay, ignore the above: I made a couple of mistakes. I'll post them here in case others do the same thing.

I forgot to update my providers file

proxmox = {
  source  = "registry.example.com/telmate/proxmox"
  version = ">=1.0.0"
}

  • then follow the instructions to clean the lock file

I also needed to update my terraform file, so I changed

#  disk {
#    storage = "toaster-vm"
#    type = "scsi"
#    size = "12G"
#  }

disks {
  virtio {
    virtio0 {
      disk {
        size    = 12
        storage = "toaster-vm"
      }
    }
  }
}


I can confirm it's working now. I have an LXC container, Rocky 8 and 9, and Ubuntu VMs all working within the same plan.

For completeness, I did the following:

I had to run the compile as root; I kept getting ansible/golang errors (built using an LXC container, so that might have something to do with it).
I copied the compiled plugin from /tmp/{compile directory} to my terraform directory (not my home dir).

I changed the provider file, followed the instructions to fix the lock file,
then modified the .tf file(s) for the disk declaration.

However, when I changed the plan (made a modification to the description),
it made some modifications:

~ disks {
          - ide {
            }
          ~ virtio {
              ~ virtio0 {
                  ~ disk {
                        id                   = 0
                      - replicate            = true -> null
                        # (18 unchanged attributes hidden)
                    }
                }
            }
        }

The disk is not duplicated, but it now has a replicate flag?


@Tchoupinax

Hey!
I've been following this issue as I'm also impacted since upgrading Proxmox to v8.1.3.
After following the instructions to build the latest version of the provider, I can now start a VM without error. However, the configuration is not the same as before, and not what I expected: it seems the cloud-init disk is not mounted.

Before: [screenshot of the VM hardware]

After: [screenshot of the VM hardware]

What is wrong here:

  • Disk size is 10G, which is the size of the template. It should have been overridden (it worked before)
  • There is no cloud-init drive
  • The hard disk has additional parameters which it did not have before

@Tinyblargon
Collaborator

@Tchoupinax I've run a few tests on my end; could you try the version in #892, as it has the latest patches?

@Tchoupinax

Hey @Tinyblargon,
I tested your branch and it fixes my previous three issues. I wrote a comment here.

Thanks a lot for your work!

@TheGameProfi

@TheGameProfi do you plan to publish a new version in your repository including this fix?

I will try to look into it today or tomorrow.

@TheGameProfi

Sorry, I only now got time to check it out.

I tested the newest changes and they worked; I didn't see any errors.
I've released the new fixes in my repo.

Thanks Hestia & mleone87 & Tinyblargon for the fix :)

@pescobar

thanks @TheGameProfi for publishing a release to terraform registry including this fix. Much appreciated!

GabrielKrueger referenced this issue Jan 16, 2024
* feat: re-implement the way qemu disks are handled

These changes were pulled from Tinyblargon's branch which was out of sync
with the Telmate master branch. I merely dealt with the merge conflicts so
we could re-submit a new merge request that can be applied cleanly.

Ref: #794

* fix: no functional change, just making github CI happy

* Update proxmox-api-go dependency

* Apply patches

* fix: typos

* fix: panic when `disks` is empty

* docs: change disks property names

* chore: update dependencies

* Add debug logging

---------

Co-authored-by: hestia <[email protected]>
Co-authored-by: mleone87 <[email protected]>
@opentokix

thanks @TheGameProfi for publishing a release to terraform registry including this fix. Much appreciated!

But there is no new version on the hashicorp registry? Or is it published under some other name than Telmate?

@TheGameProfi

thanks @TheGameProfi for publishing a release to terraform registry including this fix. Much appreciated!

But there is no new version on the hashicorp registry? Or is it published under some other name than Telmate?

It is published as a fork under my own name, thegameprofi/proxmox.
There are multiple versions with unreleased changes from this repo.
Note that at least the newest version has problems mounting a cloud-init drive:
#901

@den-patrakeev

Hi!
I checked RC v3.0.1-rc1 on Terraform 1.6.6 / 1.7.1 with Proxmox 8.0.4 / 8.1.3.
The error is gone. Everything works well.
Thanks to all!

@hestiahacker
Contributor

I've also verified that v3.0.1-rc1 is able to deploy the small example without any problems.

Thank you all for the testing, fixing, and releasing. 🙂

@devZer0

devZer0 commented Feb 9, 2024

I still have this problem with terraform v1.7.3, plugin 3.0.1-rc1, and PVE 8.1.4: on clone from a template I still get 2 disks and the VM won't boot, even with the new disks syntax.

What can I do to avoid the second disk being created?

I'm new to terraform and it's totally frustrating.

@Tinyblargon
Collaborator

@devZer0, could you create a new issue with a screenshot of the hardware of your template, your terraform config, and a screenshot of the hardware of the cloned vm?

@devZer0

devZer0 commented Feb 9, 2024

Thank you. I tried further and found by chance that when I set the disk size in the terraform file to exactly match the size of the disk in the VM template, it won't happen and the disk doesn't get duplicated.

Weird.

@kerem-ak1

[quoting @ihatethecloud's file/volume workaround from earlier in the thread]

@ihatethecloud I wanted to try this workaround, but it's failing at parameter verification. I'm using ZFS though; not sure if that's related or not.

latest version of TheGameProfi/proxmox
proxmox version 8.1.4

file = "vm-137-disk-0"
volume = "local-zfs:vm-137-disk-0

Am I doing something wrong?

@ihatethecloud

[quoting kerem-ak1's question above]

Don’t use TheGameProfi/proxmox. Build this repo if it has not been pushed to the registry yet.

@kerem-ak1

kerem-ak1 commented Feb 19, 2024

[quoting the exchange above]

version = "3.0.1-rc1" is out from telmate provider.

new version comes with different disk schema, I had to update my disk schema in tf file, now it works. its still has some glitches regarding disk type/controller though. anway thx to everyone !

ps:if someone needs a running example with clone_ _from_template+cloudinit+lvm/zfs can ping me.

@JGHLab

JGHLab commented Feb 22, 2024

[quoting the exchange above]

Could you give me a working example? I can't get mine to work: I get an unbootable disk error and the hardware settings are incorrect.

@kerem-ak1

kerem-ak1 commented Feb 22, 2024

https://pastebin.com/8RJnNYUK >> you can use this one for proxmox
https://pastebin.com/CPMEcxx7 >> if you need a cloud init template
@JGHLab


This issue is stale because it has been open for 60 days with no activity. Please update the provider to the latest version and, if the issue persists, provide full configuration and debug logs.

@devZer0
Copy link

devZer0 commented Apr 23, 2024

not stale


This issue is stale because it has been open for 60 days with no activity. Please update the provider to the latest version and, if the issue persists, provide full configuration and debug logs.

@sebdanielsson

/keep open

Tinyblargon added the resource/qemu label on Nov 27, 2024