
Can't Use API Token/Secret #687

Closed
bit2pixel opened this issue Feb 10, 2023 · 42 comments

Comments

@bit2pixel

bit2pixel commented Feb 10, 2023

Hi, I can't seem to use API tokens/secrets, though user/password works.

No matter what combination I tried, or how many permissions I granted, I still get the following:

│ Error: user does not exist or has insufficient permissions on proxmox: terraform-prov@pve!terraform-provisioner
│
│   with provider["registry.terraform.io/telmate/proxmox"],
│   on main.tf line 10, in provider "proxmox":
│   10: provider "proxmox" {
│

Can someone verify that this is working with the latest provider?
If it is working for you, can you update the example to also show how to add secrets and tokens, in case I'm doing something wrong?

Thanks!

@bit2pixel changed the title from "Can't Use API Token/ecret" to "Can't Use API Token/Secret" on Feb 10, 2023
@ryanjkemper

I just tried upgrading the proxmox provider from 2.9.11 to 2.9.13 and am getting the same error when trying to use tokens. Rolling back to 2.9.11 resolves the issue, so I suspect something breaks as part of the upgrade. :(

@iamwillbar

Thank you, I was tearing my hair out on this. I was using 2.9.13 as well and hitting this issue, downgraded to 2.9.11 and that is working fine.

@iamwillbar

I think the issue is here: 530b097#diff-2a95ccf1660fc0de50bdda8fad339f4ac84e9a991e00f5ec569caa4a49b209b9R208

I'm not sure exactly what the issue is, but passing / will only retrieve permissions assigned to that specific path, so this definitely won't work if you're using fine-grained permissions.

@anon8675309

For anyone looking to use API tokens until this is fixed: the installation docs show how to pin a particular version of the provider; after changing your configuration file, run terraform init -upgrade && terraform get -update to fetch the older version.

I can confirm that downgrading to 2.9.11 works around this bug. Version 2.9.12 doesn't exist, which is why people are suggesting to go back to 2.9.11.
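
For reference, pinning the provider version in the configuration is a small change (a minimal sketch; the source address matches the registry path shown in the errors above):

terraform {
  required_providers {
    proxmox = {
      source  = "Telmate/proxmox"
      version = "2.9.11"
    }
  }
}

After changing this, the terraform init -upgrade mentioned above will install the pinned version.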

@bit2pixel
Author

cc @ethinx @mleone87 is this related to #649 ?

@ethinx
Contributor

ethinx commented Feb 17, 2023

@bit2pixel it might be related. I just tried 2.9.11 and don't get the permission error either, and my PR could solve the problem.

@phpsystems

I have this working with API keys on version 2.9.13. Are you assigning the permissions to the actual keys? Also, the user appears to be in the 'pve' realm rather than 'pam'.

I think the split permissions might be the issue (the API key can have different permissions from the user).

@eknowlton

I can confirm this happens when I upgrade to 2.9.13 as well; I've tried with PVE users, PAM users, and the root user...

│ Error: user does not exist or has insufficient permissions on proxmox: terraform-prov@pve!terraform-token
│
│   with provider["registry.terraform.io/telmate/proxmox"],
│   on main.tf line 10, in provider "proxmox":
│   10: provider "proxmox" {
│
│ Error: user does not exist or has insufficient permissions on proxmox: ethan@pam!terraform-prov
│
│   with provider["registry.terraform.io/telmate/proxmox"],
│   on main.tf line 10, in provider "proxmox":
│   10: provider "proxmox" {
│
│ Error: user does not exist or has insufficient permissions on proxmox: root@pam!terraform-prov
│
│   with provider["registry.terraform.io/telmate/proxmox"],
│   on main.tf line 10, in provider "proxmox":
│   10: provider "proxmox" {
│

@phpsystems

What permissions does the token have assigned directly?
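
For reference, the permissions assigned directly to a token can be listed on the Proxmox host (as shown further down in this thread; <tokenid> is a placeholder for your token's name):

pveum user token permissions terraform-prov@pve <tokenid>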

@Dr-Shadow

Should be fixed in 2.9.14.

Merged request: #649

@JamborJan

I have tested it. Now I'm getting this:

│ Error: user terraform-prov@pve has valid credentials but cannot retrieve user list, check privilege separation of api token
│ 
│   with provider["registry.terraform.io/telmate/proxmox"],
│   on provider.tf line 13, in provider "proxmox":
│   13: provider "proxmox" {

The user has been created as described in the documentation. In the UI I see this:

[screenshot: API token settings in the Proxmox UI]

Toggling the checkbox doesn't change the error message.

I'm feeding these credentials via variables plus a tfvars file (manual test) or -var flags (pipeline runs).

variable "pm_api_base_url" {}
variable "pm_api_token_id" {}
variable "pm_api_token_secret" {}

Switching back to 2.9.11 makes the setup work again.

@mleone87
Collaborator

Privilege separation has a meaning beyond the checkbox. If you tick the checkbox, you must attach a role to the token (you separate the privileges of the user from the privileges of the token), and that role must have permissions. If you don't tick the checkbox, the token will have the permissions of the user it belongs to.
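
A rough CLI sketch of the two setups (assuming a role named TerraformProv as in the provider docs and a hypothetical token id mytoken; the acl modify syntax is from recent PVE releases):

# Privilege separation enabled: the token needs its own ACL entry
pveum user token add terraform-prov@pve mytoken --privsep 1
pveum acl modify / --tokens 'terraform-prov@pve!mytoken' --roles TerraformProv

# Privilege separation disabled: the token inherits the user's permissions
pveum user token modify terraform-prov@pve mytoken --privsep 0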

@JamborJan

Thanks for the explanation. As mentioned in my comment, it's not working. Are the permissions specified in the documentation not enough for the user, or have they changed for the latest version?

@A1EF

A1EF commented Mar 22, 2023

Same issue with

│ Error: user terraform-prov@pve has valid credentials but cannot retrieve user list, check privilege separation of api token
│
│   with provider["registry.terraform.io/telmate/proxmox"],
│   on providers.tf line 1, in provider "proxmox":
│    1: provider "proxmox" {

The Privilege Separation checkbox does not matter when the user's token has enough permissions. Downgrading the provider to 2.9.11 solves the issue without any changes to the PVE user, token, or their privileges.

@mleone87
Collaborator

@JamborJan the docs are incomplete in the part about privilege separation (there is no explanation of how to attach a separate role to a "privilege separated" token), so if you follow them you must disable privilege separation.
If it still does not work, please post your full user/role/ACL situation.

That's my system with 2.9.14:

Token with privilege separation enabled:
[screenshot: terraform plan failing with the permissions error]
[screenshot: ACL entries for the privilege-separated token]

Token without privilege separation enabled:
[screenshot: terraform plan succeeding]
[screenshot: ACL entries for the token]

@A1EF

The Privilege Separation checkbox does not matter when the user's token has enough permissions.

False.

If it's not working, please provide the same evidence as above.

@A1EF

A1EF commented Mar 22, 2023

False.

If it's not working, please provide the same evidence as above.

I mean that it makes no difference whether Privilege Separation is enabled or not if the user and token already have privileges. Naturally, when you create a token you must either grant it privileges or disable Privilege Separation.

You are absolutely right about the default behaviour: when you create a token, Privilege Separation is enabled by default, so you need to either grant privileges to the token or disable Privilege Separation. But if the privileges are granted explicitly, the current privsep value doesn't matter.

~# pveum user token list terraform-prov@pve
┌─────────┬─────────┬────────┬─────────┐
│ tokenid │ comment │ expire │ privsep │
╞═════════╪═════════╪════════╪═════════╡
│ deploy  │ CI/CD   │      0 │ 1       │
└─────────┴─────────┴────────┴─────────┘
~# pveum user token permissions terraform-prov@pve deploy
┌────────────────┬─────────────────────────────┐
│ ACL path       │ Permissions                 │
╞════════════════╪═════════════════════════════╡
│ /              │ Datastore.AllocateSpace (*) │
│                │ Datastore.Audit (*)         │
│                │ VM.Allocate (*)             │
│                │ VM.Audit (*)                │
│                │ VM.Clone (*)                │
│                │ VM.Config.CDROM (*)         │
│                │ VM.Config.CPU (*)           │
│                │ VM.Config.Cloudinit (*)     │
│                │ VM.Config.Disk (*)          │
│                │ VM.Config.HWType (*)        │
│                │ VM.Config.Memory (*)        │
│                │ VM.Config.Network (*)       │
│                │ VM.Config.Options (*)       │
│                │ VM.Monitor (*)              │
│                │ VM.PowerMgmt (*)            │
├────────────────┼─────────────────────────────┤
│ /access        │ Datastore.AllocateSpace (*) │
│                │ Datastore.Audit (*)         │
│                │ VM.Allocate (*)             │
│                │ VM.Audit (*)                │
│                │ VM.Clone (*)                │
│                │ VM.Config.CDROM (*)         │
│                │ VM.Config.CPU (*)           │
│                │ VM.Config.Cloudinit (*)     │
│                │ VM.Config.Disk (*)          │
│                │ VM.Config.HWType (*)        │
│                │ VM.Config.Memory (*)        │
│                │ VM.Config.Network (*)       │
│                │ VM.Config.Options (*)       │
│                │ VM.Monitor (*)              │
│                │ VM.PowerMgmt (*)            │
├────────────────┼─────────────────────────────┤
│ /access/groups │ Datastore.AllocateSpace (*) │
│                │ Datastore.Audit (*)         │
│                │ VM.Allocate (*)             │
│                │ VM.Audit (*)                │
│                │ VM.Clone (*)                │
│                │ VM.Config.CDROM (*)         │
│                │ VM.Config.CPU (*)           │
│                │ VM.Config.Cloudinit (*)     │
│                │ VM.Config.Disk (*)          │
│                │ VM.Config.HWType (*)        │
│                │ VM.Config.Memory (*)        │
│                │ VM.Config.Network (*)       │
│                │ VM.Config.Options (*)       │
│                │ VM.Monitor (*)              │
│                │ VM.PowerMgmt (*)            │
├────────────────┼─────────────────────────────┤
│ /nodes         │ Datastore.AllocateSpace (*) │
│                │ Datastore.Audit (*)         │
│                │ VM.Allocate (*)             │
│                │ VM.Audit (*)                │
│                │ VM.Clone (*)                │
│                │ VM.Config.CDROM (*)         │
│                │ VM.Config.CPU (*)           │
│                │ VM.Config.Cloudinit (*)     │
│                │ VM.Config.Disk (*)          │
│                │ VM.Config.HWType (*)        │
│                │ VM.Config.Memory (*)        │
│                │ VM.Config.Network (*)       │
│                │ VM.Config.Options (*)       │
│                │ VM.Monitor (*)              │
│                │ VM.PowerMgmt (*)            │
├────────────────┼─────────────────────────────┤
│ /pools         │ Datastore.AllocateSpace (*) │
│                │ Datastore.Audit (*)         │
│                │ VM.Allocate (*)             │
│                │ VM.Audit (*)                │
│                │ VM.Clone (*)                │
│                │ VM.Config.CDROM (*)         │
│                │ VM.Config.CPU (*)           │
│                │ VM.Config.Cloudinit (*)     │
│                │ VM.Config.Disk (*)          │
│                │ VM.Config.HWType (*)        │
│                │ VM.Config.Memory (*)        │
│                │ VM.Config.Network (*)       │
│                │ VM.Config.Options (*)       │
│                │ VM.Monitor (*)              │
│                │ VM.PowerMgmt (*)            │
├────────────────┼─────────────────────────────┤
│ /sdn           │ Datastore.AllocateSpace (*) │
│                │ Datastore.Audit (*)         │
│                │ VM.Allocate (*)             │
│                │ VM.Audit (*)                │
│                │ VM.Clone (*)                │
│                │ VM.Config.CDROM (*)         │
│                │ VM.Config.CPU (*)           │
│                │ VM.Config.Cloudinit (*)     │
│                │ VM.Config.Disk (*)          │
│                │ VM.Config.HWType (*)        │
│                │ VM.Config.Memory (*)        │
│                │ VM.Config.Network (*)       │
│                │ VM.Config.Options (*)       │
│                │ VM.Monitor (*)              │
│                │ VM.PowerMgmt (*)            │
├────────────────┼─────────────────────────────┤
│ /storage       │ Datastore.AllocateSpace (*) │
│                │ Datastore.Audit (*)         │
│                │ VM.Allocate (*)             │
│                │ VM.Audit (*)                │
│                │ VM.Clone (*)                │
│                │ VM.Config.CDROM (*)         │
│                │ VM.Config.CPU (*)           │
│                │ VM.Config.Cloudinit (*)     │
│                │ VM.Config.Disk (*)          │
│                │ VM.Config.HWType (*)        │
│                │ VM.Config.Memory (*)        │
│                │ VM.Config.Network (*)       │
│                │ VM.Config.Options (*)       │
│                │ VM.Monitor (*)              │
│                │ VM.PowerMgmt (*)            │
├────────────────┼─────────────────────────────┤
│ /vms           │ Datastore.AllocateSpace (*) │
│                │ Datastore.Audit (*)         │
│                │ VM.Allocate (*)             │
│                │ VM.Audit (*)                │
│                │ VM.Clone (*)                │
│                │ VM.Config.CDROM (*)         │
│                │ VM.Config.CPU (*)           │
│                │ VM.Config.Cloudinit (*)     │
│                │ VM.Config.Disk (*)          │
│                │ VM.Config.HWType (*)        │
│                │ VM.Config.Memory (*)        │
│                │ VM.Config.Network (*)       │
│                │ VM.Config.Options (*)       │
│                │ VM.Monitor (*)              │
│                │ VM.PowerMgmt (*)            │
└────────────────┴─────────────────────────────┘
Permissions marked with '(*)' have the 'propagate' flag set.
~# pveum user token modify terraform-prov@pve deploy --privsep 0
┌─────────┬───────┐
│ key     │ value │
╞═════════╪═══════╡
│ comment │ CI/CD │
├─────────┼───────┤
│ expire  │ 0     │
├─────────┼───────┤
│ privsep │ 0     │
└─────────┴───────┘
~# pveum user token list terraform-prov@pve
┌─────────┬─────────┬────────┬─────────┐
│ tokenid │ comment │ expire │ privsep │
╞═════════╪═════════╪════════╪═════════╡
│ deploy  │ CI/CD   │      0 │ 0       │
└─────────┴─────────┴────────┴─────────┘
~# pveum user token permissions terraform-prov@pve deploy
┌────────────────┬─────────────────────────────┐
│ ACL path       │ Permissions                 │
╞════════════════╪═════════════════════════════╡
│ /              │ Datastore.AllocateSpace (*) │
│                │ Datastore.Audit (*)         │
│                │ VM.Allocate (*)             │
│                │ VM.Audit (*)                │
│                │ VM.Clone (*)                │
│                │ VM.Config.CDROM (*)         │
│                │ VM.Config.CPU (*)           │
│                │ VM.Config.Cloudinit (*)     │
│                │ VM.Config.Disk (*)          │
│                │ VM.Config.HWType (*)        │
│                │ VM.Config.Memory (*)        │
│                │ VM.Config.Network (*)       │
│                │ VM.Config.Options (*)       │
│                │ VM.Monitor (*)              │
│                │ VM.PowerMgmt (*)            │
├────────────────┼─────────────────────────────┤
│ /access        │ Datastore.AllocateSpace (*) │
│                │ Datastore.Audit (*)         │
│                │ VM.Allocate (*)             │
│                │ VM.Audit (*)                │
│                │ VM.Clone (*)                │
│                │ VM.Config.CDROM (*)         │
│                │ VM.Config.CPU (*)           │
│                │ VM.Config.Cloudinit (*)     │
│                │ VM.Config.Disk (*)          │
│                │ VM.Config.HWType (*)        │
│                │ VM.Config.Memory (*)        │
│                │ VM.Config.Network (*)       │
│                │ VM.Config.Options (*)       │
│                │ VM.Monitor (*)              │
│                │ VM.PowerMgmt (*)            │
├────────────────┼─────────────────────────────┤
│ /access/groups │ Datastore.AllocateSpace (*) │
│                │ Datastore.Audit (*)         │
│                │ VM.Allocate (*)             │
│                │ VM.Audit (*)                │
│                │ VM.Clone (*)                │
│                │ VM.Config.CDROM (*)         │
│                │ VM.Config.CPU (*)           │
│                │ VM.Config.Cloudinit (*)     │
│                │ VM.Config.Disk (*)          │
│                │ VM.Config.HWType (*)        │
│                │ VM.Config.Memory (*)        │
│                │ VM.Config.Network (*)       │
│                │ VM.Config.Options (*)       │
│                │ VM.Monitor (*)              │
│                │ VM.PowerMgmt (*)            │
├────────────────┼─────────────────────────────┤
│ /nodes         │ Datastore.AllocateSpace (*) │
│                │ Datastore.Audit (*)         │
│                │ VM.Allocate (*)             │
│                │ VM.Audit (*)                │
│                │ VM.Clone (*)                │
│                │ VM.Config.CDROM (*)         │
│                │ VM.Config.CPU (*)           │
│                │ VM.Config.Cloudinit (*)     │
│                │ VM.Config.Disk (*)          │
│                │ VM.Config.HWType (*)        │
│                │ VM.Config.Memory (*)        │
│                │ VM.Config.Network (*)       │
│                │ VM.Config.Options (*)       │
│                │ VM.Monitor (*)              │
│                │ VM.PowerMgmt (*)            │
├────────────────┼─────────────────────────────┤
│ /pools         │ Datastore.AllocateSpace (*) │
│                │ Datastore.Audit (*)         │
│                │ VM.Allocate (*)             │
│                │ VM.Audit (*)                │
│                │ VM.Clone (*)                │
│                │ VM.Config.CDROM (*)         │
│                │ VM.Config.CPU (*)           │
│                │ VM.Config.Cloudinit (*)     │
│                │ VM.Config.Disk (*)          │
│                │ VM.Config.HWType (*)        │
│                │ VM.Config.Memory (*)        │
│                │ VM.Config.Network (*)       │
│                │ VM.Config.Options (*)       │
│                │ VM.Monitor (*)              │
│                │ VM.PowerMgmt (*)            │
├────────────────┼─────────────────────────────┤
│ /sdn           │ Datastore.AllocateSpace (*) │
│                │ Datastore.Audit (*)         │
│                │ VM.Allocate (*)             │
│                │ VM.Audit (*)                │
│                │ VM.Clone (*)                │
│                │ VM.Config.CDROM (*)         │
│                │ VM.Config.CPU (*)           │
│                │ VM.Config.Cloudinit (*)     │
│                │ VM.Config.Disk (*)          │
│                │ VM.Config.HWType (*)        │
│                │ VM.Config.Memory (*)        │
│                │ VM.Config.Network (*)       │
│                │ VM.Config.Options (*)       │
│                │ VM.Monitor (*)              │
│                │ VM.PowerMgmt (*)            │
├────────────────┼─────────────────────────────┤
│ /storage       │ Datastore.AllocateSpace (*) │
│                │ Datastore.Audit (*)         │
│                │ VM.Allocate (*)             │
│                │ VM.Audit (*)                │
│                │ VM.Clone (*)                │
│                │ VM.Config.CDROM (*)         │
│                │ VM.Config.CPU (*)           │
│                │ VM.Config.Cloudinit (*)     │
│                │ VM.Config.Disk (*)          │
│                │ VM.Config.HWType (*)        │
│                │ VM.Config.Memory (*)        │
│                │ VM.Config.Network (*)       │
│                │ VM.Config.Options (*)       │
│                │ VM.Monitor (*)              │
│                │ VM.PowerMgmt (*)            │
├────────────────┼─────────────────────────────┤
│ /vms           │ Datastore.AllocateSpace (*) │
│                │ Datastore.Audit (*)         │
│                │ VM.Allocate (*)             │
│                │ VM.Audit (*)                │
│                │ VM.Clone (*)                │
│                │ VM.Config.CDROM (*)         │
│                │ VM.Config.CPU (*)           │
│                │ VM.Config.Cloudinit (*)     │
│                │ VM.Config.Disk (*)          │
│                │ VM.Config.HWType (*)        │
│                │ VM.Config.Memory (*)        │
│                │ VM.Config.Network (*)       │
│                │ VM.Config.Options (*)       │
│                │ VM.Monitor (*)              │
│                │ VM.PowerMgmt (*)            │
└────────────────┴─────────────────────────────┘
Permissions marked with '(*)' have the 'propagate' flag set.

@mleone87
Collaborator

@A1EF thanks for your clarification, we agree 👍

Still, I don't understand how this is not working for you, since the same setup works on my side. How do you pass the auth variables? Can you also provide a debug log?

@JamborJan

In my case, adjusting the permissions as described by @mleone87 fixed the error message │ Error: user terraform-prov@pve has valid credentials but cannot retrieve user list, check privilege separation of api token.

But now I see another issue. Can someone provide the correct permissions required for deploying all types of VMs and LXCs? I guess the one I tested here has one special requirement: it should allow nesting inside an LXC. Maybe it's also a good idea to update the docs displayed on the Terraform registry for the provider.

│ Error: error creating LXC container: 403 Forbidden, error status: {"data":null} (params: {"arch":"amd64","cmode":"tty","console":true,"cores":1,"cpulimit":0,"cpuunits":1024,"features":"nesting=1","hostname":"DEV-PORT-S","memory":1024,"mp0":"zfs:8,backup=0,mp=/data","net0":"name=eth0,ip6=auto,bridge=vmbr0,ip=dhcp","onboot":false,"ostemplate":"local:vztmpl/debian-11-standard_11.3-1_amd64.tar.zst","pool":"DEV","protection":false,"rootfs":"zfs:8","ssh-public-keys":"ssh-rsa ...","start":true,"storage":"local","swap":1024,"tags":"","tty":2,"unprivileged":false,"vmid":106})

@A1EF

A1EF commented Apr 2, 2023

Still, I don't understand how this is not working for you, since the same setup works on my side.

@mleone87 it was my mistake. There were not enough permissions for the new version of the provider. I just needed to grant privileges to the role as described in the current version of the documentation, and the problem has been solved.

@github-actions

github-actions bot commented Jun 2, 2023

This issue is stale because it has been open for 60 days with no activity. Please update the provider to the latest version and, if the issue persists, provide full configuration and debug logs.

@hestiahacker
Contributor

I've been locked in at v2.9.11 since February to work around this issue. If it's expected to be fixed in the latest release, I can upgrade and re-test so we can get one step closer to closing or resolving the issue.

@JamborJan

JamborJan commented Jun 7, 2023

In my case, tokens are not working; I have to use username and password instead, and then everything works as expected. I'm running 2.9.14 of the provider.

@mleone87
Collaborator

mleone87 commented Jun 7, 2023

Tokens are extensively tested; "not working" without any other info will not be investigated.

@JamborJan

I am very keen to do more testing and give you more details. Please let us know what you need to investigate further. My comment was rather meant as a help for others experiencing issues and an option for them to work around it.

@mleone87
Collaborator

mleone87 commented Jun 8, 2023

@JamborJan in this thread you can see a lot of example screenshots and cli commands that can be used to provide details ;)

@hestiahacker
Contributor

The additional permissions needed are: Sys.Audit, Sys.Modify, Sys.Console. There is already an open issue, #784, about this being more permissions than are actually needed to deploy a VM. I'll switch to that ticket to try to get these documented in the "Creating the user and role for terraform" section of the documentation. Hopefully this addresses @JamborJan's question about what permissions are required.
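
A sketch of granting those on top of an existing role (assuming the role is named TerraformProv as in the provider docs; as far as I can tell, --privs replaces the role's privilege list unless --append is used, so double-check on your PVE version):

pveum role modify TerraformProv --privs "Sys.Audit Sys.Console Sys.Modify" --append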

The other breaking change introduced by 2.9.13 is related to the disk -> backup field. The field was changed from a number to a boolean, the default was changed from false to true, and the old values from your terraform.tfstate file are now rejected instead of being converted to the correct type. See issue #702 for a workaround of manually editing each of your terraform.tfstate files.

In my opinion, after #784 is resolved (e.g. the additional permissions are either removed, or at least documented), this ticket should be able to be closed.

@JamborJan

Awesome! Thanks @hestiahacker! That's a bunch of very useful info. I'll test during the next days and let you know the results. I'm not yet sure why authentication via user and password works while tokens don't, but I'll look into that when testing in more detail.

@hestiahacker
Contributor

hestiahacker commented Jul 21, 2023

I just realized that the exact place in the documentation that I linked to earlier showing how to create a proxmox user and role actually DOES have these three permissions listed (Sys.Audit, Sys.Modify, Sys.Console). So if this works for @JamborJan and @bit2pixel (or we don't hear back from @bit2pixel), then I'd say this ticket should be able to be closed.

I'm not yet sure why authentication via user and password works while tokens don't

Yeah, I took a quick look through the source and didn't find anything. My guess is that if you're using a token, it tries to look up the username that is associated with that token. It's even possible this was a change in an underlying library which just happened to get pulled in with the 2.9.13 release. If it is doing this type of lookup, it is clearly not needed, as it wasn't doing so in 2.9.11 and that was able to use an API token just fine.

Update to add: The error message is coming from here but I couldn't figure out what is calling CheckUserExistence() from this repo. The place where I see "user/pass" being handled differently than "API key" is here, and client.SetAPIToken just calls session.SetAPIToken which just puts the token name/value in the session's AuthToken. Maybe these pointers will help someone else track down what's going on here, mainly for the benefit of #784.

@bit2pixel
Author

bit2pixel commented Aug 22, 2023

Sys.Audit, Sys.Modify, Sys.Console

@hestiahacker All these permissions you mentioned are already set, and it's still not working with the latest release. Please don't close this ticket yet, thanks.

@JamborJan

JamborJan commented Sep 5, 2023

I am stumbling over this every once in a while when I'm creating new resources. Now I took the time and tried to track it down once again. Here's what I found, in chronological order:

  • The difference between user credentials and an API token is important. As I have a dedicated user for infrastructure as code anyway, I opted for using username and password in my terraform scripts (via variables and tfvars files).
  • I logged in as the terraform-prov user and tried to create an LXC manually, finding some problems.
  • Resource pools need permissions; I assigned the Administrator role to the group the IaC user is in.
  • The network you want to attach a container to needs some kind of permission, otherwise you cannot select it in a manual process either ...

That's where I'm stuck right now; I haven't been able to fix it yet. One of my users, specifically the IaC user, doesn't see the vmbr0 network on one of my nodes. That's the remaining issue. If I use another user for testing purposes, it works.


github-actions bot commented Nov 5, 2023

This issue is stale because it has been open for 60 days with no activity. Please update the provider to the latest version and, if the issue persists, provide full configuration and debug logs.

@hestiahacker
Contributor

I've been trying to track down some other issues in prod (likely hardware that is failing) and haven't made time for this yet. Sorry everyone.

@ehinkle27

I can confirm I have issues as well on the latest 2.9.14. If I try using an API key, I get the error below.

Planning failed. Terraform encountered an error while generating this plan.

│ Error: permissions for user/token terraformUser@pam are not sufficient, please provide also the following permissions that are missing: [Sys.Modify]

│ with provider["registry.terraform.io/telmate/proxmox"],
│ on main.tf line 10, in provider "proxmox":
│ 10: provider "proxmox" {

If I use username/password I get the below error.

Error: Plugin did not respond

│ with provider["registry.terraform.io/telmate/proxmox"],
│ on main.tf line 10, in provider "proxmox":
│ 10: provider "proxmox" {

│ The plugin encountered an error, and failed to respond to the
│ plugin.(*GRPCProvider).ConfigureProvider call. The plugin logs may contain more
│ details.

I switched to 2.9.11 and the plan completed using the API user. I created two different users for testing: one exactly as shown on the plugin site, with the exact permissions listed, and the other (the API user) as a PVE admin with access to everything. This is on a clean install of Ubuntu 22.04, with Terraform installed per the site's documentation for Linux. I am running Proxmox in a cluster and pointing to one of the nodes, which is running the Proxmox version below.

proxmox-ve: 7.4-1 (running kernel: 5.15.126-1-pve) pve-manager: 7.4-17 (running version: 7.4-17/513c62be) pve-kernel-5.15: 7.4-7 pve-kernel-5.15.126-1-pve: 5.15.126-1 pve-kernel-5.15.116-1-pve: 5.15.116-1 pve-kernel-5.15.85-1-pve: 5.15.85-1 pve-kernel-5.15.83-1-pve: 5.15.83-1 pve-kernel-5.15.74-1-pve: 5.15.74-1 pve-kernel-5.15.30-2-pve: 5.15.30-3 ceph: 17.2.6-pve1 ceph-fuse: 17.2.6-pve1 corosync: 3.1.7-pve1 criu: 3.15-1+pve-1 glusterfs-client: 9.2-1 ifupdown2: 3.1.0-1+pmx4 ksm-control-daemon: 1.4-1 libjs-extjs: 7.0.0-1 libknet1: 1.24-pve2 libproxmox-acme-perl: 1.4.4 libproxmox-backup-qemu0: 1.3.1-1 libproxmox-rs-perl: 0.2.1 libpve-access-control: 7.4.1 libpve-apiclient-perl: 3.2-1 libpve-common-perl: 7.4-2 libpve-guest-common-perl: 4.2-4 libpve-http-server-perl: 4.2-3 libpve-rs-perl: 0.7.7 libpve-storage-perl: 7.4-3 libspice-server1: 0.14.3-2.1 lvm2: 2.03.11-2.1 lxc-pve: 5.0.2-2 lxcfs: 5.0.3-pve1 novnc-pve: 1.4.0-1 proxmox-backup-client: 2.4.3-1 proxmox-backup-file-restore: 2.4.3-1 proxmox-kernel-helper: 7.4-1 proxmox-mail-forward: 0.1.1-1 proxmox-mini-journalreader: 1.3-1 proxmox-offline-mirror-helper: 0.5.2 proxmox-widget-toolkit: 3.7.3 pve-cluster: 7.3-3 pve-container: 4.4-6 pve-docs: 7.4-2 pve-edk2-firmware: 3.20230228-4~bpo11+1 pve-firewall: 4.3-5 pve-firmware: 3.6-6 pve-ha-manager: 3.6.1 pve-i18n: 2.12-1 pve-qemu-kvm: 7.2.0-8 pve-xtermjs: 4.16.0-2 qemu-server: 7.4-4 smartmontools: 7.2-pve3 spiceterm: 3.2-2 swtpm: 0.8.0~bpo11+3 vncterm: 1.7-1 zfsutils-linux: 2.1.11-pve1

@hestiahacker
Contributor

I'm currently investigating issues #704 and #887, but I plan on switching my test system to use a token instead of a username and will report back here with my test results.

@riverar
Contributor

riverar commented Jan 1, 2024

Hm, can't reproduce any token issues with Proxmox 8.0.4 + telmate provider (master). Token was issued against root@pam user as a quick test.

// provider.tf
provider "proxmox" {
  pm_api_url      = "https://proxmox.host/api2/json"
  pm_api_token_id = "xxx"
  pm_api_token_secret = "yyy"
  ...
}

@ehinkle27

If this is in reference to the comment I provided earlier on 2.9.14, have you tried creating a user as the docs suggest? I did not try an API token against root, so I'm not sure if that makes a difference. But the exact same setup works fine on 2.9.11, which is what I have been using.

@hestiahacker
Contributor

It looks like @iamwillbar nailed it with their comment. I did extensive testing under #784 and found that having the permissions on specific paths is not good enough for newer versions of the provider. You need to have the path set to "/" in order to use the API Token/Secret. You can see my full notes in this comment.

I have confirmed that this error happens whether you have privilege separation turned on or not. That does not matter. What matters is whether the Path (in the Proxmox UI -> Datacenter -> Permissions -> Add -> API Token Permission) is set to "/" or not. If it is, you're good. If it's not, you're going to have a bad time.
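
In CLI terms, that means granting the ACL entries for both the user and the token at the root path (role and token names below are placeholders; the acl modify syntax is from recent PVE releases):

pveum acl modify / --users terraform-prov@pve --roles TerraformProv
pveum acl modify / --tokens 'terraform-prov@pve!mytoken' --roles TerraformProv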

Please don't think that I think this is the correct behavior. I do not. I suggested adding a feature to skip any checks to see if permissions are going to fail so people can turn it off if they so desire. Without those checks, it may leave you in a wonky state if it gets part way through deployment and then fails due to some error, but it'd [hopefully] be a quick fix to work around this issue. If you think this sounds like a good path forward, please give my comment a thumbs up so we can gauge community interest in that approach.


github-actions bot commented Apr 1, 2024

This issue is stale because it has been open for 60 days with no activity. Please update the provider to the latest version and, if the issue persists, provide full configuration and debug logs.

@hestiahacker
Contributor

Work is going towards fixing this issue, as mentioned here.

The solution will be to dynamically determine what permissions are needed. This change will likely not be included in the 3.0.1 release, but rather a subsequent one. We are trying to get 3.0.1 shipped, and limiting the number of additional changes in that version is the only way that'll happen. Otherwise we'll always be waiting for "just one more change."

@maksimsamt
Contributor

This is probably related to #784 (comment).
My investigation and workaround


This issue is stale because it has been open for 60 days with no activity. Please update the provider to the latest version and, if the issue persists, provide full configuration and debug logs.


github-actions bot commented Aug 2, 2024

This issue was closed because it has been inactive for 5 days since being marked as stale.

github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) on Aug 2, 2024