huckym changed the title from "pod iam-role seems to get ignored during rollout restart" to "pod gets node's iam-role instead of the role specified in its annotation, during rollout restart" on Aug 2, 2024
I have a k3s cluster on which kube2iam is deployed as a daemonset. A deployment starts out fine: it picks up the correct IAM role through kube2iam and can access the appropriate AWS resource. However, when I do a rollout restart of the deployment, it fails with the following error message:
AccessDeniedException: User: arn:aws:sts::XXX:assumed-role/NODE_IAM_ROLE/NODE_NAME is not authorized to perform: ACTION
Indeed, the intention is to grant access only to the pod's IAM role and NOT to the node's IAM role, but I am puzzled as to why the pod's role is not being assumed.
If I delete the deployment and install it afresh, it works again. Not using rollout restart is not really an option, so I am seeking hints on what I might be doing wrong. I also wonder whether our upgrade from Amazon Linux 2 to Amazon Linux 2023 broke a kube2iam setup that had been working fine all this while.
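For reference, the deployments request the role through kube2iam's `iam.amazonaws.com/role` pod annotation; a minimal sketch of the pod template is below (the names and the `POD_IAM_ROLE` value are placeholders, not our actual values):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        # kube2iam intercepts this pod's IMDS calls and assumes the annotated role
        iam.amazonaws.com/role: POD_IAM_ROLE
    spec:
      containers:
        - name: my-app
          image: my-app:latest  # placeholder image
```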
Relevant details:
Tried both kube2iam:0.11.1 and kube2iam:0.11.2 (with IMDSv2 set to optional and to required); the behavior is the same.
Node's iam policy:
Pod's iam policy:
kube2iam daemonset:
The iptables command in cloud-init:
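For anyone unfamiliar with the setup, the rule kube2iam's README describes looks roughly like the sketch below; `cni0`, port `8181`, and `NODE_IP` are assumptions here (the actual rule in our cloud-init may differ):

```sh
# Redirect pod traffic bound for the EC2 metadata service (169.254.169.254:80)
# to the kube2iam agent listening on this node.
# cni0, 8181, and NODE_IP are assumptions; match them to the cluster's bridge
# interface, kube2iam's --app-port, and the node's private IP.
iptables \
  --table nat \
  --append PREROUTING \
  --protocol tcp \
  --destination 169.254.169.254 \
  --dport 80 \
  --in-interface cni0 \
  --jump DNAT \
  --to-destination NODE_IP:8181
```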