
pod gets node's iam-role instead of the role specified in its annotation, during rollout restart #377

huckym commented Aug 1, 2024

I have a k3s cluster on which kube2iam is deployed as a DaemonSet. The deployment starts out fine: the pod picks up the correct IAM role through kube2iam and can access the appropriate AWS resource. However, when I do a rollout restart of the deployment, it fails with the following error:

AccessDeniedException: User: arn:aws:sts::XXX:assumed-role/NODE_IAM_ROLE/NODE_NAME is not authorized to perform: ACTION

Indeed, the intention is to grant access only to the pod's IAM role and NOT to the node's, but I am puzzled as to why the pod's role is not being assumed.

If I delete the deployment and install it afresh, it works again. Avoiding rollout restarts is not really an option, so I am looking for hints on what I might be doing wrong. I wonder whether our upgrade from Amazon Linux 2 to Amazon Linux 2023 broke a kube2iam setup that had been working fine all along.
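
For reference, this is how I check which role is actually being served, before and after the restart (the pod name is a placeholder, and curl must exist in the app image):

# From inside the application pod; this should print the pod's role
# name, not the node's
kubectl exec myapp-pod -- curl -s 169.254.169.254/latest/meta-data/iam/security-credentials/

# On the node itself: kube2iam's debug store (enabled by --debug)
# dumps the pod-to-role mappings it currently knows about
curl -s localhost:8181/debug/store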

Relevant details:
I tried both kube2iam:0.11.1 and 0.11.2 (with IMDSv2 set to optional and to required), but the behavior is the same.
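
Since IMDSv2 is in the mix, this is how I confirm what the instances actually enforce (the instance id is a placeholder):

# HttpTokens shows whether IMDSv2 is optional or required; kube2iam
# runs with hostNetwork, so HttpPutResponseHopLimit=1 should be enough
# for kube2iam itself
aws ec2 describe-instances \
  --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[].Instances[].MetadataOptions'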

Node's IAM policy:

data "aws_iam_policy_document" "trust-policy-node" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}

data "aws_iam_policy_document" "role-policy-node" {
  statement {
    actions = [
      "sts:AssumeRole"
    ]
    resources = ["*"]
  }
  statement {
    actions = [
      "ec2:DescribeRegions",
    ]
    resources = ["*"]
  }
}

resource "aws_iam_role" "role-node" {
  name               = local.project_prefix
  assume_role_policy = data.aws_iam_policy_document.trust-policy-node.json

  inline_policy {
    name   = local.project_prefix
    policy = data.aws_iam_policy_document.role-policy-node.json
  }
}
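
As a sanity check, the node role should be able to assume the pod role directly from the node itself, using the instance credentials (account id and role name are placeholders); if this succeeds, the trust relationship is not the problem:

# Run on the node; uses the node's instance-profile credentials
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/myproject-app \
  --role-session-name kube2iam-test \
  --duration-seconds 900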

Pod's IAM policy:

data "aws_iam_policy_document" "role-policy-app" {
  statement {
    actions = [
      "s3:Get*",
      "s3:List*"
    ]
    resources = [
      "arn:aws:s3:::mybucket",
      "arn:aws:s3:::mybucket/*"
    ]
  }
}

data "aws_iam_policy_document" "trust-policy-pod" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "AWS"
      identifiers = [aws_iam_role.role-node.arn]
    }
  }
}

resource "aws_iam_role" "role-app" {
  name               = "${local.project_prefix}-app"
  assume_role_policy = data.aws_iam_policy_document.trust-policy-pod.json

  inline_policy {
    name   = "${local.project_prefix}-app"
    policy = data.aws_iam_policy_document.role-policy-app.json
  }
}
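
The deployment references this role through the annotation that kube2iam matches via --iam-role-key; with --auto-discover-base-arn the bare role name is enough. A quick check that the annotation survives the rollout restart (the deployment name is a placeholder):

# The pod *template* must carry the annotation, not just the deployment
kubectl get deployment myapp \
  -o jsonpath='{.spec.template.metadata.annotations.iam\.amazonaws\.com/role}'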

kube2iam DaemonSet:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube2iam
spec:
  # apps/v1 requires a selector that matches the pod template labels;
  # "name: kube2iam" is an assumed/conventional label
  selector:
    matchLabels:
      name: kube2iam
  template:
    metadata:
      labels:
        name: kube2iam
    spec:
      serviceAccountName: kube2iam
      hostNetwork: true
      containers:
      - image: jtblin/kube2iam:0.11.2
        imagePullPolicy: Always
        name: kube2iam
        args:
        - "--auto-discover-base-arn"
        - "--app-port=8181"
        - "--iam-role-key=iam.amazonaws.com/role"
        - "--iam-role-session-ttl=900s"
        - "--node=$(NODE_NAME)"
        - "--debug"
        - "--log-level=info"
        env:
        - name: AWS_REGION
          value: <region>
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        ports:
        - name: http
          containerPort: 8181
          hostPort: 8181
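
While reproducing, I follow kube2iam's log on the node where the restarted pod lands (the pod name is a placeholder; bumping --log-level to debug makes each role resolution visible):

# kube2iam-xxxxx = the kube2iam pod on the affected node
kubectl logs -f kube2iam-xxxxx | grep -i role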

The iptables command in cloud-init:

# Reroute metadata service requests to kube2iam
iptables \
  --append PREROUTING \
  --protocol tcp \
  --destination 169.254.169.254 \
  --dport 80 \
  --in-interface cni0 \
  --jump DNAT \
  --table nat \
  --to-destination `curl 169.254.169.254/latest/meta-data/local-ipv4`:8181
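
One thing worth verifying on the node: with IMDSv2 set to required, the token-less curl above is rejected with a 401, so the command substitution does not yield a valid address and the rule will not install correctly. To confirm the redirect is actually in place:

# The kube2iam redirect should appear exactly once, with a non-empty
# --to-destination
iptables --table nat --list PREROUTING --numeric | grep 8181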