
Doc on installing aws-iam-authenticator #270

Open
rajal-amzn opened this issue Sep 23, 2019 · 9 comments
Labels
kind/documentation Categorizes issue or PR as related to documentation. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness.

Comments

@rajal-amzn

In the Readme file, the second step to "Run the server" is not clear. Can you modify that to describe how to run the server?
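For what it's worth, based on the flags that appear in the DaemonSet manifest later in this thread, running the server by hand looks roughly like this (a sketch only; the paths are assumptions, not taken from the README):

```shell
# Sketch: flags mirror the DaemonSet args shown later in this thread;
# the binary must be on PATH and the paths must match your own layout.

# 1. Generate and persist the TLS cert, key, and kubeconfig once:
aws-iam-authenticator init -i my-dev-cluster.example.com

# 2. Run the webhook server, pointing it at the config and state dirs:
aws-iam-authenticator server \
  --config /etc/aws-iam-authenticator/config.yaml \
  --state-dir /var/aws-iam-authenticator \
  --generate-kubeconfig /etc/kubernetes/aws-iam-authenticator/kubeconfig.yaml
```

The generated kubeconfig then has to be wired into the API server's `--authentication-token-webhook-config-file` flag for the cluster to actually use it.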

@nckturner nckturner added the kind/documentation Categorizes issue or PR as related to documentation. label Oct 26, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 24, 2020
@andarob
Contributor

andarob commented Feb 10, 2020

Deployment step instructions are incomplete. Deploying the example DaemonSet causes the pod to end up in CrashLoopBackOff.

Logs show the following:

time="2020-02-10T16:21:13Z" level=info msg="generated a new private key and certificate" certBytes=804 keyBytes=1194
time="2020-02-10T16:21:13Z" level=info msg="saving new key and certificate" certPath=/var/aws-iam-authenticator/cert.pem keyPath=/var/aws-iam-authenticator/key.pem
time="2020-02-10T16:21:13Z" level=fatal msg="could not load/generate a certificate" error="open /var/aws-iam-authenticator/cert.pem: permission denied"

This happens because the container runs as UID 10000, while the DaemonSet's hostPath volumes ('/etc/kubernetes/aws-iam-authenticator/' and '/var/aws-iam-authenticator/') are created on the host owned by root (UID and GID 0).

Potential fixes are to instruct the user to create the directories on the host with the correct ownership before deploying, or to add an initContainer that fixes the permissions before the aws-iam-authenticator pod starts.
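The first option (pre-creating the hostPath directories with the right owner) might look like this on each node; UID/GID 10000 comes from the container described above, and root/sudo access is assumed:

```shell
# Sketch: run on every node that will host the DaemonSet pod.
# UID/GID 10000 is the user the aws-iam-authenticator container runs as.
sudo mkdir -p /var/aws-iam-authenticator /etc/kubernetes/aws-iam-authenticator
sudo chown 10000:10000 /var/aws-iam-authenticator /etc/kubernetes/aws-iam-authenticator
```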

initContainers:
- name: chown
  image: busybox
  command: ['sh', '-c', 'chown 10000:10000 /var/aws-iam-authenticator; chown 10000:10000 /etc/kubernetes/aws-iam-authenticator']
  volumeMounts:
  - name: state
    mountPath: /var/aws-iam-authenticator/
  - name: output
    mountPath: /etc/kubernetes/aws-iam-authenticator/

@christopherhein
Member

/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 12, 2020
joanayma pushed a commit to joanayma/aws-iam-authenticator that referenced this issue Aug 11, 2021
@yunusemrecatalcam

Why was this closed without any explanation?

@anastazya

anastazya commented May 3, 2022

The example config and the documentation are simply wrong and misleading.
Not only is the initContainer missing from the example, but the nodeSelector also needs to be removed: it makes the DaemonSet unschedulable on Amazon EKS, where the master nodes are Amazon-managed. https://stackoverflow.com/questions/60834723/aws-iam-authenticator-daemon-set-not-running

Later edit: all the API versions should also be 'v1'.

This works on Amazon EKS so far:

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: aws-iam-authenticator
rules:
- apiGroups:
  - iamauthenticator.k8s.aws
  resources:
  - iamidentitymappings
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - iamauthenticator.k8s.aws
  resources:
  - iamidentitymappings/status
  verbs:
  - patch
  - update
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - update
  - patch
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - configmaps
  resourceNames:
  - aws-auth
  verbs:
  - get

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: aws-iam-authenticator
  namespace: kube-system

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: aws-iam-authenticator
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: aws-iam-authenticator
subjects:
- kind: ServiceAccount
  name: aws-iam-authenticator
  namespace: kube-system

# ---
# EKS-Style ConfigMap: roles and users can be mapped in the same way as supported on EKS.
# If mappings are defined this way they do not need to be redefined on the other ConfigMap.
# https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html
# uncomment if using EKS-Style ConfigMap
# apiVersion: v1
# kind: ConfigMap
# metadata:
#   name: aws-auth
#   namespace: kube-system
# data:
#   mapRoles: |
#     - rolearn: <ARN of instance role (not instance profile)>
#       username: system:node:{{EC2PrivateDNSName}}
#       groups:
#         - system:bootstrappers
#         - system:nodes
#   mapUsers: |
#     - rolearn: arn:aws:iam::000000000000:user/Alice
#       username: alice
#       groups:
#         - system:masters

---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: kube-system
  name: aws-iam-authenticator
  labels:
    k8s-app: aws-iam-authenticator
data:
  config.yaml: |
    # a unique-per-cluster identifier to prevent replay attacks
    # (good choices are a random token or a domain name that will be unique to your cluster)
    clusterID: my-dev-cluster.example.com
    server:
      # each mapRoles entry maps an IAM role to a username and set of groups
      # Each username and group can optionally contain template parameters:
      #  1) "{{AccountID}}" is the 12 digit AWS ID.
      #  2) "{{SessionName}}" is the role session name, with `@` characters
      #     transliterated to `-` characters.
      #  3) "{{SessionNameRaw}}" is the role session name, without character
      #     transliteration (available in version >= 0.5).
      mapRoles:
      # statically map arn:aws:iam::000000000000:role/KubernetesAdmin to a cluster admin
      - roleARN: arn:aws:iam::000000000000:role/KubernetesAdmin
        username: kubernetes-admin
        groups:
        - system:masters
      # map EC2 instances in my "KubernetesNode" role to users like
      # "aws:000000000000:instance:i-0123456789abcdef0". Only use this if you
      # trust that the role can only be assumed by EC2 instances. If an IAM user
      # can assume this role directly (with sts:AssumeRole) they can control
      # SessionName.
      - roleARN: arn:aws:iam::000000000000:role/KubernetesNode
        username: aws:{{AccountID}}:instance:{{SessionName}}
        groups:
        - system:bootstrappers
        - aws:instances
      # map federated users in my "KubernetesAdmin" role to users like
      # "admin:alice-example.com". The SessionName is an arbitrary role name
      # like an e-mail address passed by the identity provider. Note that if this
      # role is assumed directly by an IAM User (not via federation), the user
      # can control the SessionName.
      - roleARN: arn:aws:iam::000000000000:role/KubernetesAdmin
        username: admin:{{SessionName}}
        groups:
        - system:masters
      # map federated users in my "KubernetesOtherAdmin" role to users like
      # "alice-example.com". The SessionName is an arbitrary role name
      # like an e-mail address passed by the identity provider. Note that if this
      # role is assumed directly by an IAM User (not via federation), the user
      # can control the SessionName.  Note that the "{{SessionName}}" macro is
      # quoted to ensure it is properly parsed as a string.
      - roleARN: arn:aws:iam::000000000000:role/KubernetesOtherAdmin
        username: "{{SessionName}}"
        groups:
        - system:masters
      # map federated users in my "KubernetesUsers" role to users like
      # "[email protected]". SessionNameRaw is sourced from the same place as
      # SessionName with the distinction that no transformation is performed
      # on the value. For example an email addresses passed by an identity
      # provider will not have the `@` replaced with a `-`.
      - roleARN: arn:aws:iam::000000000000:role/KubernetesUsers
        username: "{{SessionNameRaw}}"
        groups:
        - developers
      # each mapUsers entry maps an IAM role to a static username and set of groups
      mapUsers:
      # map user IAM user Alice in 000000000000 to user "alice" in "system:masters"
      - userARN: arn:aws:iam::000000000000:user/Alice
        username: alice
        groups:
        - system:masters
      # List of Account IDs to whitelist for authentication
      mapAccounts:
      # - <AWS_ACCOUNT_ID>

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  namespace: kube-system
  name: aws-iam-authenticator
  labels:
    k8s-app: aws-iam-authenticator
  annotations:
    seccomp.security.alpha.kubernetes.io/pod: runtime/default
spec:
  selector:
    matchLabels:
      k8s-app: aws-iam-authenticator
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
      labels:
        k8s-app: aws-iam-authenticator
    spec:
      # use service account with access to
      serviceAccountName: aws-iam-authenticator

      # run on the host network (don't depend on CNI)
      hostNetwork: true

      # run on each master node, but comment out for Amazon EKS, it creates unschedulable deployment                        
      # nodeSelector:
      #   node-role.kubernetes.io/master: ""   
      tolerations:
      # - effect: NoSchedule
      #   key: node-role.kubernetes.io/master
      - key: CriticalAddonsOnly
        operator: Exists

      # run `aws-iam-authenticator server` with three volumes
      # - config (mounted from the ConfigMap at /etc/aws-iam-authenticator/config.yaml)
      # - state (persisted TLS certificate and keys, mounted from the host)
      # - output (output kubeconfig to plug into your apiserver configuration, mounted from the host)
      # - initContainers is needed because of errors in EKS deployment: https://github.com/kubernetes-sigs/aws-iam-authenticator/issues/270#issuecomment-584238850
    
      initContainers:
      - name: chown
        image: busybox
        command: ['sh', '-c', 'chown 10000:10000 /var/aws-iam-authenticator; chown 10000:10000 /etc/kubernetes/aws-iam-authenticator']
        volumeMounts:
        - name: state
          mountPath: /var/aws-iam-authenticator/
        - name: output
          mountPath: /etc/kubernetes/aws-iam-authenticator/
      containers:
      - name: aws-iam-authenticator
        image: 602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-iam-authenticator:v0.5.3
        args:
        - server
        # uncomment if using EKS-Style ConfigMap
        - --backend-mode=EKSConfigMap
        - --config=/etc/aws-iam-authenticator/config.yaml
        - --state-dir=/var/aws-iam-authenticator
        - --generate-kubeconfig=/etc/kubernetes/aws-iam-authenticator/kubeconfig.yaml
      # uncomment if using the Kops Usage instructions https://sigs.k8s.io/aws-iam-authenticator#kops-usage
      # the kubeconfig.yaml is pregenerated by the 'aws-iam-authenticator init' step
      # - --kubeconfig-pregenerated=true

        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL

        resources:
          requests:
            memory: 20Mi
            cpu: 10m
          limits:
            memory: 20Mi
            cpu: 100m

        volumeMounts:
        - name: config
          mountPath: /etc/aws-iam-authenticator/
        - name: state
          mountPath: /var/aws-iam-authenticator/
        - name: output
          mountPath: /etc/kubernetes/aws-iam-authenticator/

      volumes:
      - name: config
        configMap:
          name: aws-iam-authenticator
      - name: output
        hostPath:
          path: /etc/kubernetes/aws-iam-authenticator/
      - name: state
        hostPath:
          path: /var/aws-iam-authenticator/
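After applying a manifest like the one above, a quick sanity check that the DaemonSet actually came up (standard kubectl commands, nothing specific to this repo; assumes your kubeconfig points at the cluster):

```shell
# Confirm the DaemonSet rolled out and the pod is not in CrashLoopBackOff:
kubectl -n kube-system get daemonset aws-iam-authenticator
kubectl -n kube-system get pods -l k8s-app=aws-iam-authenticator

# Inspect logs for the cert/permission errors discussed earlier in this thread:
kubectl -n kube-system logs -l k8s-app=aws-iam-authenticator
```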

@aliartiza75

aliartiza75 commented Jun 1, 2022

Hi @anastazya,

Is it working fine in your EKS environment?

I have created an IAM group, role, and policy to assume the role. I have mapped the role to the system:masters group, but when a user tries to use it, it doesn't work and generates the following error:

system:anonymous is not able to perform operations

I am not sure whether it is a misconfiguration on my side or something wrong with aws-iam-authenticator.
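For reference, a `system:anonymous` error usually means the API server never received (or never accepted) a token for the request, so the client side is worth checking too. With a self-hosted authenticator, the user's kubeconfig typically calls the binary via the exec plugin; a sketch, where the cluster ID and role ARN are placeholders that must match your server config and mapped role:

```yaml
# Client-side kubeconfig user entry (sketch): kubectl invokes
# aws-iam-authenticator to mint a token for the mapped role.
users:
- name: kubernetes-admin
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws-iam-authenticator
      args:
      - token
      - -i
      - my-dev-cluster.example.com   # must match clusterID in config.yaml
      - -r
      - arn:aws:iam::000000000000:role/KubernetesAdmin
```

If the token is minted for a role that is not in mapRoles, or the clusterID does not match, the server rejects it and the API server falls back to system:anonymous.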

@anastazya

> Hi @anastazya,
>
> Is it working fine in your EKS environment?
>
> I have created an IAM group, role, and policy to assume the role. I have mapped the role to the system:masters group, but when a user tries to use it, it doesn't work and generates the following error:
>
> system:anonymous is not able to perform operations
>
> I am not sure whether it is a misconfiguration on my side or something wrong with aws-iam-authenticator.

Yes, that's the error I am getting too, but for different reasons now. Anyway, this module and its documentation are so wrong and misleading that I'm making it a personal matter to write an article about it. I'm on parental leave until 22 July and just gathering my thoughts; I will get to the bottom of this after that date.

@aliartiza75

@anastazya thank you for the update.

@a-mykhailenko

Hi @anastazya, do you have any updates on this topic?


10 participants