
[EKS] [request]: Manage IAM identity cluster access with EKS API #185

Closed
ayosec opened this issue Mar 5, 2019 · 119 comments
Labels
EKS (Amazon Elastic Kubernetes Service) · Proposed (Community submitted issue)

Comments

@ayosec

ayosec commented Mar 5, 2019

Tell us about your request

CloudFormation resources to register IAM roles in the aws-auth ConfigMap.

Which service(s) is this request for?
EKS

Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?

A Kubernetes cluster managed by EKS can authenticate users with IAM roles. This is very useful for granting access to Lambda functions. However, as described in the documentation, every IAM role has to be registered manually in a ConfigMap named aws-auth.

For every IAM role we add to the CloudFormation stack, we have to add an entry like this:

mapRoles: |
  - rolearn: "arn:aws:iam::11223344:role/stack-FooBarFunction-AABBCCDD"
    username: lambdafoo
    groups:
      - system:masters
  - ...

This process is a bit tedious, and it is hard to automate.

It would be much better if those IAM roles could be registered directly in the CloudFormation template, for example with something like this:

LambdaKubeUser:
  Type: AWS::EKS::MapRoles::Entry
  Properties:
    Cluster: !Ref EKSCluster
    RoleArn: !GetAtt FunctionRole.Arn
    UserName: lambdafoo
    Groups:
      - system:masters

CloudFormation would then add and remove entries in the ConfigMap as necessary, with no extra manual steps.

A similar AWS::EKS::MapUsers::Entry resource could be used to register IAM users in mapUsers.

With this addition, we could also automate the extra step of registering the worker nodes' IAM role when a new EKS cluster is created:

NodeInstanceKubeUser:
  Type: AWS::EKS::MapRoles::Entry
  Properties:
    Cluster: !Ref EKSCluster
    RoleArn: !GetAtt NodeInstanceRole.Arn
    UserName: system:node:{{EC2PrivateDNSName}}
    Groups:
      - system:bootstrappers
      - system:nodes
@ayosec added the Proposed (Community submitted issue) label Mar 5, 2019
@tabern added the EKS (Amazon Elastic Kubernetes Service) label Mar 29, 2019
@abelmokadem

abelmokadem commented Apr 29, 2019

@ayosec have you created something to automate this as of now? I'm running into this when setting up a cluster using CloudFormation. Do you mind sharing your current approach?

@ayosec
Author

ayosec commented Apr 29, 2019

have you created something to automate this as of now?

Unfortunately, no. I haven't found a reliable way to make it 100% automatic.

Do you mind sharing your current approach?

My current approach is to generate the ConfigMap using a template:

  1. All relevant ARNs are available in the outputs of the stack.
  2. A Ruby script reads those outputs, and fills a template.
  3. Finally, the generated YAML is applied with kubectl apply -f -.
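
For illustration, a minimal shell sketch of that workflow (stack, output, and user names are hypothetical; the Ruby templating step is replaced by a plain heredoc):

#!/usr/bin/env bash
set -euo pipefail

# 1. Read the role ARN from the stack outputs (names are made up for the example).
ROLE_ARN=$(aws cloudformation describe-stacks \
  --stack-name my-stack \
  --query "Stacks[0].Outputs[?OutputKey=='FooBarFunctionRoleArn'].OutputValue" \
  --output text)

# 2 + 3. Fill the template and apply the generated YAML.
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: "${ROLE_ARN}"
      username: lambdafoo
      groups:
        - system:masters
EOF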

@dardanbekteshi

Adding this feature to CloudFormation would allow the same feature to be added to AWS CDK. This would greatly simplify the process of adding and removing nodes, for example.

@willejs

willejs commented Jul 2, 2019

I also thought about this. An API to manage the config map for aws-iam-authenticator is interesting, but I think it would be a bit clunky. I am using Terraform to create an EKS cluster, and this approach is a lot nicer:
terraform-aws-modules/terraform-aws-eks#355

@inductor

I'd love this

@schlomo

schlomo commented Nov 28, 2019

Anybody from AWS care to comment on this feature request?

@mikestef9
Contributor

mikestef9 commented Nov 28, 2019

With the release of Managed Nodes with CloudFormation support, EKS now automatically handles updating the aws-auth config map for joining nodes to a cluster.

Does this satisfy the initial use case here, or is there a separate ask to manage adding users to the aws-auth config map via CloudFormation?

@inductor

@mikestef9 I think #554 is a similar issue that shows why this kind of option is wanted.

@ayosec
Author

ayosec commented Nov 28, 2019

@mikestef9

Does this satisfy the initial use case here, or is there a separate ask to manage adding users to the aws-auth config map via CloudFormation?

My main use case is with Lambda functions.

The managed nodes feature is pretty cool, and very useful for new EKS clusters, but most of our modifications to the aws-auth ConfigMap are to add or remove roles for Lambda functions.

@tnh

tnh commented Jan 8, 2020

@mikestef9 It would be useful to also allow people/roles to run kubectl commands.

Right now we have a CI deploy role, but we want to allow other SAML-based users to use kubectl as well.

We apply this with kubectl after cluster creation:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: {{ ClusterAdminRoleArn }}
      username: system:node:{{ '{{EC2PrivateDNSName}}' }}
      groups:
        - system:bootstrappers
        - system:nodes
    - rolearn: arn:aws:iam::{{ AccountId }}:role/{{ AdminRoleName }}
      username: admin
      groups:
        - system:masters
    - rolearn: arn:aws:iam::{{ AccountId }}:role/{{ CIRoleName }}
      username: ci
      groups:
        - system:masters
    - rolearn: arn:aws:iam::{{ AccountId }}:role/{{ ViewRoleName }}
      username: view

But I'd much rather have this ConfigMap created by me during cluster creation.

@nckturner

nckturner commented Feb 19, 2020

@mikestef9 Some relevant issues related to EKS users debugging authentication problems (kubernetes-sigs/aws-iam-authenticator#174 and kubernetes-sigs/aws-iam-authenticator#275) that IMO are data points in favor of API and CloudFormation management of auth mappings (and a configurable admin role: #554).

@nemo83

nemo83 commented Feb 25, 2020

This ^^. How can we get this implemented? Can anyone from AWS tell us whether this will be supported at the CloudFormation template level, or whether a workaround is needed at the eksctl level?

@inductor

@nemo83 The AWS team has tagged this issue as being researched, so that is its current stage.

@nicorikken

nicorikken commented Aug 18, 2020

I'm also looking into automating updates to this ConfigMap from CloudFormation. Doing so via a Lambda seems doable.

My main concern with automation is race conditions on the contents of the ConfigMap when applying updates, since the content has to be parsed; a strategic merge is not possible. If the configuration were implemented in one or more CRDs (one per entry), it would be easier to apply a patch. In that case, existing efforts on Kubernetes support for CloudFormation, like kubernetes-resources-provider, could be reused.
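
To make the race concrete: any updater has to read the whole mapRoles string, parse it, modify it, and write it back, so two concurrent writers can silently overwrite each other. A sketch of the fragile read-modify-write cycle:

# Non-atomic: between the read and the apply, another writer may change aws-auth.
kubectl -n kube-system get configmap aws-auth -o jsonpath='{.data.mapRoles}' > mapRoles.yaml
# ... parse mapRoles.yaml and append/remove the desired entry ...
kubectl -n kube-system create configmap aws-auth \
  --from-file=mapRoles=mapRoles.yaml \
  --dry-run=client -o yaml | kubectl apply -f -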

Update: we gave up on writing a Lambda to update the ConfigMap. The code became too complex and fragile. We now template it separately.

Update 2: I had a concern that automatic updates could corrupt the ConfigMap and thereby prevent API access. With the current workings of AWS (1 Sept 2020), there is a way to recover from aws-auth ConfigMap corruption:

aws-auth ConfigMap recovery (tested 1 Sept 2020)

The prerequisite is to have a pod running in the cluster with a ServiceAccount that can update the aws-auth ConfigMap, ideally something you can interact with, like the Kubernetes dashboard or, in our case, ArgoCD.

Then, if aws-auth becomes corrupt, you can hopefully still update the ConfigMap that way.

If that is not possible because the nodes have lost their access, you can use an EKS-managed Node Group to restore node access to the Kubernetes API: create an EKS-managed Node Group of just one node with the role that is also used by your cluster nodes. (Note: this is not recommended by AWS, but we abuse AWS's ability to update the ConfigMap on the managed control plane.)

AWS will now add this role to the aws-auth ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    ...
    # this entry is added by AWS automatically
    - rolearn: < NodeInstanceRole ARN >
      username: system:node:{{EC2PrivateDNSName}}
      groups:
      - system:bootstrappers
      - system:nodes

Deleting that Node Group will remove the entry again (AWS warns you about this), so the ServiceAccount access is required to ensure another method of cluster access, such as the kubectl CLI. Update the aws-auth ConfigMap to establish that access, then remove the Node Group, which in turn removes the aws-auth entry that was automatically created earlier. The remaining access path (e.g. the kubectl CLI) can then be used to permanently fix the ConfigMap and ensure the nodes have access.

Note: if a service is automatically but incorrectly updating the ConfigMap, it would be harder, if not impossible, to recover. ⚠
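
A minimal CLI sketch of the node-group recovery step described above (all names, ARNs, and subnet IDs are placeholders):

# Creating the managed group makes EKS re-add the node role to aws-auth.
aws eks create-nodegroup \
  --cluster-name my-cluster \
  --nodegroup-name aws-auth-recovery \
  --node-role arn:aws:iam::111122223333:role/NodeInstanceRole \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --scaling-config minSize=1,maxSize=1,desiredSize=1

# After restoring another access path and fixing aws-auth, delete the group
# (EKS will also remove the aws-auth entry it created for it).
aws eks delete-nodegroup --cluster-name my-cluster --nodegroup-name aws-auth-recovery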

@hellupline

I would go an extra mile and ask AWS to create an API to manage aws-auth, with an associated IAM action.

If I delete the IAM role/user that created the cluster (detail: this user/role is not visible afterwards; you have to save this info outside the cluster, or tag the cluster with it) and I haven't added another admin to the cluster, I am now locked out of the cluster.

For me, this is a major issue, because I use federated auth; users (and my day-to-day account) are ephemeral, and my user can be recreated without warning under another name/ID.

The idea is: can AWS add an IAM action like ESHttpGet/ESHttpPost? (The example is from ElasticSearch, because it is third-party software.)

@mikestef9
Contributor

Hey @hellupline

We are actually working on exactly that right now: an EKS API to manage IAM users and their permissions to an EKS cluster. This will allow you to manage IAM users via IaC tools like CloudFormation.

@yanivpaz

@mikestef9 How is this going to be different compared to the following? https://github.com/aws-quickstart/quickstart-amazon-eks/blob/main/templates/amazon-eks-controlplane.template.yaml#L109

@markussiebert

markussiebert commented Dec 1, 2020

I wonder why this isn't possible with EKS clusters, when it is possible with self-hosted k8s clusters on AWS:

https://github.com/kubernetes-sigs/aws-iam-authenticator#crd-alpha

Even looking at the CDK implementation of auth mapping, it would be simple to get rid of some limitations that exist right now (stack boundaries, imported clusters, ...).

So if something like CloudFormation support for auth mapping is implemented (which I support), it would be good if it didn't conflict with the CRDs that I hope are coming to EKS soon.
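
For reference, the CRD-based mapping from that link looks roughly like this (an alpha feature of aws-iam-authenticator, not enabled on EKS; the ARN is a placeholder). One object per mapping means entries can be added or removed without a read-modify-write of a shared ConfigMap:

kubectl apply -f - <<'EOF'
apiVersion: iamauthenticator.k8s.aws/v1alpha1
kind: IAMIdentityMapping
metadata:
  name: kubernetes-admin
spec:
  arn: arn:aws:iam::111122223333:role/KubernetesAdmin
  username: kubernetes-admin
  groups:
    - system:masters
EOF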

@gp42

gp42 commented Jan 1, 2021

Hey @hellupline

We are actually working on exactly that right now: an EKS API to manage IAM users and their permissions to an EKS cluster. This will allow you to manage IAM users via IaC tools like CloudFormation.

Any news on this issue?

@lynnnnnnluo

Thanks for your reply! It turns out there is a code bug in the CFN update; we will work on a fix as soon as possible.

@lynnnnnnluo

Hello, the fix for the above issue has shipped. Thanks again for the feedback.

@joebowbeer
Contributor

@mikestef9 Do both the EKS User Guide and EKS Best Practices Guide need updating now that the Cluster Access Manager API has been added and is the preferred way to manage access of AWS IAM principals to Amazon EKS clusters?

aws/aws-eks-best-practices#463

@Nuru

Nuru commented Feb 22, 2024

@mikestef9 Thank you very much for the detailed documentation on this set of features.

There seems to be a feature missing from the CLI, and I don't know where to report it. As you said, you can associate an IAM principal with Kubernetes groups, without associating an access policy:

aws eks create-access-entry --cluster-name my-cluster \
   --principal-arn arn:aws:iam::012345678910:role/MyRole \
   --kubernetes-groups dev test

However, I cannot find a way to retrieve the list of Kubernetes groups associated with a principal via the CLI. I would expect this to be either get-access-entry or an optional additional output of list-access-entries (or both), but no such luck.

Update

🤦 Thank you @bryantbiggs for telling me the command I want is there, I just didn't see it:

aws eks describe-access-entry \
   --cluster-name my-cluster \
   --principal-arn arn:aws:iam::012345678910:role/MyRole

@bryantbiggs
Member

describe-access-entry contains the kubernetesGroups property
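
For example, reusing the cluster and role names from the comment above:

aws eks describe-access-entry \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::012345678910:role/MyRole \
  --query 'accessEntry.kubernetesGroups'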

@dennispan

@mikestef9 Regarding SSO/IAM Identity Center

You still need to add each auto generated role as a separate access entry.

Where do we get the auto-generated role ARN for federated users? The ARN from aws sts get-caller-identity is rejected when adding an Access Entry with it.

@kennethredler
Copy link

@mikestef9 Regarding SSO/IAM Identity Center

You still need to add each auto generated role as a separate access entry.

Where do we get the auto-generated role ARN for federated users? The ARN from aws sts get-caller-identity is rejected when adding an Access Entry with it.

What principal type are you using (role, user, etc.)? What does the ARN you are attempting to use with an access entry look like? What error message do you see?

@dennispan

@kennethredler This is a role from SSO, e.g. arn:aws:sts::012345678901:assumed-role/AWSReservedSSO_UserAccess_d2506a67da7d7d9b/[email protected]

This returns an error:

The principalArn parameter format is not valid

I've also tried changing its format to one that works with the aws-auth ConfigMap, e.g. arn:aws:iam::012345678901:role/AWSReservedSSO_UserAccess_d2506a67da7d7d9b

That gives this error:

The specified principalArn is invalid: invalid principal.

@MartinEmrich

Try removing the role/. This snippet is from a working eksctl ClusterConfig:

accessConfig:
  authenticationMode: API_AND_CONFIG_MAP
  accessEntries:
    - principalARN: "arn:aws:iam::123456789012:AWSReservedSSO_MyAdministrators_123456789abcdef"
      type: STANDARD
      kubernetesGroups:
        - system:masters

@dennispan

Still got: The principalArn parameter format is not valid

@joshuabaird

joshuabaird commented Oct 10, 2024

I can confirm that adding an ARN like arn:aws:iam::1234455435:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_Blah_834943dcf9f248 to an access entry works just fine.
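
That is, keeping the full role path when creating the entry, along these lines (placeholder values):

# Note the full path (role/aws-reserved/sso.amazonaws.com/...), not the
# path-stripped form the aws-auth ConfigMap historically used.
aws eks create-access-entry \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::123456789012:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_Blah_834943dcf9f248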

@bryantbiggs
Member

bryantbiggs commented Oct 10, 2024

If you use Terraform, you can use the aws_iam_roles data source, which accepts a regex, and pass the first match to the access entry. This helps work around the random bits at the end of the generated identity-store role name:

data "aws_iam_roles" "admin" {
  name_regex  = "AWSReservedSSO_MyAdministrators_.*"
  path_prefix = "/aws-reserved/sso.amazonaws.com/"
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.24"

  # Truncated for brevity ...

  authentication_mode = "API"

  access_entries = {
    operators = {
      principal_arn = one(data.aws_iam_roles.admin.arns)

      policy_associations = {
        operators = {
          policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
          access_scope = {
            type = "cluster"
          }
        }
      }
    }
  }
}

@bryantbiggs
Member

bryantbiggs commented Oct 10, 2024

kubernetesGroups:
- system:masters

Also, this part is not valid.

system:masters is not supported in a cluster access entry; instead, you would associate your role with the arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy policy.
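
A CLI sketch of that association (cluster and role names are placeholders):

aws eks associate-access-policy \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:role/admin-role \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster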

@MartinEmrich

@bryantbiggs What makes you say that? This works fine here (EKS 1.29.5).
Docs for the kubernetesGroups: option: https://eksctl.io/usage/access-entries/

Analyzing the current state, EKS indeed mapped system:masters to "policyArn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy", so it effectively behaves the same.
If this is just a temporary compatibility layer, please let me/us know.

Of course, giving users full admin privileges is usually not the most secure or best-practice thing to do, but this is a playground cluster for developers, so no worries there.

@cdenneen

@bryantbiggs What makes you say that? This works fine here (EKS 1.29.5). Docs for the kubernetesGroups: option: https://eksctl.io/usage/access-entries/

Analyzing the current state, EKS indeed mapped system:masters to "policyArn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy", so it effectively behaves the same. If this is just a temporary compatibility layer, please let me/us know.

Of course, giving users full admin privileges is usually not the most secure or best-practice thing to do, but this is a playground cluster for developers, so no worries there.

@MartinEmrich Because you are mixing ConfigMap and access entries:
https://eksctl.io/usage/access-entries/#iam-entities

So from the example in the docs:

accessConfig:
  authenticationMode: API_AND_CONFIG_MAP
  accessEntries:
    - principalARN: arn:aws:iam::111122223333:user/my-user-name
      type: STANDARD
      kubernetesGroups: # optional Kubernetes groups
        - group1 # groups can be used to give permissions via RBAC
        - group2

    - principalARN: arn:aws:iam::111122223333:role/role-name-1
      accessPolicies: # optional access policies
        - policyARN: arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy
          accessScope:
            type: namespace
            namespaces:
              - default
              - my-namespace
              - dev-*

    - principalARN: arn:aws:iam::111122223333:role/admin-role
      accessPolicies: # optional access policies
        - policyARN: arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy
          accessScope:
            type: cluster

    - principalARN: arn:aws:iam::111122223333:role/role-name-2
      type: EC2_LINUX

The type: STANDARD entry with kubernetesGroups would be for the config map, while the following would be for an access entry (no kubernetesGroups, because it does not create an entry in the ConfigMap and so does not need to map a Kubernetes group like system:masters; instead it has accessPolicies and an accessScope):

    - principalARN: arn:aws:iam::111122223333:role/admin-role
      accessPolicies: # optional access policies
        - policyARN: arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy
          accessScope:
            type: cluster

@bryantbiggs, correct me if I'm wrong.

@cdenneen

Try removing the role/. This snippet is from a working eksctl ClusterConfig:

accessConfig:
  authenticationMode: API_AND_CONFIG_MAP
  accessEntries:
    - principalARN: "arn:aws:iam::123456789012:AWSReservedSSO_MyAdministrators_123456789abcdef"
      type: STANDARD
      kubernetesGroups:
        - system:masters

So in this example you are creating a ConfigMap entry rather than an access entry with an accessScope, which uses the CONFIG_MAP part of the authentication instead of the API part. I believe the API part will deprecate the CONFIG_MAP part at some point on the roadmap, but I'm not sure when.

I think the confusion here is that the eksctl documentation uses the term accessEntries, which should arguably be reserved for API authentication. Since both CONFIG_MAP and API (access entry) mappings are created under accessEntries in the eksctl configuration, I can see where the confusion might happen.

@bryantbiggs
Member

@cdenneen That is correct! The use of system:* within the aws-auth ConfigMap is replaced with the provided EKS cluster access policy equivalents when using EKS cluster access entries.

@dennispan

Thanks @joshuabaird! arn:aws:iam::1234455435:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_Blah_834943dcf9f248 does work. How should I understand role/aws-reserved/sso.amazonaws.com/? Is that something we get from STS?

@MartinEmrich

@bryantbiggs @cdenneen Thanks!

So my eksctl ClusterConfig basically created both entries (an access entry on the "outside" and an aws-auth ConfigMap entry on the "inside"), and in "hybrid" API_AND_CONFIG_MAP mode they have the same effect?

I'll play around with pure API mode in the future and see what happens...

@atheiman

atheiman commented Nov 5, 2024

@bryantbiggs I'm running into an issue where an identity in an EKS cluster has the AmazonEKSAdminPolicy access policy granted but cannot create a Role + RoleBinding that declares permissions that exist in the access policy. The error says it is attempting to grant RBAC permissions not currently held. Are the permissions contained in an EKS access policy not inspected by Kubernetes when determining whether the identity is allowed to pass permissions to another subject?

Edit: yes, I have confirmed that access policies below AmazonEKSClusterAdminPolicy are not capable of creating (Cluster)Role / (Cluster)RoleBinding resources, because they do not include the escalate or bind verbs, and their rules are not visible to Kubernetes internals when it evaluates whether the user holds at least the permissions declared in the Role / RoleBinding request.
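
A quick client-side way to confirm this (plain kubectl, no assumptions beyond cluster access):

# Both return "no" for identities mapped to access policies below
# AmazonEKSClusterAdminPolicy.
kubectl auth can-i escalate clusterroles
kubectl auth can-i bind clusterroles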

Projects
Status: Shipped