error: You must be logged in to the server (Unauthorized) -- same IAM user created cluster #174
I'm having the exact same issue. Created a new cluster from scratch using a non-root account.
I ended up blowing away the cluster and creating a new one. I never had the issue again on any other cluster. I wish I had better information to share.
Have the same issue; the token could be verified using …
Same issue here. Is there any debugging information I could provide?
Found the issue with help from AWS support. Manually running …, then pulling out the token and running …, to make sure it's all working. Oddly, the …
So we were hitting this issue with IAM users that didn't initially create the EKS cluster; they always got this error. We had to explicitly grant our IAM users access to the EKS cluster in our Terraform code.
Can you elaborate on that?
I stumbled upon this same issue ;). Did you find a fix @sonicintrusion?
You need to map IAM users or roles into the cluster using the aws-auth ConfigMap: https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html. Here is a script example for adding a role:
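The linked script example has moved over time. As a hedged sketch, an equivalent role mapping can be added with eksctl; the cluster name, region, role ARN, and username below are all hypothetical placeholders:

```shell
# Sketch, not the original script: map an IAM role into the cluster's
# aws-auth ConfigMap so holders of that role can call the Kubernetes API.
# All names and ARNs here are placeholders; substitute your own.
eksctl create iamidentitymapping \
  --cluster my-cluster \
  --region us-east-1 \
  --arn arn:aws:iam::111122223333:role/eks-admin \
  --username admin \
  --group system:masters
```

This requires a live cluster and credentials that already have access (e.g. the cluster creator's), since the mapping itself is stored inside the cluster.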
Thank you very much!
… On Mar 25, 2019, at 9:48 PM, Aaron Roydhouse ***@***.***> wrote:
You need to map IAM users or roles into the cluster using the aws-auth ConfigMap. This is done automatically for the user who creates the cluster.
https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html
Thank you, it's working.
This was it for me. That was actually in the file and was overriding my real …
I'm not sure what user is given permissions when creating the EKS cluster through https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html
Short story / long story: this caused the cluster creation to fail with a 25m timeout waiting for nodes to be ready. And when I tried a …
You can add them as environment variables in the kubeconfig file as well, under the exec section's env:
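A minimal sketch of such a kubeconfig user entry, assuming the aws eks get-token authenticator; the user, cluster, and profile names are hypothetical placeholders:

```yaml
# Sketch: kubeconfig user entry passing AWS_PROFILE to the authenticator.
# All names below are placeholders.
users:
- name: my-eks-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args: ["eks", "get-token", "--cluster-name", "my-cluster"]
      env:
        - name: AWS_PROFILE
          value: eks-admin
```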
I figured this out too: the user you create the cluster with (whether via console or CLI) is the only user that can execute k8s API calls via kubectl. I find this kind of strange, as we use deployment users to do this work; they would not be administering the cluster via kubectl. Is there a way to assign API rights to a user other than the deployment account?
@napalm684 use this guide: https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html. That said, it is not working for me when I try to add an IAM user.
In my case I created the cluster with a role and then neglected to use the profile switch when using the update-kubeconfig command. When I modified the command to include it, what this did was modify my env to (spacing munged): …
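As a sketch of what that command looks like with the profile switch (cluster, region, and profile names are placeholders, not the poster's actual values):

```shell
# Sketch with placeholder names: regenerate the kubeconfig entry using the
# named profile, so kubectl authenticates as the identity that created the cluster.
aws eks update-kubeconfig \
  --name my-cluster \
  --region us-east-1 \
  --profile eks-admin
```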
The solution from @whereisaaron helped me. Thanks a lot!
I resolved this issue by checking/updating the date/time on my client machine (AWS request signatures are time-sensitive, so clock skew can cause authentication failures).
Just wanted to add that you can set your credentials profile in your ~/.kube/config file: https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html. I don't like environment variables personally, and this is another option if you have a credentials file for AWS.
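For context, a named profile in the AWS credentials file looks like this; the profile name is a hypothetical placeholder and the keys are redacted:

```ini
# ~/.aws/credentials -- placeholder profile name, redacted keys
[eks-admin]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = <secret>
```

The kubeconfig exec section (or AWS_PROFILE) can then reference the profile by name instead of exporting raw credentials.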
What about the use case where you have an "EKS admin" and users can create their own clusters? As an admin, I don't want to be locked out of the clusters, and I don't want to have to tell each user to update the aws-auth ConfigMap as @whereisaaron suggests. Is there a way I can give admin users access to the cluster by default? (BTW, users will create their clusters by passing in a YAML config, i.e. …)
I had the same issue and I solved it when I set the aws_access_key_id and aws_secret_access_key of the user who created the cluster on AWS (in my case, the root user), but I made a new profile in .aws/credentials, for example: [oscarcode]. So my kubernetes config has exec: …
So in the event that you are not the cluster creator, you are out of luck getting access?
The link does not work: …
There are instructions for fixing this issue in the EKS docs and the customer support blog as well. |
This happens when the cluster is created by user A and you try accessing the cluster service using user B's credentials.
Looks like the link to the script example has been updated to https://eksworkshop.com/intermediate/220_codepipeline/configmap/
Worked for me, thanks a lot!
Unauthorized error in kubectl after modifying the aws-auth ConfigMap: I am not sure, but I think I messed up the aws-auth ConfigMap. After modifying it, I cannot find a way to authenticate again. Has anyone encountered the same problem and found a solution? I tried to assume the EKS cluster role and use the role in the kubeconfig, but no luck.
Hi all, thank you so much for sharing all this info (of course I have the same issue here). First, check who you are:
aws sts get-caller-identity | jq -r .Arn
Ok, now look how you can make your situation clearer :
export RICKY_S_CREATED_EKS_CLUSTER_NAME=the-good-life-cluster
# AWS REGION where Ricky created his cluster
export AWS_REGION=eu-west-1
export BOBBY_S_BOURNE_ID=$(aws sts get-caller-identity | jq -r .Arn)
# So now, Bobby wants to kubectl into the Cluster Ricky created
# So Bobby does this:
aws eks update-kubeconfig --name ${RICKY_S_CREATED_EKS_CLUSTER_NAME} --region ${AWS_REGION}
# and that does not fire up any error, so Bobby's happy and thinks he can
kubectl get all
# Ouch, Bobby's now in dismay: he gets "error: You must be logged in to the server (Unauthorized)"!
# Okay, Bobby now runs this :
aws eks update-kubeconfig --name ${RICKY_S_CREATED_EKS_CLUSTER_NAME} --region ${AWS_REGION} --role-arn ${BOBBY_S_BOURNE_ID}
kubectl get all
# And there you go: now Bobby has a pretty explicit error. He now knows how to
# test whether or not he can assume Ricky's role. And there he smiles, because
# what he did was try to assume his own role!
# Got it: Bobby should assume Ricky's role, this way:
export RICKY_S_BOURNE_IDENTITY="<Ricky will give you that one>"
aws eks update-kubeconfig --name ${RICKY_S_CREATED_EKS_CLUSTER_NAME} --region ${AWS_REGION} --role-arn ${RICKY_S_BOURNE_IDENTITY}
I'll be glad to discuss this with anyone, and I'll report back when I have finished solving this issue. Note:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  mapRoles: |
    - rolearn: arn:aws:iam::555555555555:role/devel-worker-nodes-NodeInstanceRole-XXXXXXXXXXX
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    # Don't EVER touch what is above: when you retrieve the [aws-auth] ConfigMap
    # from your EKS cluster, this section will already be there, with values very
    # specific to your cluster, most importantly your cluster nodes' AWS IAM Role ARN.
    # Below, we add mapped users. But what we want is to add a role, not a specific
    # user (for easier user management), so let's do it like AWS did for the cluster
    # nodes' IAM Role, but with groups such as admin and ops-user below.
    - rolearn: WELL_YOU_KNOW_THE_ARN_OF_THE_ROLE_U_JUST_CREATED
      username: bobby
      groups:
        # bobby needs access to the masters, to hit the K8s API with kubectl, doesn't he? Sure he does.
        - system:masters
  mapUsers: |
    - userarn: arn:aws:iam::555555555555:user/admin
      username: admin
      groups:
        - system:masters
    - userarn: arn:aws:iam::111122223333:user/ops-user
      username: ops-user
      groups:
        - system:masters
A typical super admin / many devops setup, only here it is just two users; found at https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html
aws eks update-kubeconfig --name ${RICKY_S_CREATED_EKS_CLUSTER_NAME} --region ${AWS_REGION} --role-arn ${ARN_OF_THAT_NEW_ROLE_YOU_CREATED}
So: roles only, no specific users (except a few, only for senior devops, just in case).
More fine-grained permissions now. Refs.: (see my aws-auth …)
Got this today and the cause & solution was different. If you created the cluster as some user, but not as a role (you can switch to roles in the AWS console from the IAM roles panel), and your kubeconfig was created by using … and you got this error message, then just remove the … My understanding is that you can bind users to a role so they can perform operations on that specific cluster, but the user for some reason (maybe a missing entry in Trusted entities) was not bound to the role at cluster creation. This is not an error, since I suppose I could add myself to my cluster role and this would work fine. Adding users to roles is probably done there: … Aside from that, it's very misleading when you log in as a user with the AdministratorAccess policy (basically Allow *) and there is no assumption of the cluster role. TL;DR: remove …
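For illustration (all names and ARNs below are hypothetical placeholders): running update-kubeconfig with --role-arn writes the role into the authenticator args of the generated kubeconfig user entry. Removing that role argument makes kubectl fall back to your own IAM identity, which appears to be the kind of fix described above:

```yaml
# Sketch of a kubeconfig user entry written by
# `aws eks update-kubeconfig --role-arn ...` (placeholder values).
# Deleting the last two args lines authenticates as your own identity instead.
users:
- name: arn:aws:eks:us-east-1:111122223333:cluster/my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
        - eks
        - get-token
        - --cluster-name
        - my-cluster
        - --role-arn
        - arn:aws:iam::111122223333:role/eks-admin
```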
Hi @ProteanCode, actually:
# see https://aws.amazon.com/premiumsupport/knowledge-center/eks-iam-permissions-namespaces/
aws sts assume-role --role-arn arn:aws:iam::yourAccountID:role/yourIAMRoleName --role-session-name abcde
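A hedged sketch of using the assume-role output (the role ARN and session name are placeholders; requires jq): the temporary credentials it returns can be exported so subsequent aws and kubectl calls run as that role.

```shell
# Sketch with placeholder ARN: fetch temporary credentials for the role
# and export them for subsequent aws/kubectl calls.
CREDS=$(aws sts assume-role \
  --role-arn arn:aws:iam::111122223333:role/eks-admin \
  --role-session-name eks-debug \
  --query Credentials --output json)
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | jq -r .AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r .SecretAccessKey)
export AWS_SESSION_TOKEN=$(echo "$CREDS" | jq -r .SessionToken)
```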
@ProteanCode nevertheless I will test that again, because the question is: why would any AWS …
I am 90% dev, 10% ops, and also the only user in that (private) project, so my way of thinking is not really team-oriented. I followed the AWS guideline to create a separate Administrators IAM group & user for any non-root-related operation (which is totally fine), but somewhere in their guides they wrote to create the kubeconfig with … Since my account was never bound to the cluster role, kubectl told me I am unauthorized even though I am the owner of the cluster; this is neither good nor bad, but for sure it can be misleading for a single developer. I suppose most of the time people assign resources to roles, and then users to roles, for authorization simplicity. You are totally right in what you wrote, but since we can assign a user to a role, there would be no need to share any AWS credentials. I will for sure rearrange the groups and roles in my project to increase security. Currently I am writing an API that will handle scaling the EKS nodes when an external customer makes a recurring purchase, so I am thinking about a separate account for the shopping backend to operate under.
Please try upgrading your kubectl version to > 1.18.1 and giving it a try. (We are using AAD to manage access for users; when creating the AKS cluster, only our admin context was able to execute kubectl commands, but once we upgraded kubectl for our users with cluster-admin access, they could communicate with the cluster successfully.)
Hi @ProteanCode, interesting project; there are many autoscalers that do exactly what you describe, e.g. you could have a look at kubeone. I'd tell you that "someone (an IAM user)" will conduct those scaling operations on behalf of the human user. I'd call that someone a robot. Think of it all like this: you are alone as a human, but you have a whole team of robots, and you are their boss. You will not talk to them all, so you will delegate the role to one robot, boss of all robots. The approach I describe is a very basic one, and my best advice is to look at the OIDC AWS IAM integration. With that, you will be able to track who did what, when, where, and why. Accountability. Thank you for your answer, and bon courage.
That's because your AAD does this for you, and you do not "see" it. Did you not give permissions to team members in AAD before updating? Yes, you did. @ProteanCode does not use any IAM (Identity and Access Management) solution to do all this; as explained by him, he (thinks he) has got just one user (himself). Absolutely nothing to do with upgrading the Kubernetes version.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
My AWS CLI credentials are set to the same IAM user which I used to create my EKS cluster. So why would
kubectl cluster-info dump
give me error: You must be logged in to the server (Unauthorized)?
My kubectl config view output is as follows: …