Intermittent 403 authentication issues with EKS #678

Closed
nakulpathak3 opened this issue Nov 2, 2018 · 9 comments
Labels: lifecycle/rotten (Denotes an issue or PR that has aged beyond stale and will be auto-closed.)

nakulpathak3 commented Nov 2, 2018

I'm using version 8.0.0a1 and I'm doing:

from kubernetes import client, config

api_client = config.new_client_from_config(kube_config_yaml_file)
v1_core = client.CoreV1Api(api_client)

with kube_config_yaml_file containing -

apiVersion: v1
clusters:
- cluster:
    server: https://<ajsdhajkshdka>.eks.amazonaws.com
    certificate-authority-data: <asdjasjdhas>
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: test_user
  name: test_name
current-context: test_name
kind: Config
preferences: {}
users:
- name: test_user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "cluster-test"
        - "-r"
        - "arn:aws:iam::<87787898789>:role/cluster-test-k8s-access-role"

and this works almost always, but every now and then I get an error from the Python client:

ERROR:root:exec: process returned 1. could not get token: AccessDenied: Access denied
	status code: 403, request id: 296d0777-de24-12b8-b352-c942b2ac475e

which seems to be triggered in the exec_provider in python-base.

The main difference I can think of is that I'm using the -r flag to pass an access role to the authenticator command, which I don't see a test for in the exec_provider. Even with the flag, the command succeeds sometimes and fails at other times.

This issue only occurs with the Python client, not with kubectl subprocess calls.

I'm using EKS with aws-iam-authenticator.
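
For reference, a minimal sketch (not part of the original report) that shells out to the same exec command the kubeconfig above configures, to see how often the authenticator itself returns AccessDenied; the cluster name and role ARN are the placeholders from the kubeconfig and would need real values:

import json
import subprocess
import time

# The same command and arguments as the "exec" section of the kubeconfig.
CMD = [
    "aws-iam-authenticator", "token",
    "-i", "cluster-test",
    "-r", "arn:aws:iam::<account-id>:role/cluster-test-k8s-access-role",
]

failures = 0
for attempt in range(20):
    result = subprocess.run(CMD, capture_output=True, text=True)
    if result.returncode != 0:
        failures += 1
        print(f"attempt {attempt}: exit {result.returncode}: {result.stderr.strip()}")
    else:
        # The authenticator prints an ExecCredential JSON document; the token
        # and its expiry live under "status".
        cred = json.loads(result.stdout)
        print(f"attempt {attempt}: token expires {cred['status']['expirationTimestamp']}")
    time.sleep(1)

print(f"{failures}/20 attempts failed")

If this loop also fails intermittently, the problem is in aws-iam-authenticator/STS rather than in the Python client's exec_provider.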

nakulpathak3 commented Dec 13, 2018

Could this be related to tokens expiring after a certain amount of time? Is there any under-the-hood refreshing of the token after a failed request?

Update: still seeing this with 8.0.0.
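
A possible workaround sketch (an assumption about the failure mode, not confirmed client behaviour): if the exec-provided token does expire between calls, reloading the kubeconfig re-runs the exec plugin and fetches a fresh token, so catching a 401/403 and rebuilding the client once would paper over expiry. kube_config_yaml_file is the same variable as above, and the namespace is arbitrary:

from kubernetes import client, config
from kubernetes.client.rest import ApiException

def list_pods_with_refresh(kube_config_yaml_file, namespace="default"):
    api_client = config.new_client_from_config(kube_config_yaml_file)
    v1_core = client.CoreV1Api(api_client)
    try:
        return v1_core.list_namespaced_pod(namespace)
    except ApiException as exc:
        if exc.status not in (401, 403):
            raise
        # Token may have expired: reload the kubeconfig (which re-runs the
        # aws-iam-authenticator exec command) and retry once.
        api_client = config.new_client_from_config(kube_config_yaml_file)
        v1_core = client.CoreV1Api(api_client)
        return v1_core.list_namespaced_pod(namespace)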

nakulpathak3 commented Dec 19, 2018

kubernetes-sigs/aws-iam-authenticator#157 seems related. It might be an issue with EKS and aws-iam-authenticator. It is correct that this happens specifically when using the role, not with the user that created the cluster itself.
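
One way to narrow this down (a boto3 sketch, not from the issue) is to confirm that the role passed via -r can be assumed from the same environment, independently of aws-iam-authenticator; the ARN is the placeholder from the kubeconfig above:

import boto3

sts = boto3.client("sts")
print("caller:", sts.get_caller_identity()["Arn"])

# If this call also raises AccessDenied intermittently, the problem is the
# IAM trust policy / STS rather than the Kubernetes Python client.
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::<account-id>:role/cluster-test-k8s-access-role",
    RoleSessionName="eks-auth-debug",
)
print("temporary credentials expire at:", assumed["Credentials"]["Expiration"])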

@geerlingguy

Gah, I'm hitting this now too, after I upgraded Python via brew upgrade on my Mac. It was working fine until now. Everything seems to be correct:

# Locally.
$ aws sts get-caller-identity
{
    "Account": "ACCOUNT_ID", 
    "UserId": "MY_USER_ID", 
    "Arn": "arn:aws:iam::ACCOUNT_ID:user/jeff.geerling"
}

# On the EKS cluster.
$ kubectl describe configmap -n kube-system aws-auth
...
Data
====
mapUsers:
----
- userarn: arn:aws:iam::ACCOUNT_ID:user/jeff.geerling
  groups:
    - system:masters

But using Ansible with kubernetes==7.0.0 I get:

fatal: [127.0.0.1]: FAILED! => changed=false 
  error: 403
  msg: |-
    Failed to retrieve requested object: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"namespaces \"any-namespace-here\" is forbidden: User \"system:anonymous\" cannot get namespaces in the namespace \"any-namespace-here\"","reason":"Forbidden","details":{"name":"any-namespace-here","kind":"namespaces"},"code":403}
  reason: Forbidden
  status: 403
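
The same checks can be done from Python (a sketch, assuming a kubeconfig context that already authenticates, e.g. the cluster creator): read the aws-auth ConfigMap the way the kubectl describe above does, and confirm the IAM user mapping the cluster actually sees.

from kubernetes import client, config

# Loads the default kubeconfig (~/.kube/config) and its current context.
api_client = config.new_client_from_config()
v1_core = client.CoreV1Api(api_client)

aws_auth = v1_core.read_namespaced_config_map("aws-auth", "kube-system")
data = aws_auth.data or {}
print(data.get("mapUsers", "<no mapUsers>"))
print(data.get("mapRoles", "<no mapRoles>"))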

@geerlingguy

Oops... I realized 7.0.0 is old. Upgraded kubernetes with Pip to kubernetes==8.0.1 and I'm back in business. Python environment--

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Apr 28, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on May 28, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@thallesdaniell

I had constant 403 and 401 errors using this authentication script. I looked for alternatives and found https://github.com/peak-ai/eks-token, and it worked really well for me.
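
A sketch of that alternative, assuming eks-token exposes get_token(cluster_name=...) and returns an ExecCredential-style dict as its README suggests; the cluster name, endpoint and TLS handling are placeholders to adapt:

from eks_token import get_token
from kubernetes import client

token = get_token(cluster_name="cluster-test")["status"]["token"]

configuration = client.Configuration()
configuration.host = "https://<your-cluster-endpoint>.eks.amazonaws.com"
configuration.verify_ssl = False  # for real use, point ssl_ca_cert at the cluster CA instead
configuration.api_key["authorization"] = "Bearer " + token

v1_core = client.CoreV1Api(client.ApiClient(configuration))
print([ns.metadata.name for ns in v1_core.list_namespace().items])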
