
error: You must be logged in to the server (Unauthorized) -- same IAM user created cluster #174

Closed
mrichman opened this issue Nov 19, 2018 · 44 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@mrichman

My AWS CLI credentials are set to the same IAM user which I used to create my EKS cluster. So why would kubectl cluster-info dump give me error: You must be logged in to the server (Unauthorized)?

kubectl config view is as follows:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://64859043D67EB498AA6D274A99C73C58.yl4.us-east-2.eks.amazonaws.com
  name: arn:aws:eks:us-east-2:629054125090:cluster/EKSDeepDive
contexts:
- context:
    cluster: arn:aws:eks:us-east-2:629054125090:cluster/EKSDeepDive
    user: arn:aws:eks:us-east-2:629054125090:cluster/EKSDeepDive
  name: arn:aws:eks:us-east-2:629054125090:cluster/EKSDeepDive
current-context: arn:aws:eks:us-east-2:629054125090:cluster/EKSDeepDive
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-2:629054125090:cluster/EKSDeepDive
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - EKSDeepDive
      command: aws-iam-authenticator
      env: null
aws sts get-caller-identity
{
    "UserId": "AIDAJAKDBFFCB4EVPCQ6E",
    "Account": "629054125090",
    "Arn": "arn:aws:iam::629054125090:user/mrichman"
}
@sonicintrusion

that env: null might be causing you a problem. have you tried removing it?

@ciribob

ciribob commented Dec 12, 2018

I'm having the exact same issue

Created a new cluster from scratch using a non root account

the env: null only appears when kubectl config view is run; it's not in the actual file

@mrichman
Author

I ended up blowing away the cluster and creating a new one. I never had the issue again on any other cluster. I wish I had better information to share.

@pavel-khritonenko

Having the same issue; the token can be verified with aws-iam-authenticator just fine.

@VojtechVitek

same issue here.. is there any debugging information I could provide?

@ciribob

ciribob commented Dec 31, 2018

Found the issue with help from AWS support - it appears the aws-iam-authenticator wasn't picking up the credentials properly from the path

Manually running

export AWS_ACCESS_KEY_ID=KEY
export AWS_SECRET_ACCESS_KEY=SECRET-KEY
aws-iam-authenticator token -i cluster-name

Then pulling out the token and running

aws-iam-authenticator verify -t k8s-aws-v1.really_long_token -i cluster-name

to make sure it's all working.

Oddly, aws-iam-authenticator did give me a token; I have no idea what it was for...
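For reference, here is a minimal way to tie those steps together and see which identity the credentials actually resolve to (cluster-name is a placeholder; assumes jq is installed):

# capture the token from the ExecCredential JSON that aws-iam-authenticator prints
TOKEN=$(aws-iam-authenticator token -i cluster-name | jq -r '.status.token')

# check the token against the cluster name
aws-iam-authenticator verify -t "$TOKEN" -i cluster-name

# confirm which IAM identity the current credentials belong to
aws sts get-caller-identity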

@VojtechVitek

So we were hitting this issue with IAM users that didn't initially create the EKS cluster; they always got the error: You must be logged in to the server (Unauthorized) when using kubectl (even though aws-iam-authenticator gave them some token).

We had to explicitly grant our IAM users access to the EKS cluster in our Terraform code.

@sonicintrusion

can you elaborate on that ^^ ?
do you have to grant access to the EKS cluster specifically?

@kenerwin88

I stumbled upon this same issue ;). Did you find a fix @sonicintrusion?

@whereisaaron

whereisaaron commented Mar 26, 2019

You need to map IAM users or roles into the cluster using the aws-auth ConfigMap. This is done automatically for the user who creates the cluster.

https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html

Here is a script example for adding a role:

https://eksworkshop.com/codepipeline/configmap/
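For illustration, a minimal mapUsers entry (using the ARN from the original report; system:masters grants full admin, so adjust to taste) might look like this:

kubectl edit -n kube-system configmap/aws-auth

  mapUsers: |
    - userarn: arn:aws:iam::629054125090:user/mrichman
      username: mrichman
      groups:
        - system:masters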

@kenerwin88

kenerwin88 commented Apr 1, 2019 via email

@kumarifet

Found the issue with help from AWS support - it appears the aws-iam-authenticator wasn't picking up the credentials properly from the path

Manually running

export AWS_ACCESS_KEY_ID=KEY
export AWS_SECRET_ACCESS_KEY=SECRET-KEY
aws-iam-authenticator token -i cluster-name

Then pulling out the token and running

aws-iam-authenticator verify -t k8s-aws-v1.really_long_token -i cluster-name

to make sure it's all working.

Oddly, aws-iam-authenticator did give me a token; I have no idea what it was for...

Thank you, it's working.

@jaygorrell

env: null

This was it for me. That was actually in the file and was overriding my real env section, which sets a specific profile to use.

@aaronrryan

aaronrryan commented Jul 24, 2019

I'm not sure which user is given permissions when creating the EKS cluster through the web console, so I ended up building the cluster using "eksctl", and then I was able to access the cluster with kubectl from the CLI.

https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html
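For reference, the eksctl flow from that guide is roughly this (cluster name and region are placeholders):

eksctl create cluster --name my-cluster --region us-east-2
# eksctl writes the kubeconfig for the IAM identity that ran it, so afterwards this just works:
kubectl get nodes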

@f-ld

f-ld commented Aug 2, 2019

Short story
This also happens on badly created clusters

Long story
Came here after a first attempt to create a cluster with eksctl.
Actually, the creation failed for the following reason on some nodes:

Aug  2 12:58:17 ip-10-13-1-111 cloud-init[558]: Cloud-init v. 0.7.9 running 'modules:final' at Fri, 02 Aug 2019 12:58:17 +0000. Up 11.08 seconds.
Aug  2 12:58:17 ip-10-13-1-111 cloud-init[558]: 2019-08-02 12:58:17,458 - util.py[WARNING]: Failed running /var/lib/cloud/scripts/per-instance/bootstrap.al2.sh [1]

So this caused the cluster creation to fail with a 25-minute timeout waiting for nodes to be ready. And when I then tried kubectl get nodes, I got the error mentioned in this issue.

@mkamrani

AWS_SECRET_ACCESS_KEY

You can add them as environment variables in the config file as well:

env:
- name: AWS_SECRET_ACCESS_KEY
  value: "xxxxxx" ...

@NapalmCodes

I figured this out too: the user you create the cluster with (whether via console or CLI) is initially the only user that can execute Kubernetes API calls via kubectl. I find this kind of strange, as we use deployment users to do this work and they would not be administering the cluster via kubectl. Is there a way to assign API rights to a user other than the deployment account?

@michael-burt

@napalm684 use this guide: https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html

With that said, it is not working for me when I try to add an IAM user

@philborlin

philborlin commented Oct 31, 2019

In my case I created the cluster with a role and then neglected to use the profile switch when using the update-kubeconfig command. When I modified the command to be aws eks update-kubeconfig --name my-cluster --profile my-profile a correct config was written and kubectl started authenticating correctly.

What this did was modify my env to:

env:
- name: AWS_PROFILE
  value: my-profile

@nitrogear

the solution from @whereisaaron helped me. thanks a lot!

@smaser-talend

I resolved this issue by checking/updating the date/time on my client machine.
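A quick way to spot that kind of clock skew (comparing your clock with the Date header of any AWS endpoint is enough for a rough check):

date -u                                                # local clock, in UTC
curl -sI https://sts.amazonaws.com | grep -i '^date:'  # remote server time, for comparison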

@0foo

0foo commented Nov 17, 2019

Just wanted to add that you can add your credentials profile in your ~/.kube/config file.
If your kubectl config view shows env: null this might be the issue.

 user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - <some name>
      command: aws-iam-authenticator
      env:
        - name: AWS_PROFILE
          value: "<profile in your ~/.aws/credentials file>"

https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html

I don't like environment variables personally, and this is another option if you have an AWS credentials file.

@onprema

onprema commented Nov 18, 2019

What about the use case where you have an "EKS admin" and users can create their own clusters? As an admin, I don't want to be locked out of the clusters, and I don't want to have to tell each user to update the aws-auth configmap as @whereisaaron suggests. Is there a way I can give admin users access to each cluster by default? (Btw, users will create their clusters by passing in a yaml config, i.e.: eksctl create cluster foo -f config.yaml.)
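One option might be to have the admin (or a wrapper around cluster creation) add an admin role mapping as soon as each cluster comes up, e.g. with eksctl (the eks-admins role ARN here is hypothetical):

eksctl create iamidentitymapping \
  --cluster foo \
  --region us-east-2 \
  --arn arn:aws:iam::111122223333:role/eks-admins \
  --group system:masters \
  --username admin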

@OscarCode9

I had the same issue and I solved it by setting the aws_access_key_id and aws_secret_access_key of the user who created the cluster on AWS (in my case, the root user), but I put them in a new profile in ~/.aws/credentials.

for example new profile:

[oscarcode]
aws_access_key_id = XXXX
aws_secret_access_key = XXXX
region = us-east-2

So my kubernetes config has:

exec:
  apiVersion: client.authentication.k8s.io/v1alpha1
  args:
  - token
  - -i
  - cluster_name
  command: aws-iam-authenticator
  env:
  - name: AWS_PROFILE
    value: oscarcode

@kflavin

kflavin commented Jan 21, 2020

So in the event that you are not the cluster creator, you are out of luck getting access?

@michaelday008

You need to map IAM users or roles into the cluster using the aws-auth ConfigMap. This is done automatically for the user who creates the cluster.

https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html

Here is a script example for adding a role:

https://eksworkshop.com/codepipeline/configmap/

link does not work:

<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>C3E07E0A4D665FA7</RequestId>
<HostId>
9FUa2BJaw75T3fcUKA8OVv85u/tqknezUsI/AiUQ/YFkNhXNKsnQAMR41MfO19wWNEqfq6w/2ug=
</HostId>
</Error>

@tabern

tabern commented Feb 25, 2020

There are instructions for fixing this issue in the EKS docs and the customer support blog as well.

@Vinay-Venkatesh

Vinay-Venkatesh commented Mar 12, 2020

This happens when the cluster is created by user A and you try accessing the cluster using user B's credentials.
@OscarCode9 has explained it perfectly.

@tobisanya

You need to map IAM users or roles into the cluster using the aws-auth ConfigMap. This is done automatically for the user who creates the cluster.

https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html

Here is a script example for adding a role:

https://eksworkshop.com/codepipeline/configmap/

Looks like the link to the script example has been updated to https://eksworkshop.com/intermediate/220_codepipeline/configmap/

@Pranjal-sopho

Found the issue with help from AWS support - it appears the aws-iam-authenticator wasn't picking up the credentials properly from the path

Manually running

export AWS_ACCESS_KEY_ID=KEY
export AWS_SECRET_ACCESS_KEY=SECRET-KEY
aws-iam-authenticator token -i cluster-name

Then pulling out the token and running

aws-iam-authenticator verify -t k8s-aws-v1.really_long_token -i cluster-name

to make sure it's all working.

Oddly, aws-iam-authenticator did give me a token; I have no idea what it was for...

worked for me...thanks a lot

@lihonosov

lihonosov commented May 23, 2020

How do I provide access to other users and roles after cluster creation?
https://www.youtube.com/watch?time_continue=3&v=97n9vWV3VcU

@aedcparnie

Unauthorized Error in Kubectl after modifying aws-auth configMap

I am not sure, but I think I messed up the aws-auth configmap. After modifying it, I cannot find a way to authenticate again. Has anyone encountered the same problem and found a solution?

I tried to assume the EKS Cluster role and use the role in the kubeconfig but no luck.
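If the cluster creator's credentials are still available, one possible recovery path (profile and cluster names here are hypothetical) is to fix the ConfigMap as that identity, since the creator keeps access regardless of aws-auth:

# switch to the IAM identity that originally created the cluster
export AWS_PROFILE=cluster-creator
aws eks update-kubeconfig --name my-cluster --region us-east-2

# repair the broken mapping
kubectl edit -n kube-system configmap/aws-auth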

@Jean-Baptiste-Lasselle

Jean-Baptiste-Lasselle commented Jun 29, 2020

Hi all, thank you so much for sharing all this info (of course I have the same issue here), and:

  • In my team, we are 2 engineers working on a set of 2 clusters
  • my colleague created one cluster, I created another
  • we use the same AWS profile (~/.aws/credentials etc.)
  • but we have different AWS users, and you can check that yourselves in your teams with:
aws sts get-caller-identity | jq .Arn

Ok, now look how you can make your situation clearer:

  • You are AWS user Bobby.
  • The AWS user Ricky created the EKS cluster the-good-life-cluster.
  • And Bobby wants to kubectl into the-good-life-cluster.
  • To do that, Bobby must be able to assume Ricky's AWS IAM role. Why? Because we do not want anyone to be able to impersonate anyone else (certainly not the super admin ruling them all), for the sake of security accountability.
  • With that picture in mind, here is what you can run to chew on it:
export RICKY_S_CREATED_EKS_CLUSTER_NAME=the-good-life-cluster
# AWS region where Ricky created his cluster
export AWS_REGION=eu-west-1
# -r strips the JSON quotes so the ARN can be reused as-is
export BOBBY_S_BOURNE_ID=$(aws sts get-caller-identity | jq -r .Arn)
# So now, Bobby wants to kubectl into the cluster Ricky created.
# So Bobby does this:
aws eks update-kubeconfig --name ${RICKY_S_CREATED_EKS_CLUSTER_NAME} --region ${AWS_REGION}
# That does not fire up any error, so Bobby is happy and thinks he can:
kubectl get all
# Ouch, Bobby is now in dismay: he gets "error: You must be logged in to the server (Unauthorized)"!
# Okay, Bobby now runs this:
aws eks update-kubeconfig --name ${RICKY_S_CREATED_EKS_CLUSTER_NAME} --region ${AWS_REGION} --role-arn ${BOBBY_S_BOURNE_ID}

kubectl get all

# And there you go: now Bobby gets an error that is pretty explicit. He now knows how to test whether
# or not he can assume a given role... and he smiles, because what he just tried was to assume his own ARN!
# Got it: Bobby should assume Ricky's role, like this:
export RICKY_S_BOURNE_IDENTITY=...   # Ricky will give you that one

aws eks update-kubeconfig --name ${RICKY_S_CREATED_EKS_CLUSTER_NAME} --region ${AWS_REGION} --role-arn ${RICKY_S_BOURNE_IDENTITY}

I'll be glad to discuss this with anyone, and I'll report back when I have finished solving this issue.

Note :

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  mapRoles: |
    - rolearn: arn:aws:iam::555555555555:role/devel-worker-nodes-NodeInstanceRole-XXXXXXXXXXX
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
# Don't EVER touch what is above: when you retrieve your [aws-auth] ConfigMap from your EKS cluster,
# this section will already be there, with values very specific to your cluster, and
# most importantly your cluster nodes' AWS IAM Role ARN.
# Below it, add mapped users.
# But what we want is to add a role, not a specific user (for hot user management), so
# let's do it like they did it at AWS for the cluster nodes' IAM Role, but with
# groups such as admin and ops-user below:
    - rolearn: WELL_YOU_KNOW_THE_ARN_OF_THE_ROLE_U_JUST_CREATED
      username: bobby
      groups:
        # bobby needs to hit the K8s API with kubectl, doesn't he? Sure he does.
        - system:masters
  mapUsers: |
    - userarn: arn:aws:iam::555555555555:user/admin
      username: admin
      groups:
        - system:masters
    - userarn: arn:aws:iam::111122223333:user/ops-user
      username: ops-user
      groups:
        - system:masters

A typical super admin / many devops setup, only here it is just two users; found at https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html

  • and there you go :
aws eks update-kubeconfig --name ${RICKY_S_CREATED_EKS_CLUSTER_NAME} --region ${AWS_REGION} --role-arn ${ARN_OF_THAT_NEW_ROLE_YOU_CREATED}

So, roles only, no specific users (except a few, only for senior devops, just in case):

  • because the AWS IAM integration with cluster auth is like an IPAM in Linux and OpenSSH... (so we should work without mentioning any specific user, so that we do not exclude future users, or rather so we can include them with zero work)
  • so that you can manage users with federation, with very few dependencies, e.g. with Keycloak, why not?

More fine grained permissions now

Refs. :

(See my aws-auth ConfigMap)

jbl@poste-devops-jbl-16gbram:~/gravitee-init-std-op$ kubectl get configmap/aws-auth --namespace kube-system
NAME       DATA   AGE
aws-auth   1      19d
jbl@poste-devops-jbl-16gbram:~/gravitee-init-std-op$ kubectl describe configmap/aws-auth --namespace kube-system
Name:         aws-auth
Namespace:    kube-system
Labels:       app.kubernetes.io/managed-by=pulumi
Annotations:  
Data
====
mapRoles:
----
- groups:
  - system:bootstrappers
  - system:nodes
  rolearn: arn:aws:iam::XXXXXXXXXX:role/my-cluster-gateway-profile-role
  username: system:node:{{EC2PrivateDNSName}}
- groups:
  - system:bootstrappers
  - system:nodes
  rolearn: arn:aws:iam::XXXXXXXXXX:role/my-cluster-front-profile-role
  username: system:node:{{EC2PrivateDNSName}}
- groups:
  - system:bootstrappers
  - system:nodes
  rolearn: arn:aws:iam::XXXXXXXXXX:role/my-cluster-back-profile-role
  username: system:node:{{EC2PrivateDNSName}}
- groups:
  - system:bootstrappers
  - system:nodes
  rolearn: arn:aws:iam::XXXXXXXXXX:role/my-cluster-system-profile-role
  username: system:node:{{EC2PrivateDNSName}}

Events:  <none>

(XXXXXXXXXX is me obfuscating a value that will be different for you anyway)

Final update: success

Alright, what you can read above has just been tested and works, so help yourselves (and can we close the issue? @EppO @mortent @Atoms @flaper87 @joonas?)

Update: I beg your pardon, for the solution I gave does not answer the original issue, which was the case of a user trying to kubectl against a cluster he created himself.

@ProteanCode

ProteanCode commented Aug 22, 2020

Got this today, and the cause & solution were different.

If you created the cluster as a plain user, not while assuming a role (you can switch to roles in the AWS console from the IAM roles panel), and your kubeconfig was created using the --role-arn parameter, i.e.

aws eks --region eu-central-1 update-kubeconfig --name my-cluster-name --role-arn arn:aws:iam::111222333444:role/my-eks-cluster-role

and you get this error message, then just remove the --role-arn parameter.

My understanding is that you can bind users to a role so they can perform operations on that specific cluster, but for some reason (maybe a missing entry in Trusted entities) the user was not bound to the role at cluster creation time. This is not an error, since I suppose I could add myself to my cluster role and this would work fine.

Adding users to roles is probably described here:
https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html

Aside from that, it's very misleading when you log in as a user with the AdministratorAccess policy (basically Allow *) and there is no automatic assumption of the cluster role.

TL;DR: remove --role-arn
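In other words, regenerate the kubeconfig with the same command minus the flag, e.g.:

aws eks --region eu-central-1 update-kubeconfig --name my-cluster-name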

@Jean-Baptiste-Lasselle

hi @ProteanCode, actually:

  • the update-kubeconfig step is important: no way will I ever "send" the KUBECONFIG to my devops team members, and I will 🤕 them if they ever do (don't play with security, we are devops)
  • so I want them to update the kubeconfig themselves: they each have their own IAM user, and I give them permission to assume a role
  • yet, you could (I think, to test) remove the --role-arn, if before updating the kubeconfig you run:
# see https://aws.amazon.com/premiumsupport/knowledge-center/eks-iam-permissions-namespaces/
aws sts assume-role --role-arn arn:aws:iam::yourAccountID:role/yourIAMRoleName --role-session-name abcde
  • but you know, I heard that generations back I have Polish ancestors, plus I love the (Polish) notation ;)

@Jean-Baptiste-Lasselle

@ProteanCode nevertheless I will test that again, because the question is: why would any AWS IAM user see the cluster when they run aws eks list-clusters?

  • well, that's the case if I create the IAM users while authenticated to AWS IAM with my own personal user:
    • then I have to securely "send" my team members their AWS ~/.aws/credentials file
    • the best way to secure that is worth discussing: one thing we did with "Ricky" is that I GPG-encrypted the file, sent it to him on Slack, and sent him the link to my GPG public key https://keybase.io/jblasselle : that way he knows it was actually me who sent it.
    • I am thinking of HashiCorp Vault to do a lot better: that's a secret-management case, and I think HashiCorp Vault definitely is the good way to go. What do you think?

@ProteanCode

@ProteanCode nevertheless I will test that again, because the question is: why would any AWS IAM user see the cluster when they run aws eks list-clusters?

  • well, that's the case if I create the IAM users while authenticated to AWS IAM with my own personal user:
    • then I have to securely "send" my team members their AWS ~/.aws/credentials file
    • the best way to secure that is worth discussing: one thing we did with "Ricky" is that I GPG-encrypted the file, sent it to him on Slack, and sent him the link to my GPG public key https://keybase.io/jblasselle : that way he knows it was actually me who sent it.
    • I am thinking of HashiCorp Vault to do a lot better: that's a secret-management case, and I think HashiCorp Vault definitely is the good way to go. What do you think?

I am a 90% dev, 10% ops, and also the only user in that (private) project so my way of thinking is not really team-oriented.

I followed the AWS guideline to create a separate Administrators IAM group & user for any non-root operation (which is totally fine), but somewhere in their guides they wrote to create the kubeconfig with --role-arn, so I blindly followed it without understanding the consequences.

Since my account was never bound to the cluster role, kubectl told me that I am unauthorized even though I am the owner of the cluster. This is neither good nor bad, but it can certainly be misleading for a single developer.

I suppose most of the time people assign resources to roles, and then users to roles, for authorization simplicity. You are totally right in what you wrote, but since we can assign a user to a role there would be no need to share any AWS credentials.

I will for sure re-manage the groups and roles in my project to increase security. Currently I am writing an API that will handle scaling the EKS nodes when an external customer makes a recurring purchase, so I am thinking about a separate account for the shopping backend to operate under.

@asingh014

Please try upgrading your kubectl version to > 1.18.1 and giving it a try. (We are using AAD to manage access for users; when creating the AKS cluster, only our admin context was able to execute kubectl commands, but once we upgraded the kubectl our users with cluster-admin access were running, they could communicate with the cluster successfully.)

@Jean-Baptiste-Lasselle

I will for sure re-manage the groups and roles in my project to increase security. Currently I am writing an API that will handle scaling the EKS nodes when an external customer makes a recurring purchase, so I am thinking about a separate account for the shopping backend to operate under.

hi @ProteanCode, interesting project; there are many autoscalers that do exactly what you describe, e.g. you could have a look at kubeone. I'd say that "someone" (an IAM user) will conduct those scaling operations on behalf of the human user. I'd call that someone a robot. Think of it all like this: you are alone as a human, but you have a whole team of robots, and you are their boss. You will not talk to each of them, so you will delegate the role to one robot, the boss of all robots. The approach I describe is a very basic one, and my best advice is to look at the OIDC AWS IAM integration. With that, you will be able to track who did what, when, where, and why. Accountability.

Thank you for your answer, and bon courage.

@Jean-Baptiste-Lasselle

Please try upgrading your kubectl version to > 1.18.1 and giving it a try. (We are using AAD to manage access for users; when creating the AKS cluster, only our admin context was able to execute kubectl commands, but once we upgraded the kubectl our users with cluster-admin access were running, they could communicate with the cluster successfully.)

That's because your AAD does this for you, and you do not "see" it. Did you not give permissions to team members in AAD before updating? Yes, you did. @ProteanCode does not use any IAM (Identity and Access Management) solution to do all this; as he explained, he (thinks he) has got just one user (himself). This has absolutely nothing to do with upgrading the Kubernetes version.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 24, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 24, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
