New version of kube2iam fails to get regions #367
We also added ec2:DescribeRegions to the policy, but that didn't resolve this issue, although we left it in the kube2iam policy. I'm fairly certain the node role already had this permission as part of its other policies.
We are also seeing a similar issue after upgrading to EKS 1.25:
$ curl http://169.254.169.254:80/latest/meta-data/iam/security-credentials/node-iam-role
@nsharma-fy have you tried with a previous version of kube2iam?
@jtblin, to add more: the only change we made was to upgrade EKS. The current version of kube2iam was working with EKS 1.24. The issue went away after removing kube2iam and adding a new node. AWS Support also said it is kube2iam related. I can try an older version of kube2iam; do you have a suggestion on which version to test?
@jtblin, the old image 0.11.1 works, so something changed in the latest version. It may not be related to the EKS upgrade; the timing was just a coincidence, I guess. Thanks for your help. Would you be looking at fixing the issue?
The only differences between 0.11.1 and 0.11.2 were upgrading Go and the Alpine image, so I'm not sure why this would be happening or what could be "fixed": 0.11.1...0.11.2. I've changed … By the way, if you use EKS, may I ask why you are still using kube2iam and not IAM roles for service accounts?
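For reference, IAM roles for service accounts (IRSA) works by annotating the workload's ServiceAccount with a role ARN instead of routing credentials through kube2iam. A rough sketch follows; the names and ARN are placeholders, not anything from this thread:

```yaml
# Minimal sketch of an IRSA-enabled ServiceAccount.
# ServiceAccount name, namespace, and role ARN are placeholders.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/my-app-role
```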
@jtblin, I tried jtblin/kube2iam:dev, the … We have a few applications that do not support the STS AssumeRole API yet, so we depend on kube2iam for them.
I'm also seeing the same issue in AWS with kube2iam v0.11.2. As @samsilborydoxo mentioned, I added --use-regional-sts-endpoint to the command args and set the default AWS region; then kube2iam started to work.
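For anyone landing here, a rough sketch of what that workaround could look like on the kube2iam DaemonSet container; the image tag, other args, env var name, and region value are assumptions for illustration, not confirmed in this thread:

```yaml
# Excerpt of a kube2iam DaemonSet container spec with the workaround applied.
containers:
  - name: kube2iam
    image: jtblin/kube2iam:0.11.2       # placeholder tag
    args:
      - --host-interface=eni+
      - --iptables=true
      - --use-regional-sts-endpoint     # flag added as the workaround
    env:
      - name: AWS_DEFAULT_REGION        # the "default aws region" the commenters set
        value: us-east-1                # placeholder; use your cluster's region
```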
I want to confirm this issue. It looks like the 0.11.2 image tag on Docker Hub does not match the 0.11.2 git tag (it matches origin/release-0.11.2 instead), so the 0.11.2 container image contains some significant changes added after the 0.11.2 git tag ;-)
We had a cluster set to use the image tag latest, and when we attempted to add a new node group we got this error:
{"level":"fatal","error":"operation error RDS: DescribeDBClusters, failed to resolve service endpoint, an AWS region is required, but was not found","time":"2022-12-22T02:00:30Z","message":"error initializing application"}
Upon doing a rollout restart on the DaemonSet, the issue appeared on our older node group. We tested this on another cluster which was running docker.io/jtblin/kube2iam@sha256:2bcf95c937b0b5149ffe518e087de811e365badeea2a70b094e84b74ae156f33, and it broke upon upgrading the image in the same fashion.
After a bit of thrashing around, we added --use-regional-sts-endpoint to the command line and set the default AWS region.