ec2-metadata-service errors in up to date AWS EKS cluster using Pod Identity #6667
Hi @shaftoe,

Your comparison between the CLI and the SDK is not like-for-like. In the CLI you are not specifying any particular credential source, so the CLI's default credential chain resolves your credentials for you. In the SDK you are using a specific client that is EC2 IMDS-specific. I don't think that functionality extends to the container metadata service, which is a different service (the IMDS endpoint and the container metadata endpoint have different IP addresses).

This raises the question: what are you trying to do? If you are just trying to use the SDK on an EKS pod, you don't need any of this. The default credential chain will fetch credentials from the container metadata endpoint automatically if it is correctly configured. If your pod is injected with the relevant env variables at start time, the SDK will hook into those and make the request to the container metadata service on your behalf. See the SDK docs for more info.

Thanks,
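For context, the env-var branches of that default chain can be sketched roughly like this. This is a simplified illustration, not the SDK's actual code (the real chain in `@aws-sdk/credential-provider-node` also checks shared config files, SSO, process credentials, etc.), and the function name is mine:

```javascript
// Simplified sketch of the env-var checks the default credential chain
// performs, in roughly the order it performs them. Illustrative only.
function resolveCredentialSource(env) {
  if (env.AWS_ACCESS_KEY_ID && env.AWS_SECRET_ACCESS_KEY) {
    return "env"; // static credentials from the environment
  }
  if (env.AWS_WEB_IDENTITY_TOKEN_FILE && env.AWS_ROLE_ARN) {
    return "web-identity"; // IRSA: STS AssumeRoleWithWebIdentity
  }
  if (
    env.AWS_CONTAINER_CREDENTIALS_FULL_URI ||
    env.AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
  ) {
    return "container"; // EKS Pod Identity / ECS container endpoint
  }
  return "imds"; // last resort: EC2 instance metadata service
}
```

So a pod that got the Pod Identity env vars injected should resolve credentials from the container endpoint without any explicit configuration in application code.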
Thanks for the detailed explanation @RanVaknin.
I am (or rather, the application I'm trying to fix is) trying to "get AnnouncedIp from the EC2 metadata API" (as its inline comments put it), i.e. to retrieve the instance's public IP. I suppose at this point the question is: what's the right way to do that in this setup?
Hi @shaftoe,

Thanks for the clarification. Can you ssh into your pod and log all the available env variables you have there, and do the same for the previous working cluster's pod, to see if there are any discrepancies between the two?

Also, I haven't tested this, but maybe this would work?

```js
const metadataService = new MetadataService({
  endpoint: "http://169.254.170.2",
  disableFetchToken: true,
});
```

Thanks,
Of course, thanks a ton for the quick help. So, env vars (which don't seem to contain anything to be redacted):

```sh
root@nodetest:/# export
declare -x AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE="/var/run/secrets/pods.eks.amazonaws.com/serviceaccount/eks-pod-identity-token"
declare -x AWS_CONTAINER_CREDENTIALS_FULL_URI="http://169.254.170.23/v1/credentials"
declare -x AWS_DEFAULT_REGION="us-west-2"
declare -x AWS_REGION="us-west-2"
declare -x AWS_STS_REGIONAL_ENDPOINTS="regional"
declare -x HOME="/root"
declare -x HOSTNAME="nodetest"
declare -x KUBERNETES_PORT="tcp://10.31.0.1:443"
declare -x KUBERNETES_PORT_443_TCP="tcp://10.31.0.1:443"
declare -x KUBERNETES_PORT_443_TCP_ADDR="10.31.0.1"
declare -x KUBERNETES_PORT_443_TCP_PORT="443"
declare -x KUBERNETES_PORT_443_TCP_PROTO="tcp"
declare -x KUBERNETES_SERVICE_HOST="10.31.0.1"
declare -x KUBERNETES_SERVICE_PORT="443"
declare -x KUBERNETES_SERVICE_PORT_HTTPS="443"
declare -x NODE_VERSION="18.20.4"
declare -x OLDPWD="/"
declare -x PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
declare -x PWD="/tmp"
declare -x SHLVL="1"
declare -x TERM="xterm"
declare -x YARN_VERSION="1.22.19"
```

I've also tested the code change suggestion, but it seems to hang (there is no short timeout; I waited for a minute or so).
Old pod env vars differ slightly (redacted):

```sh
declare -x AWS_DEFAULT_REGION="us-west-2"
declare -x AWS_REGION="us-west-2"
declare -x AWS_ROLE_ARN="arn:aws:iam::xxxxxx:role/eks-main-dev-app-xxxxxxx"
declare -x AWS_STS_REGIONAL_ENDPOINTS="regional"
declare -x AWS_WEB_IDENTITY_TOKEN_FILE="/var/run/secrets/eks.amazonaws.com/serviceaccount/token"
declare -x HOME="/root"
declare -x HOSTNAME="ip-xxx.us-west-2.compute.internal"
declare -x HTTP_LISTEN_PORT="4443"
declare -x INTERACTIVE="0"
declare -x KUBERNETES_PORT="tcp://10.31.0.1:443"
declare -x KUBERNETES_PORT_443_TCP="tcp://10.31.0.1:443"
declare -x KUBERNETES_PORT_443_TCP_ADDR="10.31.0.1"
declare -x KUBERNETES_PORT_443_TCP_PORT="443"
declare -x KUBERNETES_PORT_443_TCP_PROTO="tcp"
declare -x KUBERNETES_SERVICE_HOST="10.31.0.1"
declare -x KUBERNETES_SERVICE_PORT="443"
declare -x KUBERNETES_SERVICE_PORT_HTTPS="443"
declare -x NODE_VERSION="18.20.4"
declare -x OLDPWD
declare -x PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
declare -x PWD="/app"
declare -x SHLVL="1"
declare -x TERM="xterm"
declare -x TLS_CERT_SECRET_ID="platocorp.com"
declare -x YARN_VERSION="1.22.19"
```
Hi @shaftoe,

Thanks for the info. The main difference I see between the two clusters is that your older cluster is using IRSA, which is the newer, more secure way of authenticating with EKS. Admittedly, I'm light-years away from being an EKS expert, and my knowledge is really based on debugging these types of issues with customers, so please bear with me while I try to understand your setup.
That is interesting. Based on this issue, it might resolve if you add a trailing slash.

If you are just trying to hit the container metadata endpoint, you shouldn't need to use the SDK. In theory you can just ssh into your pod and make a curl request to the endpoint to get that metadata. If it were me debugging my own environment, I would just try all of the following and see if one of them sticks:

```sh
# Basic endpoint probing
curl http://169.254.170.2/
curl http://169.254.170.23/

# IMDSv2 endpoints (EC2 metadata service)
curl http://169.254.169.254/latest/meta-data/
curl http://169.254.169.254/latest/meta-data/public-ipv4
TOKEN=$(curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/

# Pod Identity endpoints
curl http://169.254.170.23/v1/credentials
TOKEN=$(cat $AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE)
curl -H "Authorization: $TOKEN" http://169.254.170.23/v1/credentials
```

Thanks again,
This is actually funny: we're currently setting up a new EKS cluster following all the recommendations we found, and EKS Pod Identity seems to be "the new way" (so supposedly "the correct way" too, right?) of interacting with IAM; see the announcement blog post if you're curious.
That fails with a 301, but

```sh
TOKEN=$(cat $AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE)
curl -H "Authorization: $TOKEN" http://169.254.170.23/v1/credentials
```

works and shows the tokens as expected. I agree that I don't need to use the SDK if all that's needed is to parse some HTTP response (probably in JSON format); the question is where to find the documentation for the API exposed by http://169.254.170.23/, which I haven't found so far.
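If the response is parsed by hand, the payload appears to follow the container credential provider shape (PascalCase `AccessKeyId` / `SecretAccessKey` / `Token` / `Expiration`; worth verifying against the actual response body). A sketch of mapping it to the camelCase shape JS SDK clients accept:

```javascript
// Sketch: convert the container-endpoint JSON into the { accessKeyId,
// secretAccessKey, sessionToken, expiration } shape SDK clients accept.
// Field names are an assumption based on the container credentials format.
function parseContainerCredentials(body) {
  const data = JSON.parse(body);
  return {
    accessKeyId: data.AccessKeyId,
    secretAccessKey: data.SecretAccessKey,
    sessionToken: data.Token,
    expiration: data.Expiration ? new Date(data.Expiration) : undefined,
  };
}
```

The sample payload in any test of this should of course use placeholder values, never real credentials.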
Hey @shaftoe,

Thanks for that! I guess I'm playing catch-up with EKS auth. I read through the EKS Pod Identity docs and I'm not seeing anything about a metadata endpoint; all I see are clarifications about the credentials endpoint, which you've confirmed is working. I'll have to look a little deeper into this, perhaps set up my own cluster with Pod Identity and reach out to the EKS team internally for clarification. This might take some time, so please hang tight while I try to find out more.

Thanks again,
Hang on, it's me who has to thank you for the help so far! Take your time; I'll check email and answer ASAP if you have any related questions. Enjoy your weekend!
Checkboxes for prior research
Describe the bug
Using the latest version of https://www.npmjs.com/package/@aws-sdk/ec2-metadata-service does not seem to work out of the box with Node.js v18 in an AWS EKS Kubernetes cluster, running in a pod with an associated service account and a valid policy attached.
Regression Issue
SDK version number
@aws-sdk/[email protected]
Which JavaScript Runtime is this issue in?
Node.js
Details of the browser/Node.js/ReactNative version
node v18.20.4
Reproduction Steps
Trying to get metadata info via the JS module:
Observed Behavior
Error when trying to fetch metadata
Expected Behavior
Metadata fetched correctly
Possible Solution
No response
Additional Information/Context
No response