[EKS] [BAD-DECISION]: EKS Pod Identity agent daemonset mapped to node-port 80 #10
Comments
This design will also break Fargate-only clusters: terraform-aws-modules/terraform-aws-eks#2850 (comment)
This also breaks when enabling the addon in a cluster running Project Contour's Envoy proxy (it uses the same ports in hostNetwork). Has anyone found a workaround for this issue, or is there any news?
This also breaks our Traefik daemonset installation, as we install Traefik with hostPort 80 and 443.
Does eks-pod-identity-agent really require listening on port 80? What exactly is this port used for? If another port were configured, would everything still work?
https://docs.aws.amazon.com/eks/latest/userguide/pod-identities.html#pod-id-considerations
That's the port that is actually being hit when pods request credentials.
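For context (an addition, not part of the comment above): per the AWS docs linked here, the Pod Identity webhook injects a credentials URI into containers that points at the agent's link-local address on port 80, which is why that port has to be reachable on every node. A quick way to see this on a pod that already uses Pod Identity, as a sketch with placeholder names:

```sh
# Sketch: inspect the environment the Pod Identity webhook injects.
# <your-namespace> and <your-pod> are placeholders.
kubectl exec -n <your-namespace> <your-pod> -- env | grep AWS_CONTAINER
# Per the AWS documentation, the output should include something like:
#   AWS_CONTAINER_CREDENTIALS_FULL_URI=http://169.254.170.23/v1/credentials
#   AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE=/var/run/secrets/pods.eks.amazonaws.com/serviceaccount/eks-pod-identity-token
```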
With the current config, yes. They could change it to target a different port, but apparently it hasn't been important enough to work on yet. 🙄
+1 for being able to change the listening port at deployment time for the pod-identity-agent daemonset. This directly conflicts with configuring an ingress-controller pod to listen on port 80 on the same node and causes extremely volatile behavior with autoscalers like Karpenter.
I had this same behavior. The weird thing is that in my cluster only one pod is unstable. When I look at all the hostPorts in the entire cluster I see that only the agents have an explicit port 80 assigned. I must be missing something crucial, but I am very curious why it does work for some of the pods 🤔
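(Illustrative addition, not from the comment above.) One way to list every hostPort claimed across the cluster and confirm which pods declare port 80:

```sh
# Sketch: print namespace, pod name, and any declared hostPorts.
# For hostNetwork pods the API server defaults hostPort to containerPort,
# so declared agent ports should show up here as well.
kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.containers[*].ports[*].hostPort}{"\n"}{end}' \
  | awk -F'\t' '$3 != ""'
```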
Update: apparently in this setup a certain crucial pod was assigned hostPort 80, so only one node could not have Pod Identity enabled. Not sure if or how I can work around it, but due to time pressure we cannot move forward with this.
Specifying port 80 as a hostPort seems to be incorrect.
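To make the failure mode concrete (an illustrative addition, not from the comment above): once the agent claims port 80 on a node, any other pod requesting hostPort 80 becomes unschedulable there, and the scheduler reports this in the pod's events. The pod name and namespace below are placeholders:

```sh
# Sketch: inspect scheduling events for a pending ingress pod
# ("haproxy-ingress-xxxxx" / "ingress" are placeholder names).
kubectl describe pod haproxy-ingress-xxxxx -n ingress | grep -A5 Events:
# Typically shows something like:
#   Warning  FailedScheduling  ...  0/3 nodes are available:
#   3 node(s) didn't have free ports for the requested pod ports.
```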
I've found a simple workaround for this situation:
The managed addon doesn't support changing the port. For self-managed installs, "80" is currently hard-coded, for example in the helm chart. Changing the port number is only the first half; the second half is that the pod identity webhook must also inject the correct value.
Copied over from aws/containers-roadmap#2356
Tell us about your request
What do you want us to build?
Which service(s) is this request for?
EKS
Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
What outcome are you trying to achieve, ultimately, and why is it hard/impossible to do right now? What is the impact of not having this problem solved? The more details you can provide, the better we'll be able to understand and solve the problem.
We tried to install the eks-pod-identity-agent addon so that we could set the auth config to allow both options.
The addon installs as a daemonset with hostNetwork set to true, pod permissions that map it onto the node, and a default port of 80.
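For readers who want to verify this on their own cluster, a hedged sketch (the DaemonSet name and namespace below are assumed to match what the managed addon creates, but may differ):

```sh
# Sketch: inspect the agent DaemonSet the managed addon installs
# (name/namespace assumed: eks-pod-identity-agent in kube-system).
kubectl -n kube-system get daemonset eks-pod-identity-agent -o yaml \
  | grep -E 'hostNetwork|containerPort|hostPort'
```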
The instant the addon started to install, all of our HAProxy ingress pods were evicted so that the identity agent could bind port 80.
I'd love to know the rationale behind mapping the node port to what is literally the default HTTP port, and then not documenting how to change it to avoid collisions. Across all the documentation that mentions it, the only warning is here: https://docs.aws.amazon.com/eks/latest/userguide/pod-identities.html#pod-id-considerations, and it's a note rather than anything prominent. The majority of links go straight to https://docs.aws.amazon.com/eks/latest/userguide/pod-id-agent-setup.html, which doesn't mention it at all.
Are you currently working around this issue?
How are you currently solving this problem?
Uninstalled the Addon