Community Note
Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
Please do not leave "+1" or "me too" comments; they generate extra noise for issue followers and do not help prioritize the request.
If you are interested in working on this issue or have submitted a pull request, please leave a comment.
Tell us about your request
What do you want us to build?
Please separate the aws-auth ConfigMap for worker nodes from the one for other identities (IAM roles, users, etc.).
Which service(s) is this request for?
EKS (not sure about others)
Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
What outcome are you trying to achieve, ultimately, and why is it hard/impossible to do right now? What is the impact of not having this problem solved? The more details you can provide, the better we'll be able to understand and solve the problem.
Mostly we edit aws-auth to grant access to users or to IAM roles for people (not the worker node IAM role).
When a single aws-auth ConfigMap holds both the worker node mapping and everything else (IAM roles, users, etc.), a mistake while editing it breaks the cluster: the worker nodes go into a NotReady state, which takes the hosted applications down with them.
With two separate ConfigMaps, one for worker nodes and one for everything else, a mistake while modifying the latter would only affect other identities' access; the worker nodes would keep serving and the applications would stay available.
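For context, today both concerns live in one object. A minimal sketch of a typical aws-auth ConfigMap follows; the account ID and ARN names are placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  # Node mapping: breaking this entry (a bad indent, a mangled ARN)
  # de-registers the worker nodes and they go NotReady.
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/eks-node-instance-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  # User access: this is the part that gets edited most often,
  # yet it sits in the same object as the node mapping above.
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/alice
      username: alice
      groups:
        - system:masters
```

A typo anywhere in this one ConfigMap can invalidate the node mapping along with everything else.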
Are you currently working around this issue?
How are you currently solving this problem?
Additional context
Anything else we should know?
Attachments
If you think you might have additional information that you'd like to include via an attachment, please do - we'll take a look. (Remember to remove any personally-identifiable information.)
I recently had some troubles with the aws-auth configmap.
While this proposal sounds like a good baseline, I would propose another possible solution:
Instead of relying on a single aws-auth ConfigMap, all ConfigMaps with a specific label should be used, so that as an end user I can split them however I see fit.
E.g., one for admin roles, one for team roles, and so on.
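A minimal sketch of what that could look like, assuming EKS merged every kube-system ConfigMap carrying an agreed-upon label; the label key eks.amazonaws.com/auth-map and the ConfigMap names are invented for illustration, not an existing API:

```yaml
# Hypothetical: EKS would merge all kube-system ConfigMaps
# carrying this label into the effective auth configuration.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth-nodes          # node mapping, rarely touched
  namespace: kube-system
  labels:
    eks.amazonaws.com/auth-map: "true"   # invented label key
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/eks-node-instance-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth-team-a         # team access, edited frequently
  namespace: kube-system
  labels:
    eks.amazonaws.com/auth-map: "true"
data:
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/alice
      username: alice
      groups:
        - system:masters
```

A typo in aws-auth-team-a would then only lock out that team's access, while the node mapping in aws-auth-nodes stays intact and the cluster keeps running.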