[aws-eks] Can we remove the use of {cluster resource,kubectl} provider completely if the situation is changed? #11403
Comments
@foriequal0 Your analysis is correct.
It might be; we would need to dive deeper into this. Right now it doesn't look like CloudFormation supports importing EKS clusters. Regardless, the KubectlProvider will still remain, because it serves not just aws-auth but also applying general manifests. We would also need to consider whether or not it's worth the effort. Do you have specific considerations, or are you asking for the sake of maintainability of the EKS construct?
Oh, I thought CloudFormation supported importing EKS clusters. I didn't know that. Thanks for pointing it out.
I understand. As far as maintainability goes, while I think that using the CFN resource would provide some relief, at this point I'm not sure the migration is worth it. As for refactoring, this is a known discussion point (which I see you also participated in :)), but I don't see how using CFN resources helps in that regard, apart from reducing the number of resources/moving parts; it doesn't change anything inherently.
I understand that. Thank you for answering the question :)
❓ General Issue
This is a summary of my understanding of, and questions about, why we don't use CfnCluster directly, and whether it will stay this way forever.
Correct me if I'm wrong. Thanks.
If I understand correctly, the following issues are the major reasons why we have the {cluster resource, kubectl} providers.
So we use a cluster resource provider to delegate the creation/management of an EKS cluster.
We have 2 roles so far:
- `adminRole` (== `kubectlRole`): Creates and manages the EKS cluster, and issues kubectl commands. Used by CloudFormation for automation. Cannot be changed.
- `mastersRole`: For users; users assume this role. Can be changed.

Quickstart Amazon EKS takes a similar approach, but with a custom `AWSQS::EKS::Cluster` type: https://github.com/aws-quickstart/quickstart-amazon-eks/ (I feel like this is reinventing wheels around the limitations rather than fixing the core problem. It looks like it would support import, since it implements an update handler, but it is more complex. Why isn't it public/upstream?)
The Question
Would it be possible if the situation changed?
- What if we had an `AdminRole` option? What if there were an EKS API to manage IAM permissions on a cluster?
- Even without that fix, what if `new eks.Cluster()` spawned a nested stack that has `cloudFormationExecutionRoleArn: adminRole`?
- Or could we put `AWS::EKS::Cluster` directly on our stack, and delegate only unsupported options such as `endpointPrivateAccess`?
- If all the restrictions were gone, so that all the providers became obsolete, how would we migrate after that? Would it simply be a matter of setting the deletion policy to `Retain` and importing the resources into the main stack?

Environment
Other information