Support Private/Public Endpoints #649
Thanks @christopherhein! Do you think we should consider enabling private access by default? Seems like we could actually do it quite safely.
I don't see any cons to enabling both by default. It is worth tracking aws/containers-roadmap#22 for the private-only situation.
@mindfulmonk @errordeveloper I agree with this: have both turned on by default, with flags/config to turn them off.
Not enabling private access seems odd to me: why would you want worker nodes to have to loop through the Internet to get to their own control plane? When they added this feature, my first thought was 'Wha? They weren't doing this already?' 😄
Although CF does not have these settings yet, you can change the settings to enable private access on existing clusters from the AWS API/CLI.
So we could apply the setting after CF creation completes. Does that mess up CF? Given it is unaware of the settings, would it keep reverting the setting on updates, or just not touch it? I would wait, but sometimes the AWS CF team takes actual years to add the simplest things, or just never does.
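For reference, a minimal sketch of applying that setting out of band with the AWS CLI (the cluster name is a placeholder):

```sh
# Enable private access on an existing cluster while keeping public access on.
# CloudFormation is not involved, so the stack remains unaware of the change.
aws eks update-cluster-config \
  --name my-cluster \
  --resources-vpc-config endpointPublicAccess=true,endpointPrivateAccess=true

# The update runs asynchronously; check its status with the returned update id.
aws eks describe-update \
  --name my-cluster \
  --update-id <update-id-from-previous-output>
```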
Any idea approximately when the `endpointPublicAccess=false` feature will be available? Is there a way to support this change to raise its priority?
Until now we were waiting for CloudFormation support, but due to many requests we are going to add this by calling the EKS API directly. @BernhardLenz in the future, please be sure to reach out on Slack and ping me or @kalbir there.
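Since eksctl is written in Go, here is a minimal sketch of what such a direct call looks like with the AWS SDK; this is illustrative only, not the actual eksctl code, and the cluster name is a placeholder:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/eks"
)

func main() {
	// A shared session picks up region and credentials from the environment.
	svc := eks.New(session.Must(session.NewSession()))

	// Turn both endpoint-access flags on, bypassing CloudFormation entirely.
	out, err := svc.UpdateClusterConfig(&eks.UpdateClusterConfigInput{
		Name: aws.String("my-cluster"), // placeholder cluster name
		ResourcesVpcConfig: &eks.VpcConfigRequest{
			EndpointPrivateAccess: aws.Bool(true),
			EndpointPublicAccess:  aws.Bool(true),
		},
	})
	if err != nil {
		panic(err)
	}
	// The call is asynchronous; EKS returns an update id that can be polled.
	fmt.Println("update id:", aws.StringValue(out.Update.Id))
}
```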
Some considerations must be worked through to implement this functionality, mainly around how the various kinds of clients (worker nodes in the cluster VPC, and anything outside it) would reach a private-only API server endpoint.
It's possible we could account for the worker nodes' VPC access to the API server, but the other options would be out of scope, as we wouldn't know from whence the traffic might come. We might be able to create a README for these topics (which could be a short document with just pointers to the Amazon documentation).
@D3nn those are valid considerations for the most complex situation, PublicAccess=false / PrivateAccess=true, where there is no public access and end users need to route to, resolve DNS for, and be given access to the VPC-internal endpoint. The initial proposal here is to enable the much simpler PublicAccess=true / PrivateAccess=true by default, which is really what the default should be IMHO. This is where nodes access the control plane API internally to the VPC, and end users access the public API/DNS. This avoids the current undesirable default (PublicAccess=true / PrivateAccess=false), where cluster-internal communication is looped out to the Internet and back. In the long run, all three functioning combinations could be supported:

- PublicAccess=true / PrivateAccess=true (nodes use the internal endpoint, operators use the public one)
- PublicAccess=true / PrivateAccess=false (the current default)
- PublicAccess=false / PrivateAccess=true (fully private clusters)
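For illustration, this is roughly how the setting could surface in a ClusterConfig file; the `vpc.clusterEndpoints` field names below are an assumption modeled on eksctl's config conventions, and the name/region are placeholders:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: my-cluster   # placeholder
  region: us-west-2  # placeholder

vpc:
  clusterEndpoints:
    # Proposed default: nodes reach the API server inside the VPC,
    # while operators keep using the public endpoint.
    publicAccess: true
    privateAccess: true
```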
@whereisaaron Just a small correction to the behavior when PublicAccess=true / PrivateAccess=false. According to Amazon's documentation for the default state:

> Kubernetes API requests that originate from within your cluster's VPC (such as node to control plane communication) leave the VPC but not Amazon's network.
I don't disagree that all three options need to be supported. My point is just that, in the public=false / private=true case, we either need to make the additional changes so that at least worker-node-to-API-server communication within the VPC works, or we should refer users to Amazon's documentation on how to make this communication functional.
@D3nn yeah I read that too, but I remain unimpressed 😄 I read it and translated it to ‘yeah we know this is far from ideal and we’re sensitive about it too’ 🤣
Hey, sorry to bother, but are there any updates on this? 😅
Working on this currently. Need to add some tests and make sure all the results are as expected.
Eagerly waiting for this :) Currently my best bet is to just update this setting using the EKS Console UI, is that right? |
The change merged in #1149 behaves differently for the create and update cases. In the case of update, it allows setting "Private Only", but in create it does not (see Update vs Create). It seems to be a valid use case (in fact, it is for us) to allow creation of a private-only cluster; we only manage the cluster from a bastion host within the VPC. Was this simply an oversight in the newly merged PR?
It's documented that way @morinap, so I don't think it was an oversight: https://github.com/weaveworks/eksctl/blob/master/site/content/usage/06-vpc-networking.md#managing-access-to-the-kubernetes-api-server-endpoints
Do you agree that enabling only the private endpoint during cluster creation "prevents eksctl from being able to join the worker nodes to the cluster"?
@atheiman Thanks for pointing that out, I had missed that in the documentation.
I'm not sure? Let me build and test this on my own to verify. I don't see at a glance why this wouldn't work if I'm actually running eksctl from a host within the same VPC as the EKS cluster.
Ahh, I see now. Even from a bastion host within the VPC, a rule needs to be added to the control plane security group to allow that host access when only private access is enabled. I'll stew on this and see if I can come up with an acceptable change.
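For anyone else who hits this, a sketch of the kind of rule that is needed; both security group IDs are placeholders for your actual control plane and bastion groups:

```sh
# Allow the bastion's security group to reach the private API endpoint (HTTPS).
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 443 \
  --source-group sg-0fedcba9876543210
```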
@morinap @atheiman The current workaround to create a private cluster is to first create the cluster with public access using eksctl and then use the "aws eks update-cluster-config" CLI to switch the endpoint to private. There is not much value in changing the second step to use eksctl instead of the AWS CLI; the value-add would be being able to create a private cluster in one step...
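Concretely, the two-step workaround looks roughly like this (cluster name and region are placeholders):

```sh
# Step 1: create the cluster with the default public endpoint, so eksctl
# can reach the API server and join the worker nodes.
eksctl create cluster --name my-cluster --region us-west-2

# Step 2: flip the endpoint to private-only out of band with the AWS CLI.
aws eks update-cluster-config \
  --name my-cluster \
  --resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true
```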
@BernhardLenz I've just opened a PR at #1434 that takes one simple approach to resolve this; this approach has worked successfully for me today. I was able to create a new cluster with private-only access from a bastion host in one command using my forked code. |
❗️ Currently this is blocked on CloudFormation supporting the Params - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-eks-cluster.html
Announcement - https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-eks-introduces-kubernetes-api-server-endpoint-access-cont/
Why do you want this feature?
This would add support for enabling private and/or public API server endpoints, allowing you to gate access to your clusters via the VPC. This lets you isolate the API server, limiting its exposure.
What feature/behavior/change do you want?
Add support for `endpointPrivateAccess` and `endpointPublicAccess` on the `ResourcesVpcConfig`.

https://docs.aws.amazon.com/eks/latest/APIReference/API_VpcConfigRequest.html#AmazonEKS-Type-VpcConfigRequest-endpointPrivateAccess
https://docs.aws.amazon.com/eks/latest/APIReference/API_VpcConfigRequest.html#AmazonEKS-Type-VpcConfigRequest-endpointPublicAccess
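For reference, a minimal sketch of how these fields would sit in a `CreateCluster` request body (names, ARNs, and subnet IDs are placeholders; other fields omitted):

```json
{
  "name": "my-cluster",
  "roleArn": "arn:aws:iam::111122223333:role/eks-service-role",
  "resourcesVpcConfig": {
    "subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],
    "endpointPrivateAccess": true,
    "endpointPublicAccess": false
  }
}
```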