Add --internal flag for export kubecfg that targets the internal dns name #9732
Conversation
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: rifelpet

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
/retest
```diff
@@ -45,6 +45,9 @@ var (
 	# export using a user already existing in the kubeconfig file
 	kops export kubecfg kubernetes-cluster.example.com --user my-oidc-user
+
+	# export using the internal DNS name which bypasses the cloud load balancer
+	kops export kubecfg kubernetes-cluster.example.com --internal
```
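For context, a quick sketch of what the new flag changes in the exported config. The `api.internal` name below follows the `api.internal.$clustername` A record kops creates; the exact server URLs are illustrative, not taken from this PR:

```bash
# Default export: the kubeconfig's server entry points at the public API
# endpoint, which sits behind the cloud load balancer.
kops export kubecfg kubernetes-cluster.example.com
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
# expected: https://api.kubernetes-cluster.example.com

# With --internal: the server entry uses the internal DNS name, which
# resolves straight to the master IP(s) and bypasses the load balancer.
kops export kubecfg kubernetes-cluster.example.com --internal
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
# expected: https://api.internal.kubernetes-cluster.example.com
```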
Maybe add a quick note here that you need an additional SG?
Or maybe we should add 443 by default when sslCertificateId is enabled and export "internal"?
I think that is a good idea! Non-internal endpoints don't work anyway with this setup, right?
The problem with modifying security groups is that `kops export kubecfg` is decoupled from where SG definitions are managed in `kops update cluster`. At `update cluster` time we don't know if the cluster operator will ever be running `kops export kubecfg --internal`. While we do generate the kubecfg in `update cluster --yes`, I suppose we could add the security group rule if the `--internal` flag is passed to `update cluster`, but that could lead to confusion. Users running `kops export kubecfg --internal` would need to know to include `--internal` in their `update cluster` commands as well. I don't know if we have precedent for making a "pair" of commands require similar flags like that.
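Until something automates this, the rule has to be created out of band. A hypothetical example with the AWS CLI; the security group ID and source CIDR are placeholders, not values from this PR:

```bash
# Allow clients to reach the apiserver on the masters directly,
# since --internal bypasses the API ELB. Placeholder IDs below.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 443 \
  --cidr 203.0.113.0/24
```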
Can we just add the port if sslCertificateId is enabled?
The export part can be explained in the docs or printed in the update cluster summary.
I think always adding the rule if `sslCertificateId` is present might be reasonable. It assumes that anyone using an ACM cert will want the ability to use client cert auth for API access; I'm failing to think of scenarios in which that wouldn't be the case. I haven't looked into the code, but I'm assuming we can use the same sources (CIDRs or SG IDs) we use on the API ELB for the instance 443 as well. I don't see a need to split that out into its own flag or diverge those settings.

One could go a step further and imply the `--internal` flag's behavior when exporting kubecfg if `sslCertificateId` and `--admin` are set. In fact, 1.19 might be the best time to do that, because the `--admin` flag is being added, so we could document this change in behavior alongside the new flag.
👍
I could see a site with an alternate authentication provider not wanting the ability to use client cert auth from outside the cluster. But I see little reason to restrict anything that can hit the API ELB from hitting the instance's port 443.

I agree that `sslCertificateId` and `--admin` should imply the `--internal` flag's behavior.
After more thought, creating the additional SG rules when `kops update cluster --internal` is run would be problematic for organizations in which multiple people run their kops commands manually. If someone were to forget to include `--internal`, then kops would revoke the SG rules.

It would be much preferable to add the SG rules based on fields in the ClusterSpec: either `sslCertificateId` being set, or an additional field that allows users to toggle the behavior. Which we choose depends on whether we think anyone using `sslCertificateId` would not want the SG rules added.

Regardless, I think adding `--internal` while also implying `--internal` when `sslCertificateId` and `--admin` are used is a good idea.
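To make the ClusterSpec idea concrete, this is roughly where the existing field lives. A sketch only: the field path matches cluster_spec.md, the certificate ARN is a placeholder, and any new toggle field discussed above would be hypothetical:

```bash
# Fragment of the cluster spec as seen via `kops edit cluster`;
# printed here with a heredoc, ARN is a placeholder.
cat <<'EOF'
spec:
  api:
    loadBalancer:
      type: Public
      sslCertificateId: arn:aws:acm:us-east-1:123456789012:certificate/example
EOF
```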
The specific scenario in which we're relying on Kubernetes API access with …
I think this is ready for review. It isn't the ideal solution for the ACM certificate issue, but the original ask was independent of that. We can go ahead and support this even with a better cert solution later on, at which point we can update the cluster_spec.md text.
/lgtm
/hold cancel
Kops creates an "api.internal.$clustername" DNS A record that points to the master IP(s). This adds a flag that will use that name and force the CA cert to be included.

This is a workaround for client certificate authentication not working on API ELBs with ACM certificates. The ELB has a TLS listener rather than TCP, so the client certificate is not passed through to the apiserver. Using --internal will bypass the API ELB so that the client certificate will be passed directly to the apiserver.

This also requires that the masters' security groups allow 443 access from the client, which this does not handle automatically. I suppose if sslCertificateId is set, kops could automatically open 443 on the masters to the same sources that the ELB's 443 allows, but I could see that being frowned upon for security reasons.
Fixes #800
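One way to observe the listener difference the description refers to (hostnames follow the examples above and are illustrative): the ELB's TLS listener terminates with the ACM certificate, while the internal name reaches the apiserver's own TLS endpoint.

```bash
# Via the ELB: TLS terminates at the load balancer, so the certificate
# presented is the ACM cert and the client cert never reaches the apiserver.
openssl s_client -connect api.kubernetes-cluster.example.com:443 </dev/null 2>/dev/null \
  | openssl x509 -noout -subject

# Via the internal name: the connection lands on a master, which presents
# the apiserver's certificate and can verify the client certificate itself.
openssl s_client -connect api.internal.kubernetes-cluster.example.com:443 </dev/null 2>/dev/null \
  | openssl x509 -noout -subject
```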
(wow)
/hold for comment