Specify user on export kubecfg #9280
Conversation
cmd/kops/export_kubecfg.go
Outdated
@@ -71,6 +73,8 @@ func NewCmdExportKubecfg(f *util.Factory, out io.Writer) *cobra.Command {

 	cmd.Flags().StringVar(&options.KubeConfigPath, "kubeconfig", options.KubeConfigPath, "The location of the kubeconfig file to create.")
+	cmd.Flags().BoolVar(&options.all, "all", options.all, "export all clusters from the kops state store")
+	cmd.Flags().BoolVar(&options.admin, "admin", options.admin, "also export the admin user")
Suggested change:
-	cmd.Flags().BoolVar(&options.admin, "admin", options.admin, "also export the admin user")
+	cmd.Flags().BoolVar(&options.admin, "admin", options.admin, "export the admin user")

Suggested change:
-	cmd.Flags().BoolVar(&options.admin, "admin", options.admin, "also export the admin user")
+	cmd.Flags().BoolVar(&options.admin, "admin", options.admin, "export the admin user and use in the cluster context")
maybe one of these? I'll keep thinking of how this could be worded...
"include the admin user and credentials"
Thanks for looking at this @rifelpet. The current state is pretty much a brain dump of how I want this to work. It is in no way close to merge :) I just wanted some comments to confirm that more people agree with this approach. The final solution will probably need a bit of refactoring etc, so I didn't want to do too much up front :) Should have said so initially. Apologies.
cmd/kops/export_kubecfg.go
Outdated
@@ -71,6 +73,8 @@ func NewCmdExportKubecfg(f *util.Factory, out io.Writer) *cobra.Command {

 	cmd.Flags().StringVar(&options.KubeConfigPath, "kubeconfig", options.KubeConfigPath, "The location of the kubeconfig file to create.")
+	cmd.Flags().BoolVar(&options.all, "all", options.all, "export all clusters from the kops state store")
+	cmd.Flags().BoolVar(&options.admin, "admin", options.admin, "also export the admin user")
"include the admin user and credentials"
pkg/kubeconfig/create_kubecfg.go
Outdated
	}
	if cert != nil {
		b.CACert, err = cert.AsBytes()
		if admin {
Why omit the certificate-authority-data when admin is false?
Yeah. That wasn't intentional. Fixed.
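A minimal sketch of the fixed behavior: the CA certificate is exported unconditionally so the kubeconfig can verify the API server, while the client credential is gated on the admin flag. The type and function names here are illustrative stand-ins, not the actual kops code.

```go
package main

import "fmt"

// KubeconfigBuilder is a simplified stand-in for the kops builder type.
type KubeconfigBuilder struct {
	CACert     []byte
	ClientCert []byte
	ClientKey  []byte
}

// fillCerts always sets the CA cert (certificate-authority-data is needed
// to trust the API server regardless of which user is exported), and only
// sets the client credential when the admin user is being exported.
func fillCerts(b *KubeconfigBuilder, caCert, clientCert, clientKey []byte, admin bool) {
	b.CACert = caCert
	if admin {
		b.ClientCert = clientCert
		b.ClientKey = clientKey
	}
}

func main() {
	b := &KubeconfigBuilder{}
	fillCerts(b, []byte("ca"), []byte("cert"), []byte("key"), false)
	fmt.Println(len(b.CACert) > 0, b.ClientCert == nil)
}
```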
@@ -225,7 +225,7 @@ func TestBuildKubecfg(t *testing.T) {
 	}
 	for _, tt := range tests {
 		t.Run(tt.name, func(t *testing.T) {
-			got, err := BuildKubecfg(tt.args.cluster, tt.args.keyStore, tt.args.secretStore, tt.args.status, tt.args.configAccess)
+			got, err := BuildKubecfg(tt.args.cluster, tt.args.keyStore, tt.args.secretStore, tt.args.status, tt.args.configAccess, true)
Should add a test for false
And fixed (which caught the above-mentioned error)
pkg/kubeconfig/kubecfg_builder.go
Outdated
	if config.Clusters == nil {
		config.Clusters = make(map[string]*clientcmdapi.Cluster)
	}
	config.Clusters[b.Context] = cluster
}

{
	if b.KubeUser == "admin" {
Why only write out the client cert if there is a plaintext password?
It is a bit of a hack. I changed the check to something matching kops assumptions on user names.
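A sketch of what a name-based check could look like. The exact naming convention (a user named "admin" or with an "admin-" prefix) is an assumption here, not the actual kops code:

```go
package main

import (
	"fmt"
	"strings"
)

// isAdminUser is a hypothetical check keyed off the user name rather than
// off whether a plaintext password exists. The specific convention used
// here is illustrative only.
func isAdminUser(kubeUser string) bool {
	return kubeUser == "admin" || strings.HasPrefix(kubeUser, "admin-")
}

func main() {
	fmt.Println(isAdminUser("admin"))
	fmt.Println(isAdminUser("alice"))
}
```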
}

if c.CreateKubecfg && !c.admin && c.user == "" {
	return nil, fmt.Errorf("--create-kube-config requires that either --admin or --user is set")
Why?
To some extent code related. It is not straightforward in the current code to see if there is already a context for the cluster early in the update code. I didn't want to let the user run update to provision a new cluster, only for the command to fail at the very end when exporting the user and then complain about --admin or --user not being set.

So we assume that the user only adds --create-kube-config if they want to add the new context. That is also why I defaulted this flag to false rather than true now, since usually one only wants this when one runs update to create the cloud resources the first time.

The other use case I can think of is when one rotates certs, and at least if one uses the admin user, one would need to add --admin anyway.
I believe it makes sense to export a context for the cluster without either adding admin credentials or setting the context to a precreated singular user.
Our authentication setup has per-cluster tokens which are obtained by an exec hook. That means a single user shared across multiple clusters would not work.
I am a bit concerned that if one sets --create-kube-config but does not catch that one probably should also set --admin or --user, one will get a broken config. So I do agree that there are use cases for not doing this; I am just wondering if it is worth it usability-wise.

For your use case, you have to remove the user from the context once, and then kops will at least leave the context alone and not keep re-adding the admin user, which is a good benefit on its own, I should think. I assume that typically you would only export the cluster config once per operator per cluster, so this isn't too much hassle.
Since --admin and --user imply --create-kube-config, it is unlikely that --create-kube-config would be explicitly set unless the invoker is doing something special.

Our wrapper around kops has two forms of export kubecfg: one which corresponds to --admin, and one which adds a per-cluster user with our local exec hook added. Since the admin credential is so powerful, I would prefer not to have that second case extract the credential, write it to disk, and then try to remove it.

I suppose I could let the current PR through and then address this case in a followup PR. I am planning on reducing the default lifetime of the admin credential to 18 hours.
I think that would be helpful. You have a much better idea of how this should work for your use case than I do.

I do agree that passing only --create-kube-config would be something special. If you log a warning about creating a context without a user, I think that should be fine. It would at least give the user a hint about why the credentials aren't working.
if c.user != "" && !c.CreateKubecfg {
	return nil, fmt.Errorf("--user requires that --create-kube-config is set")
}
Perhaps make either --admin or --user imply --create-kube-config?
That is a good idea! I'll give that a go.
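The suggested implication could be resolved in one place before validation, along these lines (a sketch with hypothetical names, not the actual kops code):

```go
package main

import "fmt"

// resolveCreateKubecfg applies the implication discussed above: setting
// --admin or --user turns on --create-kube-config automatically, so users
// only need to remember the flag that fits their use case.
func resolveCreateKubecfg(createKubecfg, admin bool, user string) bool {
	if admin || user != "" {
		return true
	}
	return createKubecfg
}

func main() {
	fmt.Println(resolveCreateKubecfg(false, true, ""))       // --admin alone
	fmt.Println(resolveCreateKubecfg(false, false, "alice")) // --user alone
	fmt.Println(resolveCreateKubecfg(false, false, ""))      // neither flag
}
```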
The logging probably isn't necessary.
I think they could be helpful, and I guess they are common for CLI tools. But we also document that it is implied. If you think they should go, I can remove them.
I think most users aren't going to keep --create-kube-config in mind. They're going to know either --admin or --user, whichever fits their use case. So I think this would just be log chatter.
The error from the AWS job is expected since it is not setting the … I am strongly for requiring the …

kubetest currently only adds args to …

If you agree with me, I can look at a PR to kubetest to just add the admin flag.
hm it'd be great if we could make the transition a bit smoother. I realize this is a breaking change regardless of how we implement it, but maybe we could do something like add the new cli flags in release-1.18 with defaults to the existing behavior, and then change the defaults in master. This way kubetest could test both master and release-1.18 kops builds. Thoughts?

We'd have to go further back than 1.18, wouldn't we? If we don't change kubetest, we'd have to backport noops of these flags as far back as we want to do any patch releases.

When kubernetes/test-infra#17890 goes in, this should pass.

/retest
cmd/kops/create_cluster.go
Outdated
@@ -1376,7 +1374,7 @@ func RunCreateCluster(ctx context.Context, f *util.Factory, out io.Writer, c *Cr
 	fmt.Fprintf(&sb, " * edit your master instance group: kops edit ig --name=%s %s\n", clusterName, masters[0].ObjectMeta.Name)
 }
 fmt.Fprintf(&sb, "\n")
-fmt.Fprintf(&sb, "Finally configure your cluster with: kops update cluster --name %s --yes\n", clusterName)
+fmt.Fprintf(&sb, "Finally configure your cluster with: kops update cluster --name %s --yes --admin --create-kube-config\n", clusterName)
Perhaps leave off the --create-kube-config since it's implied?
Force-pushed from a041557 to 31a19db.
Looks like tests are passing and all now! /assign @rifelpet
@olemarkus: GitHub didn't allow me to assign the following users: rifelpe. Note that only kubernetes members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

/assign @rifelpet
cmd/kops/update_cluster.go
Outdated
	conf.User = cluster.ObjectMeta.Name
} else {
	conf.User = c.user
}
I believe this logic should be pushed down into kubeconfig.BuildKubecfg().
Not sure why I did not do this earlier. Much better and the logic is now tested properly too.
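The pushed-down selection logic can be sketched like this (the function name and signature are illustrative, not the actual BuildKubecfg code): with --admin the context user is the cluster name, otherwise the --user override is used as-is, and an empty result means no user is attached to the context.

```go
package main

import "fmt"

// chooseKubeUser picks the kubeconfig context user. With admin set, kops
// uses the cluster name for the admin credential; otherwise the explicit
// --user override is returned verbatim (empty means no user).
func chooseKubeUser(admin bool, userOverride, clusterName string) string {
	if admin {
		return clusterName
	}
	return userOverride
}

func main() {
	fmt.Println(chooseKubeUser(true, "", "test.example.com"))
	fmt.Println(chooseKubeUser(false, "alice", "test.example.com"))
	fmt.Println(chooseKubeUser(false, "", "test.example.com") == "")
}
```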
cmd/kops/update_cluster.go
Outdated
if err != nil {
	return nil, err
}

if c.admin {
Suggested change:
-if c.admin {
+if c.user == "" {
Review comments on create_cluster.go and update_cluster.go still unaddressed.

/lgtm
…en exporting context
/lgtm

this lgtm as well but since it's a breaking change and rather user-facing, i'd prefer if justin gives the final approval. if it doesn't get merged before next office hours we can discuss it then. /lgtm
No objections during office hours |
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: olemarkus, rifelpet

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing …
Fixes #8017