fix: create serviceaccount token for v1.24 clusters #9546
Conversation
Codecov Report
    @@            Coverage Diff             @@
    ##           master    #9546      +/-   ##
    ==========================================
    + Coverage   45.79%   45.89%   +0.10%
    ==========================================
      Files         222      222
      Lines       26377    26458      +81
    ==========================================
    + Hits        12079    12143      +64
    - Misses      12650    12658       +8
    - Partials     1648     1657       +9
Continue to review full report at Codecov.
Hey @danielhelfand, thanks for the awesome work.
422080a to 3e2b3c1 (Compare)
All the nitpicks. :-)
I think the only substantial things are:
- timeouts for requests
- using patch instead of update for the sa
417dd70 to 97d3061 (Compare)
util/clusterauth/clusterauth.go
Outdated

    if len(serviceAccount.Secrets) != 0 {
        for _, s := range serviceAccount.Secrets {
            existingSecret, err := clientset.CoreV1().Secrets(ns).Get(context.Background(), s.Name, metav1.GetOptions{})
How about an outside-the-loop context with a 30s timeout and an inside-the-loop context with a 10s timeout?
If for some absurd reason there are 100 secrets, the outer context will keep us from waiting forever.
If there's high latency, the inner context will make sure we give a nice amount of time to get the secret.
If there's high latency and 3+ secrets and the target secret isn't in the first 3... well that's just a bummer.
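A minimal sketch of what that could look like, using a hypothetical helper (getExistingTokenSecret) and the 30s/10s values proposed in the comment; this is not the exact code that was merged:

```go
package clusterauth

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// getExistingTokenSecret is a hypothetical helper illustrating the suggestion:
// an outer 30-second context bounds the whole loop over the service account's
// secrets, and an inner 10-second context bounds each individual Get.
func getExistingTokenSecret(clientset kubernetes.Interface, ns string, secretNames []string) error {
	outerCtx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	for _, name := range secretNames {
		innerCtx, innerCancel := context.WithTimeout(outerCtx, 10*time.Second)
		existingSecret, err := clientset.CoreV1().Secrets(ns).Get(innerCtx, name, metav1.GetOptions{})
		innerCancel()
		if err != nil {
			return err
		}
		// Inspect existingSecret here to decide whether it is the
		// service-account token secret we are looking for.
		_ = existingSecret
	}
	return nil
}
```

Because the inner context is derived from the outer one, no single Get can outlive the overall 30-second budget.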
util/clusterauth/clusterauth.go
Outdated
Type: corev1.SecretTypeServiceAccountToken, | ||
} | ||
|
||
secret, err = clientset.CoreV1().Secrets(ns).Create(context.Background(), secret, metav1.CreateOptions{}) |
How bout the ol' 10s. Maybe we need a constant. :-P
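For illustration only, one way the suggested constant could look, reusing the secret, clientset, and ns variables from the excerpt above (the constant name is hypothetical, and 10s is just the value floated in the comment):

```go
// Hypothetical constant name; not necessarily what was merged.
const clusterAuthRequestTimeout = 10 * time.Second

// Bound the single Create call instead of using context.Background().
ctx, cancel := context.WithTimeout(context.Background(), clusterAuthRequestTimeout)
defer cancel()
secret, err = clientset.CoreV1().Secrets(ns).Create(ctx, secret, metav1.CreateOptions{})
```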
e86cc95 to 88032a9 (Compare)
Signed-off-by: Daniel Helfand <[email protected]>
Signed-off-by: Daniel Helfand <[email protected]>
LGTM
Thank you Daniel!
Tested adding a 1.24 docker-desktop cluster from a 1.20 minikube cluster. lgtm. Thanks so much!
* fix: create serviceaccount token for v1.24 clusters
  Signed-off-by: Daniel Helfand <[email protected]>
* change create to get in err
  Signed-off-by: Daniel Helfand <[email protected]>
Cherry-picked onto 2.4.
Hi @crenshaw-dev, will this be backported to 2.3 too?
Good question... I'm not opposed to it, if anyone needs it. Want to ask in #argo-contributors on CNCF Slack?
Yeah sure, we'll need it
* fix: create serviceaccount token for v1.24 clusters
  Signed-off-by: Daniel Helfand <[email protected]>
* change create to get in err
  Signed-off-by: Daniel Helfand <[email protected]>
Cherry-picked onto release-2.3 for 2.3.7.
Closes #9422
This pull request allows users to add v1.24 Kubernetes clusters to be managed by Argo CD by creating a service account secret with a bearer token. In previous versions of Kubernetes, this secret was created automatically when a service account was created, but it was removed to discourage the use of long-lived tokens.
Two approaches were explored to address this:
1. Manually create a service account token secret and store the bearer token in it, as Kubernetes did automatically before v1.24 (the approach described above).
2. Use the TokenRequest API to issue tokens on demand.
The decision was made to go with option 1 above, since using the TokenRequest API would require moving the InstallClusterManagerRBAC func behind an API endpoint. That would be necessary in order to persist a TokenManager that would be injected into the cluster server at startup to manage tokens created by the TokenRequest API.
Adding such an endpoint/token manager to the cluster server may need a longer-term discussion, so the current proposal keeps using secrets to store bearer tokens for service accounts.
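For context, a minimal sketch of option 1 using the standard client-go API; the function name and secret naming scheme are illustrative, not the exact code in this PR:

```go
package clusterauth

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createServiceAccountTokenSecret sketches option 1: on v1.24+ clusters,
// explicitly create a kubernetes.io/service-account-token secret so the
// token controller populates it with a bearer token for the service account.
func createServiceAccountTokenSecret(ctx context.Context, clientset kubernetes.Interface, ns, saName string) (*corev1.Secret, error) {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{
			Name: saName + "-token", // hypothetical naming scheme
			Annotations: map[string]string{
				// Binds the token secret to the service account.
				corev1.ServiceAccountNameKey: saName,
			},
		},
		Type: corev1.SecretTypeServiceAccountToken,
	}
	return clientset.CoreV1().Secrets(ns).Create(ctx, secret, metav1.CreateOptions{})
}
```

Once the token controller has filled in the secret's data, the bearer token can be read back from it, matching the pre-1.24 behavior described above.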
Due to k3s not having a stable 1.24 release available, testing for 1.24 clusters was conducted manually using kind.
Note on DCO:
If the DCO action in the integration test fails, one or more of your commits are not signed off. Please click on the Details link next to the DCO action for instructions on how to resolve this.
Checklist: