
Constructing many clients error #308

Closed
arukaen opened this issue Apr 7, 2020 · 18 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@arukaen

arukaen commented Apr 7, 2020

Hello all, we recently enabled aws-iam-authenticator and I'm seeing an error message whenever I run anything that uses --all-namespaces (e.g. kubectl --context cluster get pods --all-namespaces).

The error message in question is:

W0407 12:20:16.744257   32782 exec.go:203] constructing many client instances from the same exec auth config can cause performance problems during cert rotation and can exhaust available network connections; 1001 clients constructed calling "aws-iam-authenticator"

kubectl version: v1.18.0
aws-iam-authenticator version: {"Version":"v0.5.0","Commit":"1cfe2a90f68381eacd7b6dcfa2bf689e76eb8b4b"}
macOS: 10.14.6

@micahhausler
Member

micahhausler commented Apr 13, 2020

This is actually a warning message, and it is mainly aimed at resource usage when using client certificate auth. kubectl/client-go will only invoke aws-iam-authenticator once and reuse the token for multiple connections.
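
For reference, the "exec auth config" named in the warning is the exec stanza in kubeconfig. A minimal sketch of the kind of entry involved (the user name and cluster ID below are placeholders, not taken from this issue):

# hypothetical kubeconfig excerpt; user name and cluster ID are placeholders
users:
- name: my-eks-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - token
        - -i
        - my-cluster-name

Every client constructed from this config obtains credentials by invoking the same command, which is roughly what the counter in the warning is tracking.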

@arukaen
Author

arukaen commented Apr 17, 2020

Is there a way to turn off this message?

@micahhausler
Member

You might be able to set --stderrthreshold=ERROR (see klog), but if that doesn't work you'd have to file an issue against kubernetes/kubernetes.
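
If your kubectl build still registers klog's global flags, that would look something like the line below; note these logging flags were deprecated and later removed from kubectl, so this may not work on newer builds:

$ kubectl --stderrthreshold=ERROR --context cluster get pods --all-namespaces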

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 16, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 15, 2020
@max-rocket-internet

I'm also getting about 2000 lines of this warning when running kubectl today. There is no --stderrthreshold option for aws-iam-authenticator. Is there any other fix?

@max-rocket-internet

My errors looked like this:

W0902 17:12:19.796919   25579 exec.go:203] constructing many client instances from the same exec auth config can cause performance problems during cert rotation and can exhaust available network connections; 1384 clients constructed calling "aws-iam-authenticator"

So I just filtered it out with grep, but this is quite hacky:

$ kubectl describe pods 2>&1 | grep -v W0902
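
A slightly less brittle variant (a suggestion, not from the thread) is to match on the message text instead of the W0902 prefix, since that prefix encodes the current date:

$ kubectl describe pods 2>&1 | grep -v 'constructing many client instances'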

@sysadmiral

This warning appears continuously when running https://github.com/vmware-tanzu/octant

@dunjoye4real

dunjoye4real commented Oct 2, 2020

@sysadmiral I am getting the same warning using octant. Know of any workaround?

@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@HeavensRegent

/reopen
This is still an issue. Nothing has been resolved.

@k8s-ci-robot
Contributor

@HeavensRegent: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen
This is still an issue. Nothing has been resolved.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@philoserf

Related: kubernetes/kubernetes#91913

@ajcann

ajcann commented Dec 7, 2020

Also getting this; I commented on kubernetes/kubernetes#91913 with more information.

@mooreniemi

Seeing this (slightly different message, so adding it here for searchability) when using the aws CLI to load data from S3 in an initContainer:

W1218 16:36:11.240927 1982077 exec.go:271] constructing many client instances from the same exec auth config can cause performance problems during cert rotation and can exhaust available network connections; 1378 clients constructed calling "aws"
W1218 16:36:11.240941 1982077 exec.go:271] constructing many client instances from the same exec auth config can cause performance problems during cert rotation and can exhaust available network connections; 1379 clients constructed calling "aws"

@irl-segfault

I get this when using octant, just FYI. Wondering if there's a way to force reuse of the same client.

@TBBle

TBBle commented Mar 29, 2022

Per kubernetes/kubernetes#91913 (comment) it should be resolved by updating the version of k8s.io/client-go being used, which was done for aws-iam-authenticator in #398, released in v0.5.4.

Note that this issue is a high hit when searching for this error message, but other tools use k8s.io/client-go (or just call out to kubectl) as well, so make sure you're looking at the right project.
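
To check whether a local binary already includes the fix, the version subcommand (output format as shown at the top of this issue) should report v0.5.4 or later:

$ aws-iam-authenticator version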
