[-]poststarthook/rbac/bootstrap-roles failed: reason withheld #86715
Comments
/sig api-machinery
kubeadm config
It is normal for those checks to fail until they complete their startup operation. After the individual healthz check returns ok, doesn't the overall /healthz return ok as well?
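(Not from the thread: a quick way to compare the aggregate endpoint with the individual check, assuming kubectl access to the cluster; the check name below matches the log output.)
# aggregate health, with per-check status listed
kubectl get --raw='/healthz?verbose'
# the individual post-start-hook check
kubectl get --raw='/healthz/poststarthook/rbac/bootstrap-roles'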
If you want to see the detailed error, you can enable the following log:
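(The comment above doesn't show which setting to enable; as an assumption, the withheld reason is written to the kube-apiserver log once its verbosity is raised, e.g. on a kubeadm control plane:)
# assumption: kubeadm static-pod layout; --v is the standard klog verbosity flag
# add "- --v=5" under the command: list in /etc/kubernetes/manifests/kube-apiserver.yaml,
# then follow the apiserver log while the post-start hooks run:
kubectl -n kube-system logs -f -l component=kube-apiserver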
I observe the same behaviour with v1.17.2.
I guess the health check is ok.
I have just installed a fresh 1.17.2 and see the same issue, and this is the only check that is failing.
Got the same error on k8s v1.16.7.
and the issue can be closed
but the log doesn't print a success message
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
We're observing the same behavior with Kubernetes 1.16 and Kubernetes 1.14.
Is there any idea what is causing this?
If healthz is succeeding from the box (master node) but you still encounter the error, one possibility is that your cluster has a modified CRB. Please check whether that's the cause and, if so, modify the CRB to add this.
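(The comment doesn't name the binding; as an assumption, system:public-info-viewer is the ClusterRoleBinding that grants unauthenticated access to /healthz on recent releases, so it is a reasonable place to start looking:)
# assumption: the relevant binding is system:public-info-viewer (shipped since v1.14)
kubectl get clusterrolebinding system:public-info-viewer -o yaml
kubectl get clusterrole system:public-info-viewer -o yaml
# bootstrap roles/bindings annotated with rbac.authorization.kubernetes.io/autoupdate: "true"
# are reconciled by the apiserver at startup if they were changed or deleted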
I checked: the CRB is ok, it has not been modified.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
/remove-lifecycle rotten
Same error on v1.20.4.
Same error on:
W0329 14:57:43.082477 36365 api_server.go:99] status: https://192.168.99.101:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0329 14:57:43.565964 36365 api_server.go:221] Checking apiserver healthz at https://192.168.99.101:8443/healthz ...
I0329 14:57:43.576080 36365 api_server.go:241] https://192.168.99.101:8443/healthz returned 200:
ok
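(That matches the earlier explanation that the check only fails while the apiserver is still creating the bootstrap roles; a minimal sketch for simply waiting until the endpoint reports ok:)
# poll the aggregate endpoint until it stops returning 500
until kubectl get --raw='/healthz' >/dev/null 2>&1; do sleep 2; done
kubectl get --raw='/healthz?verbose'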
Without more information, this isn't actionable. It is possible for the startup hook to fail if it takes too long to create the bootstrap roles. If this is encountered, please provide the output of
/close
@liggitt: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I seem to have solved this problem: the memory and CPU limits on the kubelet side were too small.
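(The comment doesn't show the limits involved; as an assumption, on a kubeadm control plane the apiserver pod's resource requests and the node's allocatable capacity are the first things to check, since a starved apiserver can make the bootstrap-roles hook time out:)
# assumption: kubeadm static pod carrying the component=kube-apiserver label
kubectl -n kube-system get pod -l component=kube-apiserver \
  -o jsonpath='{.items[0].spec.containers[0].resources}{"\n"}'
kubectl describe node | grep -A 7 'Allocatable:'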
What happened:
the kube-apiserver logs:
but this is ok:
the code:
https://github.com/kubernetes/kubernetes/blob/v1.16.4/staging/src/k8s.io/apiserver/pkg/server/healthz/healthz.go#L162-L206
What you expected to happen:
healthz check passed
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
Kubernetes version (use kubectl version): v1.16.4
OS (e.g: cat /etc/os-release):
Kernel (e.g. uname -a):