Improve liveness and readiness probes accuracy #1546
Comments
I believe this issue is already addressed with #1676 (and cherry-picked to 0.5.x).
@mimowo thanks for the update. I'd say #1676 fixes most of this issue. There are a couple of things in the scope of this issue that we may still want to address like:
So I think we can close this one, and we can create some finer-grained ones for the two left-overs above. WDYT?
Seems like a valid improvement indeed, otherwise we may be getting errors from the server.
Any concrete example, other than the visibility API server, where we could improve accuracy?
I prefer more fine-grained issues, so that we can prioritize / close them independently.
Agreed.
I wonder if probing for the webhooks in the liveness probe, as done in the readiness probe now with #1676, would be useful to mitigate the situation where cert-controller might fail generating / injecting the certificates at start time, and possibly over cert rotation. Do you think that would be an improvement over the current liveness probe implementation?
Sounds good, I'll create them and close this one.
Oh, interesting. I assumed the liveness probe = readiness probe, but indeed the liveness probe = health probe in the controller-runtime manager: https://github.com/kubernetes-sigs/controller-runtime/blob/7032a3cc91d2afc4c2d54e4a4891cf75da9f75f5/pkg/manager/internal.go#L281-L285. And currently, the health probe is just a ping in Kueue. So, yes, I think having the liveness probe check the webhook server is better, so that we can see if the server is down. cc @trasc @alculquicondor
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/assign
@astefanutti since this question was posted we have added the readiness probe to Kueue in #1676 (and later improved it in follow-ups). Do we still have some known gaps to cover in the readiness probe, or scenarios that we want to cover in the liveness probe? If not, I would suggest parking (closing) this issue until we have such scenarios.
@mimowo apologies for the late reply. I think what's been done to improve the readiness probe mostly addresses this issue, and I agree with your suggestion to close it and create finer-grained ones if needed.
/close
@astefanutti: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
What would you like to be added:
The liveness and readiness probes should succeed only when the operator is fully operational. This means they should report a healthy status only when all the components (i.e., webhooks, extension API servers, controllers) have started and are servicing requests.
Why is this needed:
At the moment, the liveness and readiness probes report a healthy status unconditionally, as soon as the controller-runtime manager has started, even though the operator and its components (webhooks, extension API servers, controllers) may not be operational yet.
That can happen, for example, during the period of time it takes for the certificates generated by cert-controller to be propagated into the secret volume mount.
In such cases, having the probes report an accurate status can help recover from issues, and lets other components correctly wait for the operator's readiness.
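On the Deployment side, the probe wiring this implies might look like the following. This is a hedged sketch only: the paths match controller-runtime's conventional /healthz and /readyz endpoints, but the port number and timings are illustrative assumptions, not Kueue's actual manifest.

```yaml
# Illustrative probe configuration for the manager container.
# Port and timing values are assumptions for this sketch.
livenessProbe:
  httpGet:
    path: /healthz
    port: 8081
  initialDelaySeconds: 15
  periodSeconds: 20
readinessProbe:
  httpGet:
    path: /readyz
    port: 8081
  initialDelaySeconds: 5
  periodSeconds: 10
```

With accurate checks behind /healthz and /readyz, the kubelet would restart the container on persistent unhealthiness, and would keep it out of Service endpoints until it is actually ready to serve.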
Completion requirements:
This enhancement requires the following artifacts:
The artifacts should be linked in subsequent comments.