Nginx Ingress Controller - Missing healthcheck params in upstream #818
Actually, the ideal case would be to scrape your pod endpoints and grab the parameters from there. It's something I haven't gotten to yet. It makes sense to have one idiom for periodic health checking in Kube (liveness/readiness), instead of one for your pods that the kubelets/kube-proxy understand and another that is different for each cloud provider. If the pod endpoint doesn't have a health check, the LB backend can just pick a reasonable default. The suggested default makes sense. I'd entertain the "health check via annotation per Ingress" idea as a last resort; I don't think it's necessary, but if enough people want it we can do it as a short-term thing. It shouldn't be too hard.
@aledbf fyi
So do you plan to make the default max_fails and fail_timeout configurable via ConfigMap?
@nottix I will add that.
I was actually suggesting that you set readiness probes, so the entire system has a consolidated view of "health".
Thank you @aledbf. @bprashanth I'll add readiness probes to my pods, but the current nginx controller skips these values, is that right?
No, it just uses the defaults from nginx.
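For context, the nginx defaults in question are max_fails=1 and fail_timeout=10s, so a server line with no explicit parameters behaves as if it were written like this (the address is reused from the example in the issue, purely for illustration):

upstream backend {
    # nginx defaults when no parameters are set: a single failed attempt
    # within a 10s window marks the server unavailable for the next 10s
    server 10.1.0.102 max_fails=1 fail_timeout=10s;
}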
Yeah, currently you can get it to do what we're thinking by telling nginx to not health check at all and waiting for Kube to remove the endpoints when they fail their own readiness checks. Down the line it would be slick if the nginx controller just scraped these params from the pod spec and updated its own health checks. The eventual goal is one cross-platform health check instead of N per cloud provider/Ingress controller, etc.
But how can I tell nginx not to health check my pods with the current version (0.5)? Thank you guys.
@nottix if you need this now, you need to build a custom version that changes the Go template.
@aledbf ok, thank you. I'll change the Go template for now, waiting for these improvements.
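For anyone patching the template locally, here is a minimal sketch (assuming the goal discussed above: disable nginx's own passive failure accounting and let Kubernetes readiness probes remove bad endpoints) of what the rendered upstream could look like:

upstream backend {
    # max_fails=0 disables nginx's failure accounting for this server,
    # so only Kubernetes readiness/endpoint removal takes it out of rotation
    server 10.1.0.102 max_fails=0;
}

The exact template variables to edit depend on the controller version, so treat this as the target output rather than the template change itself.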
Hi,
With the current version (0.5), nginx marks a proxied server as failed when a single request fails.
I think that the nginx template needs the health check parameters in the upstream section, like this:
upstream backend {
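    # with these values, 3 failed attempts within 30s mark the server unavailable for 30s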
    server 10.1.0.102 max_fails=3 fail_timeout=30s;
}