Annotations for proxy-send-timeout not honored in upstream connection #10987
This issue is currently awaiting triage. If Ingress contributors determine this is a relevant issue, they will accept it by applying the `triage/accepted` label and provide further guidance.
/remove-kind bug
Beware that the issue is not about the pod responding slowly, so any test with httpbun and its delay endpoint will not be a verification of the issue. The problem faced is that the pod will close incoming connections once they have been idle for 5 seconds. So the requirement is that the NGINX controller must close its upstream connections to the pod when they have been idle for 5s or less. The only way I currently found to have NGINX behave that way is to set the global `upstream-keepalive-timeout` option in the controller ConfigMap, as sketched below.
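A minimal sketch of that global setting, assuming the standard ingress-nginx controller ConfigMap (the ConfigMap name and namespace depend on how the controller was installed):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # assumed name; matches the default Helm install
  namespace: ingress-nginx
data:
  # Close idle keepalive connections to upstream pods after 4 seconds
  # (the value is a number of seconds, passed as a string)
  upstream-keepalive-timeout: "4"
```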
This will indeed have NGINX close its upstream connections after 4 seconds of being idle. Now consider the documentation for the annotations on the ingress regarding the various proxy timeouts.
Reading the relevant NGINX documentation for `proxy_send_timeout`, the parameter set by the `nginx.ingress.kubernetes.io/proxy-send-timeout` annotation:
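The NGINX docs describe that directive as follows:

> Sets a timeout for transmitting a request to the proxied server. The timeout is set only between two successive write operations, not for the transmission of the whole request. If the proxied server does not receive anything within this time, the connection is closed.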
One would expect to see NGINX close the connection to the upstream once NGINX has not sent any data on the upstream connection for the configured amount of time. However, if you observe the connections to the upstream with tcpdump, NGINX keeps them open well past the configured timeout.
I agree with you.
I am unable to figure out how to create a delay with the vanilla httpd:alpine image (or the nginx:alpine image, for that matter).
This is stale, but we won't close it automatically; just bear in mind the maintainers may be busy with other tasks and will reach your issue ASAP. If you have any question or request to prioritize this, please reach out.
What happened:
We have a web application that closes incoming connections after 5 seconds of idle time, without the possibility to change this setting in the app.
We set the following annotations on the ingress
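A sketch of what such annotations look like, consistent with the 4-second timeout described below (the exact values are assumptions):

```yaml
metadata:
  annotations:
    # assumed values; timeouts are given in seconds
    nginx.ingress.kubernetes.io/proxy-send-timeout: "4"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "4"
```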
But upstream connections to the pod are still kept open by NGINX even after more than 4s have passed between write or read operations on the upstream connection, resulting in errors being logged and 502s in the access logs.
Error log:
What you expected to happen:
Having configured the annotations to close connections with no write or read operations for more than the set time, I'd expect the upstream connection to be closed when that time is exceeded.
What do you think went wrong?
In the `upstream_balancer` upstream block there's a `keepalive_timeout 60s;` directive, instructing the upstream module to keep idle connections to the upstream server open for 60s. While the annotations do generate configuration in the server block for the ingress, that configuration only applies to the `proxy_pass http://upstream_balancer;` directive and thus has no impact on the upstream connections.
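Schematically, the generated nginx.conf looks like this (a simplified sketch, not a verbatim dump of the controller's template; the keepalive values shown are the controller defaults):

```nginx
upstream upstream_balancer {
    server 0.0.0.1;              # placeholder; actual balancing happens in Lua
    balancer_by_lua_block {
        balancer.balance()
    }
    keepalive 320;               # pool of idle connections to the pods
    keepalive_timeout 60s;       # idle connections live 60s, regardless of annotations
}

server {
    location / {
        # rendered from the proxy-*-timeout annotations; applies only to this proxy_pass
        proxy_send_timeout 4s;
        proxy_read_timeout 4s;
        proxy_pass http://upstream_balancer;
    }
}
```

The `proxy_send_timeout` and `proxy_read_timeout` directives bound the gaps between successive write/read operations on an in-flight request, while the lifetime of an idle pooled connection is governed solely by `keepalive_timeout` in the upstream block.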
NGINX Ingress controller version:
Also noticed the same behavior on an EKS setup on which I have no admin access, but it seems to be running v1.5.1.
Kubernetes version (use `kubectl version`):

Environment:
Kernel (e.g. `uname -a`): Linux apiserver 5.15.0-91-generic #101-Ubuntu SMP Tue Nov 14 13:30:08 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
- `kubectl get nodes -o wide`
- `kubectl describe ingressclasses`
- `kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>`
- `kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>`
Current state of ingress object, if applicable:
How to reproduce this issue:
1. Run tcpdump on the application pod, listening for HTTP traffic coming from the ingress controller.
2. Make HTTP requests to the application with an interval > 4s.
3. Observe NGINX open a connection to the pod and reuse that same connection even after more than 4s of no traffic on said connection.
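A concrete version of these steps, assuming an application pod named `app` serving plain HTTP on port 8080 behind an ingress host `app.example.com` (all names are illustrative, and tcpdump must be available in the pod image):

```sh
# Watch traffic from the ingress controller to the application container
kubectl exec -it app -- tcpdump -n -i any tcp port 8080

# In another terminal, send requests through the ingress more than 4s apart
while true; do
  curl -s -o /dev/null -w '%{http_code}\n' http://app.example.com/
  sleep 6
done

# In the tcpdump output, the controller keeps reusing the same source port
# (i.e. the same TCP connection) across requests; once the pod resets the
# idle connection, the next proxied request fails with a 502.
```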
Anything else we need to know:
While the above information is for a locally installed cluster, I've observed the same behaviour on EKS running Kubernetes `v1.25.15-eks-e71965b` and the NGINX ingress controller `registry.k8s.io/ingress-nginx/controller-chroot:v1.5.1`, where I have no administrative access to the cluster or the nginx-controller namespace.