skipper times out before clients on slow connections can finish their requests #795
Comments
Thanks for creating this detailed issue! Did you also try to change the
If we raised the number we could transfer the upload until the timeout happened, i.e. with it set to 15 min and a transfer that would take 30 min, the transfer gets cut at the timeout. I verified this with curls rate-limited at different time ranges. I never tried 0 because no timeout doesn't sound too healthy, and raising the timeout really high would affect other ingress rules on the same controller. It might be worth adding an option per ingress to configure this timeout, or maybe resetting the timeout if the client is still sending data.
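For illustration, the "reset the timeout while the client is still sending data" idea could look roughly like the following in modern Go (1.20+, using http.NewResponseController). This is a hypothetical sketch, not something skipper implements; the handler name, port, chunk size, and 30s per-chunk budget are made up.

```go
package main

import (
	"io"
	"net/http"
	"time"
)

// uploadHandler extends the read deadline after every chunk it reads, so a
// slow-but-active upload is not cut off, while an idle connection still
// times out after 30 seconds of inactivity.
func uploadHandler(w http.ResponseWriter, r *http.Request) {
	rc := http.NewResponseController(w)
	buf := make([]byte, 64*1024)
	for {
		// Allow 30s of inactivity per chunk instead of one fixed deadline for the whole body.
		if err := rc.SetReadDeadline(time.Now().Add(30 * time.Second)); err != nil {
			http.Error(w, "cannot extend read deadline", http.StatusInternalServerError)
			return
		}
		n, err := r.Body.Read(buf)
		if n > 0 {
			_ = buf[:n] // ... forward the chunk to the backend here ...
		}
		if err == io.EOF {
			break
		}
		if err != nil {
			return // client too slow or gone
		}
	}
	w.WriteHeader(http.StatusNoContent)
}

func main() {
	srv := &http.Server{
		Addr:        ":9090",
		Handler:     http.HandlerFunc(uploadHandler),
		ReadTimeout: 0, // the per-chunk deadline above replaces a fixed whole-request timeout
	}
	srv.ListenAndServe()
}
```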
An option per ingress is not scalable (connection pool per ingress).
Can you ensure that Skipper/Go is closing the TCP connections on the other timeouts? Otherwise you might end up with the too-many-open-file-descriptors problem when setting the read timeout to 0.
@tkrop please prove that it doesn't. I would appreciate it if you showed that the assumption is wrong, because it would be another issue in golang we can put upstream. I am pretty sure that my assumption from reading the docs is right. ReadHeaderTimeout is HTTP, and if the timeout, which is TCP, doesn't close the connection, what is the purpose of a TCP timeout?
@roffe a timeout of 15 minutes is already easy to DoS. Why bother with the timeout if you have to rate limit anyway?
The rate limit is to simulate a slow connection.
99% of all uploads would come from reliable connections, but sometimes people sit on a flaky cellphone connection and upload slowly, and the total transfer time would go over the timeout.
I am not sure we are talking about the same problem. Am I missing something?
@szuecs If I'm not mistaken while reading the article above – "The Complete guide to Go net/http timeouts" – with the read timeout you control the maximum amount of time that Skipper will wait for a client to finish the request. If this is correctly understood, then the behaviour we're seeing is intentional.
@ptrf yes exactly. In code you find this at https://github.com/zalando/skipper/blob/master/skipper.go#L588-L596. You could also set http.ReadHeaderTimeout with -read-header-timeout-server=30s to time out client connections, and use -read-timeout-server=0 to get the intended behaviour, I would expect.
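For reference, those two flags correspond to standard net/http server settings; a minimal sketch, assuming a bare http.Server rather than skipper's actual wiring at the link above:

```go
package main

import (
	"net/http"
	"time"
)

func main() {
	srv := &http.Server{
		Addr:    ":9090",
		Handler: http.DefaultServeMux,
		// -read-header-timeout-server=30s: clients must finish sending the
		// request headers within 30s or the connection is closed.
		ReadHeaderTimeout: 30 * time.Second,
		// -read-timeout-server=0: no deadline on reading the whole request,
		// so a slow upload of the body is not cut off by the server.
		ReadTimeout: 0,
	}
	srv.ListenAndServe()
}
```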
@roffe Well, I guess we should close this issue? |
@szuecs And thanks for looking at our inquiry :-) |
@szuecs you are probably right. I can't proof this. It is just base on the intuition, that go is propagating the 0 down to the socket. This may be no problem, if the go's |
closing then, thanks again |
We have found a bug in skipper, which affects connections where a client attempts to send a large request payload on a slow connection. Specifically, skipper errors out due to an i/o timeout:
Our setup contains a kubernetes-deployed service that exposes a POST endpoint. This endpoint can accept payloads of several hundred MB.
When a client attempts to POST a large payload on a slow connection, skipper aborts the request after reaching the timeout specified by `-read-timeout-server`. It seems that skipper times out because the backend has not responded, even though the client's request has yet to complete.
We attempted a workaround where the service would send multiple 100 Continue informational responses after reading each 8 MB chunk, but there we in turn ran into golang/go#26089 and golang/go#26088.
Steps to reproduce:

1. Set `-read-timeout-server` to, say, 10 sec
2. Use `curl(1)` to POST a file using `--limit-rate` options:

If `somelargefile` is 50 KB, it should take 25+ seconds for `curl(1)` to finish the request, which exceeds the timeout specified by `-read-timeout-server`, and you should see the request abort, even though the client hasn't finished the request.

cc @roffe
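For what it's worth, the behaviour can also be approximated without skipper in a few lines of Go; a self-contained sketch standing in for the curl invocation above (the 2 s read timeout, chunk size, and upload rate are made-up numbers):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"time"
)

// slowReader hands out small chunks with a pause in between, simulating a
// rate-limited upload (~5 KB/s here).
type slowReader struct {
	remaining int
	chunk     int
	interval  time.Duration
}

func (s *slowReader) Read(p []byte) (int, error) {
	if s.remaining <= 0 {
		return 0, io.EOF
	}
	time.Sleep(s.interval)
	n := s.chunk
	if n > len(p) {
		n = len(p)
	}
	if n > s.remaining {
		n = s.remaining
	}
	for i := 0; i < n; i++ {
		p[i] = 'x'
	}
	s.remaining -= n
	return n, nil
}

func main() {
	// Server that just drains the request body, with a ReadTimeout far shorter
	// than the upload needs (analogous to -read-timeout-server=2s).
	srv := httptest.NewUnstartedServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		n, err := io.Copy(io.Discard, r.Body)
		fmt.Printf("server read %d bytes, err=%v\n", n, err) // err shows the i/o timeout
	}))
	srv.Config.ReadTimeout = 2 * time.Second
	srv.Start()
	defer srv.Close()

	// 50 KB at roughly 5 KB/s takes ~10s, so the upload outlives the 2s ReadTimeout.
	body := &slowReader{remaining: 50 * 1024, chunk: 512, interval: 100 * time.Millisecond}
	resp, err := http.Post(srv.URL, "application/octet-stream", body)
	if err != nil {
		fmt.Println("client error:", err) // typically a reset/broken pipe once the timeout fires
		return
	}
	fmt.Println("server responded early with:", resp.Status)
	resp.Body.Close()
}
```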