better TCP_NODELAY handling: only use it when it is useful #619
Comments
I'm seeing some weird behaviour with win32 clients trying to improve #999 and detecting late acks. The network layer's […]
Done in r18149 + r18150. Implementation notes:
As of r18151, we can use […]
TODO:
@maxmylyn: this ticket is tailor-made for the automated tests - we want to compare before and after to see if this helps, especially under bandwidth-constrained conditions. (The only slight problem is that there is a bug I'm working on which causes congestion detection to kick in too early, and the counter-measures are too aggressive, causing the framerate to drop.)
2018-01-25 19:28:19: maxmylyn commented
2019-01-03 18:06:51: maxmylyn commented
2019-01-03 18:07:21: maxmylyn uploaded file
It's not clear what command lines were used for each run, as there are 3 possible values for XPRA_SOCKET_NODELAY. It also doesn't look like this was being tested with any bandwidth constraints?
2019-01-21 18:22:53: maxmylyn commented
2019-01-23 17:36:39: maxmylyn uploaded file
2019-01-23 17:43:21: maxmylyn commented
r21493 waits until after we have sent the last chunk before enabling NODELAY.
r21495: also disable NODELAY for multiple chunks (doh)
See also #2130
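For illustration, here is a minimal sketch of what r21493 / r21495 describe, using hypothetical helper and variable names (this is not the actual xpra network layer code): keep NODELAY off while the intermediate chunks of a packet are written, so the kernel is free to aggregate them, and only enable it before the final chunk so the tail of the packet is flushed immediately.

```python
import socket

def send_chunks(sock: socket.socket, chunks: list) -> None:
    """Send a packet split into chunks, only enabling TCP_NODELAY for the
    last chunk so the kernel may aggregate the earlier ones.
    (Hypothetical helper - a sketch of the approach, not xpra's code.)"""
    last = len(chunks) - 1
    for i, chunk in enumerate(chunks):
        if i == last:
            # flush: make sure the tail of the packet goes out immediately
            sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        sock.sendall(chunk)
    # let Nagle aggregate the next writes again
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 0)
```

The point being that only the very last write of a logical xpra packet needs the immediate flush; everything before it benefits from aggregation.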
2019-08-09 03:26:28: smo uploaded file
Attached are some charts and data for this. I'm not sure if the script for charting took into account the instances I ran with trickle. I could have just included the network/packet stuff in the charts, but I left all the details there.
Please include the […]. Some thoughts on what I was expecting to see:
2019-08-15 03:07:31: smo uploaded file
Attached are results with the combinations of XPRA_SOCKET_NODELAY and XPRA_SOCKET_CORK compared. Longer tests this time, and a few different ones.
Sorry, I forgot to ask you to include the default case with […]. Very interesting to have 4 combinations already. Maybe we should combine more test results? So far:
2019-08-19 16:01:49: smo uploaded file
2019-08-19 16:03:43: smo changed owner from smo to Antoine Martin
2019-08-19 16:03:43: smo commented
@smo: there are two sets of […]
2019-08-19 17:55:02: smo uploaded file
2019-08-19 17:56:06: smo commented
The charts are now available here: https://xpra.org/stats/nodelay-cork/
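For context on the CORK side of the XPRA_SOCKET_NODELAY / XPRA_SOCKET_CORK combinations charted above, here is a rough sketch of what corking means at the socket level (TCP_CORK is a Linux-only option; plain socket module, hypothetical function name - not xpra's actual code path): hold back partial segments while the packet header and its binary payload are written, then uncork so they go out together in full-size frames.

```python
import socket

def send_corked(sock: socket.socket, header: bytes, payload: bytes) -> None:
    """Linux-only sketch: hold back partial frames while both parts of the
    packet are written, then uncork to release them together."""
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 1)
    try:
        sock.sendall(header)
        sock.sendall(payload)
    finally:
        # uncorking flushes whatever is still pending
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 0)
```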
Follow up from #514: at present we enable TCP_NODELAY globally, which is a bit wasteful. It ensures that packets go out as soon as we queue them, but when the packets contain large-ish binary data, this means that the binary data and the actual xpra packet structure are likely to travel in separate TCP-level packets.
It would be better to only enable TCP_NODELAY when aggregating packets is not helping: when we have no more data to send or when the output buffer is full. As per "Is there a way to flush a POSIX socket?" and this answer: *What I do is enable Nagle, write as many bytes (using non-blocking I/O) as I can to the socket (i.e. until I run out of bytes to send, or the send() call returns EWOULDBLOCK, whichever comes first), and then disable Nagle again. This seems to work well (i.e. I get low latency AND full-size packets where possible).*
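A minimal sketch of that quoted approach, assuming a non-blocking socket and the standard socket module (hypothetical function name; error handling and the rest of the protocol layer are omitted): write with Nagle enabled until the data runs out or the send buffer fills up, then set TCP_NODELAY to flush whatever partial segment remains.

```python
import errno
import socket

def flush_with_nagle(sock: socket.socket, data: bytes) -> int:
    """Write as much as possible with Nagle enabled, then toggle TCP_NODELAY
    to push out whatever partial segment remains.  Returns bytes sent.
    (Sketch of the approach quoted above, not xpra's implementation.)"""
    sock.setblocking(False)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 0)   # Nagle on
    sent = 0
    while sent < len(data):
        try:
            sent += sock.send(data[sent:])
        except OSError as e:
            if e.errno in (errno.EWOULDBLOCK, errno.EAGAIN):
                break          # output buffer is full - stop and flush
            raise
    # no more data to send (or buffer full): disable Nagle to flush the tail
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return sent
```

The caller would switch Nagle back on before queuing the next batch of writes, so that aggregation resumes for subsequent packets.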
Good read: The Caveats of TCP_NODELAY