latency spikes #3337
The fact that ack packets seem to arrive bundled up together and that the decoding time is low would seem to point towards network issues (I was stressing the network with downloads at the time):
So perhaps the only thing we need to change is how quickly we adapt and lower our bandwidth usage.
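The "adapt quickly, recover slowly" idea could look something like the following. This is a minimal sketch, not xpra's actual implementation: the class name, thresholds, and recovery increment are all illustrative assumptions.

```python
class BandwidthController:
    """Hypothetical sketch: cut the send budget quickly when acks come
    back late, then recover it slowly once round-trip latency looks
    normal again (multiplicative decrease, additive increase)."""

    def __init__(self, max_bandwidth=10_000_000, baseline_rtt=0.050):
        self.max_bandwidth = max_bandwidth   # bytes/s ceiling
        self.bandwidth = max_bandwidth       # current send budget
        self.baseline_rtt = baseline_rtt     # expected round trip in seconds

    def on_ack(self, rtt):
        if rtt > 2 * self.baseline_rtt:
            # congestion detected: halve the budget immediately
            self.bandwidth = max(self.bandwidth // 2, 100_000)
        else:
            # no congestion: creep back up additively
            self.bandwidth = min(self.bandwidth + 50_000, self.max_bandwidth)
```

With a ~260ms ack round trip against a 50ms baseline, one call halves the budget; subsequent normal acks restore it gradually, which matches the "drops back down, then creeps back up" pattern described above.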
Running glxgears at the default resolution of 300x300:
Then adding bandwidth contention by sending a huge video file to the client over the same link (going in the same direction as the xpra data packets):
So we get a dozen delayed acks with ~260ms round trip latency, the next compress packet comes out after 140ms rather than the usual ~12ms. That's enough to fix the problems and the latency drops back down to 50ms, before creeping back up again and repeating this pattern. Doing the same experiment with ~1080p glxgears:
So the baseline in this case is 40ms to 140ms latency (huge variation already) and the frames are roughly 30ms apart.
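The delayed, bundled acks described above can be seen by matching each ack against the send timestamp recorded for its sequence number; when the network stalls, several acks arrive together and the oldest packet shows the largest round trip. A small illustrative helper (names are assumptions, not xpra's API):

```python
from collections import deque

def record_ack(pending, acks, seq, now):
    """Hypothetical helper: look up when packet `seq` was sent,
    compute its round-trip time, and record it."""
    sent = pending.pop(seq, None)
    if sent is None:
        return None          # unknown or duplicate ack
    rtt = now - sent
    acks.append(rtt)
    return rtt

# bundled acks show up as several entries sharing one arrival time:
pending = {1: 0.000, 2: 0.030, 3: 0.060}   # seq -> send time (s)
acks = deque(maxlen=100)
for seq in (1, 2, 3):
    record_ack(pending, acks, seq, now=0.260)   # all arrive together
# the oldest packet (seq 1) shows the biggest round trip
```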
Transferring a file in the opposite direction (potentially competing with the ACK packets) has no noticeable effect. Questions:
Notes and more questions than answers:
Log samples:
The very first frame is also
Since we convert it back to
* make it configurable via an env var
* use more specialized encodings if we have the content-type hint
* always do the rgb threshold check first
* prefer jpeg / webp for lossy
* avoid png
* adjust speed and quality earlier if there is congestion
* just use the get_auto_encoding() method, which works well, instead of partially re-implementing it incorrectly
* above ~0.5 MPixels, it takes too long to compress using webp
* #3357 would help with downscaling performance
* all the picture codecs can be called generically with the same first 4 arguments: coding, image, quality, speed
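The selection heuristics above could be sketched roughly as follows. This is an assumption-laden illustration, not xpra's get_auto_encoding(): the function name, the `XPRA_AUTO_ENCODING` env var, the 95 lossless threshold, and the 0.5 MPixel cutoff are all illustrative values.

```python
import os

# hypothetical env-var override, per the "configurable via an env var" note
FORCED = os.environ.get("XPRA_AUTO_ENCODING", "")

def choose_encoding(width, height, quality, lossless_threshold=95):
    """Hypothetical sketch of the heuristics listed above."""
    if FORCED:
        return FORCED
    pixels = width * height
    if quality >= lossless_threshold:
        # near-lossless: webp is fine for small frames, but above
        # ~0.5 MPixels it takes too long to compress
        if pixels <= 512 * 1024:
            return "webp"
        return "rgb"            # cheap raw pixels, avoid png
    # lossy: prefer jpeg / webp, avoid png
    if pixels <= 512 * 1024:
        return "webp"
    return "jpeg"
```

For example, a 300x300 glxgears window at medium quality would pick webp, while a ~1080p frame would fall back to jpeg.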
Many more updates, not all of which have been tagged with this ticket number, in particular 8302076 (the blame goes to
* also introspect the list of encodings each encoder supports
* "normalize" the values so we stretch the low end of the scale
* at quality 100, the picture is not subsampled and is almost lossless
* so don't remove it from the list of 'non-video' encodings; also remove the 'nvjpeg' special case and improve the debug logging
* make it possible to run these codecs as command line scripts for diagnostics
* the new codec interface does not have the device context, so pass it using the options dictionary
* the get_encoding methods no longer need quality or speed as arguments, which saves extracting those values from the options
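The generic codec interface described above (same first 4 arguments, extras in an options dictionary) might be shaped like this. A sketch only: the function names, the `"device-context"` options key, and the return shape are assumptions for illustration.

```python
from typing import Any, Dict, Optional, Tuple

def jpeg_encode(coding: str, image: Any, quality: int, speed: int,
                options: Optional[Dict[str, Any]] = None) -> Tuple[str, bytes]:
    """Hypothetical encoder following the generic 4-argument signature."""
    options = options or {}
    # extras that are not part of the common signature, such as the
    # device context, travel in the options dictionary:
    device_context = options.get("device-context")
    # ... real compression would happen here ...
    return coding, b"<jpeg-data>"

# registry lets every picture codec be called generically
ENCODERS = {"jpeg": jpeg_encode}

def encode(coding, image, quality, speed, options=None):
    return ENCODERS[coding](coding, image, quality, speed, options)
```

Because every encoder shares the same positional prefix, a dispatcher can call any of them without codec-specific plumbing, and backends needing a device context simply read it from `options`.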
This has been reported on other network connections, with better latency and bandwidth, without the html5 client.
`-d xpra.server.source.source_stats`
Although this was tested using the html5 client, the problem may well lie server-side, as the server should not let the latency degrade that badly.
The latency is normally around 50ms (goes up to 150ms when connecting using the html5 client!) via an 802.11g connection, but it sometimes climbs to 1500ms or more!
The `decode_time` was incorrectly set by the html5 client: Xpra-org/xpra-html5@dd6fcfd + Xpra-org/xpra-html5@c5e6415, which made this harder to debug with versions predating the fix.