
Fix triggering loss for uTP packets that were never sent #7107

Merged
merged 1 commit into arvidn:RC_2_0 on Sep 25, 2022

Conversation


@thrnz thrnz commented Sep 25, 2022

// if this socket tries to send a packet via the utp socket
// manager, and it fails with EWOULDBLOCK, the socket
// is stalled and this is set. It's also added to a list
// of sockets in the utp_socket_manager to be notified of
// the socket being writable again
bool m_stalled:1;

A uTP packet that fails to send due to a 'stall' isn't marked as need_resend. The send is only retried once the packet has timed out or is picked up by a SACK, triggering loss and halving the cwnd even though the packet was never actually sent. This also means packets with a higher seq_nr may go out first if the socket becomes writable before the unsent packet times out or is picked up by a SACK.

From this uTP log, note that 6542-6544 stall, while 6545-6594 were sent as normal once the socket became writable again. 6542-6544 were only sent once they were noticed as missing from a SACK, and were then treated as lost packets:

[000026139017] 0000028257BE58C0: sending packet seq_nr:6542 ack_nr:27277 type:ST_DATA id:15964 target:193.176.124.10:37495 size:1457 error:The operation completed successfully send_buffer_size:16315 cwnd:82107 adv_wnd:78873 in-flight:76161 mtu:1457 timestamp:3248500762 time_diff:3078327667 mtu_probe:0 extension:0
[000026139091] 0000028257BE58C0: socket stalled
...
[000026139136] 0000028257BE58C0: sending packet seq_nr:6543 ack_nr:27277 type:ST_DATA id:15964 target:193.176.124.10:37495 size:1457 error:The operation completed successfully send_buffer_size:14878 cwnd:82107 adv_wnd:81747 in-flight:74724 mtu:1457 timestamp:3248500879 time_diff:3078328093 mtu_probe:0 extension:0
[000026139142] 0000028257BE58C0: socket stalled
...
[000026139221] 0000028257BE58C0: sending packet seq_nr:6544 ack_nr:27277 type:ST_DATA id:15964 target:193.176.124.10:37495 size:1457 error:The operation completed successfully send_buffer_size:13454 cwnd:82107 adv_wnd:81747 in-flight:76161 mtu:1457 timestamp:3248500967 time_diff:3078328093 mtu_probe:0 extension:0
[000026139228] 0000028257BE58C0: socket stalled
...
[000026139382] 0000028257BE58C0: sending packet seq_nr:6545 ack_nr:27277 type:ST_DATA id:15964 target:193.176.124.10:37495 size:1457 error:The operation completed successfully send_buffer_size:93989 cwnd:82107 adv_wnd:81747 in-flight:77598 mtu:1457 timestamp:3248501126 time_diff:3078328093 mtu_probe:0 extension:0
...
[000026763903] 0000028257BE58C0: got SACK first:6543 00111111111111111111111111111100 our_seq_nr:6594
...
[000026763376] 0000028257BE58C0: Packet 6542 lost. (1 duplicate acks, trigger fast-resend)
[000026763379] 0000028257BE58C0: cwnd cut:41053 packets_in_flight:52 loss_seq_nr:6645
[000026763381] 0000028257BE58C0: experienced loss, slow_start -> 0 ssthres:41053
...
[000026763553] 0000028257BE58C0: Packet 6543 lost. (1 duplicate acks, trigger fast-resend)
...
[000026763922] 0000028257BE58C0: Packet 6544 lost. (1 duplicate acks, trigger fast-resend)

This suggested change marks such packets as need_resend when the stall happens, meaning they will be sent the next time send_pkt() is called once the socket is writable again. This should prevent packets that were never sent from being treated as 'lost' and needlessly halving the cwnd, and should also prevent packets from being sent out of order.

I've marked this as a draft for the time being, as I'm not yet sure whether I'm missing or misunderstanding something.


thrnz commented Sep 25, 2022

This could possibly be related to #3542

Looking at the log attached to this comment, the cwnd is cut 21 times due to loss, contributing to the sawtooth shape in the graph. Of these 21 cuts, 20 were due to 'loss' of packets that appear never to have been sent in the first place due to stalling.

Here each loss event and the cwnd halving that follows is tied to the preceding 'stall' where the packet wasn't sent. Packet 56090 appears to be the only real 'loss':

[000026139017] 0000028257BE58C0: sending packet seq_nr:6542 ack_nr:27277 type:ST_DATA id:15964 target:193.176.124.10:37495 size:1457 error:The operation completed successfully send_buffer_size:16315 cwnd:82107 adv_wnd:78873 in-flight:76161 mtu:1457 timestamp:3248500762 time_diff:3078327667 mtu_probe:0 extension:0
[000026139091] 0000028257BE58C0: socket stalled
...
[000026763376] 0000028257BE58C0: Packet 6542 lost. (1 duplicate acks, trigger fast-resend)
[000026763379] 0000028257BE58C0: cwnd cut:41053 packets_in_flight:52 loss_seq_nr:6645

[000038962095] 0000028257BE58C0: sending packet seq_nr:9194 ack_nr:27281 type:ST_DATA id:15964 target:193.176.124.10:37495 size:1457 error:The operation completed successfully send_buffer_size:173465 cwnd:153728 adv_wnd:274514 in-flight:148011 mtu:1457 timestamp:3261323841 time_diff:3078327985 mtu_probe:0 extension:0
[000038962127] 0000028257BE58C0: socket stalled
...
[000039276825] 0000028257BE58C0: Packet 9194 lost. (1 duplicate acks, trigger fast-resend)
[000039276827] 0000028257BE58C0: cwnd cut:78322 packets_in_flight:113 loss_seq_nr:9419

[000040530849] 0000028257BE58C0: sending packet seq_nr:9489 ack_nr:27282 type:ST_DATA id:15964 target:193.176.124.10:37495 size:1457 error:The operation completed successfully send_buffer_size:93887 cwnd:88884 adv_wnd:240574 in-flight:73287 mtu:1457 timestamp:3262892595 time_diff:3078327010 mtu_probe:0 extension:0
[000040530875] 0000028257BE58C0: socket stalled
...
[000040844874] 0000028257BE58C0: Packet 9489 lost. (1 duplicate acks, trigger fast-resend)
[000040844876] 0000028257BE58C0: cwnd cut:45675 packets_in_flight:64 loss_seq_nr:9616

[000062431647] 0000028257BE58C0: sending packet seq_nr:16517 ack_nr:27293 type:ST_DATA id:15964 target:193.176.124.10:37495 size:1457 error:The operation completed successfully send_buffer_size:275583 cwnd:247200 adv_wnd:563399 in-flight:245727 mtu:1457 timestamp:3284793391 time_diff:3078327089 mtu_probe:0 extension:0
[000062431674] 0000028257BE58C0: socket stalled
...
[000062747995] 0000028257BE58C0: Packet 16517 lost. (0 duplicate acks, trigger fast-resend)
[000062747997] 0000028257BE58C0: cwnd cut:125037 packets_in_flight:170 loss_seq_nr:16856

[000095478538] 0000028257BE58C0: sending packet seq_nr:29444 ack_nr:27313 type:ST_DATA id:15964 target:193.176.124.10:37495 size:1457 error:The operation completed successfully send_buffer_size:408461 cwnd:331577 adv_wnd:828174 in-flight:320451 mtu:1457 timestamp:3317840283 time_diff:3078328085 mtu_probe:0 extension:0
[000095478565] 0000028257BE58C0: socket stalled
...
[000095793033] 0000028257BE58C0: Packet 29444 lost. (1 duplicate acks, trigger fast-resend)
[000095793035] 0000028257BE58C0: cwnd cut:167226 packets_in_flight:235 loss_seq_nr:29913

[000096740416] 0000028257BE58C0: sending packet seq_nr:29917 ack_nr:27313 type:ST_DATA id:15964 target:193.176.124.10:37495 size:1457 error:The operation completed successfully send_buffer_size:450215 cwnd:173623 adv_wnd:809256 in-flight:165255 mtu:1457 timestamp:3319102162 time_diff:3078327443 mtu_probe:0 extension:0
[000096740440] 0000028257BE58C0: socket stalled
...
[000097364479] 0000028257BE58C0: Packet 29917 lost. (1 duplicate acks, trigger fast-resend)
[000097364481] 0000028257BE58C0: cwnd cut:88285 packets_in_flight:59 loss_seq_nr:30034

[000099560647] 0000028257BE58C0: sending packet seq_nr:30408 ack_nr:27315 type:ST_DATA id:15964 target:193.176.124.10:37495 size:1457 error:The operation completed successfully send_buffer_size:23410 cwnd:106668 adv_wnd:563106 in-flight:104901 mtu:1457 timestamp:3321922391 time_diff:3078328684 mtu_probe:0 extension:0
[000099560678] 0000028257BE58C0: socket stalled
...
[000099875569] 0000028257BE58C0: Packet 30408 lost. (1 duplicate acks, trigger fast-resend)
[000099875571] 0000028257BE58C0: cwnd cut:54814 packets_in_flight:78 loss_seq_nr:30563

[000102955522] 0000028257BE58C0: sending packet seq_nr:30923 ack_nr:27315 type:ST_DATA id:15964 target:193.176.124.10:37495 size:1457 error:The operation completed successfully send_buffer_size:54001 cwnd:82853 adv_wnd:413998 in-flight:77598 mtu:1457 timestamp:3325317267 time_diff:3078328465 mtu_probe:0 extension:0
[000102955565] 0000028257BE58C0: socket stalled
...
[000103570616] 0000028257BE58C0: Packet 30923 lost. (2 duplicate acks, trigger fast-resend)
[000103570618] 0000028257BE58C0: cwnd cut:42841 packets_in_flight:61 loss_seq_nr:31044

[000112047771] 0000028257BE58C0: sending packet seq_nr:32576 ack_nr:27315 type:ST_DATA id:15964 target:193.176.124.10:37495 size:1457 error:The operation completed successfully send_buffer_size:88999 cwnd:123938 adv_wnd:293585 in-flight:120708 mtu:1457 timestamp:3334409516 time_diff:3078329279 mtu_probe:0 extension:0
[000112047829] 0000028257BE58C0: socket stalled
...
[000112640537] 0000028257BE58C0: Packet 32576 lost. (3 duplicate acks, trigger fast-resend)
[000112640539] 0000028257BE58C0: cwnd cut:63372 packets_in_flight:92 loss_seq_nr:32759

[000115183828] 0000028257BE58C0: sending packet seq_nr:33069 ack_nr:27316 type:ST_DATA id:15964 target:193.176.124.10:37495 size:1457 error:The operation completed successfully send_buffer_size:52835 cwnd:86887 adv_wnd:266686 in-flight:83346 mtu:1457 timestamp:3337545571 time_diff:3078328953 mtu_probe:0 extension:0
[000115183880] 0000028257BE58C0: socket stalled
...
[000115797916] 0000028257BE58C0: Packet 33069 lost. (3 duplicate acks, trigger fast-resend)
[000115797918] 0000028257BE58C0: cwnd cut:44907 packets_in_flight:65 loss_seq_nr:33198

[000124261055] 0000028257BE58C0: sending packet seq_nr:34760 ack_nr:27318 type:ST_DATA id:15964 target:193.176.124.10:37495 size:1457 error:The operation completed successfully send_buffer_size:98815 cwnd:125943 adv_wnd:289480 in-flight:117834 mtu:1457 timestamp:3346622801 time_diff:3078328964 mtu_probe:0 extension:0
[000124261082] 0000028257BE58C0: socket stalled
...
[000124574649] 0000028257BE58C0: Packet 34760 lost. (0 duplicate acks, trigger fast-resend)
[000124574651] 0000028257BE58C0: cwnd cut:64295 packets_in_flight:84 loss_seq_nr:34927

[000126678463] 0000028257BE58C0: sending packet seq_nr:34889 ack_nr:27319 type:ST_DATA id:15964 target:193.176.124.10:37495 size:1457 error:The operation completed successfully send_buffer_size:44631 cwnd:156923 adv_wnd:114033 in-flight:67539 mtu:1457 timestamp:3349040209 time_diff:3078327767 mtu_probe:0 extension:0
[000126678507] 0000028257BE58C0: socket stalled
...
[000126991826] 0000028257BE58C0: Packet 34889 lost. (0 duplicate acks, trigger fast-resend)
[000126991828] 0000028257BE58C0: cwnd cut:78844 packets_in_flight:90 loss_seq_nr:35068

[000128872886] 0000028257BE58C0: sending packet seq_nr:35280 ack_nr:27319 type:ST_DATA id:15964 target:193.176.124.10:37495 size:1457 error:The operation completed successfully send_buffer_size:7678 cwnd:94847 adv_wnd:209804 in-flight:90517 mtu:1457 timestamp:3351234631 time_diff:3078328686 mtu_probe:0 extension:0
[000128872914] 0000028257BE58C0: socket stalled
...
[000129486529] 0000028257BE58C0: Packet 35280 lost. (3 duplicate acks, trigger fast-resend)
[000129486531] 0000028257BE58C0: cwnd cut:48876 packets_in_flight:71 loss_seq_nr:35421

[000163628902] 0000028257BE58C0: sending packet seq_nr:51402 ack_nr:27342 type:ST_DATA id:15964 target:193.176.124.10:37495 size:1457 error:The operation completed successfully send_buffer_size:419250 cwnd:372706 adv_wnd:997770 in-flight:362124 mtu:1457 timestamp:3385990648 time_diff:3078328642 mtu_probe:0 extension:0
[000163628954] 0000028257BE58C0: socket stalled
...
[000163942349] 0000028257BE58C0: Packet 51402 lost. (0 duplicate acks, trigger fast-resend)
[000163942351] 0000028257BE58C0: cwnd cut:187601 packets_in_flight:229 loss_seq_nr:51859

[000167556901] 0000028257BE58C0: sending packet seq_nr:51945 ack_nr:27345 type:ST_DATA id:15964 target:193.176.124.10:37495 size:1457 error:The operation completed successfully send_buffer_size:442399 cwnd:86460 adv_wnd:380904 in-flight:70413 mtu:1457 timestamp:3389918646 time_diff:3078328070 mtu_probe:0 extension:0
[000167556943] 0000028257BE58C0: socket stalled
...
[000167870344] 0000028257BE58C0: Packet 51945 lost. (1 duplicate acks, trigger fast-resend)
[000167870346] 0000028257BE58C0: cwnd cut:44444 packets_in_flight:62 loss_seq_nr:52068


[000184146260] 0000028257BE58C0: Packet 56090 lost. (2 duplicate acks, trigger fast-resend)
[000184146262] 0000028257BE58C0: cwnd cut:97130 packets_in_flight:138 loss_seq_nr:56365

[000185399243] 0000028257BE58C0: sending packet seq_nr:56422 ack_nr:27351 type:ST_DATA id:15964 target:193.176.124.10:37495 size:1457 error:The operation completed successfully send_buffer_size:207016 cwnd:106700 adv_wnd:329730 in-flight:77598 mtu:1457 timestamp:3407760989 time_diff:3078328134 mtu_probe:0 extension:0
[000185399270] 0000028257BE58C0: socket stalled
...
[000185713698] 0000028257BE58C0: Packet 56422 lost. (0 duplicate acks, trigger fast-resend)
[000185713700] 0000028257BE58C0: cwnd cut:54288 packets_in_flight:70 loss_seq_nr:56561

[000192291697] 0000028257BE58C0: sending packet seq_nr:57689 ack_nr:27352 type:ST_DATA id:15964 target:193.176.124.10:37495 size:1457 error:The operation completed successfully send_buffer_size:108022 cwnd:114913 adv_wnd:318247 in-flight:110649 mtu:1457 timestamp:3414653442 time_diff:3078329467 mtu_probe:0 extension:0
[000192291726] 0000028257BE58C0: socket stalled
...
[000192888852] 0000028257BE58C0: Packet 57689 lost. (2 duplicate acks, trigger fast-resend)
[000192888854] 0000028257BE58C0: cwnd cut:58915 packets_in_flight:84 loss_seq_nr:57856

[000231410681] 0000028257BE58C0: sending packet seq_nr:13001 ack_nr:27382 type:ST_DATA id:15964 target:193.176.124.10:37495 size:1457 error:The operation completed successfully send_buffer_size:434719 cwnd:424406 adv_wnd:1041391 in-flight:422478 mtu:1457 timestamp:3453772426 time_diff:3078329677 mtu_probe:0 extension:0
[000231410717] 0000028257BE58C0: socket stalled
...
[000231886202] 0000028257BE58C0: Packet 13001 lost. (3 duplicate acks, trigger fast-resend)
[000231886204] 0000028257BE58C0: cwnd cut:213695 packets_in_flight:300 loss_seq_nr:13600

[000235363008] 0000028257BE58C0: sending packet seq_nr:13632 ack_nr:27385 type:ST_DATA id:15964 target:193.176.124.10:37495 size:1457 error:The operation completed successfully send_buffer_size:446204 cwnd:444718 adv_wnd:432804 in-flight:73287 mtu:1457 timestamp:3457724753 time_diff:3078328628 mtu_probe:0 extension:0
[000235363050] 0000028257BE58C0: socket stalled
...
[000235677307] 0000028257BE58C0: Packet 13632 lost. (2 duplicate acks, trigger fast-resend)
[000235677309] 0000028257BE58C0: cwnd cut:222614 packets_in_flight:312 loss_seq_nr:14255

[000241934837] 0000028257BE58C0: sending packet seq_nr:14274 ack_nr:27385 type:ST_DATA id:15964 target:193.176.124.10:37495 size:1457 error:The operation completed successfully send_buffer_size:32167 cwnd:229762 adv_wnd:243144 in-flight:193841 mtu:1457 timestamp:3464296582 time_diff:3078328639 mtu_probe:0 extension:0
[000241934869] 0000028257BE58C0: socket stalled
...
[000243169512] 0000028257BE58C0: Packet 14274 lost. (0 duplicate acks, trigger fast-resend)
[000243169514] 0000028257BE58C0: cwnd cut:115518 packets_in_flight:94 loss_seq_nr:14461


arvidn commented Sep 25, 2022

I believe your understanding of m_stalled is correct, and that you have identified a real issue.
The patch looks good to me.


thrnz commented Sep 25, 2022

I've added this to send_syn() as well, as it also appears to be affected.

thrnz marked this pull request as ready for review September 25, 2022 19:45
arvidn merged commit 4269b7b into arvidn:RC_2_0 Sep 25, 2022

Seeker2 commented Sep 29, 2022

This could possibly be related to #3542

This could be an issue for peers that libtorrent is mostly receiving from while the socket's send buffer is occupied by streams to other peers.

This fix may partially help heavy-load high-contention usage.

But my test scenarios are almost the exact opposite of that...with 1 seed (uT) to 1 peer (qBT) via uTP transfer.
...and pretty much everything else disabled.

I've tested with and without speed limits on both the seed and peer, from 10 KB/sec to 1000 KB/sec.
I've also tried testing using HDDs and ramdrives as well as CPU under very low load and at nearly 100% usage by other apps.

Almost all of my testing relies on uTorrent's graphs...specifically uTP "retransmission upload" under "Network Overhead".
I have been inferring that the retransmissions are occurring due to packet loss, but more sinister possibilities need to be considered.

qBitTorrent may be corrupting uTorrent's data packets and/or failing to reply back (missing ACKs) in a timely manner.
uTorrent's upload retransmissions also individually tend to be too small to be "full" data packets of 1200-1400 bytes, suggesting "runt"/broken data packets or malformed BitTorrent protocol command-and-control packets.

5-25% retransmission rate seems a high price to pay for uTorrent seeds uploading to qBitTorrent peers using uTP ...when uTorrent uTP uploading to non-libtorrent peers shows effectively no retransmissions.


ValeZAA commented Jun 5, 2023

when uTorrent uTP uploading to non-libtorrent peers shows effectively no retransmissions

What happens when you send from libtorrent to libtorrent using uTP? Is it better than uTP in uTorrent?


Seeker2 commented Jun 13, 2023

What happens when you send from libtorrent to libtorrent using uTP? Is it better than uTP in uTorrent?

It's...complicated.
In my tests... (which often do not resemble internet conditions):
uTorrent to uTorrent uTP speeds are not much less than by TCP, making it typically much faster than qBT to qBT via uTP or even TCP.
uTorrent to qBT via uTP is extremely slow.
libtorrent to libtorrent (qBT to qBT in my tests) via uTP was slower than using TCP.

Test conditions typically consisted of local loopback (127.0.0.1) and running multiple BT clients at once on a single computer, with the torrents' contents stored on a ramdrive, using a single seed to single peer at a time.
Earlier results:
#3542 (comment)
#1432 (comment)

Other people's test results:
#3542 (comment)
...and a few posts after that as well!
another example:
qbittorrent/qBittorrent#13073 (comment)


ValeZAA commented Jun 13, 2023

uTorrent to uTorrent uTP speeds are not much less than by TCP

That should be false. On optical links and links across oceans, uTP should always be faster, just like QUIC (aka HTTP/3) compressed with Brotli. And where, by the way, is Brotli or similar compression for uTP?

All tests should be redone AFTER the many recent fixes to libtorrent, and the uTP patches should be merged into Wireshark. Also, I don't understand what stops you from using uTorrent's utp library code directly? It is open source.

Finally, there is at least one place where uTorrent is wrong: #7071; that should be fixed upstream.


Seeker2 commented Jun 13, 2023

Someone else can do the tests; I've spent years on it already.


ValeZAA commented Jun 13, 2023

Someone else can do the tests, I've spent years on it already.

Did you ever test much faster Tixati?

And anyway, these pulls should be tested.

#7232

#7113

#7107
#7082
#7055
#7056

Also, there is a key allegation from you yourself:

One of the reasons why testing uTP takes so long is... uTorrent is NOT handshaking with qBitTorrent on the 1st uTP try. I confirmed this in uTorrent's logger, Process Monitor, and TCP view. Process Monitor even showed qBitTorrent received the 20 byte UDP packet successfully, it just didn't handshake/reply back.

Was that fixed? It's not even obvious why it happens. It delays the connection by 15 seconds, which makes all other tests useless.

qBitTorrent 4.1.5 DOES connect to uTorrent 3.5.5 using uTP -- it just has to fail at least once and that can delay the connection's start for >15 seconds.

Which is, by the way, ridiculous and MUST be a bug in uTorrent.

Also, the main issue was closed by a stale bot rather than actually resolved, which is very bad.


Seeker2 commented Jun 15, 2023

Did you ever test much faster Tixati?

I have tested Tixati (even v3.19) and I find it cripplingly slow, with very high CPU usage depending on which window section I have open.
It had much lower CPU usage when set to a more static window and/or minimized.
At best, speeds were slow compared to uTorrent or qBitTorrent, with the only exception being that uTorrent uploading to qBitTorrent via uTP was slower.

One of the reasons why testing uTP takes so long is...

Please provide a link to where you saw that quote, as I don't recall the full context of the situation.

Situations change, and in my later tests I see something that may look like the opposite:
#3542 (comment)


ValeZAA commented Jun 15, 2023

References:

#3542 (comment)

#3542 (comment)

Okay, so Tixati is useless.


Seeker2 commented Jul 10, 2023

One of the reasons why testing uTP takes so long is... uTorrent is NOT handshaking with qBitTorrent on the 1st uTP try. I confirmed this in uTorrent's logger, Process Monitor, and TCP view. Process Monitor even showed qBitTorrent received the 20 byte UDP packet successfully, it just didn't handshake/reply back.

In my recent tests...
These connection failures could be due in part to initially having both uT and qBT set to use port 65505.
I changed uT to port 65509, and even changed qBT's port from 65505 to 65506 and back to 65505.

[2023-07-01 04:41:29] Incoming connection from 127.0.0.1:65505
[2023-07-01 04:41:29] Incoming connection from 127.0.0.1:65505
[2023-07-01 04:41:33] Incoming connection from 127.0.0.1:65505
[2023-07-01 04:41:33] Incoming connection from 127.0.0.1:65505
[2023-07-01 04:41:58] 127.0.0.1:65505 [uTP]: Disconnect: Timed out
[2023-07-01 04:41:58] 127.0.0.1:65505 [uTP]: Disconnect: Timed out
[2023-07-01 04:42:02] 127.0.0.1:65505 [uTP]: Disconnect: Timed out
[2023-07-01 04:42:02] 127.0.0.1:65505 [uTP]: Disconnect: Timed out

And...then I realized qBT was still set in advanced options to allow multiple connections to the same ip.

Once that was disabled, this still happened:

[2023-07-01 04:43:36] Incoming connection from 127.0.0.1:65505
[2023-07-01 04:44:06] 127.0.0.1:65505 [uTP]: Disconnect: Timed out
[2023-07-01 04:46:39] Incoming connection from 127.0.0.1:65505
[2023-07-01 04:47:08] 127.0.0.1:65505 [uTP]: Disconnect: Timed out

So...I closed and restarted qBT and then restarted both torrents:

[2023-07-01 04:50:32] Incoming connection from 127.0.0.1:65505
[2023-07-01 04:50:32] Incoming connection from 127.0.0.1:65505
[2023-07-01 04:50:32] 127.0.0.1:65505 [uTP] [qBittorrent/4.5.4.0 (0.0)]: Encrypted handshake completed
[2023-07-01 04:50:32] 127.0.0.1:65505 [uTP] [qBittorrent/4.5.4.0 (0.0)]: Encrypted handshake completed
[2023-07-01 04:50:32] 127.0.0.1:65505 [uTP] [qBittorrent/4.5.4 (0.0)]: Sending 19 bytes of aggregated data
[2023-07-01 04:50:32] 127.0.0.1:65505 [uTP] [qBittorrent/4.5.4 (0.0)]: Sending 19 bytes of aggregated data
[2023-07-01 04:50:42] 127.0.0.1:65505 [uTP] [qBittorrent/4.5.4 (0.0)]: Send PEX: 1 added/0 dropped
[2023-07-01 04:55:04] 127.0.0.1:65505 [uTP] [qBittorrent/4.5.4 (100.0)]: Disconnect: Is seed
[2023-07-01 04:55:15] 127.0.0.1:65505 [uTP] [qBittorrent/4.5.4 (100.0)]: Disconnect: Is seed

...Which worked correctly, except for the fact that the combined speeds were only 1.2-1.5 MB/sec and resends (packet loss) ran from 800 to just over 1000 KB/sec!

While I was doing connection/handshake testing again, I spotted something a little like that:
Bug: qBitTorrent not reliably making outgoing TCP connections to other peers+seeds

qBitTorrent would initially make 3 uTP connections (1 second interval between each), then wait a minute and retry uTP once, repeating that every minute.

Okay, so Tixati is useless.

I have used Tixati in particular in testing because, as far as I know, its core code isn't based on uTorrent/BitTorrent or qBitTorrent/libtorrent code.
As long as my test torrent is a monolithic large file on a ramdrive using a large piece size (4 MB and up), and I keep Tixati minimized or on a less busy window/tab, Tixati outperforms qBitTorrent in the uT->Tixati vs uT->qBitTorrent uTP upload tests, both in raw speed and in much lower upload retransmits.

I need to try Transmission's uTP performance as well, but there doesn't seem to be a recent official Windows version for it.


ValeZAA commented Jul 10, 2023

I need to try Transmission's uTP performance as well, but there doesn't seem to be a recent official Windows version for it.

https://build.transmissionbt.com/job/trunk-linux/

BTW, why not use uTorrent's open-source uTP library directly? I don't get it.


Seeker2 commented Jul 10, 2023

Because I am not much of a programmer and in poor health.

This isn't my project -- I was a uTorrent beta-tester for a long time, so that's where I draw my experience from.

Even Arvidn seems stumped by this problem, so my lack of precise network training (I don't know what to expect) means I really can't help much more.


ValeZAA commented Jul 12, 2023

I gave you a link to the Windows nightly builds for Transmission. Here is an even more specific one: https://build.transmissionbt.com/job/trunk-win32/


Seeker2 commented Jul 17, 2023

I tried to test uTP on Transmission...and I couldn't get Transmission to use uTP at all!

I couldn't get Transmission to use uTP in the past either:
#1432 (comment)

Transmission uses a fork of BitTorrent's uTP library, which hasn't been updated in 5 or 6 years.
So I'm not going to bother with it for now.

uTorrent to uTorrent uTP speeds are not much less than by TCP

That should be false. On optical links and links over oceans uTP is always faster, just as QUIC aka http3, compressed with brotli. Where is BTW brotli or others for uTP?

Theory is irrelevant when other problems block putting that theory into practice.
uTP has not been put through the same level of scrutiny as QUIC.
uTorrent being closed source and poorly documented in places has also severely hindered other BT client software from crafting good uTP behavior.
Going by the BEP spec is a joke -- many of its rules are suggestions, outmoded, or impractical in practice...or ignored by everyone! (The tracker-handling spec especially.)

If there is a perceived advantage from not following the rules, some BT clients will do so.

Finally, there is at least one place where utorrent is wrong: #7071, that should be fixed upstream.

And that's why everything that interfaces with buggy software has to work around their bugs...even if the workarounds aren't great.

The whole point of using dissimilar BT client software is to determine where the underlying problems are -- in a particular BT client, in the OS used, or elsewhere.
This is important when looking at the source code, because sometimes compiler software may alter the desired behavior.


ValeZAA commented Jul 17, 2023

I tried to test uTP on Transmission...and I didn't get Transmission to use uTP at all!

Did you enable uTP? It is an option in Transmission.


Seeker2 commented Jul 17, 2023

Yes... plus it's enabled by default.

It's found under Transmission Preferences, Network, Options, the first Enable checkbox there.
I disabled and re-enabled it, testing numerous times and in numerous ways -- such as closing and reopening Transmission after each setting change, and closing and reopening the Transmission Preferences window just to verify the setting "took" in the interface.
I also tried with both uTorrent and qBitTorrent to get a uTP connection to Transmission.

If you happen to have Transmission installed, you can test its uTP with qBitTorrent as well. I wouldn't mind knowing whether the problem is specific to Transmission on Windows or not.


ValeZAA commented Jul 19, 2023

Preferences window just to verify the setting "took" in the interface.

Transmission uses libutp from uTorrent, so it is a bug then. Can you open a bug report against Transmission?


Seeker2 commented Jul 19, 2023

You need to verify the bug before I'll bother.
I did check, using TCPView.exe (a Sysinternals Windows app), that Transmission was at least listening via UDP on the incoming port I set it to.

Transmission seems to get incoming TCP connections on its listening port just fine...but is there possibly a uTP bug in how qBitTorrent/libtorrent contacts it?
