
Improve SFTP performance on medium/high latency connections #866

Merged
merged 6 commits into sshnet:develop on Nov 13, 2023

Conversation

zybexXL
Contributor

@zybexXL zybexXL commented Aug 30, 2021

This PR changes just 2 constants:

  • Increases maxPendingReads from 10 to 100
  • Increases socket send/receive buffer to 10 SSH packets (320K)

This results in huge improvements in transfer speeds when using sftpClient.DownloadFile/UploadFile, in particular with fast local internet connections and remote servers with high latency. The results are even better when used together with PR #865 - speeds are now comparable with Filezilla.

I tested on two servers: a relatively close one (Frankfurt) and a distant one (AWS-hosted SFTP Server). Here are the results:

Server   Ping  Pending  PR865  Iterations               Average
------  -----  -------  -----  -------------------      ----------
FRA      30ms     10      No   29.59  28.90  26.55  =>  28.35 MB/s
FRA      30ms    100      No   43.28  37.74  41.54  =>  40.85 MB/s (144%)
FRA      30ms    100     YES   65.38  67.76  66.97  =>  66.70 MB/s (235%) [link limit 500Mbps]
  
AWS     100ms     10      No    7.01   6.69   7.45  =>   7.05 MB/s 
AWS     100ms    100      No   19.97  21.31  20.96  =>  20.74 MB/s (294%)
AWS     100ms    100     YES   25.50  26.36  26.22  =>  26.02 MB/s (369%)

The 500Mbps link is satisfyingly saturated when combining these 2 PRs! 😎

Reasoning

1. Increasing the socket send/receive buffer size
This change has low impact. This buffer holds packets that have been received and are waiting to be processed by the application. If the machine is temporarily bogged down, incoming packets may not be processed quickly enough: the previous 2-packet buffer fills up, and a 3rd incoming packet then causes the connection to stall. Increasing the buffer to 10 packets reduces the probability of this scenario and makes the transfer smoother and more resilient to hiccups in CPU usage.

2. Increasing the maxPendingReads
This setting controls the maximum number of SFTP Read Requests that are sent to the server and are yet unanswered. When a packet arrives at the client, it sends another request for the next packet. In other words, it's the maximum number of in-flight data packets.

Each SFTP data packet is 32KB in size, so maxPendingReads=10 means that there's a maximum of 320 KB in flight before the transfer stalls. With 100, this is raised to 3.2MB in flight.
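This is the classic bandwidth-delay product limit: with a fixed read-ahead window, throughput can never exceed window size divided by round-trip time, no matter how fast the link is. A quick sketch of the arithmetic (illustrative Python, not SSH.NET code; the RTT values mirror the benchmark servers above):

```python
# Throughput ceiling imposed by a fixed read-ahead window:
#   max_throughput = window_bytes / round_trip_time
PACKET_SIZE = 32 * 1024  # one SFTP read request, in bytes

def max_throughput_mbps(pending_reads, rtt_seconds):
    """Upper bound on transfer speed for a given window and latency."""
    window = pending_reads * PACKET_SIZE          # bytes in flight
    return window / rtt_seconds / 1e6             # bytes/s -> MB/s

# 10 pending reads at 100 ms RTT: the window itself caps the link
print(round(max_throughput_mbps(10, 0.100), 2))   # 3.28 MB/s
print(round(max_throughput_mbps(100, 0.100), 2))  # 32.77 MB/s
```

With only 10 pending reads the window alone caps a 100 ms link at roughly 3 MB/s, which is in the same ballpark as the 7 MB/s measured above; raising it to 100 moves the ceiling well past the measured 20-26 MB/s.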

Modern internet connections are FAST. Transferring 320KB at 100Mbps takes about 30ms, and at 500Mbps it takes just 6.4ms (plus latency, ignoring details). This means that when talking to a server that is 100ms away this happens:

  • SSH.Net sends 10 read requests
  • connection is idle for 100ms, while the requests and replies are on their way
  • ~100ms later, the 10 packets arrive: this takes about 10ms on a fast connection
  • as the 10 packets arrive, SSH.Net sends 10 more requests
  • the connection is now idle for ~90ms, waiting for the 2nd batch
  • the loop repeats - the connection is idle up to 90% of the time!

The numbers are not accurate and I'm ignoring lots of detail but you get the idea. 320KB of in-flight data is just not suitable for today's fast connections anymore. Raising it to ~3.2MB fixes this problem (for now), and restores performance on distant servers.
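The read-ahead loop being tuned can be modeled with a semaphore that caps in-flight requests the way maxPendingReads does. The sketch below is an illustrative Python simulation (fake_read and download are hypothetical stand-ins, not SSH.NET's actual reader):

```python
import asyncio

MAX_PENDING_READS = 100   # the constant this PR raises from 10
CHUNK = 32 * 1024         # one SFTP read = 32 KB

async def fake_read(offset, rtt=0.01):
    """Stand-in for an SFTP read request: the reply arrives one RTT later."""
    await asyncio.sleep(rtt)
    return b"\0" * CHUNK

async def download(size):
    sem = asyncio.Semaphore(MAX_PENDING_READS)    # caps in-flight requests
    chunks = {}

    async def fetch(offset):
        async with sem:                           # wait for a free slot
            chunks[offset] = await fake_read(offset)

    offsets = range(0, size, CHUNK)
    await asyncio.gather(*(fetch(o) for o in offsets))
    return b"".join(chunks[o] for o in sorted(chunks))  # reassemble in order

data = asyncio.run(download(10 * CHUNK))
print(len(data))  # 327680
```

The key property is that a completed read immediately frees a semaphore slot for the next request, so up to MAX_PENDING_READS requests overlap the same round trip instead of waiting out the latency one by one.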

@darkoperator

darkoperator commented Aug 30, 2021 via email

@@ -59,7 +59,7 @@ private void SetupMocks()
                 .Setup(p => p.EndLStat(_statAsyncResult))
                 .Throws(new SshException());
             _sftpSessionMock.InSequence(seq)
-                .Setup(p => p.CreateFileReader(_handle, _sftpSessionMock.Object, _chunkSize, 3, null))
+                .Setup(p => p.CreateFileReader(_handle, _sftpSessionMock.Object, _chunkSize, 10, null))
Contributor Author

@zybexXL zybexXL Aug 30, 2021


note: This value needs to be the same as ServiceFactory.cs:115


private void SetupData()
{
var random = new Random();

_maxPendingReads = 100;
Contributor Author

@zybexXL zybexXL Aug 30, 2021


note: This value needs to be the same as ServiceFactory.cs:145
I've renamed this testcase to match what it does

@drieseng
Member

@zybexXL Could you check how many read-aheads PuTTY performs? I recall that it does not perform that many, but I could be wrong. How does it compare with SSH.NET on speed/throughput?

@zybexXL
Contributor Author

zybexXL commented Dec 15, 2021

@drieseng, according to Wireshark, Filezilla sends 128 requests in about 6ms. Data starts arriving 30ms later from my remote test server, and then Filezilla sends a new request for every ~32KB of received data.

EDIT: I'm running tests with psftp now - it seems it only sends a couple of read-aheads, and it's much slower than Filezilla. I'll post results soon.

@zybexXL
Contributor Author

zybexXL commented Dec 15, 2021

Confirmed, psftp sends only a single read request and waits for the data to arrive before sending another one. There's always a gap of a few ms between each 32KB block; this is not that bad when the server is close, but for distant servers and fast connections it just kills the speed. It ends up at about half-speed compared to this PR.

For SSH.NET with/without this patch, see benchmarks on top post. Filezilla, with the 128 read-aheads, has identical speed to SSH.NET with #865 and #866 applied.

Ideally the number of read-aheads should be dynamic to adjust to distance+speed automatically. That could be explored in a future PR, perhaps needed when 10Gbit speeds become common.
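One simple heuristic for such a dynamic window (purely a sketch of the idea; tune_pending_reads and its parameters are hypothetical, nothing like this exists in SSH.NET) is to keep doubling the read-ahead count while it still buys a meaningful throughput gain, and stop once the link saturates:

```python
def tune_pending_reads(measure, start=10, limit=1024, growth=2, min_gain=1.10):
    """Grow the read-ahead window while it still buys >10% throughput.

    `measure(pending)` is a caller-supplied probe returning observed
    throughput (e.g. MB/s) with that many in-flight reads.
    """
    pending = start
    best = measure(pending)
    while pending * growth <= limit:
        candidate = pending * growth
        speed = measure(candidate)
        if speed < best * min_gain:   # plateau: extra window buys little
            break
        pending, best = candidate, speed
    return pending

# Toy model: throughput is window-limited up to the link's 25 MB/s ceiling
# (32 KB reads, 100 ms RTT)
link = lambda p: min(p * 32 / 0.1 / 1024, 25.0)
print(tune_pending_reads(link))  # 80
```

In the toy model the window stops growing once the link's own ceiling is reached, which is the behavior a latency-adaptive reader would want.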

@IgorMilavec
Collaborator

Just for future reference, here is how the dotnet team tackled a similar issue in HTTP/2: https://devblogs.microsoft.com/dotnet/dotnet-6-networking-improvements/

@mmttim

mmttim commented May 10, 2022

Any updates on merging this one to the main branch? We are using SSH.NET for one of our tools in our build pipeline and we can clearly see a speed difference between FileZilla and SSH.NET.

We could switch to this branch, of course, but it's more pleasant to just include the NuGet package.

@jjxtra

jjxtra commented Jul 20, 2022

Code doesn't compile for me, complaining about target frameworks being empty

@zybexXL
Contributor Author

zybexXL commented Jul 20, 2022

I just pulled the branch and compiled it in VS2019 with no problem. This code does not touch any Framework/Project config file.
There are compile warnings about .NET Core 2.1/2.2/3.0 being deprecated, but those come from the mainline and can be ignored for now.

@A9G-Data-Droid

I have tested this and it works!

@geoffmca

@drieseng Are we able to get this reviewed and added for a future release? Are there pending issues/concerns that can be addressed?

Pedro Fonseca added 3 commits November 1, 2023 13:26
- Increases maxPendingReads from 10 to 100
- Increases socket send/receive buffer to 10 SSH packets (320K)
@zybexXL
Contributor Author

zybexXL commented Nov 1, 2023

I've rebased this PR, please review and consider merging.
I have been using this code since 2021 and I still see the huge performance gains this provides.

@WojciechNagorski WojciechNagorski merged commit 2eec748 into sshnet:develop Nov 13, 2023
1 check failed
@WojciechNagorski WojciechNagorski added this to the 2023.0.1 milestone Nov 16, 2023
@WojciechNagorski
Collaborator

The 2023.0.1 version has been released to Nuget: https://www.nuget.org/packages/SSH.NET/2023.0.1
