Single request uploads of large "readable" data streams are slow (capped at ~8 Mbps) #11044
Comments
Thanks @kasobol-msft for the detailed description. @chlowell, can you take a look at this?
Sure. It looks like an issue users will encounter through the storage libraries that may require a change in azure-core. @lmazuel, @rakshith91, @xiangyan99, your thoughts?
My first reaction is that this is something we should fix ASAP. I will gather more data and investigate possible fixes.
I spent some time investigating this issue, and while I haven't fully isolated the root cause(s), I have several conclusions:
Repro Steps
Results
Client: Azure VM, DS3_v2, West US 2, Windows Server 2019
This shows that "Pipeline,stream" is much slower than "Pipeline,array". However, once "Pipeline,array" has been executed once, both have the same performance.
A similar issue was reported against curl on Windows that might be related:
This should be fixed by #14442:
@kasobol-msft: Would you like to verify as well?
@mikeharder, works like a charm.
Package: azure-core (pipeline)
Version: latest
OS: Windows 10 Enterprise (1909), but this seems to be a platform-independent issue.
Python environment: platform win32 -- Python 3.7.7, pytest-5.4.1, py-1.8.1, pluggy-0.13.1 -- C:\git\azure-sdk-for-python\venv\Scripts\python.exe
Root cause analysis led me to these issues filed against Python's httplib (http.client). Confirmed experimentally (see below).
https://bugs.python.org/issue21790
https://bugs.python.org/issue31945
https://stackoverflow.com/questions/48719893/why-is-the-block-size-for-python-httplibs-reads-hard-coded-as-8192-bytes
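A quick way to see the default these reports are about (assuming Python 3.7+, which exposes it as a constructor argument):

```python
import http.client

# The chunk size http.client uses when sending a file-like request body.
conn = http.client.HTTPConnection("localhost")
print(conn.blocksize)  # 8192
```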
Describe the bug
When trying to push a large amount of data (4000 MB in my case) as a "readable" stream (e.g. BytesIO or a file reader - anything implementing "read"), the upload speed caps at around 8 Mbps.
(For context, I'm working on 4000 MB block upload support for the Azure Storage SDK.)
To Reproduce
Execute test_put_block_stream_large with LARGE_BLOCK_SIZE bumped to some large value (e.g. the upcoming 4000 MB limit, or the currently supported 100 MB threshold).
OR
Use the scenario from my fork as a reference (a rough sketch of the idea follows below).
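Roughly, the scenario boils down to something like this sketch. The connection string, block size, and block id are placeholders, not the actual test fixtures:

```python
import io
import os
import time

from azure.storage.blob import BlobClient

BLOCK_SIZE = 100 * 1024 * 1024  # bump toward 4000 MB to see the effect clearly

blob = BlobClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"],
    container_name="repro", blob_name="large-block")

stream = io.BytesIO(os.urandom(BLOCK_SIZE))  # anything implementing read()

start = time.perf_counter()
blob.stage_block(block_id="block-0", data=stream, length=BLOCK_SIZE)
elapsed = time.perf_counter() - start
print(f"{BLOCK_SIZE / (1024 * 1024) / elapsed:.1f} MB/s")
```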
Expected behavior
Upload speed of "readable" data is not capped by http.client and can use the full network bandwidth available.
Possible solution
The workaround suggested in https://bugs.python.org/msg305571 is quite handy and could, I guess, become part of the pipeline. So far I haven't seen any way to inject a different blocksize into http.client from the SDK.
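For example, when talking to http.client directly, Python 3.7+ already accepts a larger blocksize at construction time; the open question is how to pass this through requests/urllib3 inside the pipeline. A minimal sketch with placeholder host and path (auth headers omitted):

```python
import http.client

# blocksize defaults to 8192; a larger value makes send() read the
# file-like body in much bigger chunks.
conn = http.client.HTTPSConnection(
    "myaccount.blob.core.windows.net", blocksize=4 * 1024 * 1024)

with open("payload.bin", "rb") as stream:
    conn.request("PUT", "/container/blob", body=stream)
    response = conn.getresponse()
    print(response.status)
```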
Screenshots
Original test:
I was uploading 4000 MB of data in a single request, without any modifications, using a "readable" stream. That took over 1 hour!
It turns out http.client uses an 8192-byte buffer when a readable stream is passed:
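The relevant loop, paraphrased (not copied verbatim) from CPython's Lib/http/client.py, HTTPConnection.send, shows where the 8192-byte reads come from:

```python
def send(self, data):
    # When the body is file-like, it is streamed in self.blocksize chunks;
    # blocksize defaults to 8192 bytes.
    if hasattr(data, "read"):
        while True:
            datablock = data.read(self.blocksize)
            if not datablock:
                break
            self.sock.sendall(datablock)
        return
    # ... bytes-like bodies are sent in a single sendall() call ...
```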
Then I started to play with the blocksize, editing http.client's source and bumping the value.
After bumping it to 8192*1024, the upload speed was more than 2x faster.
And after bumping it to 10*8192*1024, I managed to upload that payload in about 4.5 minutes.
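For reference, the same effect can be had without editing the installed source, at least when driving http.client directly: blocksize is just an instance attribute read by send(). Placeholder host and path again, auth omitted:

```python
import http.client

conn = http.client.HTTPSConnection("myaccount.blob.core.windows.net")
conn.blocksize = 8192 * 1024  # read the body in 8 MiB chunks instead of 8 KiB

with open("payload.bin", "rb") as stream:
    conn.request("PUT", "/container/blob", body=stream)
```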
Additional context
This is going to impact future users of "large block"/"large blob" support (the new 4000 MB limit for a single block / 200 TB limit for a single blob). Users of that feature will most likely work with streams - either uploading data from the network or data produced on the fly by computations. Therefore it's important to address this deficiency.