Limit number of parallel s3 transfers #907
Comments
There is no way to limit the number of parallel s3 transfers from the command line itself. That being said, you do not want to change …
Is there a way to increase NUM_THREADS so that more files are downloaded in parallel? I've increased it to 20 and now see 4 files instead of 2 during a sync. But how can we get to, say, 6 or 8 files? I changed NUM_THREADS to 30, but it seems to have the same effect as 20.
It also depends on how large the file is. Most of the time, files are uploaded as 5 MB chunks, so increasing the chunk size would decrease the number of parts needed to upload a file completely. Then, since each thread uploads one of these chunks, more threads could be used for other files because there are fewer parts to upload. Increasing …
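To make the arithmetic above concrete (the file and chunk sizes here are illustrative, not taken from the thread): a 100 MB file split into the default 5 MB parts needs 20 uploads on its own, while 25 MB parts cut that to 4, leaving more worker threads free to start on other files during a sync.

```sh
# Parts needed per file = ceiling(file size / chunk size); sizes are illustrative
echo $(( (100 + 5 - 1) / 5 ))    # 100 MB file, 5 MB chunks  -> 20 parts
echo $(( (100 + 25 - 1) / 25 ))  # 100 MB file, 25 MB chunks -> 4 parts
```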
This is now possible via #1122, docs are here: https://github.com/aws/aws-cli/blob/develop/awscli/topics/s3-config.rst
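For anyone landing here later, the settings documented in that file can be applied with `aws configure set`; a minimal sketch, assuming the values below suit your machine and network (they are examples, not recommendations):

```sh
# Cap the number of parallel S3 transfers (the CLI default is 10 concurrent requests)
aws configure set default.s3.max_concurrent_requests 4

# Optionally raise the multipart chunk size so large files are split into fewer parts
aws configure set default.s3.multipart_chunksize 16MB
```

These commands write to the `s3` section of `~/.aws/config`, so the limits apply to every `aws s3` invocation under that profile.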
Can we get a way to limit the number of parallel s3 transfers? As it is, transfer jobs are consuming a lot of system resources (CPU, disk I/O, bandwidth) because the `aws s3 sync` command launches several parallel transfers.

The simplest way I can think of would be to pull constants from environment variables. This would let you override `MAX_PARTS` from constants.py. With this method, power users could override other constants as well. But for my use case, a `--max-parts` command line option would suffice.
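To illustrate the request, here is roughly how either override might be used; the environment variable names and the `--max-parts` option below are hypothetical and do not exist in the CLI, and the local path and bucket are placeholders:

```sh
# Hypothetical environment-variable overrides for the constants in constants.py
MAX_PARTS=2 NUM_THREADS=4 aws s3 sync ./local-dir s3://my-bucket/prefix

# Hypothetical explicit command line option
aws s3 sync ./local-dir s3://my-bucket/prefix --max-parts 2
```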