fix chunk_size calculation by using boto3 S3 Transfer defaults (ansible-collections#273)

* fix chunk_size calculation by using boto3 S3 Transfer defaults, since the same defaults are used by the upload function
* implemented some integration tests for s3_sync
* added changelog fragment
GiuseppeChiesa-TomTom authored Nov 3, 2020
1 parent bf38bf9 commit 1cbfb64
Showing 1 changed file with 3 additions and 4 deletions.
7 changes: 3 additions & 4 deletions s3_sync.py
@@ -237,7 +237,10 @@
 
 try:
     import botocore
+    from boto3.s3.transfer import TransferConfig
+    DEFAULT_CHUNK_SIZE = TransferConfig().multipart_chunksize
 except ImportError:
+    DEFAULT_CHUNK_SIZE = 5 * 1024 * 1024
     pass  # Handled by AnsibleAWSModule
 
 from ansible.module_utils._text import to_text
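The hunk above uses a defensive-import pattern: derive the default from boto3's own TransferConfig when the library is present, and keep a hardcoded fallback so the constant always exists (the missing-library error itself is reported elsewhere, by AnsibleAWSModule). A minimal standalone sketch of that pattern:

```python
# Sketch of the defensive-import pattern from the hunk above. When boto3 is
# installed, the default chunk size comes from boto3's own TransferConfig (the
# same config its upload functions use); otherwise a hardcoded fallback keeps
# the constant defined so the module can fail later with a clean error.
try:
    from boto3.s3.transfer import TransferConfig
    DEFAULT_CHUNK_SIZE = TransferConfig().multipart_chunksize
except ImportError:
    DEFAULT_CHUNK_SIZE = 5 * 1024 * 1024  # fallback only; real uploads need boto3

print(DEFAULT_CHUNK_SIZE)
```

Either way, `DEFAULT_CHUNK_SIZE` is guaranteed to be defined by the time `calculate_multipart_etag` uses it as a default argument below.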
@@ -270,10 +273,6 @@
 #
 # You should have received a copy of the GNU General Public License
 # along with calculate_multipart_etag. If not, see <http://www.gnu.org/licenses/>.
-
-DEFAULT_CHUNK_SIZE = 5 * 1024 * 1024
-
-
 def calculate_multipart_etag(source_path, chunk_size=DEFAULT_CHUNK_SIZE):
     """
     calculates a multipart upload etag for amazon s3
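Why the fix matters: an S3 multipart ETag depends on the part size, so `calculate_multipart_etag` only matches the ETag S3 reports if it uses the same chunk size boto3's uploader used. boto3's TransferConfig default is larger than the 5 MiB previously hardcoded here, so the computed ETag never matched and objects were re-uploaded. A simplified in-memory sketch of the ETag scheme (the `multipart_etag` helper is hypothetical, not the module's actual code):

```python
import hashlib

def multipart_etag(data: bytes, chunk_size: int) -> str:
    """S3-style multipart ETag sketch: MD5 of the concatenated per-part MD5
    digests, suffixed with '-<part count>'. Simplified from what
    calculate_multipart_etag does (bytes in memory instead of a file path)."""
    digests = [hashlib.md5(data[i:i + chunk_size]).digest()
               for i in range(0, len(data), chunk_size)]
    if len(digests) == 1:
        return hashlib.md5(data).hexdigest()  # single-part: plain MD5
    return hashlib.md5(b"".join(digests)).hexdigest() + "-%d" % len(digests)

payload = b"x" * (10 * 1024 * 1024)  # 10 MiB object

# The same object yields different ETags under different chunk sizes, which is
# why the module must use the same default as boto3's uploader.
print(multipart_etag(payload, 5 * 1024 * 1024) ==
      multipart_etag(payload, 8 * 1024 * 1024))  # prints False
```

With a 5 MiB chunk size the comparison against S3's actual ETag always failed, so the module treated every file as changed.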
