s3_sync - Improve error handling when testing for existing files (#1330)

SUMMARY
fixes: #58

Simplifies handling of '404' codes (uses is_boto3_error_code)
Assumes files returning '403' need updating (it's the best we can do, and it mimics the AWS CLI)
Allows other botocore exceptions to fall through to the outer try/except clause and fail cleanly, rather than being rethrown as a bare Exception()

ISSUE TYPE

Feature Pull Request

COMPONENT NAME
s3_sync
ADDITIONAL INFORMATION

Reviewed-by: Joseph Torcasso <None>
(cherry picked from commit 40984a4)
tremble authored and patchback[bot] committed Jul 9, 2022
1 parent b5609d6 commit 20d09be
Showing 3 changed files with 17 additions and 19 deletions.
2 changes: 2 additions & 0 deletions changelogs/fragments/58-s3_sync.yml
@@ -0,0 +1,2 @@
minor_changes:
- s3_sync - improves error handling during ``HEAD`` operation to compare existing files (https://github.com/ansible-collections/community.aws/issues/58).
14 changes: 5 additions & 9 deletions plugins/modules/s3_sync.py
@@ -265,6 +265,7 @@
 
 # import module snippets
 from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
+from ansible_collections.amazon.aws.plugins.module_utils.core import is_boto3_error_code


# the following function, calculate_multipart_etag, is from tlastowka
@@ -427,17 +428,12 @@ def head_s3(s3, bucket, s3keys):
     retkeys = []
     for entry in s3keys:
         retentry = entry.copy()
         # don't modify the input dict
         try:
             retentry['s3_head'] = s3.head_object(Bucket=bucket, Key=entry['s3_path'])
-        except botocore.exceptions.ClientError as err:
-            if (hasattr(err, 'response') and
-                    'ResponseMetadata' in err.response and
-                    'HTTPStatusCode' in err.response['ResponseMetadata'] and
-                    str(err.response['ResponseMetadata']['HTTPStatusCode']) == '404'):
-                pass
-            else:
-                raise Exception(err)
+        # 404 (Missing) - File doesn't exist, we'll need to upload
+        # 403 (Denied) - Sometimes we can write but not read, assume we'll need to upload
+        except is_boto3_error_code(['404', '403']):
+            pass
         retkeys.append(retentry)
     return retkeys

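The `except is_boto3_error_code([...])` idiom above works because the expression in an `except` clause is evaluated while the exception is in flight, so the helper can inspect the live exception and return either a class that matches it or one that never will. Below is a minimal, self-contained sketch of that mechanism; `FakeClientError`, `is_error_code`, and `head` are hypothetical stand-ins (the real helper lives in `amazon.aws` module_utils and matches `botocore.exceptions.ClientError`), not the collection's actual code.

```python
import sys


class FakeClientError(Exception):
    """Stand-in for botocore.exceptions.ClientError so the sketch runs without boto3."""

    def __init__(self, code):
        super().__init__(code)
        # Mimics the shape of a botocore ClientError response
        self.response = {'Error': {'Code': code}}


def is_error_code(codes):
    # Called from inside an `except` clause, so sys.exc_info() returns the
    # exception currently being handled.
    _, err, _ = sys.exc_info()
    if not isinstance(codes, list):
        codes = [codes]
    if isinstance(err, FakeClientError) and err.response['Error']['Code'] in codes:
        # Matching code: return a class the exception is an instance of.
        return FakeClientError
    # No match: return a freshly minted class nothing is an instance of,
    # so the exception falls through to any outer handler.
    return type('NeverRaised', (Exception,), {})


def head(code_to_raise):
    """Toy version of the head_s3 logic: swallow 404/403, let the rest propagate."""
    try:
        raise FakeClientError(code_to_raise)
    except is_error_code(['404', '403']):
        return 'needs upload'


print(head('404'))  # -> needs upload
print(head('403'))  # -> needs upload
try:
    head('500')
except FakeClientError as e:
    print('propagated', e.response['Error']['Code'])  # -> propagated 500
```

This is why the new code no longer needs the nested `hasattr`/`ResponseMetadata` checks: unmatched codes simply propagate to the module's outer handler instead of being rewrapped in a bare `Exception()`.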
20 changes: 10 additions & 10 deletions tests/integration/targets/s3_sync/tasks/main.yml
@@ -44,7 +44,7 @@
dd if=/dev/zero of=test4.txt bs=1M count=10
args:
chdir: "{{ output_dir }}/s3_sync"

- name: Sync files with remote bucket
s3_sync:
bucket: "{{ test_bucket }}"
@@ -66,7 +66,7 @@
- output.changed
- output.name == "{{ test_bucket_2 }}"
- not output.requester_pays

- name: Sync files with remote bucket using glacier storage class
s3_sync:
bucket: "{{ test_bucket_2 }}"
@@ -77,7 +77,7 @@
- assert:
that:
- output is changed

# ============================================================

- name: Sync files already present
@@ -99,7 +99,7 @@
- assert:
that:
- output is not changed

# ============================================================
- name: Create a third s3_bucket
s3_bucket:
@@ -112,7 +112,7 @@
- output.changed
- output.name == "{{ test_bucket_3 }}"
- not output.requester_pays

- name: Sync individual file with remote bucket
s3_sync:
bucket: "{{ test_bucket_3 }}"
@@ -162,9 +162,9 @@
object: "{{ obj }}"
with_items: "{{ objects.s3_keys }}"
loop_control:
-        loop_var: obj
+      loop_var: obj
ignore_errors: true

- name: list test_bucket_2 objects
aws_s3:
bucket: "{{ test_bucket_2 }}"
@@ -179,9 +179,9 @@
object: "{{ obj }}"
with_items: "{{ objects.s3_keys }}"
loop_control:
-        loop_var: obj
+      loop_var: obj
ignore_errors: true

- name: list test_bucket_3 objects
aws_s3:
bucket: "{{ test_bucket_3 }}"
@@ -196,7 +196,7 @@
object: "{{ obj }}"
with_items: "{{ objects.s3_keys }}"
loop_control:
-        loop_var: obj
+      loop_var: obj
ignore_errors: true

- name: Ensure all buckets are deleted
