ssm connection plugin fails if s3 transfer bucket is server-side encrypted via KMS #127
Further, it is possible to get the connection to work out of the box by modifying the AWS profile that boto3 uses. What surprises me is that this is supposed to be the default configuration for boto3, as per the Boto configuration documentation. Digging even further into the codebase, the botocore documentation does not mention it either.
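For reference, the profile change being described is presumably the per-profile S3 signature-version setting in the shared AWS config file; a minimal sketch (the file location and `default` profile name are assumptions):

```ini
# ~/.aws/config (assumed location and profile)
[default]
s3 =
    signature_version = s3v4
```

With this in place, any boto3 client created from that profile signs S3 requests with SigV4, without touching the plugin code.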
Digging further into the AWS documentation: there is a period of up to 24 hours after a bucket is first created during which the virtual-hosted pre-signed URL will not work, because the bucket's DNS name needs to propagate globally and the interim redirect causes a CORS problem. In essence, this only works reliably once that initial window after bucket creation has passed.
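The distinction at play here is between virtual-hosted-style and path-style S3 URLs; a short sketch of the two forms (bucket, region, and key names are placeholders):

```python
bucket, region, key = "my-transfer-bucket", "eu-central-1", "AnsiballZ_setup.py"

# Virtual-hosted style: the bucket is part of the hostname,
# so it depends on global DNS propagation of the bucket name
virtual = f"https://{bucket}.s3.{region}.amazonaws.com/{key}"

# Path style: the bucket is part of the path, so it works
# immediately after bucket creation
path_style = f"https://s3.{region}.amazonaws.com/{bucket}/{key}"

print(virtual)
print(path_style)
```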
Not only does it fail when the bucket is server-side encrypted, but some regions (such as eu-central-1) require signature version s3v4 and simply do not accept anything else (ref). Here is a patch that works for me.

With endpoint override (assumes the bucket is in the same region as the instance):

```diff
--- a/plugins/connection/aws_ssm.py	2020-09-03 18:36:37.309000000 +0200
+++ b/plugins/connection/aws_ssm.py	2020-09-03 18:37:57.748000000 +0200
@@ -158,6 +158,7 @@
 try:
     import boto3
+    from botocore.client import Config
     HAS_BOTO_3 = True
 except ImportError as e:
     HAS_BOTO_3_ERROR = str(e)
@@ -483,7 +484,13 @@
     def _get_url(self, client_method, bucket_name, out_path, http_method):
         ''' Generate URL for get_object / put_object '''
-        client = boto3.client('s3')
+        config = Config(signature_version='s3v4',
+                        region_name=self.get_option('region'),
+                        s3={'addressing_style': 'virtual'}
+                        )
+        client = boto3.client('s3',
+                              endpoint_url='https://s3.{0}.amazonaws.com'.format(self.get_option('region')),
+                              config=config)
         return client.generate_presigned_url(client_method, Params={'Bucket': bucket_name, 'Key': out_path}, ExpiresIn=3600, HttpMethod=http_method)

     @_ssm_retry
@@ -499,9 +506,9 @@
             get_command = "Invoke-WebRequest '%s' -OutFile '%s'" % (
                 self._get_url('get_object', self.get_option('bucket_name'), s3_path, 'GET'), out_path)
         else:
-            put_command = "curl --request PUT --upload-file '%s' '%s'" % (
+            put_command = "curl --show-error --silent --fail --request PUT --upload-file '%s' '%s'" % (
                 in_path, self._get_url('put_object', self.get_option('bucket_name'), s3_path, 'PUT'))
-            get_command = "curl '%s' -o '%s'" % (
+            get_command = "curl --show-error --silent --fail '%s' -o '%s'" % (
                 self._get_url('get_object', self.get_option('bucket_name'), s3_path, 'GET'), out_path)

         client = boto3.client('s3')
```

Without endpoint override:

```diff
--- a/plugins/connection/aws_ssm.py	2020-09-03 18:36:37.309000000 +0200
+++ b/plugins/connection/aws_ssm.py	2020-09-03 18:37:57.748000000 +0200
@@ -158,6 +158,7 @@
 try:
     import boto3
+    from botocore.client import Config
     HAS_BOTO_3 = True
 except ImportError as e:
     HAS_BOTO_3_ERROR = str(e)
@@ -483,7 +484,11 @@
     def _get_url(self, client_method, bucket_name, out_path, http_method):
         ''' Generate URL for get_object / put_object '''
-        client = boto3.client('s3')
+        config = Config(signature_version='s3v4',
+                        region_name=self.get_option('region')
+                        )
+        client = boto3.client('s3',
+                              config=config)
         return client.generate_presigned_url(client_method, Params={'Bucket': bucket_name, 'Key': out_path}, ExpiresIn=3600, HttpMethod=http_method)

     @_ssm_retry
@@ -499,9 +504,9 @@
             get_command = "Invoke-WebRequest '%s' -OutFile '%s'" % (
                 self._get_url('get_object', self.get_option('bucket_name'), s3_path, 'GET'), out_path)
         else:
-            put_command = "curl --request PUT --upload-file '%s' '%s'" % (
+            put_command = "curl --show-error --silent --fail --request PUT --upload-file '%s' '%s'" % (
                 in_path, self._get_url('put_object', self.get_option('bucket_name'), s3_path, 'PUT'))
-            get_command = "curl '%s' -o '%s'" % (
+            get_command = "curl --show-error --silent --fail '%s' -o '%s'" % (
                 self._get_url('get_object', self.get_option('bucket_name'), s3_path, 'GET'), out_path)

         client = boto3.client('s3')
```
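The `--show-error --silent --fail` flags added in both patches change curl's exit status on HTTP errors, which is what lets Ansible detect the failed transfer. A quick local demonstration (assumes `python3` and `curl` are installed and port 8099 is free):

```shell
#!/bin/sh
# Serve an empty directory so any requested path returns HTTP 404
tmpdir=$(mktemp -d)
python3 -m http.server 8099 --directory "$tmpdir" >/dev/null 2>&1 &
srv=$!
sleep 1

# Without --fail, curl exits 0 even though the server returned 404
curl --silent http://127.0.0.1:8099/missing -o /dev/null
echo "without --fail: exit $?"

# With --fail, curl exits 22 on an HTTP error status
curl --silent --fail http://127.0.0.1:8099/missing -o /dev/null
echo "with --fail: exit $?"

kill $srv
```

Without `--fail`, the downloaded "file" is just the server's error page, which is exactly how the error body ended up inside AnsiballZ_setup.py here.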
…ections#127)
* always use signature version 4
* pass region to the bucket client
* detect when curl fails and abort appropriately

Some regions only support signature v4, and any bucket that is encrypted also requires v4 signatures. Likewise, some regions require the region_name to be passed.
I have preliminary patches for this issue (and other aws_ssm connection plugin issues) in my fork.
Hi @abeluck
@jupeter no, I did not create a PR. The patches I posted and linked above worked at the time. We moved away from the ssm connection plugin because of this and other stability issues; it just wasn't ready for production (in our experience).
So what do we do now? I am deploying a cluster in eu-central-1; what would be the most elegant fix so far? Oh, I see it in the repo: https://github.com/ansible-collections/community.aws/blob/main/plugins/connection/aws_ssm.py#L520. Not sure why the Galaxy collection is so old, though...
Hi all, the problem has been fixed by #352 and will be part of the 1.4.0 release. Feel free to reopen if you think we've missed something.
SUMMARY
While attempting to use an ssm connection where the S3 bucket has KMS server-side encryption enabled, the existing code returns an InvalidArgument response (400 status code).
ISSUE TYPE
COMPONENT NAME
aws_ssm connection plugin
ANSIBLE VERSION
CONFIGURATION
OS / ENVIRONMENT
Controller: Ubuntu 20.04 running ansible inside a python venv with boto3 1.14.16, botocore 1.17.16
Target: Amazon Linux 2 with SSM.
STEPS TO REPRODUCE
Note: it may be necessary to wait up to 24 hours for the bucket name to be propagated via DNS before continuing.
aws ssm start-session --target <id>
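For reference, a minimal inventory wiring for the plugin might look like the following (variable names are the plugin's documented options; all values are placeholders):

```yaml
# host_vars for the target instance (placeholder values)
ansible_connection: community.aws.aws_ssm
ansible_aws_ssm_instance_id: i-0123456789abcdef0
ansible_aws_ssm_bucket_name: my-transfer-bucket   # a KMS-encrypted bucket triggers the bug
ansible_aws_ssm_region: eu-central-1
```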
EXPECTED RESULTS
Expecting "Hello world" to be reported by Ansible.
ACTUAL RESULTS
Ansible uploads the AnsiballZ_setup.py file successfully, but on retrieval by curl within the target system the file is replaced by the body of the failed request, so the contents of AnsiballZ_setup.py end up being the error response rather than Python code.

Root causes and work-around:

1. The curl commands are invoked without --silent --show-error --fail, so the exit code of curl does not reflect the failed HTTP status code (400 in this case), and Ansible mistakenly continues and tries to execute AnsiballZ_setup.py as if it were a Python script.
2. _get_url uses the client.generate_presigned_url function from boto3, but for this to work in the presence of encrypted content it requires passing a signature version of s3v4 as part of a config object.