[PR #1850/e5a41df3 backport][stable-6] Document the requirement for an S3 bucket for the aws_ssm connection plugin (#2031)

SUMMARY

Fixes #1775

This explains why an S3 bucket is needed for the aws_ssm plugin, and some considerations relating to that.

ISSUE TYPE

-  Docs Pull Request

COMPONENT NAME

aws_ssm

(cherry picked from commit e5a41df)
patchback[bot] authored Jan 3, 2024
1 parent 815789a commit 6cb4223
Showing 2 changed files with 18 additions and 0 deletions.
3 changes: 3 additions & 0 deletions changelogs/fragments/1775-aws_ssm-s3-docs.yaml
@@ -0,0 +1,3 @@
minor_changes:
- aws_ssm - Updated the documentation to explicitly state that an S3 bucket is required,
the behavior of the files in that bucket, and requirements around that. (https://github.com/ansible-collections/community.aws/issues/1775).
15 changes: 15 additions & 0 deletions plugins/connection/aws_ssm.py
@@ -20,12 +20,27 @@
``ansible_user`` variables to configure the remote user. The ``become_user`` parameter should
be used to configure which user to run commands as. Remote commands will often default to
running as the ``ssm-agent`` user; however, this will also depend on how SSM has been configured.
- This plugin requires an S3 bucket to send files to/from the remote instance. This is required even for modules
which do not explicitly transfer files (such as the C(shell) or C(command) modules), because Ansible transfers the module's own C(.py) files via S3.
- Files sent via S3 will be named in S3 with the EC2 host ID (e.g. C(i-123abc/)) as the prefix.
- The files in S3 will be deleted by the end of the playbook run. If the play is terminated ungracefully, the files may remain in the bucket.
If the bucket has versioning enabled, the files will remain in version history. If your tasks involve sending secrets to/from the remote instance
(e.g. within a C(shell) command, or a SQL password in the C(community.postgresql.postgresql_query) module) then those secrets will be stored in
plaintext in those files in S3 indefinitely, visible to anyone with access to that bucket. It is therefore recommended to use a bucket with
versioning disabled or suspended.
- The files in S3 will be deleted even if the C(keep_remote_files) setting is C(true).
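The naming convention described above can be sketched as a minimal example; the helper function and the C(i-123abc) instance ID are hypothetical illustrations, not part of the plugin's API:

```python
# Hypothetical sketch of the S3 object naming described in the notes above:
# transferred files are keyed under the EC2 instance ID as a prefix, then
# deleted by the end of the playbook run.
def s3_key_for(instance_id: str, filename: str) -> str:
    # e.g. "i-123abc/AnsiballZ_command.py" -- the instance ID is the prefix,
    # so all objects for one host share a common "directory" in the bucket
    return f"{instance_id}/{filename}"

key = s3_key_for("i-123abc", "AnsiballZ_command.py")
print(key)  # i-123abc/AnsiballZ_command.py
```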
requirements:
- The remote EC2 instance must be running the AWS Systems Manager Agent (SSM Agent).
U(https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started.html)
- The control machine must have the AWS session manager plugin installed.
U(https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html)
- The remote EC2 Linux instance must have curl installed.
- The remote EC2 Linux instance and the controller both need network connectivity to S3.
- The remote instance does not require IAM credentials for S3. This module will generate a presigned URL for S3 from the controller,
and then will pass that URL to the target over SSM, telling the target to download/upload from S3 with C(curl).
- The controller requires IAM permissions to upload, download and delete files from the specified S3 bucket. This includes
`s3:GetObject`, `s3:PutObject`, `s3:ListBucket`, `s3:DeleteObject` and `s3:GetBucketLocation`.
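The credential-free transfer flow in the requirements above can be sketched as follows. The controller generates a presigned URL (for example with boto3's S3 client method ``generate_presigned_url``) and the target only needs ``curl``. This is an illustrative sketch, not the plugin's actual implementation; the URL, paths, and curl flags shown are assumptions for the example:

```python
# Hedged sketch of the transfer flow: the controller presigns a URL, the
# target downloads/uploads with curl and needs no IAM credentials of its own.
import shlex


def curl_download_cmd(presigned_url: str, dest_path: str) -> str:
    # Command the target could run to fetch a file from S3 via a presigned GET URL
    return f"curl -o {shlex.quote(dest_path)} {shlex.quote(presigned_url)}"


def curl_upload_cmd(presigned_url: str, src_path: str) -> str:
    # A presigned PUT URL lets the target upload a file without credentials
    return f"curl --upload-file {shlex.quote(src_path)} {shlex.quote(presigned_url)}"


# Hypothetical presigned URL for illustration only
url = "https://example-bucket.s3.amazonaws.com/i-123abc/file.py?X-Amz-Signature=abc"
print(curl_download_cmd(url, "/tmp/file.py"))
```

Because the URL carries the signature, only the controller's IAM identity needs the S3 permissions listed above; the remote instance's role needs none.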
options:
access_key_id:
