
service/s3: setting S3 object content as base64 #3788

Merged (1 commit into master, Mar 20, 2018)

Conversation

apparentlymart (Contributor)

Generally we recommend using `content` for small text strings generated within config (e.g. rendered templates) and `source` for larger binary content loaded from disk.

This new argument, `content_base64`, allows a third case: uploading small raw binary content generated in configuration, such as the result of passing a rendered template through the `base64gzip(...)` interpolation function in conjunction with a `content_encoding = "gzip"` argument.
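As a sketch of that third case (the bucket name, object key, and template data source below are hypothetical, not taken from this PR):

```hcl
resource "aws_s3_bucket_object" "example" {
  bucket = "example-bucket"    # hypothetical bucket name
  key    = "index.html.gz"     # hypothetical object key

  # Gzip the rendered template and hand the result to the provider as
  # base64, which survives Terraform's string handling unmodified.
  content_base64   = "${base64gzip(data.template_file.index.rendered)}"
  content_encoding = "gzip"
  content_type     = "text/html"
}
```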

Terraform Core does not guarantee safe passage for raw binary strings that are not valid UTF-8, so base64 is an emerging convention for the rare cases where raw byte buffers must be manipulated within the configuration language, similar to the `user_data_base64` argument to `aws_instance`.
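To illustrate why base64 is the safe carrier here (an illustrative sketch in Python, not provider code): gzip output is raw bytes that are generally not valid UTF-8, while its base64 form is plain ASCII and round-trips losslessly through string-only plumbing.

```python
import base64
import gzip

# Gzip a small rendered "template": the result is raw bytes, and the
# gzip magic header (0x1f 0x8b) is not valid UTF-8 text.
raw = gzip.compress(b"<html>hello</html>")

# Base64 turns those bytes into a plain ASCII string that can pass
# through string-only plumbing unmodified.
encoded = base64.b64encode(raw).decode("ascii")

# Decoding recovers the original bytes exactly.
assert base64.b64decode(encoded) == raw
```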

@apparentlymart apparentlymart added the enhancement and service/s3 labels Mar 15, 2018
@apparentlymart apparentlymart self-assigned this Mar 15, 2018
@ghost ghost added the size/M label Mar 15, 2018
The documentation change under review:

-  * `source` - (Required) The path to the source file being uploaded to the bucket.
-  * `content` - (Required unless `source` given) The literal content being uploaded to the bucket.
+  * `source` - (Required unless `content` or `content_base64` is set) The path to a source file to upload to the bucket.
+  * `content` - (Required unless `source` or `content_base64` is set) Literal content to upload into the bucket.
Contributor

Instead of "Literal content" here, should we say "String content"/"UTF-8 content"?

Contributor Author

"Literal" here was trying to distinguish from "filename" and "base64" rather than making a statement about what format the content is in. That is, I was trying to explain how source is different from content to someone who might not be familiar with either yet.

I'm not sure that "string" gets to that meaning because the base64 version and the filename are also both strings, just with different interpretations of their values.

How about this?

  • source - The path to a file that will be read and uploaded as raw bytes for the object content.
  • content - Literal string value to use as the object content, which will be uploaded as UTF-8-encoded text.
  • content_base64 - Base64-encoded data that will be decoded and uploaded as raw bytes for the object content.

Contributor

Works for me 👍

@bflad bflad self-requested a review March 15, 2018 07:49
@bflad bflad added this to the v1.12.0 milestone Mar 15, 2018
@bflad bflad (Contributor) left a comment

LGTM! Just one note from earlier about potentially fixing the documentation wording for the content argument.

11 tests passed (all tests)
=== RUN   TestAccAWSS3BucketObject_tags
--- PASS: TestAccAWSS3BucketObject_tags (8.69s)
=== RUN   TestAccAWSS3BucketObject_contentBase64
--- PASS: TestAccAWSS3BucketObject_contentBase64 (8.93s)
=== RUN   TestAccAWSS3BucketObject_source
--- PASS: TestAccAWSS3BucketObject_source (9.11s)
=== RUN   TestAccAWSS3BucketObject_withContentCharacteristics
--- PASS: TestAccAWSS3BucketObject_withContentCharacteristics (9.34s)
=== RUN   TestAccAWSS3BucketObject_content
--- PASS: TestAccAWSS3BucketObject_content (9.68s)
=== RUN   TestAccAWSS3BucketObject_sse
--- PASS: TestAccAWSS3BucketObject_sse (10.35s)
=== RUN   TestAccAWSS3BucketObject_updates
--- PASS: TestAccAWSS3BucketObject_updates (13.67s)
=== RUN   TestAccAWSS3BucketObject_storageClass
--- PASS: TestAccAWSS3BucketObject_storageClass (15.19s)
=== RUN   TestAccAWSS3BucketObject_updatesWithVersioning
--- PASS: TestAccAWSS3BucketObject_updatesWithVersioning (15.25s)
=== RUN   TestAccAWSS3BucketObject_acl
--- PASS: TestAccAWSS3BucketObject_acl (15.24s)
=== RUN   TestAccAWSS3BucketObject_kms
--- PASS: TestAccAWSS3BucketObject_kms (27.63s)

@ghost ghost added the size/M label Mar 20, 2018
@apparentlymart apparentlymart merged commit 8ba4407 into master Mar 20, 2018
@bflad bflad deleted the f-s3-object-base64 branch March 23, 2018 21:12
@bflad (Contributor) commented Mar 23, 2018

This has been released in version 1.12.0 of the AWS provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.

@ghost commented Apr 7, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!

@ghost ghost locked and limited conversation to collaborators Apr 7, 2020