diff --git a/.changelog/22850.txt b/.changelog/22850.txt index c19e2ea16c0..1c35d251149 100644 --- a/.changelog/22850.txt +++ b/.changelog/22850.txt @@ -1,11 +1,11 @@ -```release-note:note -data-source/aws_s3_bucket_object: The data source has been renamed. Use `aws_s3_object` instead +```release-note:new-resource +aws_s3_object ``` -```release-note:note -data-source/aws_s3_bucket_objects: The data source has been renamed. Use `aws_s3_objects` instead +```release-note:new-data-source +aws_s3_object ``` -```release-note:note -resource/aws_s3_bucket_object: The resource has been renamed. Use `aws_s3_object` instead -``` +```release-note:new-data-source +aws_s3_objects +``` \ No newline at end of file diff --git a/.changelog/22877.txt b/.changelog/22877.txt new file mode 100644 index 00000000000..4c5637eee0f --- /dev/null +++ b/.changelog/22877.txt @@ -0,0 +1,11 @@ +```release-note:note +data-source/aws_s3_bucket_object: The data source is deprecated; use `aws_s3_object` instead +``` + +```release-note:note +data-source/aws_s3_bucket_objects: The data source is deprecated; use `aws_s3_objects` instead +``` + +```release-note:note +resource/aws_s3_bucket_object: The resource is deprecated; use `aws_s3_object` instead +``` diff --git a/CHANGELOG.md b/CHANGELOG.md index 50d2eb189a1..a7cd4b404bc 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -26,8 +26,6 @@ NOTES: * data-source/aws_network_acls: The type of the `ids` attribute has changed from Set to List. If no NACLs match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) * data-source/aws_network_interfaces: The type of the `ids` attribute has changed from Set to List. If no network interfaces match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) * data-source/aws_route_tables: The type of the `ids` attribute has changed from Set to List. If no route tables match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) -* data-source/aws_s3_bucket_object: The data source has been renamed. Use `aws_s3_object` instead ([#22850](https://github.com/hashicorp/terraform-provider-aws/issues/22850)) -* data-source/aws_s3_bucket_objects: The data source has been renamed. Use `aws_s3_objects` instead ([#22850](https://github.com/hashicorp/terraform-provider-aws/issues/22850)) * data-source/aws_security_groups: If no security groups match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) * data-source/aws_ssoadmin_instances: The type of the `identity_store_ids` and `arns` attributes has changed from Set to List. If no instances match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) * data-source/aws_subnet_ids: The `aws_subnet_ids` data source has been deprecated and will be removed in a future version. Use the `aws_subnets` data source instead ([#22743](https://github.com/hashicorp/terraform-provider-aws/issues/22743)) @@ -47,14 +45,16 @@ NOTES: * resource/aws_elasticache_replication_group: The `replication_group_description` argument has been deprecated. 
All configurations using `replication_group_description` should be updated to use the `description` argument instead ([#22666](https://github.com/hashicorp/terraform-provider-aws/issues/22666)) * resource/aws_route: The `instance_id` argument has been deprecated. All configurations using `instance_id` should be updated to use the `network_interface_id` argument instead ([#22664](https://github.com/hashicorp/terraform-provider-aws/issues/22664)) * resource/aws_route_table: The `instance_id` argument of the `route` configuration block has been deprecated. All configurations using `route` `instance_id` should be updated to use the `route` `network_interface_id` argument instead ([#22664](https://github.com/hashicorp/terraform-provider-aws/issues/22664)) -* resource/aws_s3_bucket_object: The resource has been renamed. Use `aws_s3_object` instead ([#22850](https://github.com/hashicorp/terraform-provider-aws/issues/22850)) FEATURES: * **New Data Source:** `aws_ec2_client_vpn_endpoint` ([#14218](https://github.com/hashicorp/terraform-provider-aws/issues/14218)) * **New Data Source:** `aws_eips` ([#7537](https://github.com/hashicorp/terraform-provider-aws/issues/7537)) +* **New Data Source:** `aws_s3_object` ([#22850](https://github.com/hashicorp/terraform-provider-aws/issues/22850)) +* **New Data Source:** `aws_s3_objects` ([#22850](https://github.com/hashicorp/terraform-provider-aws/issues/22850)) * **New Resource:** `aws_s3_bucket_cors_configuration` ([#12141](https://github.com/hashicorp/terraform-provider-aws/issues/12141)) * **New Resource:** `aws_s3_bucket_versioning` ([#5132](https://github.com/hashicorp/terraform-provider-aws/issues/5132)) +* **New Resource:** `aws_s3_object` ([#22850](https://github.com/hashicorp/terraform-provider-aws/issues/22850)) ENHANCEMENTS: diff --git a/internal/provider/provider.go b/internal/provider/provider.go index ccdcba5817d..f8ba9941593 100644 --- a/internal/provider/provider.go +++ b/internal/provider/provider.go @@ -695,6 +695,8 @@ func Provider() *schema.Provider { "aws_s3_bucket": s3.DataSourceBucket(), "aws_s3_object": s3.DataSourceObject(), "aws_s3_objects": s3.DataSourceObjects(), + "aws_s3_bucket_object": s3.DataSourceBucketObject(), // DEPRECATED: use aws_s3_object instead + "aws_s3_bucket_objects": s3.DataSourceBucketObjects(), // DEPRECATED: use aws_s3_objects instead "aws_sagemaker_prebuilt_ecr_image": sagemaker.DataSourcePrebuiltECRImage(), @@ -1608,6 +1610,7 @@ func Provider() *schema.Provider { "aws_s3_bucket_versioning": s3.ResourceBucketVersioning(), "aws_s3_object": s3.ResourceObject(), "aws_s3_object_copy": s3.ResourceObjectCopy(), + "aws_s3_bucket_object": s3.ResourceBucketObject(), // DEPRECATED: use aws_s3_object instead "aws_s3_access_point": s3control.ResourceAccessPoint(), "aws_s3control_access_point_policy": s3control.ResourceAccessPointPolicy(), diff --git a/internal/service/s3/bucket_object.go b/internal/service/s3/bucket_object.go new file mode 100644 index 00000000000..c307c89c6a8 --- /dev/null +++ b/internal/service/s3/bucket_object.go @@ -0,0 +1,603 @@ +package s3 + +// WARNING: This code is DEPRECATED and will be removed in a future release!! +// DO NOT apply fixes or enhancements to the resource in this file. +// INSTEAD, apply fixes and enhancements to the resource in "object.go". 
+ +import ( + "bytes" + "context" + "encoding/base64" + "fmt" + "io" + "log" + "net/http" + "os" + "regexp" + "strings" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/s3" + "github.com/aws/aws-sdk-go/service/s3/s3manager" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/flex" + "github.com/hashicorp/terraform-provider-aws/internal/service/kms" + tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/internal/verify" + "github.com/mitchellh/go-homedir" +) + +func ResourceBucketObject() *schema.Resource { + return &schema.Resource{ + Create: resourceBucketObjectCreate, + Read: resourceBucketObjectRead, + Update: resourceBucketObjectUpdate, + Delete: resourceBucketObjectDelete, + + Importer: &schema.ResourceImporter{ + State: resourceBucketObjectImport, + }, + + CustomizeDiff: customdiff.Sequence( + resourceBucketObjectCustomizeDiff, + verify.SetTagsDiff, + ), + + Schema: map[string]*schema.Schema{ + "acl": { + Type: schema.TypeString, + Default: s3.ObjectCannedACLPrivate, + Optional: true, + ValidateFunc: validation.StringInSlice(s3.ObjectCannedACL_Values(), false), + }, + "bucket": { + Deprecated: "Use the aws_s3_object resource instead", + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.NoZeroValues, + }, + "bucket_key_enabled": { + Type: schema.TypeBool, + Optional: true, + Computed: true, + }, + "cache_control": { + Type: schema.TypeString, + Optional: true, + }, + "content": { + Type: schema.TypeString, + Optional: true, + ConflictsWith: []string{"source", "content_base64"}, + }, + "content_base64": { + Type: schema.TypeString, + Optional: true, + ConflictsWith: []string{"source", "content"}, + }, + "content_disposition": { + Type: schema.TypeString, + Optional: true, + }, + "content_encoding": { + Type: schema.TypeString, + Optional: true, + }, + "content_language": { + Type: schema.TypeString, + Optional: true, + }, + "content_type": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "etag": { + Type: schema.TypeString, + // This will conflict with SSE-C and SSE-KMS encryption and multi-part upload + // if/when it's actually implemented. The Etag then won't match raw-file MD5. 
+ // See http://docs.aws.amazon.com/AmazonS3/latest/API/RESTCommonResponseHeaders.html + Optional: true, + Computed: true, + ConflictsWith: []string{"kms_key_id"}, + }, + "force_destroy": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "key": { + Deprecated: "Use the aws_s3_object resource instead", + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.NoZeroValues, + }, + "kms_key_id": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: verify.ValidARN, + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + // ignore diffs where the user hasn't specified a kms_key_id but the bucket has a default KMS key configured + if new == "" && d.Get("server_side_encryption") == s3.ServerSideEncryptionAwsKms { + return true + } + return false + }, + }, + "metadata": { + Type: schema.TypeMap, + ValidateFunc: validateMetadataIsLowerCase, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "object_lock_legal_hold_status": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice(s3.ObjectLockLegalHoldStatus_Values(), false), + }, + "object_lock_mode": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice(s3.ObjectLockMode_Values(), false), + }, + "object_lock_retain_until_date": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.IsRFC3339Time, + }, + "server_side_encryption": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice(s3.ServerSideEncryption_Values(), false), + Computed: true, + }, + "source": { + Type: schema.TypeString, + Optional: true, + ConflictsWith: []string{"content", "content_base64"}, + }, + "source_hash": { + Type: schema.TypeString, + Optional: true, + }, + "storage_class": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: validation.StringInSlice(s3.ObjectStorageClass_Values(), false), + }, + "tags": tftags.TagsSchema(), + "tags_all": tftags.TagsSchemaComputed(), + "version_id": { + Type: schema.TypeString, + Computed: true, + }, + "website_redirect": { + Type: schema.TypeString, + Optional: true, + }, + }, + } +} + +func resourceBucketObjectCreate(d *schema.ResourceData, meta interface{}) error { + return resourceBucketObjectUpload(d, meta) +} + +func resourceBucketObjectRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*conns.AWSClient).S3Conn + defaultTagsConfig := meta.(*conns.AWSClient).DefaultTagsConfig + ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig + + bucket := d.Get("bucket").(string) + key := d.Get("key").(string) + + input := &s3.HeadObjectInput{ + Bucket: aws.String(bucket), + Key: aws.String(key), + } + + var resp *s3.HeadObjectOutput + + err := resource.Retry(s3ObjectCreationTimeout, func() *resource.RetryError { + var err error + + resp, err = conn.HeadObject(input) + + if d.IsNewResource() && tfawserr.ErrStatusCodeEquals(err, http.StatusNotFound) { + return resource.RetryableError(err) + } + + if err != nil { + return resource.NonRetryableError(err) + } + + return nil + }) + + if tfresource.TimedOut(err) { + resp, err = conn.HeadObject(input) + } + + if !d.IsNewResource() && tfawserr.ErrStatusCodeEquals(err, http.StatusNotFound) { + log.Printf("[WARN] S3 Object (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + if err != nil { + return fmt.Errorf("error reading S3 Object (%s): %w", d.Id(), err) + } + + 
log.Printf("[DEBUG] Reading S3 Object meta: %s", resp) + + d.Set("bucket_key_enabled", resp.BucketKeyEnabled) + d.Set("cache_control", resp.CacheControl) + d.Set("content_disposition", resp.ContentDisposition) + d.Set("content_encoding", resp.ContentEncoding) + d.Set("content_language", resp.ContentLanguage) + d.Set("content_type", resp.ContentType) + metadata := flex.PointersMapToStringList(resp.Metadata) + + // AWS Go SDK capitalizes metadata, this is a workaround. https://github.com/aws/aws-sdk-go/issues/445 + for k, v := range metadata { + delete(metadata, k) + metadata[strings.ToLower(k)] = v + } + + if err := d.Set("metadata", metadata); err != nil { + return fmt.Errorf("error setting metadata: %s", err) + } + d.Set("version_id", resp.VersionId) + d.Set("server_side_encryption", resp.ServerSideEncryption) + d.Set("website_redirect", resp.WebsiteRedirectLocation) + d.Set("object_lock_legal_hold_status", resp.ObjectLockLegalHoldStatus) + d.Set("object_lock_mode", resp.ObjectLockMode) + d.Set("object_lock_retain_until_date", flattenS3ObjectDate(resp.ObjectLockRetainUntilDate)) + + if err := resourceBucketObjectSetKMS(d, meta, resp.SSEKMSKeyId); err != nil { + return fmt.Errorf("object KMS: %w", err) + } + + // See https://forums.aws.amazon.com/thread.jspa?threadID=44003 + d.Set("etag", strings.Trim(aws.StringValue(resp.ETag), `"`)) + + // The "STANDARD" (which is also the default) storage + // class when set would not be included in the results. + d.Set("storage_class", s3.StorageClassStandard) + if resp.StorageClass != nil { + d.Set("storage_class", resp.StorageClass) + } + + // Retry due to S3 eventual consistency + tagsRaw, err := verify.RetryOnAWSCode(s3.ErrCodeNoSuchBucket, func() (interface{}, error) { + return ObjectListTags(conn, bucket, key) + }) + + if err != nil { + return fmt.Errorf("error listing tags for S3 Bucket (%s) Object (%s): %s", bucket, key, err) + } + + tags, ok := tagsRaw.(tftags.KeyValueTags) + + if !ok { + return fmt.Errorf("error listing tags for S3 Bucket (%s) Object (%s): unable to convert tags", bucket, key) + } + + tags = tags.IgnoreAWS().IgnoreConfig(ignoreTagsConfig) + + //lintignore:AWSR002 + if err := d.Set("tags", tags.RemoveDefaultConfig(defaultTagsConfig).Map()); err != nil { + return fmt.Errorf("error setting tags: %w", err) + } + + if err := d.Set("tags_all", tags.Map()); err != nil { + return fmt.Errorf("error setting tags_all: %w", err) + } + + return nil +} + +func resourceBucketObjectUpdate(d *schema.ResourceData, meta interface{}) error { + if hasS3BucketObjectContentChanges(d) { + return resourceBucketObjectUpload(d, meta) + } + + conn := meta.(*conns.AWSClient).S3Conn + + bucket := d.Get("bucket").(string) + key := d.Get("key").(string) + + if d.HasChange("acl") { + _, err := conn.PutObjectAcl(&s3.PutObjectAclInput{ + Bucket: aws.String(bucket), + Key: aws.String(key), + ACL: aws.String(d.Get("acl").(string)), + }) + if err != nil { + return fmt.Errorf("error putting S3 object ACL: %s", err) + } + } + + if d.HasChange("object_lock_legal_hold_status") { + _, err := conn.PutObjectLegalHold(&s3.PutObjectLegalHoldInput{ + Bucket: aws.String(bucket), + Key: aws.String(key), + LegalHold: &s3.ObjectLockLegalHold{ + Status: aws.String(d.Get("object_lock_legal_hold_status").(string)), + }, + }) + if err != nil { + return fmt.Errorf("error putting S3 object lock legal hold: %s", err) + } + } + + if d.HasChanges("object_lock_mode", "object_lock_retain_until_date") { + req := &s3.PutObjectRetentionInput{ + Bucket: aws.String(bucket), + Key: 
aws.String(key), + Retention: &s3.ObjectLockRetention{ + Mode: aws.String(d.Get("object_lock_mode").(string)), + RetainUntilDate: expandS3ObjectDate(d.Get("object_lock_retain_until_date").(string)), + }, + } + + // Bypass required to lower or clear retain-until date. + if d.HasChange("object_lock_retain_until_date") { + oraw, nraw := d.GetChange("object_lock_retain_until_date") + o := expandS3ObjectDate(oraw.(string)) + n := expandS3ObjectDate(nraw.(string)) + if n == nil || (o != nil && n.Before(*o)) { + req.BypassGovernanceRetention = aws.Bool(true) + } + } + + _, err := conn.PutObjectRetention(req) + if err != nil { + return fmt.Errorf("error putting S3 object lock retention: %s", err) + } + } + + if d.HasChange("tags_all") { + o, n := d.GetChange("tags_all") + + if err := ObjectUpdateTags(conn, bucket, key, o, n); err != nil { + return fmt.Errorf("error updating tags: %s", err) + } + } + + return resourceBucketObjectRead(d, meta) +} + +func resourceBucketObjectDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*conns.AWSClient).S3Conn + + bucket := d.Get("bucket").(string) + key := d.Get("key").(string) + // We are effectively ignoring all leading '/'s in the key name and + // treating multiple '/'s as a single '/' as aws.Config.DisableRestProtocolURICleaning is false + key = strings.TrimLeft(key, "/") + key = regexp.MustCompile(`/+`).ReplaceAllString(key, "/") + + var err error + if _, ok := d.GetOk("version_id"); ok { + err = DeleteAllObjectVersions(conn, bucket, key, d.Get("force_destroy").(bool), false) + } else { + err = deleteS3ObjectVersion(conn, bucket, key, "", false) + } + + if err != nil { + return fmt.Errorf("error deleting S3 Bucket (%s) Object (%s): %s", bucket, key, err) + } + + return nil +} + +func resourceBucketObjectImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + id := d.Id() + id = strings.TrimPrefix(id, "s3://") + parts := strings.Split(id, "/") + + if len(parts) < 2 { + return []*schema.ResourceData{d}, fmt.Errorf("id %s should be in format / or s3:///", id) + } + + bucket := parts[0] + key := strings.Join(parts[1:], "/") + + d.SetId(key) + d.Set("bucket", bucket) + d.Set("key", key) + + return []*schema.ResourceData{d}, nil +} + +func resourceBucketObjectUpload(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*conns.AWSClient).S3Conn + uploader := s3manager.NewUploaderWithClient(conn) + defaultTagsConfig := meta.(*conns.AWSClient).DefaultTagsConfig + tags := defaultTagsConfig.MergeTags(tftags.New(d.Get("tags").(map[string]interface{}))) + + var body io.ReadSeeker + + if v, ok := d.GetOk("source"); ok { + source := v.(string) + path, err := homedir.Expand(source) + if err != nil { + return fmt.Errorf("Error expanding homedir in source (%s): %s", source, err) + } + file, err := os.Open(path) + if err != nil { + return fmt.Errorf("Error opening S3 object source (%s): %s", path, err) + } + + body = file + defer func() { + err := file.Close() + if err != nil { + log.Printf("[WARN] Error closing S3 object source (%s): %s", path, err) + } + }() + } else if v, ok := d.GetOk("content"); ok { + content := v.(string) + body = bytes.NewReader([]byte(content)) + } else if v, ok := d.GetOk("content_base64"); ok { + content := v.(string) + // We can't do streaming decoding here (with base64.NewDecoder) because + // the AWS SDK requires an io.ReadSeeker but a base64 decoder can't seek. 
+ contentRaw, err := base64.StdEncoding.DecodeString(content) + if err != nil { + return fmt.Errorf("error decoding content_base64: %s", err) + } + body = bytes.NewReader(contentRaw) + } else { + body = bytes.NewReader([]byte{}) + } + + bucket := d.Get("bucket").(string) + key := d.Get("key").(string) + + input := &s3manager.UploadInput{ + ACL: aws.String(d.Get("acl").(string)), + Body: body, + Bucket: aws.String(bucket), + Key: aws.String(key), + } + + if v, ok := d.GetOk("storage_class"); ok { + input.StorageClass = aws.String(v.(string)) + } + + if v, ok := d.GetOk("cache_control"); ok { + input.CacheControl = aws.String(v.(string)) + } + + if v, ok := d.GetOk("content_type"); ok { + input.ContentType = aws.String(v.(string)) + } + + if v, ok := d.GetOk("metadata"); ok { + input.Metadata = flex.ExpandStringMap(v.(map[string]interface{})) + } + + if v, ok := d.GetOk("content_encoding"); ok { + input.ContentEncoding = aws.String(v.(string)) + } + + if v, ok := d.GetOk("content_language"); ok { + input.ContentLanguage = aws.String(v.(string)) + } + + if v, ok := d.GetOk("content_disposition"); ok { + input.ContentDisposition = aws.String(v.(string)) + } + + if v, ok := d.GetOk("bucket_key_enabled"); ok { + input.BucketKeyEnabled = aws.Bool(v.(bool)) + } + + if v, ok := d.GetOk("server_side_encryption"); ok { + input.ServerSideEncryption = aws.String(v.(string)) + } + + if v, ok := d.GetOk("kms_key_id"); ok { + input.SSEKMSKeyId = aws.String(v.(string)) + input.ServerSideEncryption = aws.String(s3.ServerSideEncryptionAwsKms) + } + + if len(tags) > 0 { + // The tag-set must be encoded as URL Query parameters. + input.Tagging = aws.String(tags.IgnoreAWS().UrlEncode()) + } + + if v, ok := d.GetOk("website_redirect"); ok { + input.WebsiteRedirectLocation = aws.String(v.(string)) + } + + if v, ok := d.GetOk("object_lock_legal_hold_status"); ok { + input.ObjectLockLegalHoldStatus = aws.String(v.(string)) + } + + if v, ok := d.GetOk("object_lock_mode"); ok { + input.ObjectLockMode = aws.String(v.(string)) + } + + if v, ok := d.GetOk("object_lock_retain_until_date"); ok { + input.ObjectLockRetainUntilDate = expandS3ObjectDate(v.(string)) + } + + if _, err := uploader.Upload(input); err != nil { + return fmt.Errorf("Error uploading object to S3 bucket (%s): %s", bucket, err) + } + + d.SetId(key) + + return resourceBucketObjectRead(d, meta) +} + +func resourceBucketObjectSetKMS(d *schema.ResourceData, meta interface{}, sseKMSKeyId *string) error { + // Only set non-default KMS key ID (one that doesn't match default) + if sseKMSKeyId != nil { + // retrieve S3 KMS Default Master Key + conn := meta.(*conns.AWSClient).KMSConn + keyMetadata, err := kms.FindKeyByID(conn, DefaultKmsKeyAlias) + if err != nil { + return fmt.Errorf("Failed to describe default S3 KMS key (%s): %s", DefaultKmsKeyAlias, err) + } + + if aws.StringValue(sseKMSKeyId) != aws.StringValue(keyMetadata.Arn) { + log.Printf("[DEBUG] S3 object is encrypted using a non-default KMS Key ID: %s", aws.StringValue(sseKMSKeyId)) + d.Set("kms_key_id", sseKMSKeyId) + } + } + + return nil +} + +func resourceBucketObjectCustomizeDiff(_ context.Context, d *schema.ResourceDiff, meta interface{}) error { + if hasS3BucketObjectContentChanges(d) { + return d.SetNewComputed("version_id") + } + + if d.HasChange("source_hash") { + d.SetNewComputed("version_id") + d.SetNewComputed("etag") + } + + return nil +} + +func hasS3BucketObjectContentChanges(d verify.ResourceDiffer) bool { + for _, key := range []string{ + "bucket_key_enabled", + "cache_control", + 
"content_base64", + "content_disposition", + "content_encoding", + "content_language", + "content_type", + "content", + "etag", + "kms_key_id", + "metadata", + "server_side_encryption", + "source", + "source_hash", + "storage_class", + "website_redirect", + } { + if d.HasChange(key) { + return true + } + } + return false +} diff --git a/internal/service/s3/bucket_object_data_source.go b/internal/service/s3/bucket_object_data_source.go new file mode 100644 index 00000000000..93393830346 --- /dev/null +++ b/internal/service/s3/bucket_object_data_source.go @@ -0,0 +1,247 @@ +package s3 + +// WARNING: This code is DEPRECATED and will be removed in a future release!! +// DO NOT apply fixes or enhancements to the data source in this file. +// INSTEAD, apply fixes and enhancements to the data source in "object_data_source.go". + +import ( + "bytes" + "fmt" + "log" + "strings" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/s3" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/flex" + tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" +) + +func DataSourceBucketObject() *schema.Resource { + return &schema.Resource{ + Read: dataSourceBucketObjectRead, + + Schema: map[string]*schema.Schema{ + "body": { + Type: schema.TypeString, + Computed: true, + }, + "bucket": { + Deprecated: "Use the aws_s3_object data source instead", + Type: schema.TypeString, + Required: true, + }, + "bucket_key_enabled": { + Type: schema.TypeBool, + Computed: true, + }, + "cache_control": { + Type: schema.TypeString, + Computed: true, + }, + "content_disposition": { + Type: schema.TypeString, + Computed: true, + }, + "content_encoding": { + Type: schema.TypeString, + Computed: true, + }, + "content_language": { + Type: schema.TypeString, + Computed: true, + }, + "content_length": { + Type: schema.TypeInt, + Computed: true, + }, + "content_type": { + Type: schema.TypeString, + Computed: true, + }, + "etag": { + Type: schema.TypeString, + Computed: true, + }, + "expiration": { + Type: schema.TypeString, + Computed: true, + }, + "expires": { + Type: schema.TypeString, + Computed: true, + }, + "key": { + Type: schema.TypeString, + Required: true, + }, + "last_modified": { + Type: schema.TypeString, + Computed: true, + }, + "metadata": { + Type: schema.TypeMap, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "object_lock_legal_hold_status": { + Type: schema.TypeString, + Computed: true, + }, + "object_lock_mode": { + Type: schema.TypeString, + Computed: true, + }, + "object_lock_retain_until_date": { + Type: schema.TypeString, + Computed: true, + }, + "range": { + Type: schema.TypeString, + Optional: true, + }, + "server_side_encryption": { + Type: schema.TypeString, + Computed: true, + }, + "sse_kms_key_id": { + Type: schema.TypeString, + Computed: true, + }, + "storage_class": { + Type: schema.TypeString, + Computed: true, + }, + "version_id": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "website_redirect_location": { + Type: schema.TypeString, + Computed: true, + }, + + "tags": tftags.TagsSchemaComputed(), + }, + } +} + +func dataSourceBucketObjectRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*conns.AWSClient).S3Conn + ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig + + bucket := d.Get("bucket").(string) + key := d.Get("key").(string) + + input := 
s3.HeadObjectInput{ + Bucket: aws.String(bucket), + Key: aws.String(key), + } + if v, ok := d.GetOk("range"); ok { + input.Range = aws.String(v.(string)) + } + if v, ok := d.GetOk("version_id"); ok { + input.VersionId = aws.String(v.(string)) + } + + versionText := "" + uniqueId := bucket + "/" + key + if v, ok := d.GetOk("version_id"); ok { + versionText = fmt.Sprintf(" of version %q", v.(string)) + uniqueId += "@" + v.(string) + } + + log.Printf("[DEBUG] Reading S3 Object: %s", input) + out, err := conn.HeadObject(&input) + if err != nil { + return fmt.Errorf("failed getting S3 Bucket (%s) Object (%s): %w", bucket, key, err) + } + if aws.BoolValue(out.DeleteMarker) { + return fmt.Errorf("Requested S3 object %q%s has been deleted", bucket+key, versionText) + } + + log.Printf("[DEBUG] Received S3 object: %s", out) + + d.SetId(uniqueId) + + d.Set("bucket_key_enabled", out.BucketKeyEnabled) + d.Set("cache_control", out.CacheControl) + d.Set("content_disposition", out.ContentDisposition) + d.Set("content_encoding", out.ContentEncoding) + d.Set("content_language", out.ContentLanguage) + d.Set("content_length", out.ContentLength) + d.Set("content_type", out.ContentType) + // See https://forums.aws.amazon.com/thread.jspa?threadID=44003 + d.Set("etag", strings.Trim(aws.StringValue(out.ETag), `"`)) + d.Set("expiration", out.Expiration) + d.Set("expires", out.Expires) + if out.LastModified != nil { + d.Set("last_modified", out.LastModified.Format(time.RFC1123)) + } else { + d.Set("last_modified", "") + } + d.Set("metadata", flex.PointersMapToStringList(out.Metadata)) + d.Set("object_lock_legal_hold_status", out.ObjectLockLegalHoldStatus) + d.Set("object_lock_mode", out.ObjectLockMode) + d.Set("object_lock_retain_until_date", flattenS3ObjectDate(out.ObjectLockRetainUntilDate)) + d.Set("server_side_encryption", out.ServerSideEncryption) + d.Set("sse_kms_key_id", out.SSEKMSKeyId) + d.Set("version_id", out.VersionId) + d.Set("website_redirect_location", out.WebsiteRedirectLocation) + + // The "STANDARD" (which is also the default) storage + // class when set would not be included in the results. 
+ d.Set("storage_class", s3.StorageClassStandard) + if out.StorageClass != nil { + d.Set("storage_class", out.StorageClass) + } + + if isContentTypeAllowed(out.ContentType) { + input := s3.GetObjectInput{ + Bucket: aws.String(bucket), + Key: aws.String(key), + } + if v, ok := d.GetOk("range"); ok { + input.Range = aws.String(v.(string)) + } + if out.VersionId != nil { + input.VersionId = out.VersionId + } + out, err := conn.GetObject(&input) + if err != nil { + return fmt.Errorf("Failed getting S3 object: %w", err) + } + + buf := new(bytes.Buffer) + bytesRead, err := buf.ReadFrom(out.Body) + if err != nil { + return fmt.Errorf("Failed reading content of S3 object (%s): %w", uniqueId, err) + } + log.Printf("[INFO] Saving %d bytes from S3 object %s", bytesRead, uniqueId) + d.Set("body", buf.String()) + } else { + contentType := "" + if out.ContentType == nil { + contentType = "" + } else { + contentType = aws.StringValue(out.ContentType) + } + + log.Printf("[INFO] Ignoring body of S3 object %s with Content-Type %q", uniqueId, contentType) + } + + tags, err := ObjectListTags(conn, bucket, key) + + if err != nil { + return fmt.Errorf("error listing tags for S3 Bucket (%s) Object (%s): %w", bucket, key, err) + } + + if err := d.Set("tags", tags.IgnoreAWS().IgnoreConfig(ignoreTagsConfig).Map()); err != nil { + return fmt.Errorf("error setting tags: %w", err) + } + + return nil +} diff --git a/internal/service/s3/bucket_object_data_source_test.go b/internal/service/s3/bucket_object_data_source_test.go new file mode 100644 index 00000000000..f9a441c9a7a --- /dev/null +++ b/internal/service/s3/bucket_object_data_source_test.go @@ -0,0 +1,734 @@ +package s3_test + +// WARNING: This code is DEPRECATED and will be removed in a future release!! +// DO NOT apply fixes or enhancements to the data source in this file. +// INSTEAD, apply fixes and enhancements to the data source in "object_data_source_test.go". 
+ +import ( + "fmt" + "regexp" + "testing" + "time" + + "github.com/aws/aws-sdk-go/service/s3" + sdkacctest "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" +) + +func TestAccS3BucketObjectDataSource_basic(t *testing.T) { + rInt := sdkacctest.RandInt() + + var rObj s3.GetObjectOutput + var dsObj s3.GetObjectOutput + + resourceName := "aws_s3_object.object" + dataSourceName := "data.aws_s3_object.obj" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + PreventPostDestroyRefresh: true, + Steps: []resource.TestStep{ + { + Config: testAccBucketObjectDataSourceConfig_basic(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckObjectExists(resourceName, &rObj), + testAccCheckObjectExistsDataSource(dataSourceName, &dsObj), + resource.TestCheckResourceAttr(dataSourceName, "content_length", "11"), + resource.TestCheckResourceAttrPair(dataSourceName, "content_type", resourceName, "content_type"), + resource.TestCheckResourceAttrPair(dataSourceName, "etag", resourceName, "etag"), + resource.TestMatchResourceAttr(dataSourceName, "last_modified", regexp.MustCompile(rfc1123RegexPattern)), + resource.TestCheckResourceAttrPair(dataSourceName, "object_lock_legal_hold_status", resourceName, "object_lock_legal_hold_status"), + resource.TestCheckResourceAttrPair(dataSourceName, "object_lock_mode", resourceName, "object_lock_mode"), + resource.TestCheckResourceAttrPair(dataSourceName, "object_lock_retain_until_date", resourceName, "object_lock_retain_until_date"), + resource.TestCheckNoResourceAttr(dataSourceName, "body"), + ), + }, + }, + }) +} + +func TestAccS3BucketObjectDataSource_basicViaAccessPoint(t *testing.T) { + var dsObj, rObj s3.GetObjectOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + dataSourceName := "data.aws_s3_object.test" + resourceName := "aws_s3_object.test" + accessPointResourceName := "aws_s3_access_point.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + Steps: []resource.TestStep{ + { + Config: testAccBucketObjectDataSourceConfig_basicViaAccessPoint(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckObjectExists(resourceName, &rObj), + testAccCheckObjectExistsDataSource(dataSourceName, &dsObj), + testAccCheckObjectExists(resourceName, &rObj), + resource.TestCheckResourceAttrPair(dataSourceName, "bucket", accessPointResourceName, "arn"), + resource.TestCheckResourceAttrPair(dataSourceName, "key", resourceName, "key"), + ), + }, + }, + }) +} + +func TestAccS3BucketObjectDataSource_readableBody(t *testing.T) { + rInt := sdkacctest.RandInt() + + var rObj s3.GetObjectOutput + var dsObj s3.GetObjectOutput + + resourceName := "aws_s3_object.object" + dataSourceName := "data.aws_s3_object.obj" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + PreventPostDestroyRefresh: true, + Steps: []resource.TestStep{ + { + Config: testAccBucketObjectDataSourceConfig_readableBody(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckObjectExists(resourceName, &rObj), + testAccCheckObjectExistsDataSource(dataSourceName, &dsObj), + 
resource.TestCheckResourceAttr(dataSourceName, "content_length", "3"), + resource.TestCheckResourceAttrPair(dataSourceName, "content_type", resourceName, "content_type"), + resource.TestCheckResourceAttrPair(dataSourceName, "etag", resourceName, "etag"), + resource.TestMatchResourceAttr(dataSourceName, "last_modified", regexp.MustCompile(rfc1123RegexPattern)), + resource.TestCheckResourceAttrPair(dataSourceName, "object_lock_legal_hold_status", resourceName, "object_lock_legal_hold_status"), + resource.TestCheckResourceAttrPair(dataSourceName, "object_lock_mode", resourceName, "object_lock_mode"), + resource.TestCheckResourceAttrPair(dataSourceName, "object_lock_retain_until_date", resourceName, "object_lock_retain_until_date"), + resource.TestCheckResourceAttr(dataSourceName, "body", "yes"), + ), + }, + }, + }) +} + +func TestAccS3BucketObjectDataSource_kmsEncrypted(t *testing.T) { + rInt := sdkacctest.RandInt() + + var rObj s3.GetObjectOutput + var dsObj s3.GetObjectOutput + + resourceName := "aws_s3_object.object" + dataSourceName := "data.aws_s3_object.obj" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + PreventPostDestroyRefresh: true, + Steps: []resource.TestStep{ + { + Config: testAccBucketObjectDataSourceConfig_kmsEncrypted(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckObjectExists(resourceName, &rObj), + testAccCheckObjectExistsDataSource(dataSourceName, &dsObj), + resource.TestCheckResourceAttr(dataSourceName, "content_length", "22"), + resource.TestCheckResourceAttrPair(dataSourceName, "content_type", resourceName, "content_type"), + resource.TestCheckResourceAttrPair(dataSourceName, "etag", resourceName, "etag"), + resource.TestCheckResourceAttrPair(dataSourceName, "server_side_encryption", resourceName, "server_side_encryption"), + resource.TestCheckResourceAttrPair(dataSourceName, "sse_kms_key_id", resourceName, "kms_key_id"), + resource.TestMatchResourceAttr(dataSourceName, "last_modified", regexp.MustCompile(rfc1123RegexPattern)), + resource.TestCheckResourceAttrPair(dataSourceName, "object_lock_legal_hold_status", resourceName, "object_lock_legal_hold_status"), + resource.TestCheckResourceAttrPair(dataSourceName, "object_lock_mode", resourceName, "object_lock_mode"), + resource.TestCheckResourceAttrPair(dataSourceName, "object_lock_retain_until_date", resourceName, "object_lock_retain_until_date"), + resource.TestCheckResourceAttr(dataSourceName, "body", "Keep Calm and Carry On"), + ), + }, + }, + }) +} + +func TestAccS3BucketObjectDataSource_bucketKeyEnabled(t *testing.T) { + rInt := sdkacctest.RandInt() + + var rObj s3.GetObjectOutput + var dsObj s3.GetObjectOutput + + resourceName := "aws_s3_object.object" + dataSourceName := "data.aws_s3_object.obj" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + PreventPostDestroyRefresh: true, + Steps: []resource.TestStep{ + { + Config: testAccBucketObjectDataSourceConfig_bucketKeyEnabled(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckObjectExists(resourceName, &rObj), + testAccCheckObjectExistsDataSource(dataSourceName, &dsObj), + resource.TestCheckResourceAttr(dataSourceName, "content_length", "22"), + resource.TestCheckResourceAttrPair(dataSourceName, "content_type", resourceName, "content_type"), + 
resource.TestCheckResourceAttrPair(dataSourceName, "etag", resourceName, "etag"), + resource.TestCheckResourceAttrPair(dataSourceName, "server_side_encryption", resourceName, "server_side_encryption"), + resource.TestCheckResourceAttrPair(dataSourceName, "sse_kms_key_id", resourceName, "kms_key_id"), + resource.TestCheckResourceAttrPair(dataSourceName, "bucket_key_enabled", resourceName, "bucket_key_enabled"), + resource.TestMatchResourceAttr(dataSourceName, "last_modified", regexp.MustCompile(rfc1123RegexPattern)), + resource.TestCheckResourceAttrPair(dataSourceName, "object_lock_legal_hold_status", resourceName, "object_lock_legal_hold_status"), + resource.TestCheckResourceAttrPair(dataSourceName, "object_lock_mode", resourceName, "object_lock_mode"), + resource.TestCheckResourceAttrPair(dataSourceName, "object_lock_retain_until_date", resourceName, "object_lock_retain_until_date"), + resource.TestCheckResourceAttr(dataSourceName, "body", "Keep Calm and Carry On"), + ), + }, + }, + }) +} + +func TestAccS3BucketObjectDataSource_allParams(t *testing.T) { + rInt := sdkacctest.RandInt() + + var rObj s3.GetObjectOutput + var dsObj s3.GetObjectOutput + + resourceName := "aws_s3_object.object" + dataSourceName := "data.aws_s3_object.obj" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + PreventPostDestroyRefresh: true, + Steps: []resource.TestStep{ + { + Config: testAccBucketObjectDataSourceConfig_allParams(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckObjectExists(resourceName, &rObj), + testAccCheckObjectExistsDataSource(dataSourceName, &dsObj), + resource.TestCheckResourceAttr(dataSourceName, "content_length", "25"), + resource.TestCheckResourceAttrPair(dataSourceName, "content_type", resourceName, "content_type"), + resource.TestCheckResourceAttrPair(dataSourceName, "etag", resourceName, "etag"), + resource.TestMatchResourceAttr(dataSourceName, "last_modified", regexp.MustCompile(rfc1123RegexPattern)), + resource.TestCheckResourceAttrPair(dataSourceName, "version_id", resourceName, "version_id"), + resource.TestCheckNoResourceAttr(dataSourceName, "body"), + resource.TestCheckResourceAttrPair(dataSourceName, "bucket_key_enabled", resourceName, "bucket_key_enabled"), + resource.TestCheckResourceAttrPair(dataSourceName, "cache_control", resourceName, "cache_control"), + resource.TestCheckResourceAttrPair(dataSourceName, "content_disposition", resourceName, "content_disposition"), + resource.TestCheckResourceAttrPair(dataSourceName, "content_encoding", resourceName, "content_encoding"), + resource.TestCheckResourceAttrPair(dataSourceName, "content_language", resourceName, "content_language"), + // Encryption is off + resource.TestCheckResourceAttrPair(dataSourceName, "server_side_encryption", resourceName, "server_side_encryption"), + resource.TestCheckResourceAttr(dataSourceName, "sse_kms_key_id", ""), + // Supported, but difficult to reproduce in short testing time + resource.TestCheckResourceAttrPair(dataSourceName, "storage_class", resourceName, "storage_class"), + resource.TestCheckResourceAttr(dataSourceName, "expiration", ""), + // Currently unsupported in aws_s3_object resource + resource.TestCheckResourceAttr(dataSourceName, "expires", ""), + resource.TestCheckResourceAttrPair(dataSourceName, "website_redirect_location", resourceName, "website_redirect"), + resource.TestCheckResourceAttr(dataSourceName, "metadata.%", "0"), + 
resource.TestCheckResourceAttr(dataSourceName, "tags.%", "1"), + resource.TestCheckResourceAttrPair(dataSourceName, "object_lock_legal_hold_status", resourceName, "object_lock_legal_hold_status"), + resource.TestCheckResourceAttrPair(dataSourceName, "object_lock_mode", resourceName, "object_lock_mode"), + resource.TestCheckResourceAttrPair(dataSourceName, "object_lock_retain_until_date", resourceName, "object_lock_retain_until_date"), + ), + }, + }, + }) +} + +func TestAccS3BucketObjectDataSource_objectLockLegalHoldOff(t *testing.T) { + rInt := sdkacctest.RandInt() + + var rObj s3.GetObjectOutput + var dsObj s3.GetObjectOutput + + resourceName := "aws_s3_object.object" + dataSourceName := "data.aws_s3_object.obj" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + PreventPostDestroyRefresh: true, + Steps: []resource.TestStep{ + { + Config: testAccBucketObjectDataSourceConfig_objectLockLegalHoldOff(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckObjectExists(resourceName, &rObj), + testAccCheckObjectExistsDataSource(dataSourceName, &dsObj), + resource.TestCheckResourceAttr(dataSourceName, "content_length", "11"), + resource.TestCheckResourceAttrPair(dataSourceName, "content_type", resourceName, "content_type"), + resource.TestCheckResourceAttrPair(dataSourceName, "etag", resourceName, "etag"), + resource.TestMatchResourceAttr(dataSourceName, "last_modified", regexp.MustCompile(rfc1123RegexPattern)), + resource.TestCheckResourceAttrPair(dataSourceName, "object_lock_legal_hold_status", resourceName, "object_lock_legal_hold_status"), + resource.TestCheckResourceAttrPair(dataSourceName, "object_lock_mode", resourceName, "object_lock_mode"), + resource.TestCheckResourceAttrPair(dataSourceName, "object_lock_retain_until_date", resourceName, "object_lock_retain_until_date"), + resource.TestCheckNoResourceAttr(dataSourceName, "body"), + ), + }, + }, + }) +} + +func TestAccS3BucketObjectDataSource_objectLockLegalHoldOn(t *testing.T) { + rInt := sdkacctest.RandInt() + retainUntilDate := time.Now().UTC().AddDate(0, 0, 10).Format(time.RFC3339) + + var rObj s3.GetObjectOutput + var dsObj s3.GetObjectOutput + + resourceName := "aws_s3_object.object" + dataSourceName := "data.aws_s3_object.obj" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + PreventPostDestroyRefresh: true, + Steps: []resource.TestStep{ + { + Config: testAccBucketObjectDataSourceConfig_objectLockLegalHoldOn(rInt, retainUntilDate), + Check: resource.ComposeTestCheckFunc( + testAccCheckObjectExists(resourceName, &rObj), + testAccCheckObjectExistsDataSource(dataSourceName, &dsObj), + resource.TestCheckResourceAttr(dataSourceName, "content_length", "11"), + resource.TestCheckResourceAttrPair(dataSourceName, "content_type", resourceName, "content_type"), + resource.TestCheckResourceAttrPair(dataSourceName, "etag", resourceName, "etag"), + resource.TestMatchResourceAttr(dataSourceName, "last_modified", regexp.MustCompile(rfc1123RegexPattern)), + resource.TestCheckResourceAttrPair(dataSourceName, "object_lock_legal_hold_status", resourceName, "object_lock_legal_hold_status"), + resource.TestCheckResourceAttrPair(dataSourceName, "object_lock_mode", resourceName, "object_lock_mode"), + resource.TestCheckResourceAttrPair(dataSourceName, "object_lock_retain_until_date", resourceName, 
"object_lock_retain_until_date"), + resource.TestCheckNoResourceAttr(dataSourceName, "body"), + ), + }, + }, + }) +} + +func TestAccS3BucketObjectDataSource_leadingSlash(t *testing.T) { + var rObj s3.GetObjectOutput + var dsObj1, dsObj2, dsObj3 s3.GetObjectOutput + + resourceName := "aws_s3_object.object" + dataSourceName1 := "data.aws_s3_object.obj1" + dataSourceName2 := "data.aws_s3_object.obj2" + dataSourceName3 := "data.aws_s3_object.obj3" + + rInt := sdkacctest.RandInt() + resourceOnlyConf, conf := testAccBucketObjectDataSourceConfig_leadingSlash(rInt) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + PreventPostDestroyRefresh: true, + Steps: []resource.TestStep{ + { + Config: resourceOnlyConf, + Check: resource.ComposeTestCheckFunc( + testAccCheckObjectExists(resourceName, &rObj), + ), + }, + { + Config: conf, + Check: resource.ComposeTestCheckFunc( + testAccCheckObjectExistsDataSource(dataSourceName1, &dsObj1), + resource.TestCheckResourceAttr(dataSourceName1, "content_length", "3"), + resource.TestCheckResourceAttrPair(dataSourceName1, "content_type", resourceName, "content_type"), + resource.TestCheckResourceAttrPair(dataSourceName1, "etag", resourceName, "etag"), + resource.TestMatchResourceAttr(dataSourceName1, "last_modified", regexp.MustCompile(rfc1123RegexPattern)), + resource.TestCheckResourceAttr(dataSourceName1, "body", "yes"), + + testAccCheckObjectExistsDataSource(dataSourceName2, &dsObj2), + resource.TestCheckResourceAttr(dataSourceName2, "content_length", "3"), + resource.TestCheckResourceAttrPair(dataSourceName2, "content_type", resourceName, "content_type"), + resource.TestCheckResourceAttrPair(dataSourceName2, "etag", resourceName, "etag"), + resource.TestMatchResourceAttr(dataSourceName2, "last_modified", regexp.MustCompile(rfc1123RegexPattern)), + resource.TestCheckResourceAttr(dataSourceName2, "body", "yes"), + + testAccCheckObjectExistsDataSource(dataSourceName3, &dsObj3), + resource.TestCheckResourceAttr(dataSourceName3, "content_length", "3"), + resource.TestCheckResourceAttrPair(dataSourceName3, "content_type", resourceName, "content_type"), + resource.TestCheckResourceAttrPair(dataSourceName3, "etag", resourceName, "etag"), + resource.TestMatchResourceAttr(dataSourceName3, "last_modified", regexp.MustCompile(rfc1123RegexPattern)), + resource.TestCheckResourceAttr(dataSourceName3, "body", "yes"), + ), + }, + }, + }) +} + +func TestAccS3BucketObjectDataSource_multipleSlashes(t *testing.T) { + var rObj1, rObj2 s3.GetObjectOutput + var dsObj1, dsObj2, dsObj3 s3.GetObjectOutput + + resourceName1 := "aws_s3_object.object1" + resourceName2 := "aws_s3_object.object2" + dataSourceName1 := "data.aws_s3_object.obj1" + dataSourceName2 := "data.aws_s3_object.obj2" + dataSourceName3 := "data.aws_s3_object.obj3" + + rInt := sdkacctest.RandInt() + resourceOnlyConf, conf := testAccBucketObjectDataSourceConfig_multipleSlashes(rInt) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + PreventPostDestroyRefresh: true, + Steps: []resource.TestStep{ + { + Config: resourceOnlyConf, + Check: resource.ComposeTestCheckFunc( + testAccCheckObjectExists(resourceName1, &rObj1), + testAccCheckObjectExists(resourceName2, &rObj2), + ), + }, + { + Config: conf, + Check: resource.ComposeTestCheckFunc( + + 
testAccCheckObjectExistsDataSource(dataSourceName1, &dsObj1), + resource.TestCheckResourceAttr(dataSourceName1, "content_length", "3"), + resource.TestCheckResourceAttrPair(dataSourceName1, "content_type", resourceName1, "content_type"), + resource.TestCheckResourceAttr(dataSourceName1, "body", "yes"), + + testAccCheckObjectExistsDataSource(dataSourceName2, &dsObj2), + resource.TestCheckResourceAttr(dataSourceName2, "content_length", "3"), + resource.TestCheckResourceAttrPair(dataSourceName2, "content_type", resourceName1, "content_type"), + resource.TestCheckResourceAttr(dataSourceName2, "body", "yes"), + + testAccCheckObjectExistsDataSource(dataSourceName3, &dsObj3), + resource.TestCheckResourceAttr(dataSourceName3, "content_length", "2"), + resource.TestCheckResourceAttrPair(dataSourceName3, "content_type", resourceName2, "content_type"), + resource.TestCheckResourceAttr(dataSourceName3, "body", "no"), + ), + }, + }, + }) +} + +func TestAccS3BucketObjectDataSource_singleSlashAsKey(t *testing.T) { + var dsObj s3.GetObjectOutput + dataSourceName := "data.aws_s3_object.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + PreventPostDestroyRefresh: true, + Steps: []resource.TestStep{ + { + Config: testAccBucketObjectSingleSlashAsKeyDataSourceConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckObjectExistsDataSource(dataSourceName, &dsObj), + ), + }, + }, + }) +} + +func testAccBucketObjectDataSourceConfig_basic(randInt int) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "object_bucket" { + bucket = "tf-object-test-bucket-%[1]d" +} + +resource "aws_s3_object" "object" { + bucket = aws_s3_bucket.object_bucket.bucket + key = "tf-testing-obj-%[1]d" + content = "Hello World" +} + +data "aws_s3_object" "obj" { + bucket = aws_s3_bucket.object_bucket.bucket + key = aws_s3_object.object.key +} +`, randInt) +} + +func testAccBucketObjectDataSourceConfig_basicViaAccessPoint(rName string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = %[1]q +} + +resource "aws_s3_access_point" "test" { + bucket = aws_s3_bucket.test.bucket + name = %[1]q +} + +resource "aws_s3_object" "test" { + bucket = aws_s3_bucket.test.bucket + key = %[1]q + content = "Hello World" +} + +data "aws_s3_object" "test" { + bucket = aws_s3_access_point.test.arn + key = aws_s3_object.test.key +} +`, rName) +} + +func testAccBucketObjectDataSourceConfig_readableBody(randInt int) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "object_bucket" { + bucket = "tf-object-test-bucket-%[1]d" +} + +resource "aws_s3_object" "object" { + bucket = aws_s3_bucket.object_bucket.bucket + key = "tf-testing-obj-%[1]d-readable" + content = "yes" + content_type = "text/plain" +} + +data "aws_s3_object" "obj" { + bucket = aws_s3_bucket.object_bucket.bucket + key = aws_s3_object.object.key +} +`, randInt) +} + +func testAccBucketObjectDataSourceConfig_kmsEncrypted(randInt int) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "object_bucket" { + bucket = "tf-object-test-bucket-%[1]d" +} + +resource "aws_kms_key" "example" { + description = "TF Acceptance Test KMS key" + deletion_window_in_days = 7 +} + +resource "aws_s3_object" "object" { + bucket = aws_s3_bucket.object_bucket.bucket + key = "tf-testing-obj-%[1]d-encrypted" + content = "Keep Calm and Carry On" + content_type = "text/plain" + 
kms_key_id = aws_kms_key.example.arn +} + +data "aws_s3_object" "obj" { + bucket = aws_s3_bucket.object_bucket.bucket + key = aws_s3_object.object.key +} +`, randInt) +} + +func testAccBucketObjectDataSourceConfig_bucketKeyEnabled(randInt int) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "object_bucket" { + bucket = "tf-object-test-bucket-%[1]d" +} + +resource "aws_kms_key" "example" { + description = "TF Acceptance Test KMS key" + deletion_window_in_days = 7 +} + +resource "aws_s3_object" "object" { + bucket = aws_s3_bucket.object_bucket.bucket + key = "tf-testing-obj-%[1]d-encrypted" + content = "Keep Calm and Carry On" + content_type = "text/plain" + kms_key_id = aws_kms_key.example.arn + bucket_key_enabled = true +} + +data "aws_s3_object" "obj" { + bucket = aws_s3_bucket.object_bucket.bucket + key = aws_s3_object.object.key +} +`, randInt) +} + +func testAccBucketObjectDataSourceConfig_allParams(randInt int) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "object_bucket" { + bucket = "tf-object-test-bucket-%[1]d" + + versioning { + enabled = true + } +} + +resource "aws_s3_object" "object" { + bucket = aws_s3_bucket.object_bucket.bucket + key = "tf-testing-obj-%[1]d-all-params" + + content = < **NOTE:** The `aws_s3_bucket_object` data source is DEPRECATED and will be removed in a future version! Use `aws_s3_object` instead, where new features and fixes will be added. + +The S3 object data source allows access to the metadata and +_optionally_ (see below) content of an object stored inside S3 bucket. + +~> **Note:** The content of an object (`body` field) is available only for objects which have a human-readable `Content-Type` (`text/*` and `application/json`). This is to prevent printing unsafe characters and potentially downloading large amount of data which would be thrown away in favour of metadata. + +## Example Usage + +The following example retrieves a text object (which must have a `Content-Type` +value starting with `text/`) and uses it as the `user_data` for an EC2 instance: + +```terraform +data "aws_s3_bucket_object" "bootstrap_script" { + bucket = "ourcorp-deploy-config" + key = "ec2-bootstrap-script.sh" +} + +resource "aws_instance" "example" { + instance_type = "t2.micro" + ami = "ami-2757f631" + user_data = data.aws_s3_bucket_object.bootstrap_script.body +} +``` + +The following, more-complex example retrieves only the metadata for a zip +file stored in S3, which is then used to pass the most recent `version_id` +to AWS Lambda for use as a function implementation. More information about +Lambda functions is available in the documentation for +[`aws_lambda_function`](/docs/providers/aws/r/lambda_function.html). + +```terraform +data "aws_s3_bucket_object" "lambda" { + bucket = "ourcorp-lambda-functions" + key = "hello-world.zip" +} + +resource "aws_lambda_function" "test_lambda" { + s3_bucket = data.aws_s3_bucket_object.lambda.bucket + s3_key = data.aws_s3_bucket_object.lambda.key + s3_object_version = data.aws_s3_bucket_object.lambda.version_id + function_name = "lambda_function_name" + role = aws_iam_role.iam_for_lambda.arn # (not shown) + handler = "exports.test" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `bucket` - (Required) The name of the bucket to read the object from. 
Alternatively, an [S3 access point](https://docs.aws.amazon.com/AmazonS3/latest/dev/using-access-points.html) ARN can be specified +* `key` - (Required) The full path to the object inside the bucket +* `version_id` - (Optional) Specific version ID of the object returned (defaults to latest version) + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `body` - Object data (see **limitations above** to understand cases in which this field is actually available) +* `bucket_key_enabled` - (Optional) Whether or not to use [Amazon S3 Bucket Keys](https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-key.html) for SSE-KMS. +* `cache_control` - Specifies caching behavior along the request/reply chain. +* `content_disposition` - Specifies presentational information for the object. +* `content_encoding` - Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field. +* `content_language` - The language the content is in. +* `content_length` - Size of the body in bytes. +* `content_type` - A standard MIME type describing the format of the object data. +* `etag` - [ETag](https://en.wikipedia.org/wiki/HTTP_ETag) generated for the object (an MD5 sum of the object content in case it's not encrypted) +* `expiration` - If the object expiration is configured (see [object lifecycle management](http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html)), the field includes this header. It includes the expiry-date and rule-id key value pairs providing object expiration information. The value of the rule-id is URL encoded. +* `expires` - The date and time at which the object is no longer cacheable. +* `last_modified` - Last modified date of the object in RFC1123 format (e.g., `Mon, 02 Jan 2006 15:04:05 MST`) +* `metadata` - A map of metadata stored with the object in S3 +* `object_lock_legal_hold_status` - Indicates whether this object has an active [legal hold](https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock-overview.html#object-lock-legal-holds). This field is only returned if you have permission to view an object's legal hold status. +* `object_lock_mode` - The object lock [retention mode](https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock-overview.html#object-lock-retention-modes) currently in place for this object. +* `object_lock_retain_until_date` - The date and time when this object's object lock will expire. +* `server_side_encryption` - If the object is stored using server-side encryption (KMS or Amazon S3-managed encryption key), this field includes the chosen encryption and algorithm used. +* `sse_kms_key_id` - If present, specifies the ID of the Key Management Service (KMS) master encryption key that was used for the object. +* `storage_class` - [Storage class](http://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html) information of the object. Available for all objects except for `Standard` storage class objects. +* `version_id` - The latest version ID of the object returned. +* `website_redirect_location` - If the bucket is configured as a website, redirects requests for this object to another object in the same bucket or to an external URL. Amazon S3 stores the value of this header in the object metadata. +* `tags` - A map of tags assigned to the object. 
+ +-> **Note:** Terraform ignores all leading `/`s in the object's `key` and treats multiple `/`s in the rest of the object's `key` as a single `/`, so values of `/index.html` and `index.html` correspond to the same S3 object as do `first//second///third//` and `first/second/third/`. diff --git a/website/docs/d/s3_bucket_objects.html.markdown b/website/docs/d/s3_bucket_objects.html.markdown new file mode 100644 index 00000000000..30a51d389cf --- /dev/null +++ b/website/docs/d/s3_bucket_objects.html.markdown @@ -0,0 +1,52 @@ +--- +subcategory: "S3" +layout: "aws" +page_title: "AWS: aws_s3_bucket_objects" +description: |- + Returns keys and metadata of S3 objects +--- + +# Data Source: aws_s3_bucket_objects + +~> **NOTE:** The `aws_s3_bucket_objects` data source is DEPRECATED and will be removed in a future version! Use `aws_s3_objects` instead, where new features and fixes will be added. + +~> **NOTE on `max_keys`:** Retrieving very large numbers of keys can adversely affect Terraform's performance. + +The objects data source returns keys (i.e., file names) and other metadata about objects in an S3 bucket. + +## Example Usage + +The following example retrieves a list of all object keys in an S3 bucket and creates corresponding Terraform object data sources: + +```terraform +data "aws_s3_bucket_objects" "my_objects" { + bucket = "ourcorp" +} + +data "aws_s3_object" "object_info" { + count = length(data.aws_s3_bucket_objects.my_objects.keys) + key = element(data.aws_s3_bucket_objects.my_objects.keys, count.index) + bucket = data.aws_s3_bucket_objects.my_objects.bucket +} +``` + +## Argument Reference + +The following arguments are supported: + +* `bucket` - (Required) Lists object keys in this S3 bucket. Alternatively, an [S3 access point](https://docs.aws.amazon.com/AmazonS3/latest/dev/using-access-points.html) ARN can be specified +* `prefix` - (Optional) Limits results to object keys with this prefix (Default: none) +* `delimiter` - (Optional) A character used to group keys (Default: none) +* `encoding_type` - (Optional) Encodes keys using this method (Default: none; besides none, only "url" can be used) +* `max_keys` - (Optional) Maximum object keys to return (Default: 1000) +* `start_after` - (Optional) Returns key names lexicographically after a specific object key in your bucket (Default: none; S3 lists object keys in UTF-8 character encoding in lexicographical order) +* `fetch_owner` - (Optional) Boolean specifying whether to populate the owner list (Default: false) + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `keys` - List of strings representing object keys +* `common_prefixes` - List of any keys between `prefix` and the next occurrence of `delimiter` (i.e., similar to subdirectories of the `prefix` "directory"); the list is only returned when you specify `delimiter` +* `id` - S3 Bucket. 
+* `owners` - List of strings representing object owner IDs (see `fetch_owner` above) diff --git a/website/docs/guides/version-4-upgrade.html.md b/website/docs/guides/version-4-upgrade.html.md index 5737d621250..c887afe0ae0 100644 --- a/website/docs/guides/version-4-upgrade.html.md +++ b/website/docs/guides/version-4-upgrade.html.md @@ -27,6 +27,8 @@ Upgrade topics: - [Plural Data Source Behavior](#plural-data-source-behavior) - [Data Source: aws_cloudwatch_log_group](#data-source-aws_cloudwatch_log_group) - [Data Source: aws_subnet_ids](#data-source-aws_subnet_ids) +- [Data Source: aws_s3_bucket_object](#data-source-aws_s3_bucket_object) +- [Data Source: aws_s3_bucket_objects](#data-source-aws_s3_bucket_objects) - [Resource: aws_batch_compute_environment](#resource-aws_batch_compute_environment) - [Resource: aws_cloudwatch_event_target](#resource-aws_cloudwatch_event_target) - [Resource: aws_elasticache_cluster](#resource-aws_elasticache_cluster) @@ -34,6 +36,7 @@ Upgrade topics: - [Resource: aws_elasticache_replication_group](#resource-aws_elasticache_replication_group) - [Resource: aws_network_interface](#resource-aws_network_interface) - [Resource: aws_s3_bucket](#resource-aws_s3_bucket) +- [Resource: aws_s3_bucket_object](#resource-aws_s3_bucket_object) - [Resource: aws_spot_instance_request](#resource-aws_spot_instance_request) @@ -307,6 +310,14 @@ output "subnet_cidr_blocks" { } ``` +## Data Source: aws_s3_bucket_object + +The `aws_s3_bucket_object` data source is deprecated and will be removed in a future version. Use `aws_s3_object` instead, where new features and fixes will be added. + +## Data Source: aws_s3_bucket_objects + +The `aws_s3_bucket_objects` data source is deprecated and will be removed in a future version. Use `aws_s3_objects` instead, where new features and fixes will be added. + ## Resource: aws_batch_compute_environment No `compute_resources` can be specified when `type` is `UNMANAGED`. @@ -454,6 +465,18 @@ resource "aws_spot_instance_request" "example" { } ``` +## Resource: aws_s3_bucket_object + +The `aws_s3_bucket_object` resource is deprecated and will be removed in a future version. Use `aws_s3_object` instead, where new features and fixes will be added. + +When replacing `aws_s3_bucket_object` with `aws_s3_object` in your configuration, on the next apply, Terraform will recreate the object. If you prefer to not have Terraform recreate the object, import the object using `aws_s3_object`. + +For example, the following will import an S3 object into state, assuming the configuration exists, as `aws_s3_object.example`: + +```console +% terraform import aws_s3_object.example s3://some-bucket-name/some/key.txt +``` + ## EC2-Classic Resource and Data Source Support While an upgrade to this major version will not directly impact EC2-Classic resources configured with Terraform, diff --git a/website/docs/r/s3_bucket_object.html.markdown b/website/docs/r/s3_bucket_object.html.markdown new file mode 100644 index 00000000000..baa723757e8 --- /dev/null +++ b/website/docs/r/s3_bucket_object.html.markdown @@ -0,0 +1,173 @@ +--- +subcategory: "S3" +layout: "aws" +page_title: "AWS: aws_s3_bucket_object" +description: |- + Provides an S3 object resource. +--- + +# Resource: aws_s3_bucket_object + +~> **NOTE:** The `aws_s3_bucket_object` resource is DEPRECATED and will be removed in a future version! Use `aws_s3_object` instead, where new features and fixes will be added. 
When replacing `aws_s3_bucket_object` with `aws_s3_object` in your configuration, on the next apply, Terraform will recreate the object. If you prefer to not have Terraform recreate the object, import the object using `aws_s3_object`. + +Provides an S3 object resource. + +## Example Usage + +### Uploading a file to a bucket + +```terraform +resource "aws_s3_bucket_object" "object" { + bucket = "your_bucket_name" + key = "new_object_key" + source = "path/to/file" + + # The filemd5() function is available in Terraform 0.11.12 and later + # For Terraform 0.11.11 and earlier, use the md5() function and the file() function: + # etag = "${md5(file("path/to/file"))}" + etag = filemd5("path/to/file") +} +``` + +### Encrypting with KMS Key + +```terraform +resource "aws_kms_key" "examplekms" { + description = "KMS key 1" + deletion_window_in_days = 7 +} + +resource "aws_s3_bucket" "examplebucket" { + bucket = "examplebuckettftest" + acl = "private" +} + +resource "aws_s3_bucket_object" "example" { + key = "someobject" + bucket = aws_s3_bucket.examplebucket.id + source = "index.html" + kms_key_id = aws_kms_key.examplekms.arn +} +``` + +### Server Side Encryption with S3 Default Master Key + +```terraform +resource "aws_s3_bucket" "examplebucket" { + bucket = "examplebuckettftest" + acl = "private" +} + +resource "aws_s3_bucket_object" "example" { + key = "someobject" + bucket = aws_s3_bucket.examplebucket.id + source = "index.html" + server_side_encryption = "aws:kms" +} +``` + +### Server Side Encryption with AWS-Managed Key + +```terraform +resource "aws_s3_bucket" "examplebucket" { + bucket = "examplebuckettftest" + acl = "private" +} + +resource "aws_s3_bucket_object" "example" { + key = "someobject" + bucket = aws_s3_bucket.examplebucket.id + source = "index.html" + server_side_encryption = "AES256" +} +``` + +### S3 Object Lock + +```terraform +resource "aws_s3_bucket" "examplebucket" { + bucket = "examplebuckettftest" + acl = "private" + + versioning { + enabled = true + } + + object_lock_configuration { + object_lock_enabled = "Enabled" + } +} + +resource "aws_s3_bucket_object" "example" { + key = "someobject" + bucket = aws_s3_bucket.examplebucket.id + source = "important.txt" + + object_lock_legal_hold_status = "ON" + object_lock_mode = "GOVERNANCE" + object_lock_retain_until_date = "2021-12-31T23:59:60Z" + + force_destroy = true +} +``` + +## Argument Reference + +-> **Note:** If you specify `content_encoding` you are responsible for encoding the body appropriately. `source`, `content`, and `content_base64` all expect already encoded/compressed bytes. + +The following arguments are required: + +* `bucket` - (Required) Name of the bucket to put the file in. Alternatively, an [S3 access point](https://docs.aws.amazon.com/AmazonS3/latest/dev/using-access-points.html) ARN can be specified. +* `key` - (Required) Name of the object once it is in the bucket. + +The following arguments are optional: + +* `acl` - (Optional) [Canned ACL](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl) to apply. Valid values are `private`, `public-read`, `public-read-write`, `aws-exec-read`, `authenticated-read`, `bucket-owner-read`, and `bucket-owner-full-control`. Defaults to `private`. +* `bucket_key_enabled` - (Optional) Whether or not to use [Amazon S3 Bucket Keys](https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-key.html) for SSE-KMS. 
+* `cache_control` - (Optional) Caching behavior along the request/reply chain. Read [w3c cache_control](http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9) for further details. +* `content_base64` - (Optional, conflicts with `source` and `content`) Base64-encoded data that will be decoded and uploaded as raw bytes for the object content. This allows safely uploading non-UTF8 binary data, but is recommended only for small content such as the result of the `gzipbase64` function with small text strings. For larger objects, use `source` to stream the content from a disk file. +* `content_disposition` - (Optional) Presentational information for the object. Read [w3c content_disposition](http://www.w3.org/Protocols/rfc2616/rfc2616-sec19.html#sec19.5.1) for further information. +* `content_encoding` - (Optional) Content encodings that have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field. Read [w3c content encoding](http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.11) for further information. +* `content_language` - (Optional) Language the content is in, e.g., en-US or en-GB. +* `content_type` - (Optional) Standard MIME type describing the format of the object data, e.g., application/octet-stream. All valid MIME types are accepted for this input. +* `content` - (Optional, conflicts with `source` and `content_base64`) Literal string value to use as the object content, which will be uploaded as UTF-8-encoded text. +* `etag` - (Optional) Triggers updates when the value changes. The only meaningful value is `filemd5("path/to/file")` (Terraform 0.11.12 or later) or `${md5(file("path/to/file"))}` (Terraform 0.11.11 or earlier). This attribute is not compatible with KMS encryption, `kms_key_id` or `server_side_encryption = "aws:kms"` (see `source_hash` instead). +* `force_destroy` - (Optional) Whether to allow the object to be deleted by removing any legal hold on any object version. Default is `false`. This value should be set to `true` only if the bucket has S3 object lock enabled. +* `kms_key_id` - (Optional) ARN of the KMS Key to use for object encryption. If the S3 Bucket has server-side encryption enabled, that value will automatically be used. If referencing the `aws_kms_key` resource, use the `arn` attribute. If referencing the `aws_kms_alias` data source or resource, use the `target_key_arn` attribute. Terraform will only perform drift detection if a configuration value is provided. +* `metadata` - (Optional) Map of keys/values to provision metadata (will be automatically prefixed by `x-amz-meta-`; note that only lowercase labels are currently supported by the AWS Go API). +* `object_lock_legal_hold_status` - (Optional) [Legal hold](https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock-overview.html#object-lock-legal-holds) status that you want to apply to the specified object. Valid values are `ON` and `OFF`. +* `object_lock_mode` - (Optional) Object lock [retention mode](https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock-overview.html#object-lock-retention-modes) that you want to apply to this object. Valid values are `GOVERNANCE` and `COMPLIANCE`. +* `object_lock_retain_until_date` - (Optional) Date and time, in [RFC3339 format](https://tools.ietf.org/html/rfc3339#section-5.8), when this object's object lock will [expire](https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock-overview.html#object-lock-retention-periods).
+* `server_side_encryption` - (Optional) Server-side encryption of the object in S3. Valid values are "`AES256`" and "`aws:kms`". +* `source_hash` - (Optional) Triggers updates like `etag` but useful to address `etag` encryption limitations. Set using `filemd5("path/to/source")` (Terraform 0.11.12 or later). (The value is only stored in state and not saved by AWS.) +* `source` - (Optional, conflicts with `content` and `content_base64`) Path to a file that will be read and uploaded as raw bytes for the object content. +* `storage_class` - (Optional) [Storage Class](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html#AmazonS3-PutObject-request-header-StorageClass) for the object. Defaults to "`STANDARD`". +* `tags` - (Optional) Map of tags to assign to the object. If configured with a provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. +* `website_redirect` - (Optional) Target URL for [website redirect](http://docs.aws.amazon.com/AmazonS3/latest/dev/how-to-page-redirect.html). + +If no content is provided through `source`, `content` or `content_base64`, then the object will be empty. + +-> **Note:** Terraform ignores all leading `/`s in the object's `key` and treats multiple `/`s in the rest of the object's `key` as a single `/`, so values of `/index.html` and `index.html` correspond to the same S3 object as do `first//second///third//` and `first/second/third/`. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `etag` - ETag generated for the object (an MD5 sum of the object content). For plaintext objects or objects encrypted with an AWS-managed key, the hash is an MD5 digest of the object data. For objects encrypted with a KMS key or objects created by either the Multipart Upload or Part Copy operation, the hash is not an MD5 digest, regardless of the method of encryption. More information on possible values can be found on [Common Response Headers](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTCommonResponseHeaders.html). +* `id` - The `key` of the resource supplied above. +* `tags_all` - Map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block). +* `version_id` - Unique version ID value for the object, if bucket versioning is enabled. + +## Import + +Objects can be imported using the `id`. The `id` is the bucket name and the key together, e.g., + +``` +$ terraform import aws_s3_bucket_object.object some-bucket-name/some/key.txt +``` + +Additionally, S3 URL syntax can be used, e.g., + +``` +$ terraform import aws_s3_bucket_object.object s3://some-bucket-name/some/key.txt +```
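The import commands above assume a matching resource block already exists in the configuration. A minimal sketch for the example `id` used above (the bucket name and key are placeholders; substitute real values) might look like:

```terraform
# Placeholder bucket and key mirroring the import id some-bucket-name/some/key.txt.
resource "aws_s3_bucket_object" "object" {
  bucket = "some-bucket-name"
  key    = "some/key.txt"
}
```

After importing, run `terraform plan` to verify that the configuration matches the imported object's actual settings.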