
[Question] Is there a better way to force the re-creation of an aws_s3_bucket_object? #2584

Closed
hashibot opened this issue Dec 7, 2017 · 10 comments
Labels
question A question about existing functionality; most questions are re-routed to discuss.hashicorp.com. service/s3 Issues and PRs that pertain to the s3 service.

Comments

@hashibot

hashibot commented Dec 7, 2017

This issue was originally opened by @jonseymour as hashicorp/terraform#16869. It was migrated here as a result of the provider split. The original body of the issue is below.


It seems that if I choose to use kms_key_id with an aws_s3_bucket_object, I need to manually taint the resource in order to force the update of the object in the s3 bucket.

Is there a better way?

Terraform Version

Terraform v0.10.7

Terraform Configuration Files

Snippet

resource "aws_s3_bucket_object" "encrypted" {
  bucket = "${aws_s3_bucket.vault.bucket}"
  key    = "env/secrets.json"
  source = "../vault/${var.environment}/env/secrets.json"
  kms_key_id = "${aws_kms_key.key.arn}"
}

Debug Output

Not relevant.

Crash Output

Not relevant.

Expected Behavior

If the file referenced by source changes, then the s3 bucket object will be updated.

Actual Behavior

Manual tainting of the resource (terraform taint --module=vault aws_s3_bucket_object.encrypted) is required to force the object to be updated. If there are technical reasons why the change can't be detected, then an option to always update the object would be the next best thing.

Steps to Reproduce

  1. Add an aws_s3_bucket_object resource configured as above to the plan
  2. Apply the plan
  3. Modify the file referenced by the resource
  4. Reapply the plan.
@hashibot hashibot added the question A question about existing functionality; most questions are re-routed to discuss.hashicorp.com. label Dec 7, 2017
@jonseymour

jonseymour commented Dec 8, 2017

Ok, here is a workaround, of sorts. I realised that the provider will push the s3 bucket object even if the only thing that changes is the tags.

So, if we change the resource snippet to be:

resource "aws_s3_bucket_object" "encrypted" {
  bucket = "${aws_s3_bucket.vault.bucket}"
  key    = "env/secrets.json"
  source = "../vault/${var.environment}/env/secrets.json"
  kms_key_id = "${aws_kms_key.key.arn}"
  tags = {
    md5 = "${md5(file("../vault/${var.environment}/env/secrets.json"))}"
  }
}

then we have a reliable way to detect a change to the source and can exploit the side-effect of updating the tags to update the s3 bucket object itself.

@radeksimko radeksimko added the service/s3 Issues and PRs that pertain to the s3 service. label Jan 28, 2018
@ewbankkit
Contributor

@jonseymour While investigating #7130 I looked into this problem and had no trouble creating a new S3 object version just by specifying a KMS key ARN. Could you please verify with the latest Terraform and AWS Provider versions?

@sfdc-afraley

Due to #7130, the workaround of using the md5 as a tag no longer causes the object to be updated. Now, once the object is created, it will not update if you're using KMS, since the etag argument can't be used with KMS-encrypted objects. Is there a new workaround?

@sfdc-afraley

sfdc-afraley commented Feb 20, 2019

I found a usable workaround on Reddit.

Instead of source, try using:

content_base64 = "${base64encode(file("${path.module}/files/nginx.conf"))}"

I should add, in case it's not obvious, that this causes the entire file to be base64-encoded into your state file.
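
Put together with the snippet from the original report, a minimal sketch of that workaround would be (note that the file has to be UTF-8 text, since file() can't read binary data):

resource "aws_s3_bucket_object" "encrypted" {
  bucket = "${aws_s3_bucket.vault.bucket}"
  key    = "env/secrets.json"

  # content_base64 makes the file content an argument of the resource itself,
  # so any change to the file shows up as a diff in the plan and triggers a re-upload
  content_base64 = "${base64encode(file("../vault/${var.environment}/env/secrets.json"))}"

  kms_key_id = "${aws_kms_key.key.arn}"
}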

@jonseymour

@ewbankkit sorry, just saw your request. When I get a chance, I will see if I can reproduce the issue with current provider and terraform versions.

@oldschoolsysadmin

oldschoolsysadmin commented Apr 8, 2019

I found a usable workaround on Reddit.

Instead of source, try using:
content_base64 = "${base64encode(file("${path.module}/files/nginx.conf"))}"

This doesn't work for me - the objects I'm uploading are too large for base64 encoding to be a reasonable option. Is there any better way to trigger updates on a changed file?

@oldschoolsysadmin

By enabling server-side encryption in the bucket resource, I'm able to continue using the etag argument in the bucket object resource and still get an encrypted object in the bucket. So my bucket resource now has this stanza:

resource "aws_s3_bucket" "bucket" {
    [...]
    server_side_encryption_configuration {
        rule {
            apply_server_side_encryption_by_default {
                sse_algorithm = "aws:kms"
            }
        }
    }
}

When creating aws_s3_bucket_object resources pointed to this bucket, they're implicitly encrypted, but can still use the etag argument to track changes to the local file.
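
For example, a sketch based on the snippet from the original report (kms_key_id is dropped because the bucket default now supplies the encryption):

resource "aws_s3_bucket_object" "encrypted" {
  bucket = "${aws_s3_bucket.vault.bucket}"
  key    = "env/secrets.json"
  source = "../vault/${var.environment}/env/secrets.json"

  # etag changes whenever the local file's MD5 changes
  etag = "${md5(file("../vault/${var.environment}/env/secrets.json"))}"
}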

@oldschoolsysadmin

The above doesn't really work - terraform ended up wanting to recreate the bucket object every single time, regardless of whether the local file had changed or not.

@bflad
Contributor

bflad commented Aug 20, 2019

There appears to be a feature request for a separate hash argument in #6668 with more upvotes, so I'm closing this issue to consolidate any discussions and efforts there. 👍

@bflad bflad closed this as completed Aug 20, 2019
@ghost

ghost commented Nov 1, 2019

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!

@ghost ghost locked and limited conversation to collaborators Nov 1, 2019