[Question] Is there a better way to force the re-creation of an aws_s3_bucket_object? #2584
Comments
Ok, here is a workaround, of sorts. I realised that the provider will push the s3 bucket object even if the only thing that changes is the tags. So, if we change the resource snippet to be:
then we have a reliable way to detect a change to the source and can exploit the side-effect of updating the tags to update the s3 bucket object itself.
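The snippet itself did not survive the migration. A hedged reconstruction of what it likely looked like (the bucket name, key, and KMS key reference are assumptions; `filemd5` is the Terraform 0.12+ form, older configurations would use `md5(file(...))`):

```hcl
# Hypothetical reconstruction of the workaround: tag the object with the
# source file's MD5 so that a change to the file changes the tags, and the
# tag update pushes the object to S3 again.
resource "aws_s3_bucket_object" "encrypted" {
  bucket     = "my-bucket"                  # assumed bucket name
  key        = "vault.zip"                  # assumed object key
  source     = "${path.module}/vault.zip"
  kms_key_id = aws_kms_key.example.arn      # assumed KMS key reference

  tags = {
    # Changing this value forces the provider to update the object.
    md5 = filemd5("${path.module}/vault.zip")
  }
}
```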
@jonseymour Investigating #7130, I looked at this problem and had no problem creating a new S3 object version just by specifying a KMS key ARN. Could you please verify with the latest Terraform and AWS Provider versions?
Due to #7130 the workaround of using the md5 as a tag no longer causes the object to be updated. Now once the object is created, it will not update if you're using KMS and thus can't use etag. Is there a new workaround?
I found a usable workaround on Reddit
I should add that, in case it's not obvious, this causes the entire file to be base64 encoded into your statefile.
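The Reddit workaround is not quoted in the thread, but a plausible form of it, consistent with the note above about the file being base64-encoded into state, is to inline the file contents so Terraform diffs the object body directly (names are assumptions):

```hcl
# Plausible form of the workaround: supply the object body inline via
# content_base64 instead of source. Terraform then detects content changes
# itself, but the whole file ends up (base64-encoded) in the state file,
# which is impractical for large objects.
resource "aws_s3_bucket_object" "encrypted" {
  bucket         = "my-bucket"          # assumed bucket name
  key            = "vault.zip"          # assumed object key
  content_base64 = filebase64("${path.module}/vault.zip")
  kms_key_id     = aws_kms_key.example.arn
}
```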
@ewbankkit sorry, just saw your request. When I get a chance, I will see if I can reproduce the issue with the current provider and Terraform versions.
This doesn't work for me - the objects I'm uploading are too large for base64 encoding to be a reasonable option. Is there any better way to trigger updates on a changed file?
By enabling server side encryption in the bucket resource, I'm able to continue using the etag argument in the bucket object resource, and still get an encrypted object in the bucket. So my bucket resource now has this stanza:
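The stanza itself is missing from the scraped thread. A hedged sketch of the approach being described (bucket names and keys are assumptions): default encryption is configured on the bucket, so `kms_key_id` can be dropped from the object and `etag` works again.

```hcl
resource "aws_s3_bucket" "example" {
  bucket = "my-bucket"                  # assumed bucket name

  # Bucket-level default encryption: objects are encrypted at rest without
  # setting kms_key_id on each aws_s3_bucket_object.
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "aws:kms"
      }
    }
  }
}

resource "aws_s3_bucket_object" "encrypted" {
  bucket = aws_s3_bucket.example.id
  key    = "vault.zip"                  # assumed object key
  source = "${path.module}/vault.zip"

  # With kms_key_id unset on the object, etag can be compared against the
  # local file's MD5 to detect changes.
  etag = filemd5("${path.module}/vault.zip")
}
```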
The above doesn't really work - terraform ended up wanting to recreate the bucket object every single time, regardless of whether the local file had changed or not.
There appears to be a feature request for a separate hash argument in #6668 with more upvotes, so I'm closing this issue to consolidate any discussions and efforts there. 👍
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!
This issue was originally opened by @jonseymour as hashicorp/terraform#16869. It was migrated here as a result of the provider split. The original body of the issue is below.
It seems that if I choose to use kms_key_id with an aws_s3_bucket_object, I need to manually taint the resource in order to force the update of the object in the s3 bucket.
Is there a better way?
Terraform Version
Terraform Configuration Files
Snippet
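The original snippet was lost in the migration. A hedged reconstruction, based on the resource name visible in the taint command later in the issue (`aws_s3_bucket_object.encrypted` in a module named `vault`; the bucket name, key, and KMS key reference are assumptions):

```hcl
# Hypothetical reconstruction of the original configuration: an S3 object
# encrypted with a customer-managed KMS key. With kms_key_id set, the etag
# argument cannot be used to detect source changes, so updates to the local
# file are not pushed without manually tainting the resource.
resource "aws_s3_bucket_object" "encrypted" {
  bucket     = "my-bucket"                # assumed bucket name
  key        = "vault.zip"                # assumed object key
  source     = "${path.module}/vault.zip"
  kms_key_id = aws_kms_key.vault.arn      # assumed KMS key reference
}
```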
Debug Output
Not relevant.
Crash Output
Not relevant.
Expected Behavior
If the file referenced by source changes, then the s3 bucket object will be updated.
Actual Behavior
Manual tainting of the resource (`terraform taint --module=vault aws_s3_bucket_object.encrypted`) is required to force the object to be updated. If there are technical reasons why the change can't be detected, then an option to always update the object would be helpful.
Steps to Reproduce