aws_s3_bucket_object inconsistently returning "Provider produced inconsistent result after apply" #9725
hi @alpacamybags118 sorry you are running into issues here, and we appreciate you opening up an issue. Do you know if any of the tests being executed are modifying the same resource? There are a number of computed properties on this resource, so it's possible that the actual state is changing between parallel test runs. That's a first guess. Unfortunately, without any debug logs there is not much that can be identified here, as the issue is not reproducible on its own. If this is something that is affecting your infrastructure deployment, can you please update this issue with the relevant debug log information - see Terraform Debugging - so that we can dive deeper into the provided error. Cheers! |
@nywilken I was able to get a debug log - what's the best way to get it to you? Something I was thinking about: while there isn't anything executing on the same resource, a few of the tasks are adding an object to a bucket (the same file). Maybe it's possible that one of the tests has a lock on the file at the same time as another test is trying to read it? |
@nywilken in the meantime, I was looking through the logs and I think this piece might be of use. Something interesting to note: the object this is uploading is a PNG file, whereas the other calls to this resource that I'm using upload .html files and seem to work fine. |
@nywilken as an update to those errors, I was able to get around them by adding the following lines to my s3 object resources:
Here are some relevant logs that might help (omitting anything that could be sensitive):
The etag |
We hit this too, and Terraform debug logs for our case are here. I'll check our CloudTrail logs for possible parallel activity later today. |
So the trigger in our job seems to have been:
Checking our logs with:

```sql
SELECT eventtime,
       eventname,
       useridentity.username,
       useragent,
       requestparameters,
       errorcode,
       errormessage
FROM "default"."cloudtrail_logs"
WHERE from_iso8601_timestamp(eventtime) > from_iso8601_timestamp('2019-08-22T12:53:40Z')
  AND from_iso8601_timestamp(eventtime) < from_iso8601_timestamp('2019-08-22T12:53:43Z')
  AND eventname LIKE '%Bucket'
ORDER BY eventtime;
```

Gave:
So I dunno why the
And that's well after the failed |
Michael B. at AWS got back to me confirming eventual-consistency:
I'm following up with AWS, because the referenced docs talk about objects in buckets, and not about buckets themselves. And from my reading, they say that you should expect consistent read-after-write for creating PUTs that had no earlier HEAD or GET (which is what we had here) and you only get eventually-consistent read-after-write if you had a leading HEAD or GET. But however the AWS discussion works out (buggy docs or a buggy implementation), the Terraform provider should probably adjust to gracefully handle the current measured behavior by retrying 404ing HEADs a few times (a minute? I dunno what "some more time" actually means) before erroring out with "inconsistent result after apply". I haven't looked at the backing provider code yet to see what this would entail or how easy it would be to implement. |
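The retry-on-404 behavior suggested above can be sketched roughly like this (a Python sketch with a generic callable standing in for the provider's HEAD request; the function, exception, and parameter names here are illustrative, not the provider's actual API):

```python
import time


class NotFound404(Exception):
    """Stand-in for an HTTP 404 response from the S3 API (illustrative)."""


def head_with_retry(head_fn, retries=5, delay=2.0):
    """Call head_fn(), retrying while it raises NotFound404.

    head_fn stands in for the provider's HEAD request (e.g. S3 HeadObject);
    retries and delay are hypothetical tuning knobs, not provider settings.
    """
    for attempt in range(retries):
        try:
            return head_fn()
        except NotFound404:
            if attempt == retries - 1:
                raise  # still 404ing after all retries: give up
            time.sleep(delay)
```

The idea is simply that a HEAD which 404s immediately after a successful PUT is treated as "not visible yet" and retried for a bounded window, rather than surfacing "inconsistent result after apply" on the first miss.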
John-Michael J. at AWS confirmed that all bucket creation is eventually-consistent read-after-write. There is no bucket-creation case in which they guarantee consistent read-after-write. This contrasts with objects in buckets, where under some conditions they do guarantee consistent read-after-write. |
I just ran into this same issue. It was after going from one object with an md5-based etag. It appears the delete/create of |
I'm now getting this regularly when updating s3 objects. It seems to have started once switching to using:

```hcl
resource "aws_s3_bucket_object" "haproxy_config_files" {
  for_each = fileset(var.haproxy_config_dir_path, "**")

  key    = "${var.service_configs_bucket_path}/config.d/${each.value}"
  bucket = var.service_configs_bucket_name
  source = "${var.haproxy_config_dir_path}/${each.value}"
  etag   = filemd5("${var.haproxy_config_dir_path}/${each.value}")
}
```

edit: I should mention I'm on terraform 0.12.8 with hashicorp/aws 2.27.0 |
For those dealing with this issue, a workaround I did for the time being is to use a null resource and run aws cli commands to manipulate objects in S3. For example:
You would just need to ensure whatever environment you are running your terraform plan/apply in has the cli configured. |
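A minimal sketch of that null_resource workaround (the bucket name, key, and file path below are placeholders, not taken from the thread):

```hcl
resource "null_resource" "upload_object" {
  # Re-run the upload whenever the local file content changes.
  triggers = {
    file_hash = filemd5("files/example.html")
  }

  # Uses the AWS CLI instead of aws_s3_bucket_object, sidestepping the
  # provider's post-apply HEAD check that trips on eventual consistency.
  provisioner "local-exec" {
    command = "aws s3 cp files/example.html s3://my-example-bucket/example.html"
  }
}
```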
Is there any update on this issue, to try and put in a retry mechanism similar to the one put in place for S3 buckets? Pretty sure it's another S3 eventual-consistency issue. Not covering this scenario (since it could happen at just about any time) prevents us from using the aws_s3_bucket_object resource at all. https://github.com/terraform-providers/terraform-provider-aws/pull/11894/files |
I've encountered the same problem so I fixed it. Please upvote the PR for a faster review! 🙏 |
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks! |
Community Note
Terraform Version
0.12.6
Affected Resource(s)
Terraform Configuration Files
Debug Output
Panic Output
Expected Behavior
An object was added to the bucket.
Actual Behavior
Steps to Reproduce
terraform apply
Important Factoids
References