provider/aws: force_destroy argument for s3 buckets with objects #2007
Conversation
```go
log.Printf("[DEBUG] S3 Bucket attempting to forceDestroy %+v", err)

bucket := d.Get("bucket").(string)
resp, listErr := s3conn.ListObjects(
```
go ahead and use `err` here instead of `listErr`
This is a good start! I noticed that the S3 API will by default only return the first 1,000 results with the `ListObjects` call.
@catsby that was my original thought / plan as well, but as it is written it should recursively delete in groups of 1,000 until all objects are gone or it returns an error. I'd be glad to revamp it into a loop if you prefer that approach; I thought this looked a little cleaner, which is why I went with it to start. I have added a comment in the code to make that intent more clear, but let me know if I should go with the other approach.
```go
@@ -257,6 +264,43 @@ func resourceAwsS3BucketDelete(d *schema.ResourceData, meta interface{}) error {
		Bucket: aws.String(d.Id()),
	})
	if err != nil {
		// bucket may have things; delete them
		forceDestroy := d.Get("force_destroy").(bool); if (forceDestroy && strings.Contains(err.Error(), "BucketNotEmpty")) {
```
At one point I had a note on formatting/style here; did I not post it?
I would rather see this style:
```go
if err != nil {
	ec2err, ok := err.(aws.APIError)
	if ok && ec2err.Code == "BucketNotEmpty" {
		if d.Get("force_destroy").(bool) {
			// bucket may have things; delete them
			// attempt to delete things
		}
	}
	return fmt.Errorf("Error deleting S3 Bucket: %s", err)
}
```
I don't think I saw the note, but I will address it now. Thanks!
Ah, clearly missed that. I'm curious how well that will work, particularly AWS's ability to update and accurately re-list items as we need them in this recursive manner, versus us using the markers to page through. It's hard to say without a large bucket with various items in it handy to test with. I'm OK with the recursive strategy as is, then; thanks for pointing it out.
The testing comment actually leads into another question I have :)
So, actually two questions: 1) Is there already a way to do this that I am missing? 2) Does this seem like the right approach for this?
I have just finished a test deleting ~3,000 objects successfully. I planned to test it on 5,000 objects, but the AWS uploader stopped working somewhere around 60%. It seems to work quite well even under a full load.
Excellent, thank you for testing that! Unfortunately I'm pretty sure this is the only change you need to make:

```diff
-ec2err, ok := err.(aws.APIError)
-if ok && ec2err.Code == "BucketNotEmpty" {
+ec2err, ok := err.(awserr.Error)
+if ok && ec2err.Code() == "BucketNotEmpty" {
```

You may also need to import `"github.com/awslabs/aws-sdk-go/aws/awserr"`.
I think it should be good now. Thanks!
Sorry to keep you running around @m-s-austin. Could you please add documentation? Also, there are 23 commits here; could you possibly squash?
23 commits by Michael Austin <[email protected]>:

```
a92fe29  Wed May 20 16:35:38 2015 -0400  updated to new style of awserr
428271c  Wed May 20 16:29:00 2015 -0400  Merge branch 'master' into 2544-terraform-s3-forceDelete (merge: b3bae0e 883e284)
b3bae0e  Wed May 20 12:06:36 2015 -0400  removed extra line
85eb40f  Tue May 19 14:27:19 2015 -0400  stray [
d8a405f  Tue May 19 14:24:01 2015 -0400  addressed feedback concerning parsing of aws error in a more standard way
5b9a5ee  Tue May 19 10:55:22 2015 -0400  clarify comment to highlight recursion
9104378  Tue May 19 10:51:13 2015 -0400  addressed feedback about reusing err variable and unneeded parens
95e9c3a  Mon May 18 19:15:36 2015 -0400  Merge branch 'master' into 2544-terraform-s3-forceDelete (merge: 2637edf db095e2)
2637edf  Fri May 15 15:12:41 2015 -0400  optimize delete to delete up to 1000 at once instead of one at a time
1441eb2  Fri May 15 12:34:53 2015 -0400  Revert "hook new resource provider into configuration" (reverts e14a1ad)
b532fa2  Fri May 15 12:34:49 2015 -0400  this file should not be in this branch
645c0b6  Thu May 14 21:15:29 2015 -0400  buckets tagged force_destroy will delete all files and then delete buckets
ac50cae  Thu May 14 12:41:40 2015 -0400  added code to delete policy from s3 bucket
cd45e45  Thu May 14 12:27:13 2015 -0400  added code to read bucket policy from bucket, however, it's not working as expected currently
0d3d51a  Thu May 14 08:38:06 2015 -0400  Merge remote-tracking branch 'hashi_origin/master' into 2544-terraform-s3-policy (merge: 31ffdea 8a3b75d)
31ffdea  Wed May 13 16:01:52 2015 -0400  add name for use with resouce id
b41c737  Wed May 13 14:48:24 2015 -0400  Revert "working policy assignment" (reverts 0975a70)
b926b11  Wed May 13 14:35:02 2015 -0400  moved policy to it's own provider
233a5f4  Wed May 13 12:39:14 2015 -0400  merged origin/master (merge: e14a1ad c003e96)
e14a1ad  Wed May 13 12:26:51 2015 -0400  hook new resource provider into configuration
455b409  Wed May 13 12:26:15 2015 -0400  dummy resource provider
0975a70  Wed May 13 09:42:31 2015 -0400  working policy assignment
3ab901d  Tue May 12 10:39:56 2015 -0400  added policy string to schema
```
Force-pushed from a92fe29 to dc698e3.
No problem :) done and done
Thanks! Everything checked out here, pulling this in.
merging because this PR has been accepted on hashicorp/master hashicorp#2007
Will the option … Second, why do we have an inconsistent setting, as below?
Should they be the same?
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
This PR is for feedback / suggestions on a feature that would make it possible to destroy an S3 bucket even if it has objects in it when you are tearing it down.
force_destroy defaults to false, so it won't change the current behavior (which is to error out if there are objects in the bucket). If you set force_destroy to true, all of the items in the bucket will be deleted, thereby allowing the entire bucket to be destroyed.
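In configuration terms, enabling the behavior looks like the following (the bucket name here is hypothetical):

```hcl
resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket"

  # Allow terraform destroy to empty the bucket before deleting it.
  # Defaults to false, which preserves the old error-out behavior.
  force_destroy = true
}
```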
I have tested this code and it works well, but I am not sure of the best way to write an acceptance test for it; I could use feedback on that as well.
Thanks!