
provider/aws: force_destroy argument for s3 buckets with objects #2007

Merged · 2 commits into hashicorp:master · May 21, 2015

Conversation

m-s-austin
Contributor

This PR is for feedback / suggestions on a feature that makes it possible to destroy an S3 bucket during teardown even if it still has objects in it.

resource "aws_s3_bucket" "default" {
    bucket        = "my-tf-test-bucket"
    force_destroy = true
}

force_destroy defaults to false, so it won't change the current behavior (which is to error out if there are objects in the bucket). If you set force_destroy to true, all of the items in the bucket are deleted first, thereby allowing the entire bucket to be destroyed.

I have tested this code and it works well, but I am not sure of the best way to write an acceptance test for it; I could use feedback on that as well.
Thanks!

@catsby catsby changed the title provider/aws: force_delete argument for s3 buckets with objects provider/aws: force_destroy argument for s3 buckets with objects May 19, 2015
log.Printf("[DEBUG] S3 Bucket attempting to forceDestroy %+v", err)

bucket := d.Get("bucket").(string)
resp, listErr := s3conn.ListObjects(
Contributor

go ahead and use err here instead of listErr

@catsby
Contributor

catsby commented May 19, 2015

This is a good start! I noticed that the S3 API will by default only return the first 1,000 results from the ListBucket call (docs here). We'll need to check IsTruncated in the response and make follow-up requests using NextMarker to page through them all.
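The marker-based alternative described above would look roughly like the sketch below. The listPage and lister types are illustrative stand-ins for the SDK's ListObjectsOutput and client (not the real aws-sdk-go types), so the paging logic can be shown without a live S3 connection:

```go
package main

// listPage mirrors the handful of ListObjects response fields the
// paging loop needs (illustrative stand-ins, not the real SDK types).
type listPage struct {
	Keys        []string
	IsTruncated bool
	NextMarker  string
}

// lister abstracts the S3 ListObjects call so the loop can be
// exercised without a live connection.
type lister interface {
	ListObjects(marker string) (listPage, error)
}

// allKeys follows NextMarker until IsTruncated is false, collecting
// every key across however many 1,000-item pages S3 returns.
func allKeys(l lister) ([]string, error) {
	var keys []string
	marker := ""
	for {
		page, err := l.ListObjects(marker)
		if err != nil {
			return nil, err
		}
		keys = append(keys, page.Keys...)
		if !page.IsTruncated {
			return keys, nil
		}
		marker = page.NextMarker
	}
}
```

The trade-off against the PR's re-list-and-recurse approach is that markers walk a consistent ordering of the original listing, while re-listing after each batch depends on S3 reflecting the deletions promptly.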

@m-s-austin
Contributor Author

@catsby that was my original thought / plan as well, but as it is written it should recursively delete in groups of 1000 until all objects are gone or it returns an error. I'd be glad to revamp it into a loop if you prefer that approach; I went with this one to start with because it looked a little cleaner.

I have added a comment in the code to make that intent clearer, but let me know if I should go with the other approach.
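The recursive delete-in-batches approach described above can be sketched as follows. bucketAPI is a hypothetical stand-in for the two S3 calls the delete path uses (not the PR's actual code), so the recursion can be shown self-contained:

```go
package main

// bucketAPI abstracts the two S3 calls the force_destroy path needs
// (illustrative stand-ins for the SDK's ListObjects / DeleteObjects).
type bucketAPI interface {
	ListKeys() ([]string, error) // returns at most 1000 keys per call
	DeleteKeys(keys []string) error
}

// emptyBucket deletes whatever ListKeys returns (up to 1000 keys at
// a time), then calls itself again; the recursion bottoms out when a
// fresh listing comes back empty, at which point the bucket itself
// can be deleted.
func emptyBucket(b bucketAPI) error {
	keys, err := b.ListKeys()
	if err != nil {
		return err
	}
	if len(keys) == 0 {
		return nil // bucket is empty; safe to delete the bucket
	}
	if err := b.DeleteKeys(keys); err != nil {
		return err
	}
	return emptyBucket(b) // recurse for the next batch
}
```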

@@ -257,6 +264,43 @@ func resourceAwsS3BucketDelete(d *schema.ResourceData, meta interface{}) error {
Bucket: aws.String(d.Id()),
})
if err != nil {
// bucket may have things delete them
forceDestroy := d.Get("force_destroy").(bool); if (forceDestroy && strings.Contains(err.Error(),"BucketNotEmpty")) {
Contributor

At one point I had a note on formatting/style here; did I not post it?
I would rather see this style:

if err != nil {
    ec2err, ok := err.(aws.APIError)
    if ok && ec2err.Code == "BucketNotEmpty" {
        if d.Get("force_destroy").(bool) {
            // bucket may have things; attempt to delete them
        }
    }
    return fmt.Errorf("Error deleting S3 Bucket: %s", err)
}

Contributor Author

I don't think I saw the note, but I will address it now. Thanks!

@catsby
Contributor

catsby commented May 19, 2015

as it is written it should recursively delete in groups of 1000

Ah, clearly missed that. I'm curious how well that will work, more so on AWS's ability to update and accurately re-list items as we need it in this recursive manner vs. us using the markers to page through. It's hard to say without a large bucket with various items in it handy to test with.

I'm OK with the recursive strategy as is then, thanks for pointing it out

@m-s-austin
Contributor Author

The testing comment actually leads into another question I have :)
I want to prepare an acceptance test for this, but it doesn't seem like Terraform currently has a way to "pre-load" a bucket with objects, so for now we have to create the bucket and then run other scripts to upload the files. We were contemplating a method of doing this, possibly something like (still rough, obviously):

resource "aws_s3_bucket_object" "default" {
    bucket = "my-tf-test-bucket"
    key    = "key"
    acl    = "acl"
    body   = file
    # etc...
}

So, actually, two questions: 1) is there already a way to do this that I am missing? 2) does this seem like the right approach?

@m-s-austin
Contributor Author

I have just finished a test deleting ~3000 objects successfully. I planned to test it on 5000 objects, but the AWS uploader stopped working somewhere around 60%.

It seems to work quite well even with a full load

On May 19, 2015, at 12:49 PM, Clint [email protected] wrote:

I'm curious how well that will work, more so on AWS's ability to update and accurately re-list items as we need it in this recursive manner vs. us using the markers to page through. It's hard to say without a large bucket with various items in it handy to test with.

@catsby
Contributor

catsby commented May 20, 2015

Excellent, thank you for testing that! Unfortunately master has progressed and introduced a change from upstream awslabs/aws-sdk-go; can you rebase? It's related to errors:

I'm pretty sure this is the only change you need to make:

-ec2err, ok := err.(aws.APIError)
-if ok && ec2err.Code == "BucketNotEmpty" {
+ec2err, ok := err.(awserr.Error)
+if ok && ec2err.Code() == "BucketNotEmpty" {

You may also need to import awserr in the import block at the top:

"github.com/awslabs/aws-sdk-go/aws/awserr"
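A self-contained sketch of the post-rebase check might look like the following. The awsErr interface and bucketNotEmpty error are illustrative mocks of the SDK types (the real awserr.Error also has Message() and OrigErr()), included only so the changed call shape can be exercised:

```go
package main

// awsErr mimics the shape of the awserr.Error interface from
// awslabs/aws-sdk-go (illustrative; not the real SDK type).
type awsErr interface {
	error
	Code() string
}

// bucketNotEmpty is a stand-in for the error S3 returns when you
// try to delete a bucket that still has objects in it.
type bucketNotEmpty struct{}

func (bucketNotEmpty) Error() string { return "BucketNotEmpty: the bucket is not empty" }
func (bucketNotEmpty) Code() string  { return "BucketNotEmpty" }

// shouldForceDestroy shows the new-style check: type-assert the
// error and compare the code via the Code() method, where the old
// aws.APIError exposed Code as a struct field instead.
func shouldForceDestroy(err error, forceDestroy bool) bool {
	ec2err, ok := err.(awsErr)
	return ok && ec2err.Code() == "BucketNotEmpty" && forceDestroy
}
```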

@m-s-austin
Contributor Author

I think it should be good now. Thanks!



@catsby
Contributor

catsby commented May 20, 2015

Sorry to keep you running around @m-s-austin, but could you please add documentation?

Also, there are 23 commits here; could you possibly squash them?

commit a92fe29
Author: Michael Austin <[email protected]>
Date:   Wed May 20 16:35:38 2015 -0400

    updated to new style of awserr

commit 428271c
Merge: b3bae0e 883e284
Author: Michael Austin <[email protected]>
Date:   Wed May 20 16:29:00 2015 -0400

    Merge branch 'master' into 2544-terraform-s3-forceDelete

commit b3bae0e
Author: Michael Austin <[email protected]>
Date:   Wed May 20 12:06:36 2015 -0400

    removed extra line

commit 85eb40f
Author: Michael Austin <[email protected]>
Date:   Tue May 19 14:27:19 2015 -0400

    stray [

commit d8a405f
Author: Michael Austin <[email protected]>
Date:   Tue May 19 14:24:01 2015 -0400

    addressed feedback concerning parsing of aws error in a more standard way

commit 5b9a5ee
Author: Michael Austin <[email protected]>
Date:   Tue May 19 10:55:22 2015 -0400

    clarify comment to highlight recursion

commit 9104378
Author: Michael Austin <[email protected]>
Date:   Tue May 19 10:51:13 2015 -0400

    addressed feedback about reusing err variable and unneeded parens

commit 95e9c3a
Merge: 2637edf db095e2
Author: Michael Austin <[email protected]>
Date:   Mon May 18 19:15:36 2015 -0400

    Merge branch 'master' into 2544-terraform-s3-forceDelete

commit 2637edf
Author: Michael Austin <[email protected]>
Date:   Fri May 15 15:12:41 2015 -0400

    optimize delete to delete up to 1000 at once instead of one at a time

commit 1441eb2
Author: Michael Austin <[email protected]>
Date:   Fri May 15 12:34:53 2015 -0400

    Revert "hook new resource provider into configuration"

    This reverts commit e14a1ad.

commit b532fa2
Author: Michael Austin <[email protected]>
Date:   Fri May 15 12:34:49 2015 -0400

    this file should not be in this branch

commit 645c0b6
Author: Michael Austin <[email protected]>
Date:   Thu May 14 21:15:29 2015 -0400

    buckets tagged force_destroy will delete all files and then delete buckets

commit ac50cae
Author: Michael Austin <[email protected]>
Date:   Thu May 14 12:41:40 2015 -0400

    added code to delete policy from s3 bucket

commit cd45e45
Author: Michael Austin <[email protected]>
Date:   Thu May 14 12:27:13 2015 -0400

    added code to read bucket policy from bucket, however, it's not working as expected currently

commit 0d3d51a
Merge: 31ffdea 8a3b75d
Author: Michael Austin <[email protected]>
Date:   Thu May 14 08:38:06 2015 -0400

    Merge remote-tracking branch 'hashi_origin/master' into 2544-terraform-s3-policy

commit 31ffdea
Author: Michael Austin <[email protected]>
Date:   Wed May 13 16:01:52 2015 -0400

    add name for use with resouce id

commit b41c737
Author: Michael Austin <[email protected]>
Date:   Wed May 13 14:48:24 2015 -0400

    Revert "working policy assignment"
    This reverts commit 0975a70.

commit b926b11
Author: Michael Austin <[email protected]>
Date:   Wed May 13 14:35:02 2015 -0400

    moved policy to it's own provider

commit 233a5f4
Merge: e14a1ad c003e96
Author: Michael Austin <[email protected]>
Date:   Wed May 13 12:39:14 2015 -0400

    merged origin/master

commit e14a1ad
Author: Michael Austin <[email protected]>
Date:   Wed May 13 12:26:51 2015 -0400

    hook new resource provider into configuration

commit 455b409
Author: Michael Austin <[email protected]>
Date:   Wed May 13 12:26:15 2015 -0400

    dummy resource provider

commit 0975a70
Author: Michael Austin <[email protected]>
Date:   Wed May 13 09:42:31 2015 -0400

    working policy assignment

commit 3ab901d
Author: Michael Austin <[email protected]>
Date:   Tue May 12 10:39:56 2015 -0400

    added policy string to schema
@m-s-austin force-pushed the 2544-terraform-s3-forceDelete branch from a92fe29 to dc698e3 on May 20, 2015 23:08
@m-s-austin
Contributor Author

No problem :) done and done

catsby added a commit that referenced this pull request May 21, 2015
…elete

provider/aws: force_destroy argument for s3 buckets with objects
@catsby merged commit e5f5e1a into hashicorp:master on May 21, 2015
@catsby
Contributor

catsby commented May 21, 2015

Thanks! Everything checked out here, pulling this in.
Thanks for the contribution!

m-s-austin added a commit to iJoinSolutions/terraform that referenced this pull request May 21, 2015
merging because this PR has been accepted on hashicorp/master
hashicorp#2007
@m-s-austin deleted the 2544-terraform-s3-forceDelete branch on May 21, 2015 17:26
@ozbillwang

@m-s-austin & @catsby

Will the force_destroy option conflict with the versioning option?

Second, why do we have inconsistent settings, as below?

    force_destroy = true

    versioning {
      enabled = true
    }

Should they be the same?

    force_destroy = true
    versioning = true

or

    force_destroy {
      enabled = true
    }

    versioning {
      enabled = true
    }

@ghost

ghost commented Apr 21, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost locked and limited conversation to collaborators on Apr 21, 2020