Add retention_policy to storage_bucket #2064
Conversation
Looking good so far - I made a first pass, mostly covering schema. Do you mind updating the resource's website docs as well?
if v, ok := d.GetOk("retention_policy"); ok {
	retention_policies := v.([]interface{})

	if len(retention_policies) > 1 {
We can avoid having to check this in CRUD code and instead catch it at plan time rather than at apply time by using MaxItems in the resource schema.
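A minimal sketch of how that could look in the schema block, assuming the is_locked sub-field used elsewhere in this diff plus a retention_period field (not this PR's exact code):

"retention_policy": {
	Type:     schema.TypeList,
	Optional: true,
	MaxItems: 1, // rejected at plan time, so the CRUD code never has to count blocks
	Elem: &schema.Resource{
		Schema: map[string]*schema.Schema{
			"is_locked": {
				Type:     schema.TypeBool,
				Optional: true,
				Default:  false,
			},
			"retention_period": {
				Type:     schema.TypeInt,
				Required: true,
			},
		},
	},
},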
if d.HasChange("retention_policy") {
	// Changing from locked to unlocked is not possible, throw an error
	old, new := d.GetChange("retention_policy.0.is_locked")
	if old.(bool) && !new.(bool) {
This change isn't necessary, but we could actually add a conditional ForceNew using CustomizeDiff. If we perform this check there, we can mark the field with diff.ForceNew, causing Terraform to attempt to destroy and recreate the resource.
This behaviour might be unintuitive, though, since the deletion will only succeed once every object in the bucket is past the retention period and has been removed, and buckets with force_destroy set will probably attempt to remove all the objects but only succeed in deleting those whose retention periods have expired.
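Roughly what that could look like with the helper/schema SDK in use at the time (a sketch under that assumption, not this PR's code):

CustomizeDiff: func(diff *schema.ResourceDiff, meta interface{}) error {
	old, new := diff.GetChange("retention_policy.0.is_locked")
	if old.(bool) && !new.(bool) {
		// Unlocking a locked policy isn't possible, so force destroy + recreate instead.
		return diff.ForceNew("retention_policy.0.is_locked")
	}
	return nil
},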
		}
	}
}
log.Printf("[DEBUG] Created bucket %v at location %v\n\n", res.Name, res.SelfLink)
This log message is probably worth moving above the SetId call after creation, and we can log that a retention policy lock was enabled here instead.
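One possible ordering, sketched with identifiers assumed from the surrounding diff rather than taken from this PR:

log.Printf("[DEBUG] Created bucket %v at location %v\n\n", res.Name, res.SelfLink)
// d.SetId(...) happens here, followed by the LockRetentionPolicy call if requested,
// and only then:
log.Printf("[DEBUG] Locked retention policy on bucket %v", res.Name)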
@@ -554,6 +646,20 @@ func resourceStorageBucketDelete(d *schema.ResourceData, meta interface{}) error
	}

	if len(res.Items) != 0 {
		if d.Get("retention_policy.0.is_locked").(bool) {
Nice job avoiding the possible problem with not-old-enough items 👍
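For context, one way the guard above might continue; the diff is truncated here, so the error message below is assumed rather than copied from the PR:

if len(res.Items) != 0 {
	if d.Get("retention_policy.0.is_locked").(bool) {
		// Objects still under a locked retention policy can never be force-deleted,
		// so fail early instead of attempting deletes that are bound to fail.
		return fmt.Errorf("could not delete non-empty bucket %q: retention policy is locked and objects may still be under retention", d.Id())
	}
	// otherwise fall through to the existing force_destroy object deletion
}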
We found a Contributor License Agreement for you (the sender of this pull request), but were unable to find agreements for all the commit author(s) or Co-authors. If you authored these, maybe you used a different email address in the git commits than was used to sign the CLA (login here to double check)? If these were authored by someone else, then they will need to sign a CLA as well, and confirm that they're okay with these being contributed to Google. ℹ️ Googlers: Go here for more info.
Force-pushed from b176fab to f724217.
CLAs look good, thanks! ℹ️ Googlers: Go here for more info.
Force-pushed from f724217 to 8bc3fe4.
Hi! I'm the modular magician, I work on Magic Modules.

Pull request statuses: No diff detected in terraform-google-conversion.

New Pull Requests: I built this PR into one or more new PRs on other repositories, and when those are closed, this PR will also be merged and closed.
Tracked submodules are build/terraform-beta build/terraform-mapper build/terraform build/ansible build/inspec.
Force-pushed from 8bc3fe4 to c979cb4.
Added support for retention_policy to google_storage_bucket.

The retention policy attribute is_locked cannot be set at creation time; it instead requires a call to LockRetentionPolicy, which takes the bucket's metageneration number. We get the metageneration number from the response to the initial create/update, so we have to make a separate API call after the bucket is first created or updated.

Another thing to be aware of: once the retention policy has been locked, it cannot be unlocked.
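A rough sketch of that flow against the google.golang.org/api/storage/v1 client, with error handling trimmed; the config.clientStorage and sb names are assumed from the provider's conventions, not copied from this PR:

// Create (or update) the bucket; the response carries the current metageneration.
res, err := config.clientStorage.Buckets.Insert(project, sb).Do()
if err != nil {
	return err
}

// is_locked cannot be set on insert, so lock afterwards using the metageneration
// returned by the create call. Note that locking is irreversible.
if d.Get("retention_policy.0.is_locked").(bool) {
	if _, err := config.clientStorage.Buckets.LockRetentionPolicy(res.Name, res.Metageneration).Do(); err != nil {
		return err
	}
}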
Release Note for Downstream PRs (will be copied)