Support Delayed Deletion for Vmwareengine Private Cloud #10764
Conversation
Hello! I am a robot. Tests will require approval from a repository maintainer to run. @melinath, a repository maintainer, has been assigned to review your changes. If you have not received review feedback within 2 business days, please leave a comment on this PR asking them to take a look. You can help make sure that review is quick by doing a self-review and by running impacted tests locally.
Hi there, I'm the Modular magician. I've detected the following information about your changes:
Diff report: your PR generated some diffs in downstreams.
Tests analytics: found 3 affected test(s) by replaying old test recordings. Starting RECORDING based on the most recent commit.
Affected tests: TestAccVmwareengineExternalAccessRule_vmwareEngineExternalAccessRuleUpdate, TestAccVmwareengineExternalAddress_vmwareEngineExternalAddressUpdate, TestAccVmwareengineSubnet_vmwareEngineUserDefinedSubnetUpdate
This PR intends to solve #10764
This PR has been waiting for review for 2 weekdays. Please take a look!
Hi @melinath,
ah okay - in that case, marking as reviewed for now.
@swamitagupta, this PR is waiting for action from you. Please address any comments or change requests, or re-request review from a core reviewer if no action is required. If no action is taken, this PR will be closed in 28 days.
Hi there, I'm the Modular magician. I've detected the following information about your changes:
Diff report: your PR generated some diffs in downstreams, with errors.
Tests analytics: total tests: 0. View the build log.
Hi there, I'm the Modular magician. I've detected the following information about your changes:
Diff report: your PR generated some diffs in downstreams.
Tests analytics: total tests: 16. View the build log.
Asked some clarifying questions for context; things seem okay at first glance, however.
There are lots of parts, but since the changes are contained to a diff suppress function and deletion functionality, it should be fairly safe.
- !ruby/object:Api::Type::Integer
  name: "deletion_delay_hours"
  description: |
    The number of hours to delay this request. You can set this value to an hour between 0 and 8, where setting it to 0 starts the deletion request immediately. If no value is set, a default value is set at the API level.
What is the API default value for deletion delay? Is it 0? How often do you think users will need to additionally set send_deletion_delay_hours_if_zero?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Currently, the default delay at the API level is set to 3 hours. However, it might be modified in the near future.
Users would need to set send_deletion_delay_hours_if_zero if they want to send delay_hours=0 through the API call and delete the PC immediately. I added this boolean field after a discussion with Stephen in a previous thread, since Terraform does not differentiate between "no value set" and "value set to 0".
Okay, thank you for the context.
Just noting that I do not like the need for send_deletion_delay_hours_if_zero, but I cannot think of another way to force-send 0 only when the user has specified 0, due to TF limitations. I am sure this was discussed previously.
Yes, this is the previous discussion thread for implementing this change.
Another approach I could think of was using delay_hours inside a nested object deletion_parameters, but nested objects are not yet supported for virtual_fields.
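The interaction described above can be sketched in isolation: an explicit 0 is only sent when the user opts in via the boolean, because Terraform cannot distinguish an unset integer from one set to 0. This is a minimal illustration under that assumption; buildDeleteQuery is a hypothetical helper, not the provider's actual code:

```go
package main

import "fmt"

// buildDeleteQuery returns the delete-request query fragment for the delay.
// delayHours > 0 is always sent; delayHours == 0 is sent only when the user
// explicitly opted in (send_deletion_delay_hours_if_zero), since Terraform
// cannot tell "unset" apart from "set to 0". Otherwise the parameter is
// omitted and the API applies its own default (currently 3 hours).
func buildDeleteQuery(delayHours int, sendIfZero bool) string {
	if delayHours > 0 || sendIfZero {
		return fmt.Sprintf("delayHours=%d", delayHours)
	}
	return "" // omit: let the API choose the default delay
}

func main() {
	fmt.Println(buildDeleteQuery(0, false)) // parameter omitted, API default applies
	fmt.Println(buildDeleteQuery(0, true))  // delayHours=0, immediate deletion
	fmt.Println(buildDeleteQuery(5, false)) // delayHours=5
}
```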
  return true
}
if isMultiNodePrivateCloud(d) && old == "TIME_LIMITED" && new == "STANDARD" {
  log.Printf("[DEBUG] Multinode Private Cloud found, facilitating TYPE change to STANDARD")
Can you give more context as to why a type diff from TIME_LIMITED -> STANDARD needs to be suppressed with Multinode? Also, is this related or separate from delayed deletion?
This change is separate from delayed deletion.
The TIME_LIMITED -> STANDARD PC update behavior is explained in go/upgrade-pc-in-terraform.
A feature request was raised where the user wanted to change the PC type from their end for this use case. Since the API does not support this change in its update call (it automatically changes the type in the backend after a node count increase from 1 to 3 or more), I have added this suppress function to enable this type change at the Terraform level.
I also see that type is set to ignore_read, so at least it will remain consistent. Thanks for the context.
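The suppress logic under discussion can be sketched as follows; the function and the multi-node check here are illustrative stand-ins, not the provider's exact implementation. A diff suppress function returns true to hide a plan diff, so a TIME_LIMITED -> STANDARD change on type is ignored for multi-node private clouds, where the API performs that conversion automatically:

```go
package main

import "fmt"

// suppressTypeChange reports whether a diff on the private cloud `type` field
// should be suppressed. The backend automatically converts TIME_LIMITED to
// STANDARD once the node count grows from 1 to 3 or more, so for multi-node
// private clouds that particular transition is not a real user-facing change.
func suppressTypeChange(old, new string, isMultiNode bool) bool {
	if isMultiNode && old == "TIME_LIMITED" && new == "STANDARD" {
		return true // suppress: the API already made this change server-side
	}
	return false // all other type diffs surface normally in the plan
}

func main() {
	fmt.Println(suppressTypeChange("TIME_LIMITED", "STANDARD", true))  // suppressed
	fmt.Println(suppressTypeChange("TIME_LIMITED", "STANDARD", false)) // not suppressed
	fmt.Println(suppressTypeChange("STANDARD", "TIME_LIMITED", true))  // not suppressed
}
```

Because type is also ignore_read, the suppressed value never drifts back into state from the API response, which keeps the two mechanisms consistent.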
@@ -18,7 +18,7 @@ func TestAccVmwareenginePrivateCloud_vmwareEnginePrivateCloudUpdate(t *testing.T
 	t.Parallel()

 	context := map[string]interface{}{
-		"region": "southamerica-west1",
+		"region": "me-west1",
Can you provide more context as to why this is being updated?
Some months back, we exhausted the number of nodes in the southamerica-west1 region that were being used for this test, leading to PC creation failures in multiple Terraform tests.
As a result, go/gcve-resource-optimisation-for-tf-tests was created and additional nodes were reserved in the me-west1 region for Terraform test projects. This PR is part of its implementation efforts, along with PR #10992.
@@ -8,6 +8,8 @@ import (
 )

 func TestAccVmwareengineSubnet_vmwareEngineUserDefinedSubnetUpdate(t *testing.T) {
+	// Temporarily skipping so that this test does not run and consume resources during PR pushes. It is bound to fail and is being fixed by PR #10992.
We will remove these before submitting, right?
We can keep these commented until PR #10992 gets submitted. That way, nodes would not be consumed by tests that are bound to fail.
These tests will be modified and combined by PR #10992, which is built on the design in go/gcve-resource-optimisation-for-tf-tests.
Understood, so the relevant tests for these code changes are all in mmv1/third_party/terraform/services/vmwareengine/resource_vmwareengine_private_cloud_test.go.
Yes, that's correct.
LGTM
Release Note Template for Downstream PRs (will be copied):
vmwareengine: allowed type change from TIME_LIMITED to STANDARD for multi-node Private Clouds