Keep failed resources, add `state` output value #331

Conversation
```go
// state
if hvn.State == networkmodels.HashicorpCloudNetwork20200907NetworkStateFAILED {

// The HVN has already been deleted, remove from state.
if hvn.State == networkmodels.HashicorpCloudNetwork20200907NetworkStateDELETED {
```
I was thinking we should add this check to all the Reads, but it appears that not all the resources have a DELETED state. So I guess we'll just make sure to check on any resource with that state. 👍 (which you've done!)
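The Read-time pattern being discussed can be sketched with simplified, stdlib-only stand-ins for the SDK types. This is a hedged illustration: `resourceData`, `readHVN`, and the state constants are hypothetical names standing in for the provider's actual `schema.ResourceData` and HCP SDK models, not the real API.

```go
package main

import "fmt"

// Illustrative stand-ins for the HCP SDK's network state constants.
const (
	stateDeleted = "DELETED"
	stateFailed  = "FAILED"
)

// resourceData mimics the small slice of schema.ResourceData this
// pattern needs: an ID and a map of exported attributes.
type resourceData struct {
	id    string
	attrs map[string]string
}

func (d *resourceData) SetId(id string) { d.id = id }
func (d *resourceData) Set(k, v string) { d.attrs[k] = v }

// readHVN sketches the behavior this PR settles on: a DELETED resource
// is dropped from Terraform state by clearing its ID, while a FAILED
// resource is kept, with its status surfaced via the `state` attribute.
func readHVN(d *resourceData, remoteState string) {
	if remoteState == stateDeleted {
		// Already deleted upstream; remove from state.
		d.SetId("")
		return
	}
	// FAILED (and any other live status) stays in state so the
	// resource is not leaked; its status becomes an output value.
	d.Set("state", remoteState)
}

func main() {
	failed := &resourceData{id: "hvn-1", attrs: map[string]string{}}
	readHVN(failed, stateFailed)
	fmt.Println(failed.id, failed.attrs["state"])

	deleted := &resourceData{id: "hvn-2", attrs: map[string]string{}}
	readHVN(deleted, stateDeleted)
	fmt.Printf("%q\n", deleted.id)
}
```

The key design point: only resources with a terminal DELETED status get the `d.SetId("")` treatment, which is why the check is added per-resource rather than to every Read.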
Testing output:

```
--- PASS: TestAccAwsHvnOnly (170.69s)
ok      github.com/hashicorp/terraform-provider-hcp/internal/provider  171.230s
--- PASS: TestAccAwsPeering (733.26s)
ok      github.com/hashicorp/terraform-provider-hcp/internal/provider  733.602s
--- PASS: TestAccTGWAttachment (738.99s)
ok      github.com/hashicorp/terraform-provider-hcp/internal/provider  739.340s
--- PASS: TestAccVaultCluster (3159.01s)
ok      github.com/hashicorp/terraform-provider-hcp/internal/provider  3159.483s
```
I'm having trouble with timeouts on the Azure peering test, but I do think this is safe to merge.
I'll wrap up testing this change against existing resources (the acceptance tests only cover newly created resources) and then hopefully this should be good to go! 🎉
Force-pushed from 64038e1 to cb7da48
Just completed a manual test to verify existing resources are not impacted. I created the following resources using the latest HCP provider version, v0.32.0:
- AWS HVN
- AWS network peering
- AWS transit gateway
- AWS HVN route
- Vault cluster
- Sample TF state, created on v0.32.0:

```
# Vault cluster in TF state
{
  "mode": "managed",
  "type": "hcp_vault_cluster",
  "name": "test",
  "provider": "provider[\"registry.terraform.io/hashicorp/hcp\"]",
  "instances": [
    {
      "schema_version": 0,
      "attributes": {
        "cloud_provider": "aws",
        "cluster_id": "test-vault-cluster",
        "created_at": "2022-06-22T19:50:41.440Z",
        ...
      },
      "dependencies": [
        "hcp_hvn.test"
      ]
    }
  ]
}
```
- Output after switching to this branch's local binary and running apply:

```
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
```

```
# Vault cluster in TF state, now refreshed with `state`
{
  "mode": "managed",
  "type": "hcp_vault_cluster",
  "name": "test",
  "provider": "provider[\"registry.terraform.io/hashicorp/hcp\"]",
  "instances": [
    {
      "schema_version": 0,
      "attributes": {
        "cloud_provider": "aws",
        "cluster_id": "test-vault-cluster",
        "created_at": "2022-06-22T19:50:41.440Z",
        "state": "RUNNING",
        ...
      },
      "dependencies": [
        "hcp_hvn.test"
      ]
    }
  ]
}
```
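With the cluster's status now exported as the `state` attribute, a configuration can surface it as an output value. A minimal sketch, assuming the resource address `hcp_vault_cluster.test` from the state excerpt above; the output name is illustrative:

```hcl
output "vault_cluster_state" {
  # "RUNNING" for the cluster above; a failed cluster would now
  # report "FAILED" instead of disappearing from state.
  value = hcp_vault_cluster.test.state
}
```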
I tried to test the Azure resources but I'm experiencing timeout issues. I feel confident this is a safe change to merge, and we can address the testing issues separately.
Squashed commit messages:

* Fix storing of failed consul cluster/snapshot state: there has been a longstanding issue where resources are leaked in our e2e tests. This is because we purge all failed clusters from state with `d.SetId("")`. This doesn't make sense for failed clusters/snapshots. Instead, we should store their state... in state
* Add acc testing, and undo property sorting: it makes the PR bigger
* Make docs
* Change acc testing tier to development: otherwise folks need to upload a credit card to help PR fixes
* Fix scale attr: reduced the size, need to reduce the scale
* Fix size if acc testing of consul cluster
* Update Consul cluster data source to also store state
* Fix brace in cluster def
* go generate docs
* Use state vs cluster/snapshot_state
* Update internal/provider/resource_consul_cluster_test.go (Co-authored-by: Brenna Hewer-Darroch <[email protected]>)
* Stop purging failed resources from state, export state output value
* go generate

Co-authored-by: Brenna Hewer-Darroch <[email protected]>
🛠️ Description

This is the same couple of changes as the other PR targeting Consul, but applied to networks and Vault as well:

closes: #222

🚢 Release Note

Release note for CHANGELOG:

🏗️ Acceptance tests

Output from acceptance testing:
Hitting issues with a provider version on the route test.