From 4eb85757eb79f5499c5a1349e732596d56e08490 Mon Sep 17 00:00:00 2001 From: Kevin Kredit Date: Thu, 31 Mar 2022 17:22:51 -0400 Subject: [PATCH] Vault: enable paths_filter and scaling for Plus-tier (#281) * Vault: enable paths_filter and scaling for Plus-tier * Update docs with 'go generate' * Expand the comments describing Vault Plus-tier scaling --- docs/data-sources/vault_cluster.md | 1 + docs/guides/vault-performance-replication.md | 3 +- docs/guides/vault-scaling.md | 6 +- docs/resources/vault_cluster.md | 1 + .../vault_perf_replication/replication.tf | 1 + go.mod | 2 +- go.sum | 4 +- internal/clients/vault_cluster.go | 46 +++++ .../provider/data_source_vault_cluster.go | 8 + internal/provider/resource_vault_cluster.go | 186 ++++++++++++++---- .../provider/resource_vault_cluster_test.go | 104 +++++++++- internal/provider/validators.go | 16 ++ internal/provider/validators_test.go | 67 +++++++ .../vault-performance-replication.md.tmpl | 2 +- templates/guides/vault-scaling.md.tmpl | 6 +- 15 files changed, 405 insertions(+), 48 deletions(-) diff --git a/docs/data-sources/vault_cluster.md b/docs/data-sources/vault_cluster.md index 5032273fa..5dcd8a627 100644 --- a/docs/data-sources/vault_cluster.md +++ b/docs/data-sources/vault_cluster.md @@ -38,6 +38,7 @@ data "hcp_vault_cluster" "example" { - **min_vault_version** (String) The minimum Vault version to use when creating the cluster. If not specified, it is defaulted to the version that is currently recommended by HCP. - **namespace** (String) The name of the customer namespace this HCP Vault cluster is located in. - **organization_id** (String) The ID of the organization this HCP Vault cluster is located in. +- **paths_filter** (List of String) The performance replication [paths filter](https://learn.hashicorp.com/tutorials/vault/paths-filter). Applies to performance replication secondaries only and operates in "deny" mode only. 
- **primary_link** (String) The `self_link` of the HCP Vault Plus tier cluster which is the primary in the performance replication setup with this HCP Vault Plus tier cluster. If not specified, it is a standalone Plus tier HCP Vault cluster. - **project_id** (String) The ID of the project this HCP Vault cluster is located in. - **public_endpoint** (Boolean) Denotes that the cluster has a public endpoint. Defaults to false. diff --git a/docs/guides/vault-performance-replication.md b/docs/guides/vault-performance-replication.md index ef888f907..644a67630 100644 --- a/docs/guides/vault-performance-replication.md +++ b/docs/guides/vault-performance-replication.md @@ -11,7 +11,7 @@ Admins and Contributors can use the provider to create Plus tier clusters with V Although the clusters may reside in the same HVN, it is more likely that you will want to station your performance replication secondary in a different region, and therefore HVN, than your primary. When establishing performance replication links between clusters in different HVNs, an HVN peering connection is required. This can be defined explicitly using an [`hcp_hvn_peering_connection`](../resources/hvn_peering_connection.md), or HCP will create the connection automatically (peering connections can be imported after creation using [terraform import](https://www.terraform.io/cli/import)). Note HVN peering [CIDR block requirements](https://cloud.hashicorp.com/docs/hcp/network/routes#cidr-block-requirements). --> **Note**: At this time, Plus tier clusters cannot be scaled. +-> **Note:** Remember, when scaling performance replicated clusters, be sure to keep the size of all clusters in the group in sync. 
### Performance replication example @@ -42,5 +42,6 @@ resource "hcp_vault_cluster" "secondary" { hvn_id = hcp_hvn.secondary_network.hvn_id tier = "plus_medium" primary_link = hcp_vault_cluster.primary.self_link + paths_filter = ["path/a", "path/b"] } ``` diff --git a/docs/guides/vault-scaling.md b/docs/guides/vault-scaling.md index 4e14b76a5..7902b98aa 100644 --- a/docs/guides/vault-scaling.md +++ b/docs/guides/vault-scaling.md @@ -7,7 +7,11 @@ description: |- # Scale a cluster -Admins are able to use the provider to change a cluster’s size or tier. Scaling down to a Development tier from any production-grade tier is not allowed. In addition, if you are using too much storage and want to scale down to a smaller size or tier, you will be unable to do so until you delete enough resources. +Admins are able to use the provider to change a cluster’s size or tier. There are a few limitations on cluster scaling: + +- When scaling performance replicated Plus-tier clusters, be sure to keep the size of all clusters in the group in sync +- Scaling down to the Development tier from any production-grade tier is not allowed +- If you are using too much storage and want to scale down to a smaller size or tier, you will be unable to do so until you delete enough resources ### Scaling example diff --git a/docs/resources/vault_cluster.md b/docs/resources/vault_cluster.md index c4d7cf9ed..bcec6cec9 100644 --- a/docs/resources/vault_cluster.md +++ b/docs/resources/vault_cluster.md @@ -43,6 +43,7 @@ resource "hcp_vault_cluster" "example" { - **id** (String) The ID of this resource. - **min_vault_version** (String) The minimum Vault version to use when creating the cluster. If not specified, it is defaulted to the version that is currently recommended by HCP. +- **paths_filter** (List of String) The performance replication [paths filter](https://learn.hashicorp.com/tutorials/vault/paths-filter). Applies to performance replication secondaries only and operates in "deny" mode only. 
- **primary_link** (String) The `self_link` of the HCP Vault Plus tier cluster which is the primary in the performance replication setup with this HCP Vault Plus tier cluster. If not specified, it is a standalone Plus tier HCP Vault cluster. - **public_endpoint** (Boolean) Denotes that the cluster has a public endpoint. Defaults to false. - **tier** (String) Tier of the HCP Vault cluster. Valid options for tiers - `dev`, `starter_small`, `standard_small`, `standard_medium`, `standard_large`, `plus_small`, `plus_medium`, `plus_large`. See [pricing information](https://cloud.hashicorp.com/pricing/vault). diff --git a/examples/guides/vault_perf_replication/replication.tf b/examples/guides/vault_perf_replication/replication.tf index 5e581293d..6f376840a 100644 --- a/examples/guides/vault_perf_replication/replication.tf +++ b/examples/guides/vault_perf_replication/replication.tf @@ -23,4 +23,5 @@ resource "hcp_vault_cluster" "secondary" { hvn_id = hcp_hvn.secondary_network.hvn_id tier = "plus_medium" primary_link = hcp_vault_cluster.primary.self_link + paths_filter = ["path/a", "path/b"] } diff --git a/go.mod b/go.mod index 7bbbfcf86..f70c12a79 100644 --- a/go.mod +++ b/go.mod @@ -12,7 +12,7 @@ require ( github.com/hashicorp/go-cty v1.4.1-0.20200414143053-d3edf31b6320 github.com/hashicorp/go-version v1.4.0 github.com/hashicorp/hcl/v2 v2.8.2 // indirect - github.com/hashicorp/hcp-sdk-go v0.16.0 + github.com/hashicorp/hcp-sdk-go v0.18.0 github.com/hashicorp/terraform-plugin-docs v0.5.1 github.com/hashicorp/terraform-plugin-sdk/v2 v2.10.1 github.com/posener/complete v1.2.1 // indirect diff --git a/go.sum b/go.sum index 47e4397ff..c76c3ead0 100644 --- a/go.sum +++ b/go.sum @@ -379,8 +379,8 @@ github.com/hashicorp/hc-install v0.3.1/go.mod h1:3LCdWcCDS1gaHC9mhHCGbkYfoY6vdsK github.com/hashicorp/hcl/v2 v2.3.0/go.mod h1:d+FwDBbOLvpAM3Z6J7gPj/VoAGkNe/gm352ZhjJ/Zv8= github.com/hashicorp/hcl/v2 v2.8.2 h1:wmFle3D1vu0okesm8BTLVDyJ6/OL9DCLUwn0b2OptiY= github.com/hashicorp/hcl/v2 
v2.8.2/go.mod h1:bQTN5mpo+jewjJgh8jr0JUguIi7qPHUF6yIfAEN3jqY= -github.com/hashicorp/hcp-sdk-go v0.16.0 h1:/UfRdiI1Z2AJGBi24aFO8MeNTWBa08EHyAvH1C9BWw8= -github.com/hashicorp/hcp-sdk-go v0.16.0/go.mod h1:z0I0eZ+TVJJ7pycnCzMM/ouOw5D5Qnp/zylNXkqGEX0= +github.com/hashicorp/hcp-sdk-go v0.18.0 h1:SnYFPebdfbc/sjit71Zx5Ji9fuQFgjvpIdrlgjzlriE= +github.com/hashicorp/hcp-sdk-go v0.18.0/go.mod h1:z0I0eZ+TVJJ7pycnCzMM/ouOw5D5Qnp/zylNXkqGEX0= github.com/hashicorp/logutils v1.0.0 h1:dLEQVugN8vlakKOUE3ihGLTZJRB4j+M2cdTm/ORI65Y= github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO+LraFDTW64= github.com/hashicorp/terraform-exec v0.15.0 h1:cqjh4d8HYNQrDoEmlSGelHmg2DYDh5yayckvJ5bV18E= diff --git a/internal/clients/vault_cluster.go b/internal/clients/vault_cluster.go index 448676e6c..0c5faed5c 100644 --- a/internal/clients/vault_cluster.go +++ b/internal/clients/vault_cluster.go @@ -138,3 +138,49 @@ func UpdateVaultClusterTier(ctx context.Context, client *Client, loc *sharedmode return updateResp.Payload, nil } + +// UpdateVaultPathsFilter will make a call to the Vault service to update the paths filter for a secondary cluster +func UpdateVaultPathsFilter(ctx context.Context, client *Client, loc *sharedmodels.HashicorpCloudLocationLocation, + clusterID string, params vaultmodels.HashicorpCloudVault20201125ClusterPerformanceReplicationPathsFilter) (*vaultmodels.HashicorpCloudVault20201125UpdatePathsFilterResponse, error) { + + updateParams := vault_service.NewUpdatePathsFilterParams() + updateParams.Context = ctx + updateParams.ClusterID = clusterID + updateParams.LocationProjectID = loc.ProjectID + updateParams.LocationOrganizationID = loc.OrganizationID + updateParams.Body = &vaultmodels.HashicorpCloudVault20201125UpdatePathsFilterRequest{ + // ClusterID and Location are repeated because the values above are required to populate the URL, + // and the values below are required in the API request body + ClusterID: clusterID, + Location: loc, + Mode: 
params.Mode, + Paths: params.Paths, + } + + updateResp, err := client.Vault.UpdatePathsFilter(updateParams, nil) + if err != nil { + return nil, err + } + + return updateResp.Payload, nil +} + +// DeleteVaultPathsFilter will make a call to the Vault service to delete the paths filter for a secondary cluster +func DeleteVaultPathsFilter(ctx context.Context, client *Client, loc *sharedmodels.HashicorpCloudLocationLocation, + clusterID string) (*vaultmodels.HashicorpCloudVault20201125DeletePathsFilterResponse, error) { + + deleteParams := vault_service.NewDeletePathsFilterParams() + deleteParams.Context = ctx + deleteParams.ClusterID = clusterID + deleteParams.LocationProjectID = loc.ProjectID + deleteParams.LocationOrganizationID = loc.OrganizationID + deleteParams.LocationRegionProvider = &loc.Region.Provider + deleteParams.LocationRegionRegion = &loc.Region.Region + + deleteResp, err := client.Vault.DeletePathsFilter(deleteParams, nil) + if err != nil { + return nil, err + } + + return deleteResp.Payload, nil +} diff --git a/internal/provider/data_source_vault_cluster.go b/internal/provider/data_source_vault_cluster.go index a5be2f69a..e79222e64 100644 --- a/internal/provider/data_source_vault_cluster.go +++ b/internal/provider/data_source_vault_cluster.go @@ -102,6 +102,14 @@ func dataSourceVaultCluster() *schema.Resource { Type: schema.TypeString, Computed: true, }, + "paths_filter": { + Description: "The performance replication [paths filter](https://learn.hashicorp.com/tutorials/vault/paths-filter). 
Applies to performance replication secondaries only and operates in \"deny\" mode only.", + Type: schema.TypeList, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + Computed: true, + }, }, } } diff --git a/internal/provider/resource_vault_cluster.go b/internal/provider/resource_vault_cluster.go index 001c33cb1..c3b63e497 100644 --- a/internal/provider/resource_vault_cluster.go +++ b/internal/provider/resource_vault_cluster.go @@ -91,6 +91,16 @@ func resourceVaultCluster() *schema.Resource { Optional: true, ForceNew: true, }, + "paths_filter": { + Description: "The performance replication [paths filter](https://learn.hashicorp.com/tutorials/vault/paths-filter). Applies to performance replication secondaries only and operates in \"deny\" mode only.", + Type: schema.TypeList, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateDiagFunc: validateVaultPathsFilter, + }, + Optional: true, + }, "organization_id": { Description: "The ID of the organization this HCP Vault cluster is located in.", Type: schema.TypeString, @@ -198,6 +208,14 @@ func resourceVaultClusterCreate(ctx context.Context, d *schema.ResourceData, met var vaultCluster *vaultmodels.HashicorpCloudVault20201125InputCluster if getPrimaryLinkIfAny(d) != "" { primaryClusterLink := newLink(primaryClusterModel.Location, VaultClusterResourceType, primaryClusterModel.ID) + var pathsFilter *vaultmodels.HashicorpCloudVault20201125ClusterPerformanceReplicationPathsFilter + if paths, ok := d.GetOk("paths_filter"); ok { + pathStrings := getPathStrings(paths) + pathsFilter = &vaultmodels.HashicorpCloudVault20201125ClusterPerformanceReplicationPathsFilter{ + Mode: vaultmodels.HashicorpCloudVault20201125ClusterPerformanceReplicationPathsFilterModeDENY, + Paths: pathStrings, + } + } vaultCluster = &vaultmodels.HashicorpCloudVault20201125InputCluster{ Config: &vaultmodels.HashicorpCloudVault20201125InputClusterConfig{ VaultConfig: &vaultmodels.HashicorpCloudVault20201125VaultConfig{ @@ -213,8 
+231,13 @@ func resourceVaultClusterCreate(ctx context.Context, d *schema.ResourceData, met ID: clusterID, Location: loc, PerformanceReplicationPrimaryCluster: primaryClusterLink, + PerformanceReplicationPathsFilter: pathsFilter, } } else { + if _, ok := d.GetOk("paths_filter"); ok { + return diag.Errorf("only performance replication secondaries may specify a paths_filter") + } + vaultCluster = &vaultmodels.HashicorpCloudVault20201125InputCluster{ Config: &vaultmodels.HashicorpCloudVault20201125InputClusterConfig{ VaultConfig: &vaultmodels.HashicorpCloudVault20201125VaultConfig{ @@ -328,26 +351,52 @@ func resourceVaultClusterUpdate(ctx context.Context, d *schema.ResourceData, met return diag.Errorf("unable to fetch Vault cluster (%s): %v", clusterID, err) } - // Confirm public_endpoint or tier have changed. - if !(d.HasChange("tier") || d.HasChange("public_endpoint")) { + // Confirm at least one modifiable field has changed. + if !d.HasChanges("tier", "public_endpoint", "paths_filter") { return nil } if d.HasChange("tier") { + clusterToScale := cluster + destTier := vaultmodels.HashicorpCloudVault20201125Tier(strings.ToUpper(d.Get("tier").(string))) if inPlusTier(string(cluster.Config.Tier)) { - return diag.Errorf("scaling Plus tier clusters is not yet allowed") + // Plus tier clusters scale as a group via the primary cluster. + // However, it is still worth individually tracking the tier of each cluster so that the + // provider has the same information as the portal UI and can detect a scaling operation that + // fails partway through to enable retries. + // Because the clusters scale as a group, + // a) replicated clusters may have already scaled due to another resource's update + // b) all scaling requests are routed through the primary + // It is important to keep the tier of all replicated clusters in sync. + + // Because of (a), check that the scaling operation is necessary. 
+ if cluster.Config.Tier == destTier { + clusterToScale = nil + } else { + printPlusScalingWarningMsg() + primaryLink := getPrimaryLinkIfAny(d) + if primaryLink != "" { + // Because of (b), if the cluster is a secondary, issue the actual API request to the primary. + var getPrimaryErr diag.Diagnostics + clusterToScale, getPrimaryErr = getPrimaryClusterFromLink(ctx, client, primaryLink) + if getPrimaryErr != nil { + return getPrimaryErr + } + } + } } - // Invoke update tier endpoint. - tier := vaultmodels.HashicorpCloudVault20201125Tier(strings.ToUpper(d.Get("tier").(string))) - updateResp, err := clients.UpdateVaultClusterTier(ctx, client, cluster.Location, clusterID, tier) - if err != nil { - return diag.Errorf("error updating Vault cluster tier (%s): %v", clusterID, err) - } - - // Wait for the update cluster operation. - if err := clients.WaitForOperation(ctx, client, "update Vault cluster tier", cluster.Location, updateResp.Operation.ID); err != nil { - return diag.Errorf("unable to update Vault cluster tier (%s): %v", clusterID, err) + if clusterToScale != nil { + // Invoke update tier endpoint. + updateResp, err := clients.UpdateVaultClusterTier(ctx, client, clusterToScale.Location, clusterToScale.ID, destTier) + if err != nil { + return diag.Errorf("error updating Vault cluster tier (%s): %v", clusterID, err) + } + + // Wait for the update cluster operation. + if err := clients.WaitForOperation(ctx, client, "update Vault cluster tier", clusterToScale.Location, updateResp.Operation.ID); err != nil { + return diag.Errorf("unable to update Vault cluster tier (%s): %v", clusterID, err) + } } } @@ -364,6 +413,41 @@ func resourceVaultClusterUpdate(ctx context.Context, d *schema.ResourceData, met } } + if d.HasChange("paths_filter") { + if paths, ok := d.GetOk("paths_filter"); ok { + // paths_filter is present. Check that it is a secondary, then update. 
+ if _, ok := d.GetOk("primary_link"); !ok { + return diag.Errorf("only performance replication secondaries may specify a paths_filter") + } + + // Invoke update paths filter endpoint. + pathStrings := getPathStrings(paths) + updateResp, err := clients.UpdateVaultPathsFilter(ctx, client, cluster.Location, clusterID, vaultmodels.HashicorpCloudVault20201125ClusterPerformanceReplicationPathsFilter{ + Mode: vaultmodels.HashicorpCloudVault20201125ClusterPerformanceReplicationPathsFilterModeDENY, + Paths: pathStrings, + }) + if err != nil { + return diag.Errorf("error updating Vault cluster paths filter (%s): %v", clusterID, err) + } + + // Wait for the update paths filter operation. + if err := clients.WaitForOperation(ctx, client, "update Vault cluster paths filter", cluster.Location, updateResp.Operation.ID); err != nil { + return diag.Errorf("unable to update Vault cluster paths filter (%s): %v", clusterID, err) + } + } else { + // paths_filter is not present. Delete the paths_filter. + deleteResp, err := clients.DeleteVaultPathsFilter(ctx, client, cluster.Location, clusterID) + if err != nil { + return diag.Errorf("error deleting Vault cluster paths filter (%s): %v", clusterID, err) + } + + // Wait for the delete paths filter operation. + if err := clients.WaitForOperation(ctx, client, "delete Vault cluster paths filter", cluster.Location, deleteResp.Operation.ID); err != nil { + return diag.Errorf("unable to delete Vault cluster paths filter (%s): %v", clusterID, err) + } + } + } + // Get the updated Vault cluster. 
cluster, err = clients.GetVaultClusterByID(ctx, client, loc, clusterID) @@ -478,13 +562,24 @@ func setVaultClusterResourceData(d *schema.ResourceData, cluster *vaultmodels.Ha return err } - if cluster.PerformanceReplicationInfo != nil && cluster.PerformanceReplicationInfo.PrimaryClusterLink != nil { - primaryLink, err := linkURL(cluster.PerformanceReplicationInfo.PrimaryClusterLink) - if err != nil { - return err + if cluster.PerformanceReplicationInfo != nil { + prInfo := cluster.PerformanceReplicationInfo + if prInfo.PrimaryClusterLink != nil { + primaryLink, err := linkURL(cluster.PerformanceReplicationInfo.PrimaryClusterLink) + if err != nil { + return err + } + if err := d.Set("primary_link", primaryLink); err != nil { + return err + } } - if err := d.Set("primary_link", primaryLink); err != nil { - return err + + if prInfo.PathsFilter != nil && prInfo.PathsFilter.Paths != nil { + if err := d.Set("paths_filter", prInfo.PathsFilter.Paths); err != nil { + return err + } + } else { + d.Set("paths_filter", nil) } } @@ -516,44 +611,35 @@ func inPlusTier(tier string) bool { tier == string(vaultmodels.HashicorpCloudVault20201125TierPLUSLARGE) } -func validatePerformanceReplicationChecksAndReturnPrimaryIfAny(ctx context.Context, client *clients.Client, d *schema.ResourceData) ([]diag.Diagnostic, *vaultmodels.HashicorpCloudVault20201125Cluster) { +func validatePerformanceReplicationChecksAndReturnPrimaryIfAny(ctx context.Context, client *clients.Client, d *schema.ResourceData) (diag.Diagnostics, *vaultmodels.HashicorpCloudVault20201125Cluster) { primaryClusterLinkStr := getPrimaryLinkIfAny(d) // If no primary_link has been supplied, treat this as a single cluster creation. 
if primaryClusterLinkStr == "" { return nil, nil } - primaryClusterLink, err := buildLinkFromURL(primaryClusterLinkStr, VaultClusterResourceType, client.Config.OrganizationID) + primaryCluster, err := getPrimaryClusterFromLink(ctx, client, primaryClusterLinkStr) if err != nil { - return diag.Errorf("invalid primary_link supplied %v", err), nil - } - - primaryCluster, err := clients.GetVaultClusterByID(ctx, client, primaryClusterLink.Location, primaryClusterLink.ID) - if err != nil { - if clients.IsResponseCodeNotFound(err) { - return diag.Errorf("primary cluster (%s) must exist", primaryClusterLink.ID), nil - - } - return diag.Errorf("unable to check for presence of an existing primary Vault cluster (%s): %v", primaryClusterLink.ID, err), nil + return err, nil } if !inPlusTier(string(primaryCluster.Config.Tier)) { - return diag.Errorf("primary cluster (%s) must be plus-tier", primaryClusterLink.ID), primaryCluster + return diag.Errorf("primary cluster (%s) must be plus-tier", primaryCluster.ID), primaryCluster } // Tier should be specified, even if the secondary inherits it from the primary cluster. 
if !strings.EqualFold(d.Get("tier").(string), string(primaryCluster.Config.Tier)) { - return diag.Errorf("a secondary's tier must match that of its primary (%s)", primaryClusterLink.ID), primaryCluster + return diag.Errorf("a secondary's tier must match that of its primary (%s)", primaryCluster.ID), primaryCluster } if primaryCluster.PerformanceReplicationInfo != nil && primaryCluster.PerformanceReplicationInfo.Mode == vaultmodels.HashicorpCloudVault20201125ClusterPerformanceReplicationInfoModeSECONDARY { - return diag.Errorf("primary cluster (%s) is already a secondary", primaryClusterLink.ID), primaryCluster + return diag.Errorf("primary cluster (%s) is already a secondary", primaryCluster.ID), primaryCluster } // min_vault_version should either be empty or match the primary's initial version minVaultVersion := d.Get("min_vault_version").(string) if minVaultVersion != "" && !strings.EqualFold(minVaultVersion, primaryCluster.Config.VaultConfig.InitialVersion) { - return diag.Errorf("min_vault_version should either be unset or match the primary cluster's (%s) initial version (%s)", primaryClusterLink.ID, primaryCluster.Config.VaultConfig.InitialVersion), primaryCluster + return diag.Errorf("min_vault_version should either be unset or match the primary cluster's (%s) initial version (%s)", primaryCluster.ID, primaryCluster.Config.VaultConfig.InitialVersion), primaryCluster } return nil, primaryCluster } @@ -565,3 +651,33 @@ func getPrimaryLinkIfAny(d *schema.ResourceData) string { } return primaryClusterLinkIface.(string) } + +func getPrimaryClusterFromLink(ctx context.Context, client *clients.Client, link string) (*vaultmodels.HashicorpCloudVault20201125Cluster, diag.Diagnostics) { + primaryClusterLink, err := buildLinkFromURL(link, VaultClusterResourceType, client.Config.OrganizationID) + if err != nil { + return nil, diag.Errorf("invalid primary_link supplied %v", err) + } + + primaryCluster, err := clients.GetVaultClusterByID(ctx, client, 
primaryClusterLink.Location, primaryClusterLink.ID) + if err != nil { + if clients.IsResponseCodeNotFound(err) { + return nil, diag.Errorf("primary cluster (%s) does not exist", primaryClusterLink.ID) + + } + return nil, diag.Errorf("unable to check for presence of an existing primary Vault cluster (%s): %v", primaryClusterLink.ID, err) + } + return primaryCluster, nil +} + +func getPathStrings(pathFilter interface{}) []string { + pathFilterArr := pathFilter.([]interface{}) + var paths []string + for _, pathFilter := range pathFilterArr { + paths = append(paths, pathFilter.(string)) + } + return paths +} + +func printPlusScalingWarningMsg() { + log.Printf("[WARN] When scaling Plus-tier Vault clusters, be sure to keep the size of all clusters in a replication group in sync") +} diff --git a/internal/provider/resource_vault_cluster_test.go b/internal/provider/resource_vault_cluster_test.go index e986b6f6e..59dd6bf4b 100644 --- a/internal/provider/resource_vault_cluster_test.go +++ b/internal/provider/resource_vault_cluster_test.go @@ -310,6 +310,18 @@ func TestAccPerformanceReplication_Validations(t *testing.T) { `)), ExpectError: regexp.MustCompile(`invalid primary_link supplied*`), }, + { + // incorrectly specify a paths_filter on a non-secondary + Config: testConfig(setTestAccPerformanceReplication_e2e(` + resource "hcp_vault_cluster" "c1" { + cluster_id = "test-primary" + hvn_id = hcp_hvn.hvn1.hvn_id + tier = "plus_small" + paths_filter = ["path/a"] + } + `)), + ExpectError: regexp.MustCompile(`only performance replication secondaries may specify a paths_filter`), + }, { // create a plus tier cluster successfully Config: testConfig(setTestAccPerformanceReplication_e2e(` @@ -347,9 +359,6 @@ func TestAccPerformanceReplication_Validations(t *testing.T) { hvn_id = hcp_hvn.hvn1.hvn_id tier = "plus_small" public_endpoint = true - depends_on = [ - hcp_hvn.hvn1 - ] } resource "hcp_vault_cluster" "c2" { cluster_id = "test-secondary" @@ -395,7 +404,7 @@ func 
TestAccPerformanceReplication_Validations(t *testing.T) { min_vault_version = "v1.0.1" } `)), - ExpectError: regexp.MustCompile(`min_vault_version does not apply to secondary`), + ExpectError: regexp.MustCompile(`min_vault_version should either be unset or match the primary cluster's`), }, { // secondary cluster created successfully (same hvn) @@ -411,6 +420,7 @@ func TestAccPerformanceReplication_Validations(t *testing.T) { hvn_id = hcp_hvn.hvn1.hvn_id tier = "plus_small" primary_link = hcp_vault_cluster.c1.self_link + paths_filter = ["path/a", "path/b"] } `)), Check: resource.ComposeTestCheckFunc( @@ -421,6 +431,8 @@ func TestAccPerformanceReplication_Validations(t *testing.T) { resource.TestCheckResourceAttr(secondaryVaultResourceName, "cloud_provider", "aws"), resource.TestCheckResourceAttr(secondaryVaultResourceName, "region", "us-west-2"), resource.TestCheckResourceAttr(secondaryVaultResourceName, "public_endpoint", "false"), + resource.TestCheckResourceAttr(secondaryVaultResourceName, "paths_filter.0", "path/a"), + resource.TestCheckResourceAttr(secondaryVaultResourceName, "paths_filter.1", "path/b"), resource.TestCheckResourceAttr(secondaryVaultResourceName, "namespace", "admin"), resource.TestCheckResourceAttrSet(secondaryVaultResourceName, "vault_version"), resource.TestCheckResourceAttrSet(secondaryVaultResourceName, "organization_id"), @@ -432,6 +444,48 @@ func TestAccPerformanceReplication_Validations(t *testing.T) { resource.TestCheckResourceAttrSet(secondaryVaultResourceName, "created_at"), ), }, + { + // update paths filter + Config: testConfig(setTestAccPerformanceReplication_e2e(` + resource "hcp_vault_cluster" "c1" { + cluster_id = "test-primary" + hvn_id = hcp_hvn.hvn1.hvn_id + tier = "plus_small" + public_endpoint = true + } + resource "hcp_vault_cluster" "c2" { + cluster_id = "test-secondary" + hvn_id = hcp_hvn.hvn1.hvn_id + tier = "plus_small" + primary_link = hcp_vault_cluster.c1.self_link + paths_filter = ["path/a", "path/c"] + } + `)), + 
Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr(secondaryVaultResourceName, "paths_filter.0", "path/a"), + resource.TestCheckResourceAttr(secondaryVaultResourceName, "paths_filter.1", "path/c"), + ), + }, + { + // delete paths filter + Config: testConfig(setTestAccPerformanceReplication_e2e(` + resource "hcp_vault_cluster" "c1" { + cluster_id = "test-primary" + hvn_id = hcp_hvn.hvn1.hvn_id + tier = "plus_small" + public_endpoint = true + } + resource "hcp_vault_cluster" "c2" { + cluster_id = "test-secondary" + hvn_id = hcp_hvn.hvn1.hvn_id + tier = "plus_small" + primary_link = hcp_vault_cluster.c1.self_link + } + `)), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckNoResourceAttr(secondaryVaultResourceName, "paths_filter.0"), + ), + }, { // secondary cluster created successfully (different hvn) Config: testConfig(setTestAccPerformanceReplication_e2e(` @@ -469,7 +523,42 @@ func TestAccPerformanceReplication_Validations(t *testing.T) { ), }, { - // scaling out of the Plus tier not yet allowed + // successfully scale replication group + Config: testConfig(setTestAccPerformanceReplication_e2e(` + resource "hcp_vault_cluster" "c1" { + cluster_id = "test-primary" + hvn_id = hcp_hvn.hvn1.hvn_id + tier = "plus_medium" + public_endpoint = true + } + resource "hcp_vault_cluster" "c2" { + cluster_id = "test-secondary" + hvn_id = hcp_hvn.hvn2.hvn_id + tier = "plus_medium" + primary_link = hcp_vault_cluster.c1.self_link + } + `)), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr(primaryVaultResourceName, "tier", "PLUS_MEDIUM"), + resource.TestCheckResourceAttr(secondaryVaultResourceName, "tier", "PLUS_MEDIUM"), + ), + }, + { + // successfully disable replication + Config: testConfig(setTestAccPerformanceReplication_e2e(` + resource "hcp_vault_cluster" "c1" { + cluster_id = "test-primary" + hvn_id = hcp_hvn.hvn1.hvn_id + tier = "plus_medium" + public_endpoint = true + } + `)), + Check: resource.ComposeTestCheckFunc( + 
testAccCheckVaultClusterExists(primaryVaultResourceName), + ), + }, + { + // successfully scale out of the Plus tier Config: testConfig(setTestAccPerformanceReplication_e2e(` resource "hcp_vault_cluster" "c1" { cluster_id = "test-primary" @@ -478,7 +567,10 @@ func TestAccPerformanceReplication_Validations(t *testing.T) { public_endpoint = true } `)), - ExpectError: regexp.MustCompile(`scaling Plus tier clusters is not yet allowed`), + Check: resource.ComposeTestCheckFunc( + testAccCheckVaultClusterExists(primaryVaultResourceName), + resource.TestCheckResourceAttr(primaryVaultResourceName, "tier", "STARTER_SMALL"), + ), }, }, }) diff --git a/internal/provider/validators.go b/internal/provider/validators.go index 98037ff84..9b1c44cb3 100644 --- a/internal/provider/validators.go +++ b/internal/provider/validators.go @@ -197,6 +197,22 @@ func validateVaultClusterTier(v interface{}, path cty.Path) diag.Diagnostics { return diagnostics } +func validateVaultPathsFilter(v interface{}, path cty.Path) diag.Diagnostics { + var diagnostics diag.Diagnostics + p := v.(string) + pathRegex := regexp.MustCompile(`\A[\w-]+(/[\w-]+)*\z`) + if !pathRegex.MatchString(p) { + msg := fmt.Sprintf("paths filter path '%v' is invalid", p) + diagnostics = append(diagnostics, diag.Diagnostic{ + Severity: diag.Error, + Summary: msg, + Detail: msg + fmt.Sprintf(" (paths must match regex '%s').", pathRegex.String()), + AttributePath: path, + }) + } + return diagnostics +} + func validateCIDRBlock(v interface{}, path cty.Path) diag.Diagnostics { var diagnostics diag.Diagnostics diff --git a/internal/provider/validators_test.go b/internal/provider/validators_test.go index 60ac37161..3ef208c0b 100644 --- a/internal/provider/validators_test.go +++ b/internal/provider/validators_test.go @@ -392,6 +392,73 @@ func Test_validateVaultClusterTier(t *testing.T) { } } +func Test_validateVaultPathsFilter(t *testing.T) { + tcs := map[string]struct { + input string + expected diag.Diagnostics + }{ + "valid 
path": { + input: "valid/path", + expected: nil, + }, + "different valid path": { + input: "_valid-path/2/2/2/valid", + expected: nil, + }, + "invalid path with :": { + input: "valid/path:", + expected: diag.Diagnostics{ + diag.Diagnostic{ + Severity: diag.Error, + Summary: "paths filter path 'valid/path:' is invalid", + Detail: "paths filter path 'valid/path:' is invalid (paths must match regex '\\A[\\w-]+(/[\\w-]+)*\\z').", + AttributePath: nil, + }, + }, + }, + "invalid path with trailing /": { + input: "trailing/", + expected: diag.Diagnostics{ + diag.Diagnostic{ + Severity: diag.Error, + Summary: "paths filter path 'trailing/' is invalid", + Detail: "paths filter path 'trailing/' is invalid (paths must match regex '\\A[\\w-]+(/[\\w-]+)*\\z').", + AttributePath: nil, + }, + }, + }, + "invalid path with leading /": { + input: "/leading", + expected: diag.Diagnostics{ + diag.Diagnostic{ + Severity: diag.Error, + Summary: "paths filter path '/leading' is invalid", + Detail: "paths filter path '/leading' is invalid (paths must match regex '\\A[\\w-]+(/[\\w-]+)*\\z').", + AttributePath: nil, + }, + }, + }, + "invalid empty path": { + input: "", + expected: diag.Diagnostics{ + diag.Diagnostic{ + Severity: diag.Error, + Summary: "paths filter path '' is invalid", + Detail: "paths filter path '' is invalid (paths must match regex '\\A[\\w-]+(/[\\w-]+)*\\z').", + AttributePath: nil, + }, + }, + }, + } + for n, tc := range tcs { + t.Run(n, func(t *testing.T) { + r := require.New(t) + result := validateVaultPathsFilter(tc.input, nil) + r.Equal(tc.expected, result) + }) + } +} + func Test_validateCIDRBlock(t *testing.T) { tcs := map[string]struct { input string diff --git a/templates/guides/vault-performance-replication.md.tmpl b/templates/guides/vault-performance-replication.md.tmpl index e5b19e7d8..798a91d7b 100644 --- a/templates/guides/vault-performance-replication.md.tmpl +++ b/templates/guides/vault-performance-replication.md.tmpl @@ -11,7 +11,7 @@ Admins and 
Contributors can use the provider to create Plus tier clusters with V Although the clusters may reside in the same HVN, it is more likely that you will want to station your performance replication secondary in a different region, and therefore HVN, than your primary. When establishing performance replication links between clusters in different HVNs, an HVN peering connection is required. This can be defined explicitly using an [`hcp_hvn_peering_connection`](../resources/hvn_peering_connection.md), or HCP will create the connection automatically (peering connections can be imported after creation using [terraform import](https://www.terraform.io/cli/import)). Note HVN peering [CIDR block requirements](https://cloud.hashicorp.com/docs/hcp/network/routes#cidr-block-requirements). --> **Note**: At this time, Plus tier clusters cannot be scaled. +-> **Note:** Remember, when scaling performance replicated clusters, be sure to keep the size of all clusters in the group in sync. ### Performance replication example diff --git a/templates/guides/vault-scaling.md.tmpl b/templates/guides/vault-scaling.md.tmpl index 83f69c0ee..33741b541 100644 --- a/templates/guides/vault-scaling.md.tmpl +++ b/templates/guides/vault-scaling.md.tmpl @@ -7,7 +7,11 @@ description: |- # Scale a cluster -Admins are able to use the provider to change a cluster’s size or tier. Scaling down to a Development tier from any production-grade tier is not allowed. In addition, if you are using too much storage and want to scale down to a smaller size or tier, you will be unable to do so until you delete enough resources. +Admins are able to use the provider to change a cluster’s size or tier. 
There are a few limitations on cluster scaling: + +- When scaling performance replicated Plus-tier clusters, be sure to keep the size of all clusters in the group in sync +- Scaling down to the Development tier from any production-grade tier is not allowed +- If you are using too much storage and want to scale down to a smaller size or tier, you will be unable to do so until you delete enough resources ### Scaling example
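---

The `paths_filter` validator added in `internal/provider/validators.go` accepts slash-separated segments of word characters and hyphens, with no leading or trailing slash and no empty segments. A minimal standalone sketch of that check, using the same regex as the patch (the `isValidFilterPath` helper name is illustrative; the provider's validator returns `diag.Diagnostics` rather than a bool):

```go
package main

import (
	"fmt"
	"regexp"
)

// pathRegex mirrors the pattern in validateVaultPathsFilter:
// one or more segments of word characters or hyphens, joined by single
// slashes, anchored to the whole string with \A and \z.
var pathRegex = regexp.MustCompile(`\A[\w-]+(/[\w-]+)*\z`)

// isValidFilterPath reports whether p is acceptable as a paths_filter entry.
func isValidFilterPath(p string) bool {
	return pathRegex.MatchString(p)
}

func main() {
	// The first two inputs pass; the rest are rejected, matching the cases
	// covered in Test_validateVaultPathsFilter.
	for _, p := range []string{
		"valid/path",
		"_valid-path/2/2/2/valid",
		"trailing/",
		"/leading",
		"valid/path:",
		"",
	} {
		fmt.Printf("%q -> %v\n", p, isValidFilterPath(p))
	}
}
```

Because Go's RE2 engine supports the `\A`/`\z` anchors, the regex matches only when the entire string is well-formed, so partial matches like `valid/path:` are rejected outright.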