Vault: enable paths_filter and scaling for Plus-tier (#281)
* Vault: enable paths_filter and scaling for Plus-tier

* Update docs with 'go generate'

* Expand the comments describing Vault Plus-tier scaling
Kevin Kredit authored Mar 31, 2022
1 parent 0948f86 commit 4eb8575
Showing 15 changed files with 405 additions and 48 deletions.
1 change: 1 addition & 0 deletions docs/data-sources/vault_cluster.md
@@ -38,6 +38,7 @@ data "hcp_vault_cluster" "example" {
- **min_vault_version** (String) The minimum Vault version to use when creating the cluster. If not specified, it is defaulted to the version that is currently recommended by HCP.
- **namespace** (String) The name of the customer namespace this HCP Vault cluster is located in.
- **organization_id** (String) The ID of the organization this HCP Vault cluster is located in.
- **paths_filter** (List of String) The performance replication [paths filter](https://learn.hashicorp.com/tutorials/vault/paths-filter). Applies to performance replication secondaries only and operates in "deny" mode only.
- **primary_link** (String) The `self_link` of the HCP Vault Plus tier cluster which is the primary in the performance replication setup with this HCP Vault Plus tier cluster. If not specified, it is a standalone Plus tier HCP Vault cluster.
- **project_id** (String) The ID of the project this HCP Vault cluster is located in.
- **public_endpoint** (Boolean) Denotes that the cluster has a public endpoint. Defaults to false.
3 changes: 2 additions & 1 deletion docs/guides/vault-performance-replication.md
@@ -11,7 +11,7 @@ Admins and Contributors can use the provider to create Plus tier clusters with V

Although the clusters may reside in the same HVN, it is more likely that you will want to station your performance replication secondary in a different region, and therefore HVN, than your primary. When establishing performance replication links between clusters in different HVNs, an HVN peering connection is required. This can be defined explicitly using an [`hcp_hvn_peering_connection`](../resources/hvn_peering_connection.md), or HCP will create the connection automatically (peering connections can be imported after creation using [terraform import](https://www.terraform.io/cli/import)). Note HVN peering [CIDR block requirements](https://cloud.hashicorp.com/docs/hcp/network/routes#cidr-block-requirements).

- -> **Note**: At this time, Plus tier clusters cannot be scaled.
+ -> **Note:** Remember, when scaling performance replicated clusters, be sure to keep the size of all clusters in the group in sync.

### Performance replication example

@@ -42,5 +42,6 @@ resource "hcp_vault_cluster" "secondary" {
hvn_id = hcp_hvn.secondary_network.hvn_id
tier = "plus_medium"
primary_link = hcp_vault_cluster.primary.self_link
paths_filter = ["path/a", "path/b"]
}
```
6 changes: 5 additions & 1 deletion docs/guides/vault-scaling.md
@@ -7,7 +7,11 @@ description: |-

# Scale a cluster

- Admins are able to use the provider to change a cluster’s size or tier. Scaling down to a Development tier from any production-grade tier is not allowed. In addition, if you are using too much storage and want to scale down to a smaller size or tier, you will be unable to do so until you delete enough resources.
+ Admins are able to use the provider to change a cluster’s size or tier. There are a few limitations on cluster scaling:
+
+ - When scaling performance replicated Plus-tier clusters, be sure to keep the size of all clusters in the group in sync
+ - Scaling down to the Development tier from any production-grade tier is not allowed
+ - If you are using too much storage and want to scale down to a smaller size or tier, you will be unable to do so until you delete enough resources
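
Within those limitations, a scale operation is just a change to the `tier` argument followed by `terraform apply`. A minimal sketch (cluster and HVN names are hypothetical, not from this commit):

```terraform
resource "hcp_vault_cluster" "example" {
  cluster_id = "vault-cluster"
  hvn_id     = hcp_hvn.example.hvn_id
  # Was "plus_small"; changing the tier and applying scales the
  # cluster in place.
  tier       = "plus_medium"
}
```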

### Scaling example

1 change: 1 addition & 0 deletions docs/resources/vault_cluster.md
@@ -43,6 +43,7 @@ resource "hcp_vault_cluster" "example" {

- **id** (String) The ID of this resource.
- **min_vault_version** (String) The minimum Vault version to use when creating the cluster. If not specified, it is defaulted to the version that is currently recommended by HCP.
- **paths_filter** (List of String) The performance replication [paths filter](https://learn.hashicorp.com/tutorials/vault/paths-filter). Applies to performance replication secondaries only and operates in "deny" mode only.
- **primary_link** (String) The `self_link` of the HCP Vault Plus tier cluster which is the primary in the performance replication setup with this HCP Vault Plus tier cluster. If not specified, it is a standalone Plus tier HCP Vault cluster.
- **public_endpoint** (Boolean) Denotes that the cluster has a public endpoint. Defaults to false.
- **tier** (String) Tier of the HCP Vault cluster. Valid options for tiers - `dev`, `starter_small`, `standard_small`, `standard_medium`, `standard_large`, `plus_small`, `plus_medium`, `plus_large`. See [pricing information](https://cloud.hashicorp.com/pricing/vault).
1 change: 1 addition & 0 deletions examples/guides/vault_perf_replication/replication.tf
@@ -23,4 +23,5 @@ resource "hcp_vault_cluster" "secondary" {
hvn_id = hcp_hvn.secondary_network.hvn_id
tier = "plus_medium"
primary_link = hcp_vault_cluster.primary.self_link
paths_filter = ["path/a", "path/b"]
}
2 changes: 1 addition & 1 deletion go.mod
@@ -12,7 +12,7 @@ require (
github.com/hashicorp/go-cty v1.4.1-0.20200414143053-d3edf31b6320
github.com/hashicorp/go-version v1.4.0
github.com/hashicorp/hcl/v2 v2.8.2 // indirect
- github.com/hashicorp/hcp-sdk-go v0.16.0
+ github.com/hashicorp/hcp-sdk-go v0.18.0
github.com/hashicorp/terraform-plugin-docs v0.5.1
github.com/hashicorp/terraform-plugin-sdk/v2 v2.10.1
github.com/posener/complete v1.2.1 // indirect
4 changes: 2 additions & 2 deletions go.sum
@@ -379,8 +379,8 @@ github.com/hashicorp/hc-install v0.3.1/go.mod h1:3LCdWcCDS1gaHC9mhHCGbkYfoY6vdsK
github.com/hashicorp/hcl/v2 v2.3.0/go.mod h1:d+FwDBbOLvpAM3Z6J7gPj/VoAGkNe/gm352ZhjJ/Zv8=
github.com/hashicorp/hcl/v2 v2.8.2 h1:wmFle3D1vu0okesm8BTLVDyJ6/OL9DCLUwn0b2OptiY=
github.com/hashicorp/hcl/v2 v2.8.2/go.mod h1:bQTN5mpo+jewjJgh8jr0JUguIi7qPHUF6yIfAEN3jqY=
- github.com/hashicorp/hcp-sdk-go v0.16.0 h1:/UfRdiI1Z2AJGBi24aFO8MeNTWBa08EHyAvH1C9BWw8=
- github.com/hashicorp/hcp-sdk-go v0.16.0/go.mod h1:z0I0eZ+TVJJ7pycnCzMM/ouOw5D5Qnp/zylNXkqGEX0=
+ github.com/hashicorp/hcp-sdk-go v0.18.0 h1:SnYFPebdfbc/sjit71Zx5Ji9fuQFgjvpIdrlgjzlriE=
+ github.com/hashicorp/hcp-sdk-go v0.18.0/go.mod h1:z0I0eZ+TVJJ7pycnCzMM/ouOw5D5Qnp/zylNXkqGEX0=
github.com/hashicorp/logutils v1.0.0 h1:dLEQVugN8vlakKOUE3ihGLTZJRB4j+M2cdTm/ORI65Y=
github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO+LraFDTW64=
github.com/hashicorp/terraform-exec v0.15.0 h1:cqjh4d8HYNQrDoEmlSGelHmg2DYDh5yayckvJ5bV18E=
46 changes: 46 additions & 0 deletions internal/clients/vault_cluster.go
@@ -138,3 +138,49 @@ func UpdateVaultClusterTier(ctx context.Context, client *Client, loc *sharedmode

return updateResp.Payload, nil
}

// UpdateVaultPathsFilter will make a call to the Vault service to update the paths filter for a secondary cluster
func UpdateVaultPathsFilter(ctx context.Context, client *Client, loc *sharedmodels.HashicorpCloudLocationLocation,
clusterID string, params vaultmodels.HashicorpCloudVault20201125ClusterPerformanceReplicationPathsFilter) (*vaultmodels.HashicorpCloudVault20201125UpdatePathsFilterResponse, error) {

updateParams := vault_service.NewUpdatePathsFilterParams()
updateParams.Context = ctx
updateParams.ClusterID = clusterID
updateParams.LocationProjectID = loc.ProjectID
updateParams.LocationOrganizationID = loc.OrganizationID
updateParams.Body = &vaultmodels.HashicorpCloudVault20201125UpdatePathsFilterRequest{
// ClusterID and Location are repeated because the values above are required to populate the URL,
// and the values below are required in the API request body
ClusterID: clusterID,
Location: loc,
Mode: params.Mode,
Paths: params.Paths,
}

updateResp, err := client.Vault.UpdatePathsFilter(updateParams, nil)
if err != nil {
return nil, err
}

return updateResp.Payload, nil
}

// DeleteVaultPathsFilter will make a call to the Vault service to delete the paths filter for a secondary cluster
func DeleteVaultPathsFilter(ctx context.Context, client *Client, loc *sharedmodels.HashicorpCloudLocationLocation,
clusterID string) (*vaultmodels.HashicorpCloudVault20201125DeletePathsFilterResponse, error) {

deleteParams := vault_service.NewDeletePathsFilterParams()
deleteParams.Context = ctx
deleteParams.ClusterID = clusterID
deleteParams.LocationProjectID = loc.ProjectID
deleteParams.LocationOrganizationID = loc.OrganizationID
deleteParams.LocationRegionProvider = &loc.Region.Provider
deleteParams.LocationRegionRegion = &loc.Region.Region

deleteResp, err := client.Vault.DeletePathsFilter(deleteParams, nil)
if err != nil {
return nil, err
}

return deleteResp.Payload, nil
}
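
On the Terraform side, these client helpers presumably back the new `paths_filter` attribute of `hcp_vault_cluster`: editing the list would drive an update call, and removing the attribute a delete call. A hedged sketch of such a configuration change (cluster names and paths are illustrative, not from this commit):

```terraform
resource "hcp_vault_cluster" "secondary" {
  cluster_id   = "vault-secondary"
  hvn_id       = hcp_hvn.secondary_network.hvn_id
  tier         = "plus_medium"
  primary_link = hcp_vault_cluster.primary.self_link

  # Changing this list on apply would map to UpdateVaultPathsFilter;
  # removing the attribute entirely would map to DeleteVaultPathsFilter.
  paths_filter = ["path/a", "path/c"]
}
```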
8 changes: 8 additions & 0 deletions internal/provider/data_source_vault_cluster.go
@@ -102,6 +102,14 @@ func dataSourceVaultCluster() *schema.Resource {
Type: schema.TypeString,
Computed: true,
},
"paths_filter": {
Description: "The performance replication [paths filter](https://learn.hashicorp.com/tutorials/vault/paths-filter). Applies to performance replication secondaries only and operates in \"deny\" mode only.",
Type: schema.TypeList,
Elem: &schema.Schema{
Type: schema.TypeString,
},
Computed: true,
},
},
}
}
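
With the schema addition above, the filter also becomes readable from the data source. A minimal sketch (the cluster ID is hypothetical):

```terraform
data "hcp_vault_cluster" "secondary" {
  cluster_id = "vault-secondary"
}

output "replication_paths_filter" {
  # Empty when the cluster is not a performance replication
  # secondary or no filter is configured.
  value = data.hcp_vault_cluster.secondary.paths_filter
}
```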
