Commit
feat: Supporting new API in update operation of advanced cluster (#2460)
* wip separating logic for handling update with new API

* move upgrade logic to use new API

* modify tests for verifying disk_size_gb updates

* add update for new schema

* adjustment to is asymmetric in checks

* add third replication spec in new schema

* change docs and fix updating root electable when new API is called

* add test for supporting disk_size_gb change in inner spec with new schema

* support update of disk_size_gb at electable level when using old schema structure

* minor docs update

* adjust value of disk size gb in acceptance test

* add check for change in analytics specs as well

* adjusting hardcoded value in check

* address docs comments
AgustinBettati authored Jul 29, 2024
1 parent 786e6d3 commit 6faf4fa
Showing 6 changed files with 278 additions and 143 deletions.
6 changes: 3 additions & 3 deletions docs/resources/advanced_cluster.md
@@ -565,7 +565,7 @@ If you are upgrading a replica set to a sharded cluster, you cannot increase the
* `STANDARD` volume types can't exceed the default IOPS rate for the selected volume size.
* `PROVISIONED` volume types must fall within the allowable IOPS range for the selected volume size.
* `node_count` - (Optional) Number of nodes of the given type for MongoDB Atlas to deploy to the region.
- * `disk_size_gb` - (Optional) Storage capacity that the host's root volume possesses expressed in gigabytes. If disk size specified is below the minimum (10 GB), this parameter defaults to the minimum disk size value. Storage charge calculations depend on whether you choose the default value or a custom value. The maximum value for disk storage cannot exceed 50 times the maximum RAM for the selected cluster. If you require more storage space, consider upgrading your cluster to a higher tier. **Note:** Using disk_size_gb with Standard IOPS could lead to errors and configuration issues. Therefore, it should be used only with the [Provisioned IOPS volume type](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/advanced_cluster#PROVISIONED). When using Provisioned IOPS, the disk_size_gb parameter specifies the storage capacity, but the IOPS are set independently. Ensuring that disk_size_gb is used exclusively with Provisioned IOPS will help avoid these issues.
+ * `disk_size_gb` - (Optional) Storage capacity that the host's root volume possesses expressed in gigabytes. This value must be equal for all shards and node types. If disk size specified is below the minimum (10 GB), this parameter defaults to the minimum disk size value. Storage charge calculations depend on whether you choose the default value or a custom value. The maximum value for disk storage cannot exceed 50 times the maximum RAM for the selected cluster. If you require more storage space, consider upgrading your cluster to a higher tier. **Note:** Using `disk_size_gb` with Standard IOPS could lead to errors and configuration issues. Therefore, it should be used only with the [Provisioned IOPS volume type](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/advanced_cluster#PROVISIONED). When using Provisioned IOPS, the disk_size_gb parameter specifies the storage capacity, but the IOPS are set independently. Ensuring that `disk_size_gb` is used exclusively with Provisioned IOPS will help avoid these issues.


### analytics_specs
@@ -576,7 +576,7 @@ If you are upgrading a replica set to a sharded cluster, you cannot increase the
* `PROVISIONED` volume types must fall within the allowable IOPS range for the selected volume size.
* `instance_size` - (Optional) Hardware specification for the instance sizes in this region. Each instance size has a default storage and memory capacity. The instance size you select applies to all the data-bearing hosts in your instance size.
* `node_count` - (Optional) Number of nodes of the given type for MongoDB Atlas to deploy to the region.
- * `disk_size_gb` - (Optional) Storage capacity that the host's root volume possesses expressed in gigabytes. If disk size specified is below the minimum (10 GB), this parameter defaults to the minimum disk size value. Storage charge calculations depend on whether you choose the default value or a custom value. The maximum value for disk storage cannot exceed 50 times the maximum RAM for the selected cluster. If you require more storage space, consider upgrading your cluster to a higher tier. **Note:** Using disk_size_gb with Standard IOPS could lead to errors and configuration issues. Therefore, it should be used only with the [Provisioned IOPS volume type](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/advanced_cluster#PROVISIONED). When using Provisioned IOPS, the disk_size_gb parameter specifies the storage capacity, but the IOPS are set independently. Ensuring that disk_size_gb is used exclusively with Provisioned IOPS will help avoid these issues.
+ * `disk_size_gb` - (Optional) Storage capacity that the host's root volume possesses expressed in gigabytes. This value must be equal for all shards and node types. If disk size specified is below the minimum (10 GB), this parameter defaults to the minimum disk size value. Storage charge calculations depend on whether you choose the default value or a custom value. The maximum value for disk storage cannot exceed 50 times the maximum RAM for the selected cluster. If you require more storage space, consider upgrading your cluster to a higher tier. **Note:** Using `disk_size_gb` with Standard IOPS could lead to errors and configuration issues. Therefore, it should be used only with the [Provisioned IOPS volume type](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/advanced_cluster#PROVISIONED). When using Provisioned IOPS, the disk_size_gb parameter specifies the storage capacity, but the IOPS are set independently. Ensuring that `disk_size_gb` is used exclusively with Provisioned IOPS will help avoid these issues.

### read_only_specs

@@ -586,7 +586,7 @@ If you are upgrading a replica set to a sharded cluster, you cannot increase the
* `PROVISIONED` volume types must fall within the allowable IOPS range for the selected volume size.
* `instance_size` - (Optional) Hardware specification for the instance sizes in this region. Each instance size has a default storage and memory capacity. The instance size you select applies to all the data-bearing hosts in your instance size.
* `node_count` - (Optional) Number of nodes of the given type for MongoDB Atlas to deploy to the region.
- * `disk_size_gb` - (Optional) Storage capacity that the host's root volume possesses expressed in gigabytes. If disk size specified is below the minimum (10 GB), this parameter defaults to the minimum disk size value. Storage charge calculations depend on whether you choose the default value or a custom value. The maximum value for disk storage cannot exceed 50 times the maximum RAM for the selected cluster. If you require more storage space, consider upgrading your cluster to a higher tier. **Note:** Using disk_size_gb with Standard IOPS could lead to errors and configuration issues. Therefore, it should be used only with the [Provisioned IOPS volume type](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/advanced_cluster#PROVISIONED). When using Provisioned IOPS, the disk_size_gb parameter specifies the storage capacity, but the IOPS are set independently. Ensuring that disk_size_gb is used exclusively with Provisioned IOPS will help avoid these issues.
+ * `disk_size_gb` - (Optional) Storage capacity that the host's root volume possesses expressed in gigabytes. This value must be equal for all shards and node types. If disk size specified is below the minimum (10 GB), this parameter defaults to the minimum disk size value. Storage charge calculations depend on whether you choose the default value or a custom value. The maximum value for disk storage cannot exceed 50 times the maximum RAM for the selected cluster. If you require more storage space, consider upgrading your cluster to a higher tier. **Note:** Using `disk_size_gb` with Standard IOPS could lead to errors and configuration issues. Therefore, it should be used only with the [Provisioned IOPS volume type](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/advanced_cluster#PROVISIONED). When using Provisioned IOPS, the disk_size_gb parameter specifies the storage capacity, but the IOPS are set independently. Ensuring that `disk_size_gb` is used exclusively with Provisioned IOPS will help avoid these issues.

### auto_scaling

13 changes: 9 additions & 4 deletions internal/service/advancedcluster/model_advanced_cluster.go
@@ -296,7 +296,7 @@ func GetDiskSizeGBFromReplicationSpec(cluster *admin.ClusterDescription20250101)
return configs[0].ElectableSpecs.GetDiskSizeGB()
}

- func UpgradeRefreshFunc(ctx context.Context, name, projectID string, client admin20231115.ClustersApi) retry.StateRefreshFunc {
+ func UpgradeRefreshFunc(ctx context.Context, name, projectID string, client admin.ClustersApi) retry.StateRefreshFunc {
return func() (any, string, error) {
cluster, resp, err := client.GetCluster(ctx, projectID, name).Execute()
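
The hunk above moves `UpgradeRefreshFunc` from the pinned `admin20231115.ClustersApi` to the current `admin.ClustersApi`. Below is a minimal sketch of how such a refresh function is typically wired into the plugin SDK's retry helper, as it might appear inside the provider module; the helper name `waitForClusterIdle`, the pending/target state names, and the timeouts are illustrative assumptions, not part of this commit.

```go
// Sketch only: consuming UpgradeRefreshFunc through retry.StateChangeConf.
package example

import (
	"context"
	"time"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry"
	"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/advancedcluster"
	"go.mongodb.org/atlas-sdk/v20240530002/admin"
)

// waitForClusterIdle blocks until the refresh function reports the assumed
// terminal state or the timeout elapses.
func waitForClusterIdle(ctx context.Context, api admin.ClustersApi, projectID, name string) error {
	stateConf := &retry.StateChangeConf{
		Pending:    []string{"CREATING", "UPDATING", "REPAIRING"}, // assumed transient states
		Target:     []string{"IDLE"},                              // assumed terminal state
		Refresh:    advancedcluster.UpgradeRefreshFunc(ctx, name, projectID, api),
		Timeout:    3 * time.Hour,
		MinTimeout: 30 * time.Second,
		Delay:      1 * time.Minute,
	}
	_, err := stateConf.WaitForStateContext(ctx)
	return err
}
```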

@@ -888,7 +888,9 @@ func expandAdvancedReplicationSpec(tfMap map[string]any, rootDiskSizeGB *float64
ZoneName: conversion.StringPtr(tfMap["zone_name"].(string)),
RegionConfigs: expandRegionConfigs(tfMap["region_configs"].([]any), rootDiskSizeGB),
}
- // TODO: CLOUDP-259836 here we will populate id value using external_id value from the state (relevant for update request)
+ if tfMap["external_id"].(string) != "" {
+ apiObject.Id = conversion.StringPtr(tfMap["external_id"].(string))
+ }
return apiObject
}
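
This resolves the earlier TODO: the `external_id` kept in Terraform state is now copied into the spec's `Id`, presumably so the update request sent to the new API addresses the existing replication spec rather than describing a new one. A minimal sketch of that lookup in isolation, with a hypothetical helper name; only the `external_id` map key and `conversion.StringPtr` are taken from the diff.

```go
// Sketch only (hypothetical helper, not code from this commit): resolve the
// id a replication spec should carry on update. An empty external_id leaves
// the id unset, so the spec is presumably treated as new by the API.
func replicationSpecID(tfMap map[string]any) *string {
	if id, _ := tfMap["external_id"].(string); id != "" {
		return conversion.StringPtr(id)
	}
	return nil
}
```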

@@ -969,12 +971,15 @@ func expandRegionConfigSpec(tfList []any, providerName string, rootDiskSizeGB *f
apiObject.NodeCount = conversion.Pointer(v.(int))
}

- apiObject.DiskSizeGB = rootDiskSizeGB
// disk size gb defined in inner level will take precedence over root level.
if v, ok := tfMap["disk_size_gb"]; ok && v.(float64) != 0 {
apiObject.DiskSizeGB = conversion.Pointer(v.(float64))
}

+ // value defined in root is set if it is defined in the create, or value has changed in the update.
+ if rootDiskSizeGB != nil {
+ apiObject.DiskSizeGB = rootDiskSizeGB
+ }

return apiObject
}
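
Read together, the two blocks above decide which `disk_size_gb` ends up on the request for a region config spec: the inner-level value is applied first, and the root-level value, which is only passed in as non-nil when it was set on create or changed on update, overrides it. A condensed sketch of that precedence with a hypothetical helper name and illustrative values:

```go
// Sketch only (hypothetical helper): mirrors the ordering of the two
// assignments above for a single spec's disk size.
func effectiveDiskSizeGB(innerDiskSizeGB float64, rootDiskSizeGB *float64) *float64 {
	var size *float64
	if innerDiskSizeGB != 0 {
		size = &innerDiskSizeGB // inner-level value wins by default
	}
	if rootDiskSizeGB != nil {
		size = rootDiskSizeGB // root value set on create or changed on update overrides
	}
	return size // nil: no explicit disk size is set for this spec
}

// effectiveDiskSizeGB(120, nil)                       -> 120 (inner value kept)
// effectiveDiskSizeGB(120, conversion.Pointer(150.0)) -> 150 (root change wins)
// effectiveDiskSizeGB(0, nil)                         -> nil (nothing set)
```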

11 changes: 5 additions & 6 deletions internal/service/advancedcluster/model_advanced_cluster_test.go
@@ -8,7 +8,6 @@ import (
"testing"

admin20231115 "go.mongodb.org/atlas-sdk/v20231115014/admin"
- mockadmin20231115 "go.mongodb.org/atlas-sdk/v20231115014/mockadmin"

"go.mongodb.org/atlas-sdk/v20240530002/admin"
"go.mongodb.org/atlas-sdk/v20240530002/mockadmin"
@@ -199,7 +198,7 @@ type Result struct {

func TestUpgradeRefreshFunc(t *testing.T) {
testCases := []struct {
- mockCluster *admin20231115.AdvancedClusterDescription
+ mockCluster *admin.ClusterDescription20250101
mockResponse *http.Response
expectedResult Result
mockError error
@@ -261,11 +260,11 @@ func TestUpgradeRefreshFunc(t *testing.T) {
},
{
name: "Successful",
- mockCluster: &admin20231115.AdvancedClusterDescription{StateName: conversion.StringPtr("stateName")},
+ mockCluster: &admin.ClusterDescription20250101{StateName: conversion.StringPtr("stateName")},
mockResponse: &http.Response{StatusCode: 200},
expectedError: false,
expectedResult: Result{
- response: &admin20231115.AdvancedClusterDescription{StateName: conversion.StringPtr("stateName")},
+ response: &admin.ClusterDescription20250101{StateName: conversion.StringPtr("stateName")},
state: "stateName",
error: nil,
},
@@ -274,9 +273,9 @@

for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
- testObject := mockadmin20231115.NewClustersApi(t)
+ testObject := mockadmin.NewClustersApi(t)

- testObject.EXPECT().GetCluster(mock.Anything, mock.Anything, mock.Anything).Return(admin20231115.GetClusterApiRequest{ApiService: testObject}).Once()
+ testObject.EXPECT().GetCluster(mock.Anything, mock.Anything, mock.Anything).Return(admin.GetClusterApiRequest{ApiService: testObject}).Once()
testObject.EXPECT().GetClusterExecute(mock.Anything).Return(tc.mockCluster, tc.mockResponse, tc.mockError).Once()

result, stateName, err := advancedcluster.UpgradeRefreshFunc(context.Background(), dummyClusterName, dummyProjectID, testObject)()
