add resource and data source implementations
AgustinBettati committed Nov 12, 2024
1 parent a5f16cc commit 58f186e
Showing 12 changed files with 241 additions and 29 deletions.
11 changes: 11 additions & 0 deletions .changelog/2790.txt
@@ -0,0 +1,11 @@
```release-note:enhancement
resource/mongodbatlas_cluster: Adds `pinned_fcv` attribute
```

```release-note:enhancement
data-source/mongodbatlas_cluster: Adds `pinned_fcv` attribute
```

```release-note:enhancement
data-source/mongodbatlas_clusters: Adds `pinned_fcv` attribute
```
6 changes: 6 additions & 0 deletions docs/data-sources/cluster.md
@@ -104,6 +104,7 @@ In addition to all arguments above, the following attributes are exported:
* `tags` - Set that contains key-value pairs between 1 and 255 characters in length for tagging and categorizing the cluster. See [below](#tags).
* `labels` - Set that contains key-value pairs between 1 and 255 characters in length for tagging and categorizing the cluster. See [below](#labels). **DEPRECATED** Use `tags` instead.
* `mongo_db_major_version` - Indicates the version of the cluster to deploy.
* `pinned_fcv` - The pinned Feature Compatibility Version (FCV) with its associated expiration date. See [below](#pinned-fcv).
* `num_shards` - Indicates whether the cluster is a replica set or a sharded cluster.
* `cloud_backup` - Flag indicating if the cluster uses Cloud Backup Snapshots for backups.
* `termination_protection_enabled` - Flag that indicates whether termination protection is enabled on the cluster. If set to true, MongoDB Cloud won't delete the cluster. If set to false, MongoDB Cloud will delete the cluster.
@@ -233,4 +234,9 @@ Contains a key-value pair that tags that the cluster was created by a Terraform
* `transaction_lifetime_limit_seconds` - Lifetime, in seconds, of multi-document transactions. Defaults to 60 seconds.
* `change_stream_options_pre_and_post_images_expire_after_seconds` - (Optional) The minimum pre- and post-image retention time in seconds. This parameter is only supported for MongoDB version 6.0 and above. Defaults to `-1` (off).

### Pinned FCV

* `expiration_date` - Expiration date of the pinned FCV, in ISO 8601 timestamp format (e.g. "2024-12-04T16:25:00Z").
* `version` - Feature compatibility version of the cluster.

See detailed information for arguments and attributes: [MongoDB API Clusters](https://docs.atlas.mongodb.com/reference/api/clusters-create-one/)
5 changes: 5 additions & 0 deletions docs/data-sources/clusters.md
@@ -94,6 +94,7 @@ In addition to all arguments above, the following attributes are exported:
* `tags` - Set that contains key-value pairs between 1 and 255 characters in length for tagging and categorizing the cluster. See [below](#tags).
* `labels` - Set that contains key-value pairs between 1 and 255 characters in length for tagging and categorizing the cluster. See [below](#labels). **DEPRECATED** Use `tags` instead.
* `mongo_db_major_version` - Indicates the version of the cluster to deploy.
* `pinned_fcv` - The pinned Feature Compatibility Version (FCV) with its associated expiration date. See [below](#pinned-fcv).
* `num_shards` - Indicates whether the cluster is a replica set or a sharded cluster.
* `provider_backup_enabled` - Flag indicating if the cluster uses Cloud Backup Snapshots for backups. **DEPRECATED** Use `cloud_backup` instead.
* `cloud_backup` - Flag indicating if the cluster uses Cloud Backup Snapshots for backups.
@@ -220,5 +221,9 @@ Contains a key-value pair that tags that the cluster was created by a Terraform
* `sample_refresh_interval_bi_connector` - Interval in seconds at which the mongosqld process re-samples data to create its relational schema. The default value is 300. The specified value must be a positive integer. Available only for Atlas deployments in which BI Connector for Atlas is enabled.
* `change_stream_options_pre_and_post_images_expire_after_seconds` - (Optional) The minimum pre- and post-image retention time in seconds. This parameter is only supported for MongoDB version 6.0 and above. Defaults to `-1` (off).

### Pinned FCV

* `expiration_date` - Expiration date of the pinned FCV, in ISO 8601 timestamp format (e.g. "2024-12-04T16:25:00Z").
* `version` - Feature compatibility version of the cluster.

See detailed information for arguments and attributes: [MongoDB API Clusters](https://docs.atlas.mongodb.com/reference/api/clusters-create-one/)
6 changes: 6 additions & 0 deletions docs/resources/cluster.md
@@ -340,6 +340,7 @@ But in order to explicitly change `provider_instance_size_name` comment the `lif
* `tags` - (Optional) Set that contains key-value pairs between 1 and 255 characters in length for tagging and categorizing the cluster. See [below](#tags).
* `labels` - (Optional) Set that contains key-value pairs between 1 and 255 characters in length for tagging and categorizing the cluster. See [below](#labels). **DEPRECATED** Use `tags` instead.
* `mongo_db_major_version` - (Optional) Version of the cluster to deploy. Atlas supports all the MongoDB versions that have **not** reached [End of Life](https://www.mongodb.com/legal/support-policy/lifecycles) for M10+ clusters. If omitted, Atlas deploys the cluster with the default version. For more details, see [documentation](https://www.mongodb.com/docs/atlas/reference/faq/database/#which-versions-of-mongodb-do-service-clusters-use-). Atlas always deploys the cluster with the latest stable release of the specified version. See [Release Notes](https://www.mongodb.com/docs/upcoming/release-notes/) for the latest Current Stable Release.
* `pinned_fcv` - (Optional) Pins the Feature Compatibility Version (FCV) to the current MongoDB version, with a provided expiration date. To unpin the FCV, remove the `pinned_fcv` attribute; once the FCV pin has expired, the attribute must also be removed. See [below](#pinned-fcv).
* `num_shards` - (Optional) Selects whether the cluster is a replica set or a sharded cluster. If you use the replicationSpecs parameter, you must set num_shards.
* `pit_enabled` - (Optional) Flag that indicates if the cluster uses Continuous Cloud Backup. If set to true, cloud_backup must also be set to true.
* `cloud_backup` - (Optional) Flag indicating if the cluster uses Cloud Backup for backups.
@@ -539,6 +540,11 @@ To learn more, see [Resource Tags](https://dochub.mongodb.org/core/add-cluster-t

-> **NOTE:** MongoDB Atlas doesn't display your labels.

### Pinned FCV

* `expiration_date` - (Required) Expiration date of the pinned FCV, in ISO 8601 timestamp format (e.g. "2024-12-04T16:25:00Z"). The expiration date cannot be more than four weeks after the date the FCV was pinned.
* `version` - Feature compatibility version of the cluster.

## Attributes Reference

In addition to all arguments above, the following attributes are exported:
@@ -391,7 +391,7 @@ func flattenAdvancedClusters(ctx context.Context, connV220240530 *admin20240530.
"redact_client_log_data": cluster.GetRedactClientLogData(),
"config_server_management_mode": cluster.GetConfigServerManagementMode(),
"config_server_type": cluster.GetConfigServerType(),
"pinned_fcv": flattenPinnedFCV(cluster),
"pinned_fcv": FlattenPinnedFCV(cluster),
}
results = append(results, result)
}
@@ -451,7 +451,7 @@ func flattenAdvancedClustersOldSDK(ctx context.Context, connV20240530 *admin2024
"redact_client_log_data": clusterDescNew.GetRedactClientLogData(),
"config_server_management_mode": clusterDescNew.GetConfigServerManagementMode(),
"config_server_type": clusterDescNew.GetConfigServerType(),
"pinned_fcv": flattenPinnedFCV(clusterDescNew),
"pinned_fcv": FlattenPinnedFCV(clusterDescNew),
}
results = append(results, result)
}
2 changes: 1 addition & 1 deletion internal/service/advancedcluster/model_advanced_cluster.go
@@ -426,7 +426,7 @@ func CheckRegionConfigsPriorityOrderOld(regionConfigs []admin20240530.Replicatio
return nil
}

func flattenPinnedFCV(cluster *admin.ClusterDescription20240805) []map[string]string {
func FlattenPinnedFCV(cluster *admin.ClusterDescription20240805) []map[string]string {
if cluster.FeatureCompatibilityVersion == nil { // pinned_fcv is defined in state only if featureCompatibilityVersion is present in cluster response
return nil
}
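The hunk above shows only the export rename and the nil guard of `FlattenPinnedFCV`. For reference, a minimal self-contained sketch of a flattener consistent with the `pinned_fcv` schema added later in this commit (a list of `version` and `expiration_date` strings) could look like the following; the `FeatureCompatibilityVersionExpirationDate` field name and the RFC 3339 formatting are assumptions about the generated SDK model, while `GetFeatureCompatibilityVersion` is confirmed by `resource_advanced_cluster.go` below.

```go
// Sketch only; not the commit's exact body.
package advancedcluster

import (
	"time"

	"go.mongodb.org/atlas-sdk/v20241023002/admin"
)

// FlattenPinnedFCV maps the cluster's FCV pin into the []map[string]string
// shape used by the pinned_fcv TypeList schema (version + expiration_date).
func FlattenPinnedFCV(cluster *admin.ClusterDescription20240805) []map[string]string {
	// pinned_fcv is defined in state only if featureCompatibilityVersion is present in the cluster response.
	if cluster.FeatureCompatibilityVersion == nil {
		return nil
	}
	expirationDate := ""
	// FeatureCompatibilityVersionExpirationDate is an assumed *time.Time field on the SDK model.
	if cluster.FeatureCompatibilityVersionExpirationDate != nil {
		expirationDate = cluster.FeatureCompatibilityVersionExpirationDate.UTC().Format(time.RFC3339)
	}
	return []map[string]string{{
		"version":         cluster.GetFeatureCompatibilityVersion(),
		"expiration_date": expirationDate,
	}}
}
```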
16 changes: 8 additions & 8 deletions internal/service/advancedcluster/resource_advanced_cluster.go
@@ -539,7 +539,7 @@ func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.
}

if pinnedFCVBlock, ok := d.Get("pinned_fcv").([]any); ok && len(pinnedFCVBlock) > 0 {
if diags := pinFCV(ctx, connV2, projectID, cluster.GetName(), pinnedFCVBlock[0]); diags.HasError() {
if diags := PinFCV(ctx, connV2, projectID, cluster.GetName(), pinnedFCVBlock[0]); diags.HasError() {
return diags
}
waitForChanges = true
@@ -642,7 +642,7 @@ func resourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Di
clusterResp = cluster
}

warning := warningIfFCVExpiredOrUnpinnedExternally(d, clusterResp) // has to be called before pinned_fcv value is updated in ResourceData to know prior state value
warning := WarningIfFCVExpiredOrUnpinnedExternally(d, clusterResp) // has to be called before pinned_fcv value is updated in ResourceData to know prior state value
diags := setRootFields(d, clusterResp, true)
if diags.HasError() {
return diags
@@ -800,14 +800,14 @@ func setRootFields(d *schema.ResourceData, cluster *admin.ClusterDescription2024
return diag.FromErr(fmt.Errorf(ErrorClusterAdvancedSetting, "config_server_management_mode", clusterName, err))
}

if err := d.Set("pinned_fcv", flattenPinnedFCV(cluster)); err != nil {
if err := d.Set("pinned_fcv", FlattenPinnedFCV(cluster)); err != nil {
return diag.FromErr(fmt.Errorf(ErrorClusterAdvancedSetting, "pinned_fcv", clusterName, err))
}

return nil
}

func warningIfFCVExpiredOrUnpinnedExternally(d *schema.ResourceData, cluster *admin.ClusterDescription20240805) diag.Diagnostics {
func WarningIfFCVExpiredOrUnpinnedExternally(d *schema.ResourceData, cluster *admin.ClusterDescription20240805) diag.Diagnostics {
pinnedFCVBlock, ok := d.Get("pinned_fcv").([]any)
presentInState := ok && len(pinnedFCVBlock) > 0
presentInAPIResp := cluster.GetFeatureCompatibilityVersion() != ""
@@ -882,7 +882,7 @@ func resourceUpdate(ctx context.Context, d *schema.ResourceData, meta any) diag.
timeout := d.Timeout(schema.TimeoutUpdate)

// FCV update is intentionally handled before other cluster updates, and will wait for cluster to reach IDLE state before continuing
if diags := handlePinnedFCVUpdate(ctx, connV2, projectID, clusterName, d, timeout); diags != nil {
if diags := HandlePinnedFCVUpdate(ctx, connV2, projectID, clusterName, d, timeout); diags != nil {
return diags
}

@@ -984,10 +984,10 @@ func resourceUpdate(ctx context.Context, d *schema.ResourceData, meta any) diag.
return resourceRead(ctx, d, meta)
}

func handlePinnedFCVUpdate(ctx context.Context, connV2 *admin.APIClient, projectID, clusterName string, d *schema.ResourceData, timeout time.Duration) diag.Diagnostics {
func HandlePinnedFCVUpdate(ctx context.Context, connV2 *admin.APIClient, projectID, clusterName string, d *schema.ResourceData, timeout time.Duration) diag.Diagnostics {
if d.HasChange("pinned_fcv") {
if pinnedFCVBlock, ok := d.Get("pinned_fcv").([]any); ok && len(pinnedFCVBlock) > 0 {
if diags := pinFCV(ctx, connV2, projectID, clusterName, pinnedFCVBlock[0]); diags.HasError() {
if diags := PinFCV(ctx, connV2, projectID, clusterName, pinnedFCVBlock[0]); diags.HasError() {
return diags
}
} else {
@@ -1004,7 +1004,7 @@ func handlePinnedFCVUpdate(ctx context.Context, connV2 *admin.APIClient, project
return nil
}

func pinFCV(ctx context.Context, connV2 *admin.APIClient, projectID, clusterName string, fcvBlock any) diag.Diagnostics {
func PinFCV(ctx context.Context, connV2 *admin.APIClient, projectID, clusterName string, fcvBlock any) diag.Diagnostics {
req := admin.PinFCV{}
if nestedObj, ok := fcvBlock.(map[string]any); ok {
expDateStrPtr := conversion.StringPtr(cast.ToString(nestedObj["expiration_date"]))
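The hunk cuts off right after the request struct is created. A hedged sketch of how `PinFCV` might continue, reusing imports already present in this file (`time`, `conversion`, `cast`, `diag`): the `ExpirationDate` field on `admin.PinFCV` and the `PinFeatureCompatibilityVersion` call on `ClustersApi` are assumptions about the SDK, not something shown in the diff.

```go
// Sketch of the remainder of PinFCV under the assumptions named above.
func PinFCV(ctx context.Context, connV2 *admin.APIClient, projectID, clusterName string, fcvBlock any) diag.Diagnostics {
	req := admin.PinFCV{}
	if nestedObj, ok := fcvBlock.(map[string]any); ok {
		expDateStrPtr := conversion.StringPtr(cast.ToString(nestedObj["expiration_date"]))
		if expDateStrPtr != nil {
			// expiration_date is documented as an ISO 8601 timestamp, so RFC 3339 parsing is assumed here.
			expDate, err := time.Parse(time.RFC3339, *expDateStrPtr)
			if err != nil {
				return diag.FromErr(err)
			}
			req.ExpirationDate = &expDate // assumed *time.Time field on admin.PinFCV
		}
	}
	// Assumed SDK wrapper for POST .../clusters/{clusterName}:pinFeatureCompatibilityVersion.
	if _, _, err := connV2.ClustersApi.PinFeatureCompatibilityVersion(ctx, projectID, clusterName, &req).Execute(); err != nil {
		return diag.FromErr(err)
	}
	return nil
}
```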
24 changes: 22 additions & 2 deletions internal/service/cluster/data_source_cluster.go
@@ -319,6 +319,22 @@ func DataSource() *schema.Resource {
Type: schema.TypeBool,
Computed: true,
},
"pinned_fcv": {
Type: schema.TypeList,
Computed: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"version": {
Type: schema.TypeString,
Computed: true,
},
"expiration_date": {
Type: schema.TypeString,
Computed: true,
},
},
},
},
},
}
}
@@ -491,14 +507,18 @@ func dataSourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.
return diag.FromErr(err)
}

redactClientLogData, err := newAtlasGet(ctx, connV2, projectID, clusterName)
latestClusterModel, err := newAtlasGet(ctx, connV2, projectID, clusterName)
if err != nil {
return diag.FromErr(fmt.Errorf(errorClusterRead, clusterName, err))
}
if err := d.Set("redact_client_log_data", redactClientLogData); err != nil {
if err := d.Set("redact_client_log_data", latestClusterModel.GetRedactClientLogData()); err != nil {
return diag.FromErr(fmt.Errorf(advancedcluster.ErrorClusterSetting, "redact_client_log_data", clusterName, err))
}

if err := d.Set("pinned_fcv", advancedcluster.FlattenPinnedFCV(latestClusterModel)); err != nil {
return diag.FromErr(fmt.Errorf(advancedcluster.ErrorClusterSetting, "pinned_fcv", clusterName, err))
}

d.SetId(cluster.ID)

return nil
26 changes: 22 additions & 4 deletions internal/service/cluster/data_source_clusters.go
@@ -11,6 +11,7 @@ import (
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/advancedcluster"
"go.mongodb.org/atlas-sdk/v20241023002/admin"
matlas "go.mongodb.org/atlas/mongodbatlas"
)

@@ -322,6 +323,22 @@ func PluralDataSource() *schema.Resource {
Type: schema.TypeBool,
Computed: true,
},
"pinned_fcv": {
Type: schema.TypeList,
Computed: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"version": {
Type: schema.TypeString,
Computed: true,
},
"expiration_date": {
Type: schema.TypeString,
Computed: true,
},
},
},
},
},
},
},
@@ -343,22 +360,22 @@ func dataSourcePluralRead(ctx context.Context, d *schema.ResourceData, meta any)
return diag.FromErr(fmt.Errorf("error reading cluster list for project(%s): %s", projectID, err))
}

redactClientLogDataMap, err := newAtlasList(ctx, connV2, projectID)
latestClusterModels, err := newAtlasList(ctx, connV2, projectID)
if err != nil {
if resp != nil && resp.StatusCode == http.StatusNotFound {
return nil
}
return diag.FromErr(fmt.Errorf("error reading new cluster list for project(%s): %s", projectID, err))
}

if err := d.Set("results", flattenClusters(ctx, d, conn, clusters, redactClientLogDataMap)); err != nil {
if err := d.Set("results", flattenClusters(ctx, d, conn, clusters, latestClusterModels)); err != nil {
return diag.FromErr(fmt.Errorf(advancedcluster.ErrorClusterSetting, "results", d.Id(), err))
}

return nil
}

func flattenClusters(ctx context.Context, d *schema.ResourceData, conn *matlas.Client, clusters []matlas.Cluster, redactClientLogDataMap map[string]bool) []map[string]any {
func flattenClusters(ctx context.Context, d *schema.ResourceData, conn *matlas.Client, clusters []matlas.Cluster, latestClusterModels map[string]*admin.ClusterDescription20240805) []map[string]any {
results := make([]map[string]any, 0)

for i := range clusters {
@@ -420,7 +437,8 @@ func flattenClusters(ctx context.Context, d *schema.ResourceData, conn *matlas.C
"termination_protection_enabled": clusters[i].TerminationProtectionEnabled,
"version_release_system": clusters[i].VersionReleaseSystem,
"container_id": containerID,
"redact_client_log_data": redactClientLogDataMap[clusters[i].Name],
"redact_client_log_data": latestClusterModels[clusters[i].Name].GetRedactClientLogData(),
"pinned_fcv": advancedcluster.FlattenPinnedFCV(latestClusterModels[clusters[i].Name]),
}
results = append(results, result)
}
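One thing to note about the wiring above: `latestClusterModels[clusters[i].Name]` is dereferenced through generated getters without checking that the legacy list and the newer SDK list returned the same set of clusters. Whether those getters tolerate a nil receiver is an SDK detail not shown here, so a defensive variant for populating these two keys might look like the sketch below (same behaviour when the entry exists, zero values when it does not).

```go
// Defensive sketch, not the commit's code: guard the lookup instead of relying
// on nil-safe getters when a cluster from the legacy API is missing from the
// newer SDK's list.
latest := latestClusterModels[clusters[i].Name]
redactClientLogData := false
var pinnedFCV []map[string]string
if latest != nil {
	redactClientLogData = latest.GetRedactClientLogData()
	pinnedFCV = advancedcluster.FlattenPinnedFCV(latest)
}
result["redact_client_log_data"] = redactClientLogData
result["pinned_fcv"] = pinnedFCV
```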
12 changes: 6 additions & 6 deletions internal/service/cluster/new_atlas.go
@@ -14,7 +14,7 @@ func newAtlasUpdate(ctx context.Context, timeout time.Duration, connV2 *admin.AP
if err != nil {
return err
}
if current == redactClientLogData {
if current.GetRedactClientLogData() == redactClientLogData {
return nil
}
req := &admin20240805.ClusterDescription20240805{
@@ -31,20 +31,20 @@ func newAtlasUpdate(ctx context.Context, timeout time.Duration, connV2 *admin.AP
return nil
}

func newAtlasGet(ctx context.Context, connV2 *admin.APIClient, projectID, clusterName string) (redactClientLogData bool, err error) {
func newAtlasGet(ctx context.Context, connV2 *admin.APIClient, projectID, clusterName string) (*admin.ClusterDescription20240805, error) {
cluster, _, err := connV2.ClustersApi.GetCluster(ctx, projectID, clusterName).Execute()
return cluster.GetRedactClientLogData(), err
return cluster, err
}

func newAtlasList(ctx context.Context, connV2 *admin.APIClient, projectID string) (map[string]bool, error) {
func newAtlasList(ctx context.Context, connV2 *admin.APIClient, projectID string) (map[string]*admin.ClusterDescription20240805, error) {
clusters, _, err := connV2.ClustersApi.ListClusters(ctx, projectID).Execute()
if err != nil {
return nil, err
}
results := clusters.GetResults()
list := make(map[string]bool)
list := make(map[string]*admin.ClusterDescription20240805)
for i := range results {
list[results[i].GetName()] = results[i].GetRedactClientLogData()
list[results[i].GetName()] = &results[i]
}
return list, nil
}