From 25e10d6bfca3b6ffde6a112475cc35bad7634779 Mon Sep 17 00:00:00 2001
From: Agustin Bettati
Date: Mon, 12 Aug 2024 14:23:07 +0200
Subject: [PATCH] chore: Bring latest changes from master into dev branch
 (includes adopting latest stable SDK version) (#2491)

* doc: Updates `mongodbatlas_global_cluster_config` doc about self-managed sharding clusters (#2372)
* update doc
* add link
* test: Unifies Azure and GCP networking tests (#2371)
* unify Azure and GCP tests
* TEMPORARY no update
* Revert "TEMPORARY no update"
  This reverts commit ab60d67dece8f53272b2fad4a68b60b890e7636c.
* run in parallel
* chore: Updates examples link in index.html.markdown for v1.17.3 release
* chore: Updates CHANGELOG.md header for v1.17.3 release
* doc: Updates Terraform Compatibility Matrix documentation (#2370)
  Co-authored-by: maastha <122359335+maastha@users.noreply.github.com>
* use ComposeAggregateTestCheckFunc (#2375)
* chore: Updates asdf to TF 1.9.0 and compatibility matrix body (#2376)
* update asdf to TF 1.9.0
* update compatibility message
* Update .github/workflows/update_tf_compatibility_matrix.yml
  Co-authored-by: maastha <122359335+maastha@users.noreply.github.com>
* Fix actionlint
---------
Co-authored-by: maastha <122359335+maastha@users.noreply.github.com>
* fix: stale.yaml gh action (#2379)
* doc: Updates alert-config examples (#2378)
* doc: Update alert-config examples
* doc: Removes other references to GROUP_CHARTS_ADMIN
* chore: align table
* chore: Updates Atlas Go SDK (#2380)
* build(deps): bump go.mongodb.org/atlas-sdk
* rename DiskBackupSnapshotAWSExportBucket to DiskBackupSnapshotExportBucket
* add param to DeleteAtlasSearchDeployment
* add LatestDefinition
* more LatestDefinition and start using SearchIndexCreateRequest
* HasElementsSliceOrMap
* update
* ToAnySlicePointer
* fix update
---------
Co-authored-by: lantoli <430982+lantoli@users.noreply.github.com>
* chore: Bump github.com/aws/aws-sdk-go from 1.54.8 to 1.54.13 (#2383)
  Bumps [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go) from 1.54.8 to 1.54.13.
  - [Release notes](https://github.com/aws/aws-sdk-go/releases)
  - [Commits](https://github.com/aws/aws-sdk-go/compare/v1.54.8...v1.54.13)
  ---
  updated-dependencies:
  - dependency-name: github.com/aws/aws-sdk-go
    dependency-type: direct:production
    update-type: version-update:semver-patch
  ...
  Signed-off-by: dependabot[bot]
  Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* chore: Bump amannn/action-semantic-pull-request from 5.5.2 to 5.5.3 (#2382)
  Bumps [amannn/action-semantic-pull-request](https://github.com/amannn/action-semantic-pull-request) from 5.5.2 to 5.5.3.
  - [Release notes](https://github.com/amannn/action-semantic-pull-request/releases)
  - [Changelog](https://github.com/amannn/action-semantic-pull-request/blob/main/CHANGELOG.md)
  - [Commits](https://github.com/amannn/action-semantic-pull-request/compare/cfb60706e18bc85e8aec535e3c577abe8f70378e...0723387faaf9b38adef4775cd42cfd5155ed6017)
  ---
  updated-dependencies:
  - dependency-name: amannn/action-semantic-pull-request
    dependency-type: direct:production
    update-type: version-update:semver-patch
  ...
  Signed-off-by: dependabot[bot]
  Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* test: Improves tests for mongodbatlas_search_index (#2384)
* checkVector
* checkBasic
* checkWithMapping
* checkWithSynonyms
* checkAdditional
* checkAdditionalAnalyzers and checkAdditionalMappingsFields
* remove addAttrChecks and addAttrSetChecks
* use commonChecks in all checks
* test checks cleanup
* chore: Updates nightly tests to TF 1.9.x (#2386)
* update nightly tests to TF 1.9.x
* use TF var
* keep until 1.3.x
* Update .github/workflows/update_tf_compatibility_matrix.yml
  Co-authored-by: maastha <122359335+maastha@users.noreply.github.com>
---------
Co-authored-by: maastha <122359335+maastha@users.noreply.github.com>
* fix: Emptying cloud_back_schedule "copy_settings" (#2387)
* test: add test to reproduce Github Issue
* fix: update copy_settings on changes (even when empty)
* docs: Add changelog entry
* chore: fix changelog entry
* apply review comments
* chore: Updates CHANGELOG.md for #2387
* chore: Updates delete logic for `mongodbatlas_search_deployment` (#2389)
* update delete logic
* update unit test
* refactor: use advanced_cluster instead of cluster (#2392)
* fix: Returns error if the analyzers attribute contains unknown fields. (#2394)
* fix: Returns error if the analyzers attribute contains unknown fields.
* adds changelog file.
* Update .changelog/2394.txt
  Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com>
---------
Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com>
* chore: Updates CHANGELOG.md for #2394
* chore: Bump github.com/aws/aws-sdk-go from 1.54.13 to 1.54.17 (#2401)
  Bumps [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go) from 1.54.13 to 1.54.17.
  - [Release notes](https://github.com/aws/aws-sdk-go/releases)
  - [Commits](https://github.com/aws/aws-sdk-go/compare/v1.54.13...v1.54.17)
  ---
  updated-dependencies:
  - dependency-name: github.com/aws/aws-sdk-go
    dependency-type: direct:production
    update-type: version-update:semver-patch
  ...
  Signed-off-by: dependabot[bot]
  Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* chore: Bump github.com/hashicorp/terraform-plugin-testing (#2400)
  Bumps [github.com/hashicorp/terraform-plugin-testing](https://github.com/hashicorp/terraform-plugin-testing) from 1.8.0 to 1.9.0.
  - [Release notes](https://github.com/hashicorp/terraform-plugin-testing/releases)
  - [Changelog](https://github.com/hashicorp/terraform-plugin-testing/blob/main/CHANGELOG.md)
  - [Commits](https://github.com/hashicorp/terraform-plugin-testing/compare/v1.8.0...v1.9.0)
  ---
  updated-dependencies:
  - dependency-name: github.com/hashicorp/terraform-plugin-testing
    dependency-type: direct:production
    update-type: version-update:semver-minor
  ...
  Signed-off-by: dependabot[bot]
  Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* chore: Bump github.com/hashicorp/terraform-plugin-framework (#2398)
  Bumps [github.com/hashicorp/terraform-plugin-framework](https://github.com/hashicorp/terraform-plugin-framework) from 1.9.0 to 1.10.0.
  - [Release notes](https://github.com/hashicorp/terraform-plugin-framework/releases)
  - [Changelog](https://github.com/hashicorp/terraform-plugin-framework/blob/main/CHANGELOG.md)
  - [Commits](https://github.com/hashicorp/terraform-plugin-framework/compare/v1.9.0...v1.10.0)
  ---
  updated-dependencies:
  - dependency-name: github.com/hashicorp/terraform-plugin-framework
    dependency-type: direct:production
    update-type: version-update:semver-minor
  ...
  Signed-off-by: dependabot[bot]
  Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* chore: Bump github.com/hashicorp/terraform-plugin-framework-validators (#2399)
  Bumps [github.com/hashicorp/terraform-plugin-framework-validators](https://github.com/hashicorp/terraform-plugin-framework-validators) from 0.12.0 to 0.13.0.
  - [Release notes](https://github.com/hashicorp/terraform-plugin-framework-validators/releases)
  - [Changelog](https://github.com/hashicorp/terraform-plugin-framework-validators/blob/main/CHANGELOG.md)
  - [Commits](https://github.com/hashicorp/terraform-plugin-framework-validators/compare/v0.12.0...v0.13.0)
  ---
  updated-dependencies:
  - dependency-name: github.com/hashicorp/terraform-plugin-framework-validators
    dependency-type: direct:production
    update-type: version-update:semver-minor
  ...
  Signed-off-by: dependabot[bot]
  Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* test: Uses hclwrite to generate the cluster for GetClusterInfo (#2404)
* test: Use hclwrite to generate the cluster for GetClusterInfo
* test: fix unit test
* refactor: minor improvements
* refactor: use Zone 1 as the default ZoneName to make tests pass
* refactor: remove num_shards in request and add more tests
* fix: use same default region as before
* test: Support disk_size_gb for ClusterInfo and add test case for multiple dependencies
* refactor: move replication specs to ClusterRequest
* test: add support for CloudRegionConfig
* add: suggestions from PR comments
* refactor: use acc.ReplicationSpecRequest instead of admin.ReplicationSpec
* fix: Fixes `disk_iops` attribute for Azure cloud provider in `mongodbatlas_advanced_cluster` resource (#2396)
* fix disk_iops in Azure
* expand
* tests for disk_iops
* chore: Updates CHANGELOG.md for #2396
* test: Refactors `mongodbatlas_private_endpoint_regional_mode` to use cluster info (#2403)
* test: refactor to use cluster info
* test: enable test in CI and fix duplicate zone name
* test: use AWS_REGION_UPPERCASE and add pre-checks
* fix: use clusterResourceName
* test: fix GetClusterInfo call
* fix: pre check call
* fix: add UPPERCASE/LOWERCASE to network test suite
* test: Skip in ci since it is slow and use new GetClusterInfo api
* test: Fix the broken test and simpify assert statements
* test: enable in CI, after refactorings ~1230s
* test: Refactors resource tests to use GetClusterInfo `online_archive` (#2409)
* feat: adds support for Tags & AutoScalingDiskGbEnabled
* feat: refactor tests to use GetClusterInfo & new SDK
* chore: fomatting fix
* test: make unit test deterministic
* test: onlinearchive force us_east_1
* spelling in comment
* test: fix migration test to use package clusterRequest (with correct region)
* update .tool-versions (#2417)
* feat: Adds `stored_source` attribute to `mongodbatlas_search_index` resource and corresponding data sources (#2388)
* fix ds schemas
* add changelog
* add storedSource to configBasic and checkBasic
* update doc about index_id
* update boolean test
* first implementation of stored_source as string
* create model file
* marshal
* don't allow update
* test for objects in stored_source
* TestAccSearchIndex_withStoredSourceUpdate
* update StoredSource
* fix merge
* tests for storedSource updates
* swap test names
* doc
* chore: Updates CHANGELOG.md for #2388
* doc: Improves Guides menu (#2408)
* add 0.8.2 metadata
* update old category and remove unneeded headers
* update page_title
* fix titles
* remove old guide
* test: Refactors resource tests to use GetClusterInfo `ldap_configuration` (#2411)
* test: Refactors resource tests to use GetClusterInfo ldap_configuration
* test: Fix depends_on clause
* test: remove unused clusterName and align fields
* test: Refactors resource tests to use GetClusterInfo `cloud_backup_snapshot_restore_job` (#2413)
* test: Refactors resource tests to use GetClusterInfo `cloud_backup_snapshot_restore_job`
* test: fix reference to clusterResourceName
* doc: Clarify usage of maintenance window resource (#2418)
* test: Refactors resource tests to use GetClusterInfo `cloud_backup_schedule` (#2414)
* test: Cluster support PitEnabled
* test: Refactors resource tests to use GetClusterInfo `mongodbatlas_cloud_backup_schedule`
* apply PR suggestions
* test: fix broken test after merging
* test: Refactors resource tests to use GetClusterInfo `federated_database_instance` (#2412)
* test: Support getting cluster info with project
* test: Refactors resource tests to use GetClusterInfo `federated_database_instance`
* test: refactor, use a single GetClusterInfo and support AddDefaults
* test: use renamed argument in test
* doc: Removes docs headers as they are not needed (#2422)
* remove unneeded YAML frontmatter headers
* small adjustements
* change root files
* remove from templates
* use Deprecated category
* apply feedback
* test: Refactors resource tests to use GetClusterInfo `backup_compliance_policy` (#2415)
* test: Support AdvancedConfiguration, MongoDBMajorVersion, RetainBackupsEnabled, EbsVolumeType in cluster
* test: refactor test to use GetClusterInfo
* test: Refactors resource tests to use GetClusterInfo `cluster_outage_simulation` (#2423)
* test: support Priority and NodeCountReadOnly
* test: Refactors resource tests to use GetClusterInfo `cluster_outage_simulation`
* test: reuse test case in migration test
* chore: increase timeout to ensure test is passing
* test: avoid global variables to ensure no duplicate cluster names
* revert delete timeout change
* test: Fixes DUPLICATE_CLUSTER_NAME failures (#2424)
* test: fix DUPLICATE_CLUSTER_NAME online_archive
* test: fix DUPLICATE_CLUSTER_NAME backup_snapshot_restore_job
* test: Refactors GetClusterInfo (#2426)
* test: support creating a datasource when using GetClusterInfo
* test: Add documentation for cluster methods
* refactor: move out config_cluster to its own file
* refactor: move configClusterGlobal to the only usage file
* refactor: remove ProjectIDStr field
* test: update references for cluster_info fields
* chore: missing whitespace
* test: fix missing quotes around projectID
* Update internal/testutil/acc/cluster.go
  Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com>
---------
Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com>
* doc: Updates to new Terraform doc structure (#2425)
* move to root doc folder
* rename ds and resource folders
* change file extension to .md
* update doc links
* gitignore
* releasing instructions
* git hook
* codeowners
* workflow template
* gha workflows
* scripts
* remove website-lint
* update references to html.markdown
* fix compatibility script matrix
* rename rest of files
* fix generate doc script using docs-out folder to temporary generate all files and copying only to docs folder the specified resource files
* fix typo
* chore: Bump github.com/zclconf/go-cty from 1.14.4 to 1.15.0 (#2433)
  Bumps [github.com/zclconf/go-cty](https://github.com/zclconf/go-cty) from 1.14.4 to 1.15.0.
  - [Release notes](https://github.com/zclconf/go-cty/releases)
  - [Changelog](https://github.com/zclconf/go-cty/blob/main/CHANGELOG.md)
  - [Commits](https://github.com/zclconf/go-cty/compare/v1.14.4...v1.15.0)
  ---
  updated-dependencies:
  - dependency-name: github.com/zclconf/go-cty
    dependency-type: direct:production
    update-type: version-update:semver-minor
  ...
  Signed-off-by: dependabot[bot]
  Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* chore: Bump github.com/aws/aws-sdk-go from 1.54.17 to 1.54.19 (#2432)
  Bumps [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go) from 1.54.17 to 1.54.19.
  - [Release notes](https://github.com/aws/aws-sdk-go/releases)
  - [Commits](https://github.com/aws/aws-sdk-go/compare/v1.54.17...v1.54.19)
  ---
  updated-dependencies:
  - dependency-name: github.com/aws/aws-sdk-go
    dependency-type: direct:production
    update-type: version-update:semver-patch
  ...
  Signed-off-by: dependabot[bot]
  Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* chore: Bump actions/setup-go from 5.0.1 to 5.0.2 (#2431)
  Bumps [actions/setup-go](https://github.com/actions/setup-go) from 5.0.1 to 5.0.2.
  - [Release notes](https://github.com/actions/setup-go/releases)
  - [Commits](https://github.com/actions/setup-go/compare/cdcb36043654635271a94b9a6d1392de5bb323a7...0a12ed9d6a96ab950c8f026ed9f722fe0da7ef32)
  ---
  updated-dependencies:
  - dependency-name: actions/setup-go
    dependency-type: direct:production
    update-type: version-update:semver-patch
  ...
  Signed-off-by: dependabot[bot]
  Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* chore: Bump tj-actions/verify-changed-files (#2430)
  Bumps [tj-actions/verify-changed-files](https://github.com/tj-actions/verify-changed-files) from 11ea2b36f98609331b8dc9c5ad9071ee317c6d28 to 79f398ac63ab46f7f820470c821d830e5c340ef9.
  - [Release notes](https://github.com/tj-actions/verify-changed-files/releases)
  - [Changelog](https://github.com/tj-actions/verify-changed-files/blob/main/HISTORY.md)
  - [Commits](https://github.com/tj-actions/verify-changed-files/compare/11ea2b36f98609331b8dc9c5ad9071ee317c6d28...79f398ac63ab46f7f820470c821d830e5c340ef9)
  ---
  updated-dependencies:
  - dependency-name: tj-actions/verify-changed-files
    dependency-type: direct:production
  ...
  Signed-off-by: dependabot[bot]
  Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* refactor: avoid usage of github.com/go-test/deep (use `reflect.DeepEqual instead`) (#2427)
* chore: Deletes modules folder (#2435)
* remove modules folder
* gitignore
* chore: Makes sure doc generation is up-to-date (#2441)
* generate doc
* split in runs
* detect changes
* TEMPORARY: change 3 files to trigger doc failures
* rename
* Revert "TEMPORARY: change 3 files to trigger doc failures"
  This reverts commit cc36481d9682f46792203662db610806d6593d89.
* chore: Enables GitHub Action linter errors in GitHub (#2440)
* TEMPORARY: make action linter fail
* problem matcher
* Revert "TEMPORARY: make action linter fail"
  This reverts commit 2ea3cd5fee4836f9275f59d5daaf72213e78aabe.
* update version (#2439)
* doc: Updates examples & docs that use replicaSet clusters (#2428)
* update basic examples
* fix linter
* fix tf-validate
* update tflint version
* fix validate
* remove tf linter exceptions
* make linter fail
* simplify and show linter errors in GH
* tlint problem matcher
* problem matcher
* minimum severity warning
* fix linter
* make tf-validate logic easier to be run in local
* less verbose tf init
* fix /mongodbatlas_network_peering/aws
* doc for backup_compliance_policy
* fix container_id reference
* fix mongodbatlas_network_peering/azure
* use temp fodler
* fix examples/mongodbatlas_network_peering/gcp
* remaining examples
* fix mongodbatlas_clusters
* fix adv_cluster doc
* remaining doc changes
* fix typo
* fix examples with deprecated arguments
* get the first value for containter_id
* container_id in doc
* address feedback
* fix MongoDB_Atlas (#2445)
* chore: Updates examples link in index.md for v1.17.4 release
* chore: Updates CHANGELOG.md header for v1.17.4 release
* chore: Migrates `mongodbatlas_cloud_backup_snapshot_export_job` to new auto-generated SDK (#2436)
* migrate to new auto-generated SDK
* refactor and deprecate err_msg field
* add changelog entry
* docs
* change deprecation version to 1.20
* reduce changelog explanation
* chore: Migrates `mongodbatlas_project_api_key` to new auto-generated SDK (#2437)
* resource create
* migrate update read and delete of resource
* data sources migrated to new sdk
* remove apiUserId from create and update in payload(is read only)
* PR comments
* chore: Removes usage of old Admin SDK in tests (#2442)
* remove matlas from alert_configuration test
* remove matlas from custom_db_role test
* chore: Updates CHANGELOG.md for #2436
* chore: Clean up usages of old SDK (#2449)
* remove usages of old SDK
* add az2 to vpc endpoint
* Revert "add az2 to vpc endpoint"
  This reverts commit ce6f7cc09d4d31292479cc58dd3c5d9e92dd7738.
* skip flaky test
* allow 0 (#2456)
* fix: Fixes creation of organization (#2462)
* fix TerraformVersion interface conversion
* refactor organization resource
* add changelog entry
* PR comment
* chore: Updates CHANGELOG.md for #2462
* fix: Fixes nil pointer dereference in `mongodbatlas_alert_configuration` (#2463)
* fix nil pointer dereference
* avoid nil pointer dereference in metric_threshold_config
* changelog entry
* changelog suggestion
* Update .changelog/2463.txt
  Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com>
* remove periods at the end of changelog entries to make it consistent
---------
Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com>
* chore: Updates CHANGELOG.md for #2463
* chore: Updates examples link in index.md for v1.17.5 release
* chore: Updates CHANGELOG.md header for v1.17.5 release
* chore: Bump golangci/golangci-lint-action from 6.0.1 to 6.1.0 (#2469)
  Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from 6.0.1 to 6.1.0.
  - [Release notes](https://github.com/golangci/golangci-lint-action/releases)
  - [Commits](https://github.com/golangci/golangci-lint-action/compare/a4f60bb28d35aeee14e6880718e0c85ff1882e64...aaa42aa0628b4ae2578232a66b541047968fac86)
  ---
  updated-dependencies:
  - dependency-name: golangci/golangci-lint-action
    dependency-type: direct:production
    update-type: version-update:semver-minor
  ...
  Signed-off-by: dependabot[bot]
  Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* chore: Bump github.com/aws/aws-sdk-go from 1.54.19 to 1.55.5 (#2468)
  Bumps [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go) from 1.54.19 to 1.55.5.
  - [Release notes](https://github.com/aws/aws-sdk-go/releases)
  - [Commits](https://github.com/aws/aws-sdk-go/compare/v1.54.19...v1.55.5)
  ---
  updated-dependencies:
  - dependency-name: github.com/aws/aws-sdk-go
    dependency-type: direct:production
    update-type: version-update:semver-minor
  ...
  Signed-off-by: dependabot[bot]
  Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* fix: Handles update of `mongodbatlas_backup_compliance_policy` as a create operation (#2480)
* handle update as a create
* add test to make sure no plan changes appear when reapplying config with non default values
* add changelog
* fix projectId
* fix name of resource in test
* Update .changelog/2480.txt
  Co-authored-by: kyuan-mongodb <78768401+kyuan-mongodb@users.noreply.github.com>
---------
Co-authored-by: kyuan-mongodb <78768401+kyuan-mongodb@users.noreply.github.com>
* chore: Updates CHANGELOG.md for #2480
* chore: Updates examples link in index.md for v1.17.6 release
* chore: Updates CHANGELOG.md header for v1.17.6 release
* feat: Adds azure support for backup snapshot export bucket (#2486)
* feat: add azure support for backup snapshot export bucket
* fix: add acceptance test configuration
* fix changelog entry number
* upgrade azuread to 2.53.1 in example
* fix checks
* fix checks for mongodbatlas_access_list_api_key
* fix docs check
* fix docs check for data source
* add readme.md in examples
* use acc.AddAttrChecks in tests
* remove importstateverifyignore
---------
Co-authored-by: Luiz Viana
* chore: Updates CHANGELOG.md for #2486
* chore: Improves backup_compliance_policy test (#2484)
* chore: Updates Atlas Go SDK to version 2024-08-05 (#2487)
* automatic changes with renaming
* fix trivial compilation errors
* include 2024-05-30 version and adjust cloud-backup-schedule to use old SDK
* adjust global-cluster-config to use old API
* adjust advanced-cluster to use old API
* fix hcl config generation remove num_shards attribute
* manual fixes of versions in advanced cluster, cloud backup schedule, and other small compilations
* fix incorrect merging in cloud backup schedule tests
* using connV2 for import in advanced cluster
* use lastest sdk model for tests that require autoscaling model
* avoid using old SDK for delete operation
---------
Signed-off-by: dependabot[bot]
Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com>
Co-authored-by: svc-apix-bot
Co-authored-by: svc-apix-Bot <142542575+svc-apix-Bot@users.noreply.github.com>
Co-authored-by: maastha <122359335+maastha@users.noreply.github.com>
Co-authored-by: Andrea Angiolillo
Co-authored-by: Espen Albert
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Marco Suma
Co-authored-by: Oriol
Co-authored-by: kyuan-mongodb <78768401+kyuan-mongodb@users.noreply.github.com>
Co-authored-by: Luiz Viana
---
 .changelog/2436.txt | 11 +
 .changelog/2462.txt | 3 +
 .changelog/2463.txt | 3 +
 .changelog/2480.txt | 3 +
 .changelog/2486.txt | 11 +
 .github/workflows/acceptance-tests-runner.yml | 9 +
 .github/workflows/acceptance-tests.yml | 2 +
 .github/workflows/code-health.yml | 2 +-
 CHANGELOG.md | 27 ++
 .../cloud_backup_snapshot_export_bucket.md | 10 +-
 .../cloud_backup_snapshot_export_buckets.md | 9 +-
 .../cloud_backup_snapshot_export_job.md | 2 +-
 .../cloud_backup_snapshot_export_jobs.md | 2 +-
 docs/index.md | 2 +-
 .../cloud_backup_snapshot_export_bucket.md | 25 +-
 .../cloud_backup_snapshot_export_job.md | 2 +-
 .../aws/README.md | 16 ++
 .../{ => aws}/aws-roles.tf | 0
 .../{ => aws}/main.tf | 0
 .../{ => aws}/provider.tf | 0
 .../{ => aws}/variables.tf | 0
 .../{ => aws}/versions.tf | 0
 .../azure/README.md | 21 ++
 .../azure/azure.tf | 30 +++
 .../azure/main.tf | 30 +++
 .../azure/provider.tf | 15 ++
 .../azure/variables.tf | 40 +++
 .../azure/versions.tf | 17 ++
 examples/mongodbatlas_database_user/Readme.md | 4 +-
 .../atlas_cluster.tf | 2 +-
 .../mongodbatlas_network_peering/aws/main.tf | 6 +-
 go.mod | 6 +-
 go.sum | 12 +-
 internal/common/conversion/flatten_expand.go | 2 +-
 internal/config/client.go | 24 +-
 .../data_source_accesslist_api_keys.go | 2 +-
 .../resource_access_list_api_key.go | 2 +-
 .../data_source_advanced_cluster.go | 12 +-
 .../data_source_advanced_clusters.go | 24 +-
 .../advancedcluster/model_advanced_cluster.go | 80 +++---
 .../model_advanced_cluster_test.go | 64 ++---
 .../model_sdk_version_conversion.go | 84 +++---
 .../resource_advanced_cluster.go | 56 ++--
 .../resource_advanced_cluster_test.go | 16 +-
 .../advancedcluster/resource_update_logic.go | 14 +-
 .../resource_update_logic_test.go | 46 ++--
 .../data_source_alert_configuration.go | 2 +-
 .../data_source_alert_configurations.go | 2 +-
 .../data_source_alert_configurations_test.go | 11 +-
 .../model_alert_configuration.go | 30 +--
 .../model_alert_configuration_test.go | 2 +-
 .../resource_alert_configuration.go | 6 +-
 .../service/apikey/data_source_api_keys.go | 2 +-
 internal/service/apikey/resource_api_key.go | 2 +-
 .../atlasuser/data_source_atlas_user.go | 2 +-
 .../atlasuser/data_source_atlas_user_test.go | 2 +-
 .../atlasuser/data_source_atlas_users.go | 2 +-
 .../atlasuser/data_source_atlas_users_test.go | 2 +-
 .../service/auditing/resource_auditing.go | 2 +-
 .../resource_backup_compliance_policy.go | 255 ++++++------------
 .../resource_backup_compliance_policy_test.go | 95 ++++++-
 .../data_source_cloud_backup_schedule.go | 14 +-
 .../model_cloud_backup_schedule.go | 12 +-
 .../model_cloud_backup_schedule_test.go | 12 +-
 .../model_sdk_version_conversion.go | 32 +--
 .../resource_cloud_backup_schedule.go | 60 ++---
 ...ce_cloud_backup_schedule_migration_test.go | 9 +-
 .../resource_cloud_backup_schedule_test.go | 42 +--
 .../data_source_cloud_backup_snapshots.go | 2 +-
 .../model_cloud_backup_snapshot.go | 2 +-
 .../model_cloud_backup_snapshot_test.go | 2 +-
 .../resource_cloud_backup_snapshot.go | 2 +-
 ...rce_cloud_backup_snapshot_export_bucket.go | 24 ++
 ...ce_cloud_backup_snapshot_export_buckets.go | 17 +-
 ...rce_cloud_backup_snapshot_export_bucket.go | 45 +++-
 ...p_snapshot_export_bucket_migration_test.go | 2 +-
 ...loud_backup_snapshot_export_bucket_test.go | 167 ++++++++++--
 ...source_cloud_backup_snapshot_export_job.go | 5 +-
 ...ource_cloud_backup_snapshot_export_jobs.go | 56 ++--
 ...source_cloud_backup_snapshot_export_job.go | 98 +++----
 ...e_cloud_backup_snapshot_export_job_test.go | 7 +-
 ...urce_cloud_backup_snapshot_restore_jobs.go | 2 +-
 ...ource_cloud_backup_snapshot_restore_job.go | 2 +-
 ...rce_cloud_provider_access_authorization.go | 2 +-
 .../resource_cloud_provider_access_setup.go | 2 +-
 .../resource_cluster_outage_simulation.go | 2 +-
 .../service/controlplaneipaddresses/model.go | 2 +-
 .../controlplaneipaddresses/model_test.go | 2 +-
 .../data_source_custom_db_roles.go | 2 +-
 .../customdbrole/resource_custom_db_role.go | 2 +-
 .../resource_custom_db_role_test.go | 236 ++++++++--------
 ...ce_custom_dns_configuration_cluster_aws.go | 2 +-
 .../databaseuser/model_database_user.go | 2 +-
 .../databaseuser/model_database_user_test.go | 2 +-
 .../resource_database_user_migration_test.go | 2 +-
 .../resource_database_user_test.go | 2 +-
 .../data_source_data_lake_pipeline_run.go | 2 +-
 .../data_source_data_lake_pipeline_runs.go | 2 +-
 .../data_source_data_lake_pipelines.go | 2 +-
 .../resource_data_lake_pipeline.go | 2 +-
 .../model_encryption_at_rest.go | 2 +-
 .../model_encryption_at_rest_test.go | 2 +-
 .../resource_encryption_at_rest.go | 2 +-
 ...ource_encryption_at_rest_migration_test.go | 2 +-
 .../resource_encryption_at_rest_test.go | 4 +-
 ...source_federated_database_instance_test.go | 2 +-
 ...ata_source_federated_database_instances.go | 2 +-
 .../resource_federated_database_instance.go | 8 +-
 .../data_source_federated_query_limits.go | 2 +-
 .../resource_federated_query_limit.go | 2 +-
 ...e_federated_settings_identity_providers.go | 2 +-
 ...el_federated_settings_identity_provider.go | 2 +-
 ...derated_settings_identity_provider_test.go | 2 +-
 .../data_source_federated_settings.go | 2 +-
 ...ource_federated_settings_connected_orgs.go | 2 +-
 ...model_federated_settings_connected_orgs.go | 2 +-
 ...ce_federated_settings_org_role_mappings.go | 2 +-
 ...del_federated_settings_org_role_mapping.go | 2 +-
 ...rce_federated_settings_org_role_mapping.go | 2 +-
 .../data_source_global_cluster_config.go | 4 +-
 .../resource_global_cluster_config.go | 42 +--
 .../resource_ldap_configuration.go | 2 +-
 .../ldapverify/resource_ldap_verify.go | 2 +-
 .../resource_maintenance_window.go | 2 +-
 .../data_source_network_containers.go | 2 +-
 .../resource_network_container.go | 2 +-
 .../data_source_network_peering.go | 2 +-
 .../data_source_network_peerings.go | 2 +-
 .../resource_network_peering.go | 2 +-
 .../onlinearchive/resource_online_archive.go | 4 +-
 .../organization/data_source_organization.go | 5 +-
 .../data_source_organization_test.go | 4 +-
 .../organization/data_source_organizations.go | 7 +-
 .../data_source_organizations_test.go | 8 +-
 .../organization/resource_organization.go | 30 +--
 .../resource_organization_migration_test.go | 4 +-
 .../resource_organization_test.go | 38 +--
 .../orginvitation/resource_org_invitation.go | 2 +-
 ...resource_private_endpoint_regional_mode.go | 2 +-
 ...rce_private_endpoint_regional_mode_test.go | 11 +-
 .../resource_privatelink_endpoint.go | 2 +-
 ...esource_privatelink_endpoint_serverless.go | 2 +-
 .../resource_privatelink_endpoint_service.go | 2 +-
 ...service_data_federation_online_archives.go | 2 +-
 ..._service_data_federation_online_archive.go | 2 +-
 ...rivatelink_endpoints_service_serverless.go | 2 +-
 ...privatelink_endpoint_service_serverless.go | 2 +-
 .../service/project/data_source_project.go | 2 +-
 .../service/project/data_source_projects.go | 2 +-
 internal/service/project/model_project.go | 2 +-
 .../service/project/model_project_test.go | 2 +-
 internal/service/project/resource_project.go | 4 +-
 .../resource_project_migration_test.go | 2 +-
 .../service/project/resource_project_test.go | 14 +-
 .../data_source_project_api_key.go | 21 +-
 .../data_source_project_api_keys.go | 32 +--
 .../projectapikey/resource_project_api_key.go | 185 +++++++------
 .../resource_project_api_key_test.go | 15 +-
 .../resource_project_invitation.go | 2 +-
 .../model_project_ip_access_list.go | 2 +-
 .../model_project_ip_access_list_test.go | 2 +-
 .../resource_project_ip_access_list.go | 2 +-
 internal/service/pushbasedlogexport/model.go | 2 +-
 .../service/pushbasedlogexport/model_test.go | 2 +-
 .../service/pushbasedlogexport/resource.go | 2 +-
 .../pushbasedlogexport/state_transition.go | 2 +-
 .../state_transition_test.go | 4 +-
 .../model_search_deployment.go | 2 +-
 .../model_search_deployment_test.go | 2 +-
 .../state_transition_search_deployment.go | 2 +-
 ...state_transition_search_deployment_test.go | 4 +-
 .../searchindex/data_source_search_indexes.go | 2 +-
 .../service/searchindex/model_search_index.go | 2 +-
 .../searchindex/resource_search_index.go | 2 +-
 .../data_source_serverless_instances.go | 2 +-
 .../resource_serverless_instance.go | 2 +-
 .../resource_serverless_instance_test.go | 2 +-
 ...a_source_cloud_shared_tier_restore_jobs.go | 2 +-
 .../data_source_shared_tier_snapshots.go | 2 +-
 .../data_source_stream_connections.go | 2 +-
 .../data_source_stream_connections_test.go | 2 +-
 .../model_stream_connection.go | 2 +-
 .../model_stream_connection_test.go | 2 +-
 .../data_source_stream_instances.go | 2 +-
 .../data_source_stream_instances_test.go | 2 +-
 .../streaminstance/model_stream_instance.go | 2 +-
 .../model_stream_instance_test.go | 2 +-
 internal/service/team/data_source_team.go | 2 +-
 internal/service/team/resource_team.go | 2 +-
 .../data_source_third_party_integrations.go | 2 +-
 .../resource_third_party_integration_test.go | 4 +-
 ...ource_x509_authentication_database_user.go | 2 +-
 internal/testutil/acc/advanced_cluster.go | 2 +-
 internal/testutil/acc/atlas.go | 12 +-
 internal/testutil/acc/cluster.go | 43 ++-
 internal/testutil/acc/config_cluster.go | 9 +-
 internal/testutil/acc/config_cluster_test.go | 11 +-
 internal/testutil/acc/database_user.go | 2 +-
 internal/testutil/acc/factory.go | 2 +-
 internal/testutil/acc/pre_check.go | 8 +
 internal/testutil/acc/project.go | 2 +-
 internal/testutil/acc/serverless.go | 2 +-
 templates/data-source.md.tmpl | 2 +
 templates/resources.md.tmpl | 2 +
 204 files changed, 1588 insertions(+), 1148 deletions(-)
 create mode 100644 .changelog/2436.txt
 create mode 100644 .changelog/2462.txt
 create mode 100644 .changelog/2463.txt
 create mode 100644 .changelog/2480.txt
 create mode 100644 .changelog/2486.txt
 create mode 100644 examples/mongodbatlas_cloud_backup_snapshot_export_bucket/aws/README.md
 rename examples/mongodbatlas_cloud_backup_snapshot_export_bucket/{ => aws}/aws-roles.tf (100%)
 rename examples/mongodbatlas_cloud_backup_snapshot_export_bucket/{ => aws}/main.tf (100%)
 rename examples/mongodbatlas_cloud_backup_snapshot_export_bucket/{ => aws}/provider.tf (100%)
 rename examples/mongodbatlas_cloud_backup_snapshot_export_bucket/{ => aws}/variables.tf (100%)
 rename examples/mongodbatlas_cloud_backup_snapshot_export_bucket/{ => aws}/versions.tf (100%)
 create mode 100644 examples/mongodbatlas_cloud_backup_snapshot_export_bucket/azure/README.md
 create mode 100644 examples/mongodbatlas_cloud_backup_snapshot_export_bucket/azure/azure.tf
 create mode 100644 examples/mongodbatlas_cloud_backup_snapshot_export_bucket/azure/main.tf
 create mode 100644 examples/mongodbatlas_cloud_backup_snapshot_export_bucket/azure/provider.tf
 create mode 100644 examples/mongodbatlas_cloud_backup_snapshot_export_bucket/azure/variables.tf
 create mode 100644 examples/mongodbatlas_cloud_backup_snapshot_export_bucket/azure/versions.tf

diff --git a/.changelog/2436.txt b/.changelog/2436.txt
new file mode 100644
index 0000000000..347e6ddb71
--- /dev/null
+++ b/.changelog/2436.txt
@@ -0,0 +1,11 @@
+```release-note:note
+resource/mongodbatlas_cloud_backup_snapshot_export_job: Deprecates the `err_msg` attribute
+```
+
+```release-note:note +data-source/mongodbatlas_cloud_backup_snapshot_export_job: Deprecates the `err_msg` attribute +``` + +```release-note:note +data-source/mongodbatlas_cloud_backup_snapshot_export_jobs: Deprecates the `err_msg` attribute +``` diff --git a/.changelog/2462.txt b/.changelog/2462.txt new file mode 100644 index 0000000000..588a6e8d3b --- /dev/null +++ b/.changelog/2462.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/mongodbatlas_organization: Fixes a bug in organization resource creation where the provider crashed +``` diff --git a/.changelog/2463.txt b/.changelog/2463.txt new file mode 100644 index 0000000000..9c4edff18e --- /dev/null +++ b/.changelog/2463.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/mongodbatlas_alert_configuration: Fixes an issue where the `terraform apply` command crashes if you attempt to edit an existing `mongodbatlas_alert_configuration` by adding a value to `threshold_config` +``` diff --git a/.changelog/2480.txt b/.changelog/2480.txt new file mode 100644 index 0000000000..9474013e4e --- /dev/null +++ b/.changelog/2480.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/mongodbatlas_backup_compliance_policy: Fixes an issue where the update operation modified attributes that were not supposed to be modified +``` diff --git a/.changelog/2486.txt b/.changelog/2486.txt new file mode 100644 index 0000000000..643464db5b --- /dev/null +++ b/.changelog/2486.txt @@ -0,0 +1,11 @@ +```release-note:enhancement +data-source/mongodbatlas_cloud_backup_snapshot_export_bucket: Adds Azure support +``` + +```release-note:enhancement +resource/mongodbatlas_cloud_backup_snapshot_export_bucket: Adds Azure support +``` + +```release-note:enhancement +data-source/mongodbatlas_cloud_backup_snapshot_export_buckets: Adds Azure support +``` diff --git a/.github/workflows/acceptance-tests-runner.yml b/.github/workflows/acceptance-tests-runner.yml index 7a69c08827..136c693902 100644 --- a/.github/workflows/acceptance-tests-runner.yml +++ 
b/.github/workflows/acceptance-tests-runner.yml @@ -103,6 +103,10 @@ on: required: true aws_s3_bucket_backup: required: true + azure_service_url_backup: + required: true + azure_blob_storage_container_backup: + required: true mongodb_atlas_ldap_hostname: required: true mongodb_atlas_ldap_username: @@ -364,6 +368,11 @@ jobs: AWS_SECRET_ACCESS_KEY: ${{ secrets.aws_secret_access_key }} AWS_ACCESS_KEY_ID: ${{ secrets.aws_access_key_id }} AWS_S3_BUCKET: ${{ secrets.aws_s3_bucket_backup }} + AZURE_BLOB_STORAGE_CONTAINER_NAME: ${{ secrets.azure_blob_storage_container_backup }} + AZURE_SERVICE_URL: ${{ secrets.azure_service_url_backup }} + AZURE_ATLAS_APP_ID: ${{ inputs.azure_atlas_app_id }} + AZURE_SERVICE_PRINCIPAL_ID: ${{ inputs.azure_service_principal_id }} + AZURE_TENANT_ID: ${{ inputs.azure_tenant_id }} ACCTEST_PACKAGES: | ./internal/service/cloudbackupschedule ./internal/service/cloudbackupsnapshot diff --git a/.github/workflows/acceptance-tests.yml b/.github/workflows/acceptance-tests.yml index 09a05fd3f8..e02fd08675 100644 --- a/.github/workflows/acceptance-tests.yml +++ b/.github/workflows/acceptance-tests.yml @@ -63,6 +63,8 @@ jobs: aws_secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }} aws_s3_bucket_federation: ${{ secrets.AWS_S3_BUCKET_FEDERATION }} aws_s3_bucket_backup: ${{ secrets.AWS_S3_BUCKET_BACKUP }} + azure_service_url_backup: ${{ secrets.AZURE_SERVICE_URL_BACKUP }} + azure_blob_storage_container_backup: ${{ secrets.AZURE_BLOB_STORAGE_CONTAINER_BACKUP }} mongodb_atlas_ldap_hostname: ${{ secrets.MONGODB_ATLAS_LDAP_HOSTNAME }} mongodb_atlas_ldap_username: ${{ secrets.MONGODB_ATLAS_LDAP_USERNAME }} mongodb_atlas_ldap_password: ${{ secrets.MONGODB_ATLAS_LDAP_PASSWORD }} diff --git a/.github/workflows/code-health.yml b/.github/workflows/code-health.yml index 0be3c9f1f1..a6930734e1 100644 --- a/.github/workflows/code-health.yml +++ b/.github/workflows/code-health.yml @@ -47,7 +47,7 @@ jobs: go-version-file: 'go.mod' cache: false # see 
https://github.com/golangci/golangci-lint-action/issues/807 - name: golangci-lint - uses: golangci/golangci-lint-action@a4f60bb28d35aeee14e6880718e0c85ff1882e64 + uses: golangci/golangci-lint-action@aaa42aa0628b4ae2578232a66b541047968fac86 with: version: v1.59.1 # Also update GOLANGCI_VERSION variable in GNUmakefile when updating this version - name: actionlint diff --git a/CHANGELOG.md b/CHANGELOG.md index c1e23726ce..5c30d1f9ca 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -2,6 +2,33 @@ ENHANCEMENTS: +* data-source/mongodbatlas_cloud_backup_snapshot_export_bucket: Adds Azure support ([#2486](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2486)) +* data-source/mongodbatlas_cloud_backup_snapshot_export_buckets: Adds Azure support ([#2486](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2486)) +* resource/mongodbatlas_cloud_backup_snapshot_export_bucket: Adds Azure support ([#2486](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2486)) + +## 1.17.6 (August 07, 2024) + +BUG FIXES: + +* resource/mongodbatlas_backup_compliance_policy: Fixes an issue where the update operation modified attributes that were not supposed to be modified ([#2480](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2480)) + +## 1.17.5 (July 30, 2024) + +NOTES: + +* data-source/mongodbatlas_cloud_backup_snapshot_export_job: Deprecates the `err_msg` attribute ([#2436](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2436)) +* data-source/mongodbatlas_cloud_backup_snapshot_export_jobs: Deprecates the `err_msg` attribute ([#2436](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2436)) +* resource/mongodbatlas_cloud_backup_snapshot_export_job: Deprecates the `err_msg` attribute ([#2436](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2436)) + +BUG FIXES: + +* resource/mongodbatlas_alert_configuration: Fixes an issue where the `terraform apply` command crashes if you attempt to edit an 
existing `mongodbatlas_alert_configuration` by adding a value to `threshold_config` ([#2463](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2463)) +* resource/mongodbatlas_organization: Fixes a bug in organization resource creation where the provider crashed ([#2462](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2462)) + +## 1.17.4 (July 19, 2024) + +ENHANCEMENTS: + * data-source/mongodbatlas_search_index: Adds attribute `stored_source` ([#2388](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2388)) * data-source/mongodbatlas_search_indexes: Adds attribute `stored_source` ([#2388](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2388)) * resource/mongodbatlas_search_index: Adds attribute `stored_source` ([#2388](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/2388)) diff --git a/docs/data-sources/cloud_backup_snapshot_export_bucket.md b/docs/data-sources/cloud_backup_snapshot_export_bucket.md index a715db503b..35dbee7d08 100644 --- a/docs/data-sources/cloud_backup_snapshot_export_bucket.md +++ b/docs/data-sources/cloud_backup_snapshot_export_bucket.md @@ -30,9 +30,13 @@ data "mongodbatlas_cloud_backup_snapshot_export_bucket" "test" { In addition to all arguments above, the following attributes are exported: -* `iam_role_id` - Unique identifier of the role that Atlas can use to access the bucket. You must also specify the `bucket_name`. -* `bucket_name` - Name of the bucket that the provided role ID is authorized to access. You must also specify the `iam_role_id`. -* `cloud_provider` - Name of the provider of the cloud service where Atlas can access the S3 bucket. Atlas only supports `AWS`. +* `iam_role_id` - Unique identifier of the role that Atlas can use to access the bucket. +* `bucket_name` - Name of the bucket that the provided role ID is authorized to access. +* `cloud_provider` - Name of the provider of the cloud service where Atlas can access the S3 bucket. 
+* `role_id` - Unique identifier of the Azure Service Principal that Atlas can use to access the Azure Blob Storage Container. +* `service_url` - URL that identifies the blob Endpoint of the Azure Blob Storage Account. +* `tenant_id` - UUID that identifies the Azure Active Directory Tenant ID. + diff --git a/docs/data-sources/cloud_backup_snapshot_export_buckets.md b/docs/data-sources/cloud_backup_snapshot_export_buckets.md index d57e565439..64a49ab8ff 100644 --- a/docs/data-sources/cloud_backup_snapshot_export_buckets.md +++ b/docs/data-sources/cloud_backup_snapshot_export_buckets.md @@ -39,9 +39,12 @@ In addition to all arguments above, the following attributes are exported: ### CloudProviderSnapshotExportBucket * `project_id` - The unique identifier of the project for the Atlas cluster. * `export_bucket_id` - Unique identifier of the snapshot bucket id. -* `iam_role_id` - Unique identifier of the role that Atlas can use to access the bucket. You must also specify the `bucket_name`. -* `bucket_name` - Name of the bucket that the provided role ID is authorized to access. You must also specify the `iam_role_id`. -* `cloud_provider` - Name of the provider of the cloud service where Atlas can access the S3 bucket. Atlas only supports `AWS`. +* `iam_role_id` - Unique identifier of the role that Atlas can use to access the bucket. +* `bucket_name` - Name of the bucket that the provided role ID is authorized to access. +* `cloud_provider` - Name of the provider of the cloud service where Atlas can access the S3 bucket. +* `role_id` - Unique identifier of the Azure Service Principal that Atlas can use to access the Azure Blob Storage Container. +* `service_url` - URL that identifies the blob Endpoint of the Azure Blob Storage Account. +* `tenant_id` - UUID that identifies the Azure Active Directory Tenant ID. 
For more information see: [MongoDB Atlas API Reference.](https://docs.atlas.mongodb.com/reference/api/cloud-backup/export/create-one-export-bucket/) diff --git a/docs/data-sources/cloud_backup_snapshot_export_job.md b/docs/data-sources/cloud_backup_snapshot_export_job.md index 6307ef5a10..b7af5446f7 100644 --- a/docs/data-sources/cloud_backup_snapshot_export_job.md +++ b/docs/data-sources/cloud_backup_snapshot_export_job.md @@ -49,7 +49,7 @@ In addition to all arguments above, the following attributes are exported: * `custom_data` - Custom data to include in the metadata file named `.complete` that Atlas uploads to the bucket when the export job finishes. Custom data can be specified as key and value pairs. * `components` - _Returned for sharded clusters only._ Export job details for each replica set in the sharded cluster. * `created_at` - Timestamp in ISO 8601 date and time format in UTC when the export job was created. -* `err_msg` - Error message, only if the export job failed. +* `err_msg` - Error message, only if the export job failed. **Note:** This attribute is deprecated as it is not being used. * `export_status` - _Returned for replica set only._ Status of the export job. * `finished_at` - Timestamp in ISO 8601 date and time format in UTC when the export job completes. * `export_job_id` - Unique identifier of the export job. diff --git a/docs/data-sources/cloud_backup_snapshot_export_jobs.md b/docs/data-sources/cloud_backup_snapshot_export_jobs.md index 5ffb6a7a07..c4fb5bad89 100644 --- a/docs/data-sources/cloud_backup_snapshot_export_jobs.md +++ b/docs/data-sources/cloud_backup_snapshot_export_jobs.md @@ -58,7 +58,7 @@ In addition to all arguments above, the following attributes are exported: * `custom_data` - Custom data to include in the metadata file named `.complete` that Atlas uploads to the bucket when the export job finishes. Custom data can be specified as key and value pairs. 
* `components` - _Returned for sharded clusters only._ Export job details for each replica set in the sharded cluster. * `created_at` - Timestamp in ISO 8601 date and time format in UTC when the export job was created. -* `err_msg` - Error message, only if the export job failed. +* `err_msg` - Error message, only if the export job failed. **Note:** This attribute is deprecated as it is not being used. * `export_status` - _Returned for replica set only._ Status of the export job. * `finished_at` - Timestamp in ISO 8601 date and time format in UTC when the export job completes. * `export_job_id` - Unique identifier of the export job. diff --git a/docs/index.md b/docs/index.md index 7bef25fda4..06f666ecc1 100644 --- a/docs/index.md +++ b/docs/index.md @@ -219,7 +219,7 @@ We ship binaries but do not prioritize fixes for the following operating system ## Examples from MongoDB and the Community -We have [example configurations](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/v1.17.3/examples) +We have [example configurations](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/v1.17.6/examples) in our GitHub repo that will help both beginner and more advanced users. Have a good example you've created and want to share? 
diff --git a/docs/resources/cloud_backup_snapshot_export_bucket.md b/docs/resources/cloud_backup_snapshot_export_bucket.md index 2ffef835aa..6a8c6c6a26 100644 --- a/docs/resources/cloud_backup_snapshot_export_bucket.md +++ b/docs/resources/cloud_backup_snapshot_export_bucket.md @@ -7,6 +7,9 @@ ## Example Usage + +### AWS Example + ```terraform resource "mongodbatlas_cloud_backup_snapshot_export_bucket" "test" { project_id = "{PROJECT_ID}" @@ -16,12 +19,28 @@ resource "mongodbatlas_cloud_backup_snapshot_export_bucket" "test" { } ``` +### Azure Example + +```terraform +resource "mongodbatlas_cloud_backup_snapshot_export_bucket" "test" { + project_id = "{PROJECT_ID}" + role_id = "{ROLE_ID}" + service_url = "{SERVICE_URL}" + tenant_id = "{TENANT_ID}" + bucket_name = "example-bucket" + cloud_provider = "AZURE" +} +``` + ## Argument Reference * `project_id` - (Required) The unique identifier of the project for the Atlas cluster. -* `iam_role_id` - (Required) Unique identifier of the role that Atlas can use to access the bucket. You must also specify the `bucket_name`. -* `bucket_name` - (Required) Name of the bucket that the provided role ID is authorized to access. You must also specify the `iam_role_id`. -* `cloud_provider` - (Required) Name of the provider of the cloud service where Atlas can access the S3 bucket. Atlas only supports `AWS`. +* `bucket_name` - (Required) Name of the bucket that the provided role ID is authorized to access. +* `cloud_provider` - (Required) Name of the provider of the cloud service where Atlas can access the S3 bucket. +* `iam_role_id` - Unique identifier of the role that Atlas can use to access the bucket. Required if `cloud_provider` is set to `AWS`. +* `role_id` - Unique identifier of the Azure Service Principal that Atlas can use to access the Azure Blob Storage Container. Required if `cloud_provider` is set to `AZURE`. +* `service_url` - URL that identifies the blob Endpoint of the Azure Blob Storage Account. 
Required if `cloud_provider` is set to `AZURE`. +* `tenant_id` - UUID that identifies the Azure Active Directory Tenant ID. Required if `cloud_provider` is set to `AZURE`. ## Attributes Reference diff --git a/docs/resources/cloud_backup_snapshot_export_job.md b/docs/resources/cloud_backup_snapshot_export_job.md index 2fdc724104..2eb9c404df 100644 --- a/docs/resources/cloud_backup_snapshot_export_job.md +++ b/docs/resources/cloud_backup_snapshot_export_job.md @@ -101,7 +101,7 @@ In addition to all arguments above, the following attributes are exported: * `components` - _Returned for sharded clusters only._ Export job details for each replica set in the sharded cluster. * `created_at` - Timestamp in ISO 8601 date and time format in UTC when the export job was created. -* `err_msg` - Error message, only if the export job failed. +* `err_msg` - Error message, only if the export job failed. **Note:** This attribute is deprecated as it is not being used. * `export_status` - _Returned for replica set only._ Status of the export job. * `finished_at` - Timestamp in ISO 8601 date and time format in UTC when the export job completes. * `export_job_id` - Unique identifier of the export job. diff --git a/examples/mongodbatlas_cloud_backup_snapshot_export_bucket/aws/README.md b/examples/mongodbatlas_cloud_backup_snapshot_export_bucket/aws/README.md new file mode 100644 index 0000000000..daaed49ee3 --- /dev/null +++ b/examples/mongodbatlas_cloud_backup_snapshot_export_bucket/aws/README.md @@ -0,0 +1,16 @@ +# MongoDB Atlas Provider - Atlas Cloud Backup Snapshot Export Bucket in AWS + +This example shows how to set up Cloud Backup Snapshot Export Bucket in Atlas through Terraform. + +You must set the following variables: + +- `public_key`: Atlas public key +- `private_key`: Atlas private key +- `project_id`: Unique 24-hexadecimal digit string that identifies the project where the export bucket will be created. +- `access_key`: AWS Access Key +- `secret_key`: AWS Secret Key. 
+- `aws_region`: AWS region. + +To learn more, see the [Export Cloud Backup Snapshot Documentation](https://www.mongodb.com/docs/atlas/backup/cloud-backup/export/). + + diff --git a/examples/mongodbatlas_cloud_backup_snapshot_export_bucket/aws-roles.tf b/examples/mongodbatlas_cloud_backup_snapshot_export_bucket/aws/aws-roles.tf similarity index 100% rename from examples/mongodbatlas_cloud_backup_snapshot_export_bucket/aws-roles.tf rename to examples/mongodbatlas_cloud_backup_snapshot_export_bucket/aws/aws-roles.tf diff --git a/examples/mongodbatlas_cloud_backup_snapshot_export_bucket/main.tf b/examples/mongodbatlas_cloud_backup_snapshot_export_bucket/aws/main.tf similarity index 100% rename from examples/mongodbatlas_cloud_backup_snapshot_export_bucket/main.tf rename to examples/mongodbatlas_cloud_backup_snapshot_export_bucket/aws/main.tf diff --git a/examples/mongodbatlas_cloud_backup_snapshot_export_bucket/provider.tf b/examples/mongodbatlas_cloud_backup_snapshot_export_bucket/aws/provider.tf similarity index 100% rename from examples/mongodbatlas_cloud_backup_snapshot_export_bucket/provider.tf rename to examples/mongodbatlas_cloud_backup_snapshot_export_bucket/aws/provider.tf diff --git a/examples/mongodbatlas_cloud_backup_snapshot_export_bucket/variables.tf b/examples/mongodbatlas_cloud_backup_snapshot_export_bucket/aws/variables.tf similarity index 100% rename from examples/mongodbatlas_cloud_backup_snapshot_export_bucket/variables.tf rename to examples/mongodbatlas_cloud_backup_snapshot_export_bucket/aws/variables.tf diff --git a/examples/mongodbatlas_cloud_backup_snapshot_export_bucket/versions.tf b/examples/mongodbatlas_cloud_backup_snapshot_export_bucket/aws/versions.tf similarity index 100% rename from examples/mongodbatlas_cloud_backup_snapshot_export_bucket/versions.tf rename to examples/mongodbatlas_cloud_backup_snapshot_export_bucket/aws/versions.tf diff --git a/examples/mongodbatlas_cloud_backup_snapshot_export_bucket/azure/README.md 
b/examples/mongodbatlas_cloud_backup_snapshot_export_bucket/azure/README.md new file mode 100644 index 0000000000..3885096d8a --- /dev/null +++ b/examples/mongodbatlas_cloud_backup_snapshot_export_bucket/azure/README.md @@ -0,0 +1,21 @@ +# MongoDB Atlas Provider - Atlas Cloud Backup Snapshot Export Bucket in Azure + +This example shows how to set up Cloud Backup Snapshot Export Bucket in Atlas through Terraform. + +You must set the following variables: + +- `public_key`: Atlas public key. +- `private_key`: Atlas private key. +- `project_id`: Unique 24-hexadecimal digit string that identifies the project where the export bucket will be created. +- `azure_tenant_id`: The Tenant ID which should be used. +- `subscription_id`: Azure Subscription ID. +- `client_id`: Azure Client ID. +- `client_secret`: Azure Client Secret. +- `tenant_id`: Azure Tenant ID. +- `azure_atlas_app_id`: The client ID of the application for which to create a service principal. +- `azure_resource_group_location`: The Azure Region where the Resource Group should exist. +- `storage_account_name`: Specifies the name of the storage account. + +To learn more, see the [Export Cloud Backup Snapshot Documentation](https://www.mongodb.com/docs/atlas/backup/cloud-backup/export/). 
+ + diff --git a/examples/mongodbatlas_cloud_backup_snapshot_export_bucket/azure/azure.tf b/examples/mongodbatlas_cloud_backup_snapshot_export_bucket/azure/azure.tf new file mode 100644 index 0000000000..b6f90e6cba --- /dev/null +++ b/examples/mongodbatlas_cloud_backup_snapshot_export_bucket/azure/azure.tf @@ -0,0 +1,30 @@ +resource "azuread_service_principal" "mongo" { + client_id = var.azure_atlas_app_id + app_role_assignment_required = false +} + +# Define the resource group +resource "azurerm_resource_group" "test_resource_group" { + name = "mongo-test-resource-group" + location = var.azure_resource_group_location +} + +resource "azurerm_storage_account" "test_storage_account" { + name = var.storage_account_name + resource_group_name = azurerm_resource_group.test_resource_group.name + location = azurerm_resource_group.test_resource_group.location + account_tier = "Standard" + account_replication_type = "LRS" +} + +resource "azurerm_storage_container" "test_storage_container" { + name = "mongo-test-storage-container" + storage_account_name = azurerm_storage_account.test_storage_account.name + container_access_type = "private" +} + +resource "azurerm_role_assignment" "test_role_assignment" { + principal_id = azuread_service_principal.mongo.id + role_definition_name = "Storage Blob Data Contributor" + scope = azurerm_storage_account.test_storage_account.id +} diff --git a/examples/mongodbatlas_cloud_backup_snapshot_export_bucket/azure/main.tf b/examples/mongodbatlas_cloud_backup_snapshot_export_bucket/azure/main.tf new file mode 100644 index 0000000000..4910b17936 --- /dev/null +++ b/examples/mongodbatlas_cloud_backup_snapshot_export_bucket/azure/main.tf @@ -0,0 +1,30 @@ +resource "mongodbatlas_cloud_provider_access_setup" "setup_only" { + project_id = var.project_id + provider_name = "AZURE" + azure_config { + atlas_azure_app_id = var.azure_atlas_app_id + service_principal_id = azuread_service_principal.mongo.id + tenant_id = var.tenant_id + } +} + +resource 
"mongodbatlas_cloud_provider_access_authorization" "auth_role" { + project_id = var.project_id + role_id = mongodbatlas_cloud_provider_access_setup.setup_only.role_id + + azure { + atlas_azure_app_id = var.azure_atlas_app_id + service_principal_id = azuread_service_principal.mongo.id + tenant_id = var.tenant_id + } +} + + +resource "mongodbatlas_cloud_backup_snapshot_export_bucket" "test" { + project_id = var.project_id + bucket_name = azurerm_storage_container.test_storage_container.name + cloud_provider = "AZURE" + service_url = azurerm_storage_account.test_storage_account.primary_blob_endpoint + role_id = mongodbatlas_cloud_provider_access_authorization.auth_role.role_id + tenant_id = var.tenant_id +} \ No newline at end of file diff --git a/examples/mongodbatlas_cloud_backup_snapshot_export_bucket/azure/provider.tf b/examples/mongodbatlas_cloud_backup_snapshot_export_bucket/azure/provider.tf new file mode 100644 index 0000000000..d7f7431784 --- /dev/null +++ b/examples/mongodbatlas_cloud_backup_snapshot_export_bucket/azure/provider.tf @@ -0,0 +1,15 @@ +provider "mongodbatlas" { + public_key = var.public_key + private_key = var.private_key +} +provider "azuread" { + tenant_id = var.azure_tenant_id +} +provider "azurerm" { + subscription_id = var.subscription_id + client_id = var.client_id + client_secret = var.client_secret + tenant_id = var.tenant_id + features { + } +} \ No newline at end of file diff --git a/examples/mongodbatlas_cloud_backup_snapshot_export_bucket/azure/variables.tf b/examples/mongodbatlas_cloud_backup_snapshot_export_bucket/azure/variables.tf new file mode 100644 index 0000000000..f76cf1143a --- /dev/null +++ b/examples/mongodbatlas_cloud_backup_snapshot_export_bucket/azure/variables.tf @@ -0,0 +1,40 @@ +variable "public_key" { + description = "The public API key for MongoDB Atlas" + type = string +} +variable "private_key" { + description = "The private API key for MongoDB Atlas" + type = string +} +variable "project_id" { + description = 
"Atlas project ID" + type = string +} +variable "azure_tenant_id" { + type = string +} +variable "subscription_id" { + description = "Azure Subscription ID" + type = string +} +variable "client_id" { + description = "Azure Client ID" + type = string +} +variable "client_secret" { + description = "Azure Client Secret" + type = string +} +variable "tenant_id" { + description = "Azure Tenant ID" + type = string +} +variable "azure_atlas_app_id" { + type = string +} +variable "azure_resource_group_location" { + type = string +} +variable "storage_account_name" { + type = string +} diff --git a/examples/mongodbatlas_cloud_backup_snapshot_export_bucket/azure/versions.tf b/examples/mongodbatlas_cloud_backup_snapshot_export_bucket/azure/versions.tf new file mode 100644 index 0000000000..dec0bfe787 --- /dev/null +++ b/examples/mongodbatlas_cloud_backup_snapshot_export_bucket/azure/versions.tf @@ -0,0 +1,17 @@ +terraform { + required_providers { + azuread = { + source = "hashicorp/azuread" + version = "~> 2.53.1" + } + azurerm = { + source = "hashicorp/azurerm" + version = "~> 3.0" + } + mongodbatlas = { + source = "mongodb/mongodbatlas" + version = "~> 1.0" + } + } + required_version = ">= 1.0" +} diff --git a/examples/mongodbatlas_database_user/Readme.md b/examples/mongodbatlas_database_user/Readme.md index 08f4211b75..0290514aec 100644 --- a/examples/mongodbatlas_database_user/Readme.md +++ b/examples/mongodbatlas_database_user/Readme.md @@ -77,8 +77,8 @@ atlasclusterstring = [ "aws_private_link_srv" = {} "private" = "" "private_srv" = "" - "standard" = "mongodb://MongoDB_Atlas-shard-00-00.xgpi2.mongodb.net:27017,MongoDB_Atlas-shard-00-01.xgpi2.mongodb.net:27017,MongoDB_Atlas-shard-00-02.xgpi2.mongodb.net:27017/?ssl=true&authSource=admin&replicaSet=atlas-90b49a-shard-0" - "standard_srv" = "mongodb+srv://MongoDB_Atlas.xgpi2.mongodb.net" + "standard" = 
"mongodb://MongoDBAtlas-shard-00-00.xgpi2.mongodb.net:27017,MongoDBAtlas-shard-00-01.xgpi2.mongodb.net:27017,MongoDBAtlas-shard-00-02.xgpi2.mongodb.net:27017/?ssl=true&authSource=admin&replicaSet=atlas-90b49a-shard-0" + "standard_srv" = "mongodb+srv://MongoDBAtlas.xgpi2.mongodb.net" }, ] project_name = Atlas-DB-Scope diff --git a/examples/mongodbatlas_database_user/atlas_cluster.tf b/examples/mongodbatlas_database_user/atlas_cluster.tf index 985cc4462c..0c19072a80 100644 --- a/examples/mongodbatlas_database_user/atlas_cluster.tf +++ b/examples/mongodbatlas_database_user/atlas_cluster.tf @@ -1,6 +1,6 @@ resource "mongodbatlas_advanced_cluster" "cluster" { project_id = mongodbatlas_project.project1.id - name = "MongoDB_Atlas" + name = "MongoDBAtlas" cluster_type = "REPLICASET" backup_enabled = true diff --git a/examples/mongodbatlas_network_peering/aws/main.tf b/examples/mongodbatlas_network_peering/aws/main.tf index a296a30645..28da1d5cda 100644 --- a/examples/mongodbatlas_network_peering/aws/main.tf +++ b/examples/mongodbatlas_network_peering/aws/main.tf @@ -8,9 +8,9 @@ resource "mongodbatlas_project" "aws_atlas" { org_id = var.atlas_org_id } -resource "mongodbatlas_advanced_cluster" "cluster_atlas" { +resource "mongodbatlas_advanced_cluster" "cluster-atlas" { project_id = mongodbatlas_project.aws_atlas.id - name = "ClusterAtlas" + name = "cluster-atlas" cluster_type = "REPLICASET" backup_enabled = true @@ -42,7 +42,7 @@ resource "mongodbatlas_database_user" "db-user" { resource "mongodbatlas_network_peering" "aws-atlas" { accepter_region_name = var.aws_region project_id = mongodbatlas_project.aws_atlas.id - container_id = one(values(mongodbatlas_advanced_cluster.cluster_atlas.replication_specs[0].container_id)) + container_id = one(values(mongodbatlas_advanced_cluster.cluster-atlas.replication_specs[0].container_id)) provider_name = "AWS" route_table_cidr_block = aws_vpc.primary.cidr_block vpc_id = aws_vpc.primary.id diff --git a/go.mod b/go.mod index 
7693eb1604..291b2d375b 100644 --- a/go.mod +++ b/go.mod @@ -4,7 +4,7 @@ go 1.22 require ( github.com/andygrunwald/go-jira/v2 v2.0.0-20240116150243-50d59fe116d6 - github.com/aws/aws-sdk-go v1.54.19 + github.com/aws/aws-sdk-go v1.55.5 github.com/hashicorp/go-changelog v0.0.0-20240318095659-4d68c58a6e7f github.com/hashicorp/go-cty v1.4.1-0.20200414143053-d3edf31b6320 github.com/hashicorp/go-version v1.7.0 @@ -23,8 +23,8 @@ require ( github.com/stretchr/testify v1.9.0 github.com/zclconf/go-cty v1.15.0 go.mongodb.org/atlas v0.36.0 - go.mongodb.org/atlas-sdk/v20231115014 v20231115014.0.0 - go.mongodb.org/atlas-sdk/v20240530002 v20240530002.0.1-0.20240710142852-8a1b5dd5d8f3 + go.mongodb.org/atlas-sdk/v20240530005 v20240530005.0.0 + go.mongodb.org/atlas-sdk/v20240805001 v20240805001.0.0 go.mongodb.org/realm v0.1.0 ) diff --git a/go.sum b/go.sum index 4c207b062c..2910145caf 100644 --- a/go.sum +++ b/go.sum @@ -243,8 +243,8 @@ github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkY github.com/aws/aws-sdk-go v1.15.78/go.mod h1:E3/ieXAlvM0XWO57iftYVDLLvQ824smPP3ATZkfNZeM= github.com/aws/aws-sdk-go v1.37.0/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro= github.com/aws/aws-sdk-go v1.44.122/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo= -github.com/aws/aws-sdk-go v1.54.19 h1:tyWV+07jagrNiCcGRzRhdtVjQs7Vy41NwsuOcl0IbVI= -github.com/aws/aws-sdk-go v1.54.19/go.mod h1:eRwEWoyTWFMVYVQzKMNHWP5/RV4xIUGMQfXQHfHkpNU= +github.com/aws/aws-sdk-go v1.55.5 h1:KKUZBfBoyqy5d3swXyiC7Q76ic40rYcbqH7qjh59kzU= +github.com/aws/aws-sdk-go v1.55.5/go.mod h1:eRwEWoyTWFMVYVQzKMNHWP5/RV4xIUGMQfXQHfHkpNU= github.com/bgentry/go-netrc v0.0.0-20140422174119-9fd32a8b3d3d h1:xDfNPAt8lFiC1UJrqV3uuy861HCTo708pDMbjHHdCas= github.com/bgentry/go-netrc v0.0.0-20140422174119-9fd32a8b3d3d/go.mod h1:6QX/PXZ00z/TKoufEY6K/a0k6AhaJrQKdFe6OfVXsa4= github.com/bgentry/speakeasy v0.1.0 h1:ByYyxL9InA1OWqxJqqp2A5pYHUrCiAL6K3J+LKSsQkY= @@ -780,10 +780,10 @@ github.com/zclconf/go-cty-yaml 
v1.0.2/go.mod h1:IP3Ylp0wQpYm50IHK8OZWKMu6sPJIUgK go.mongodb.org/atlas v0.12.0/go.mod h1:wVCnHcm/7/IfTjEB6K8K35PLG70yGz8BdkRwX0oK9/M= go.mongodb.org/atlas v0.36.0 h1:m05S3AO7zkl+bcG1qaNsEKBnAqnKx2FDwLooHpIG3j4= go.mongodb.org/atlas v0.36.0/go.mod h1:nfPldE9dSama6G2IbIzmEza02Ly7yFZjMMVscaM0uEc= -go.mongodb.org/atlas-sdk/v20231115014 v20231115014.0.0 h1:hN7x3m6THf03q/tE48up1j0U/26lJmx+s1LXB/qvHHc= -go.mongodb.org/atlas-sdk/v20231115014 v20231115014.0.0/go.mod h1:pCl46YnWOIde8lq27whXDwUseNeUvtAy3vy5ZDeTcBA= -go.mongodb.org/atlas-sdk/v20240530002 v20240530002.0.1-0.20240710142852-8a1b5dd5d8f3 h1:Y2OD2wNisDWY/am92KmGGftOZxLOXSzr9+WyACRQ1Zw= -go.mongodb.org/atlas-sdk/v20240530002 v20240530002.0.1-0.20240710142852-8a1b5dd5d8f3/go.mod h1:seuG5HpfG20/8FhJGyWi4yL7hqAcmq7pf/G0gipNOyM= +go.mongodb.org/atlas-sdk/v20240530005 v20240530005.0.0 h1:d/gbYJ+obR0EM/3DZf7+ZMi2QWISegm3mid7Or708cc= +go.mongodb.org/atlas-sdk/v20240530005 v20240530005.0.0/go.mod h1:O47ZrMMfcWb31wznNIq2PQkkdoFoK0ea2GlmRqGJC2s= +go.mongodb.org/atlas-sdk/v20240805001 v20240805001.0.0 h1:EwA2g7i4JYc0b/oE7zvvOH+POYVrHrWR7BONex3MFTA= +go.mongodb.org/atlas-sdk/v20240805001 v20240805001.0.0/go.mod h1:0aHEphVfsYbpg3CiEUcXeAU7OVoOFig1tltXdLjYiSQ= go.mongodb.org/realm v0.1.0 h1:zJiXyLaZrznQ+Pz947ziSrDKUep39DO4SfA0Fzx8M4M= go.mongodb.org/realm v0.1.0/go.mod h1:4Vj6iy+Puo1TDERcoh4XZ+pjtwbOzPpzqy3Cwe8ZmDM= go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU= diff --git a/internal/common/conversion/flatten_expand.go b/internal/common/conversion/flatten_expand.go index 229934db0e..e97b4450bc 100644 --- a/internal/common/conversion/flatten_expand.go +++ b/internal/common/conversion/flatten_expand.go @@ -3,7 +3,7 @@ package conversion import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func FlattenLinks(links []admin.Link) []map[string]string { diff --git a/internal/config/client.go 
b/internal/config/client.go index 5aee0d01e6..8b31a10ccd 100644 --- a/internal/config/client.go +++ b/internal/config/client.go @@ -9,8 +9,8 @@ import ( "strings" "time" - admin20231115 "go.mongodb.org/atlas-sdk/v20231115014/admin" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + admin20240530 "go.mongodb.org/atlas-sdk/v20240530005/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" matlasClient "go.mongodb.org/atlas/mongodbatlas" realmAuth "go.mongodb.org/realm/auth" "go.mongodb.org/realm/realm" @@ -31,7 +31,7 @@ const ( type MongoDBClient struct { Atlas *matlasClient.Client AtlasV2 *admin.APIClient - AtlasV220231115 *admin20231115.APIClient + AtlasV220240530 *admin20240530.APIClient // used in advanced_cluster and cloud_backup_schedule for avoiding breaking changes Config *Config } @@ -105,7 +105,7 @@ func (c *Config) NewClient(ctx context.Context) (any, error) { return nil, err } - sdkV220231115Client, err := c.newSDKV220231115Client(client) + sdkV220240530Client, err := c.newSDKV220240530Client(client) if err != nil { return nil, err } @@ -113,7 +113,7 @@ func (c *Config) NewClient(ctx context.Context) (any, error) { clients := &MongoDBClient{ Atlas: atlasClient, AtlasV2: sdkV2Client, - AtlasV220231115: sdkV220231115Client, + AtlasV220240530: sdkV220240530Client, Config: c, } @@ -136,15 +136,15 @@ func (c *Config) newSDKV2Client(client *http.Client) (*admin.APIClient, error) { return sdkv2, nil } -func (c *Config) newSDKV220231115Client(client *http.Client) (*admin20231115.APIClient, error) { - opts := []admin20231115.ClientModifier{ - admin20231115.UseHTTPClient(client), - admin20231115.UseUserAgent(userAgent(c)), - admin20231115.UseBaseURL(c.BaseURL), - admin20231115.UseDebug(false)} +func (c *Config) newSDKV220240530Client(client *http.Client) (*admin20240530.APIClient, error) { + opts := []admin20240530.ClientModifier{ + admin20240530.UseHTTPClient(client), + admin20240530.UseUserAgent(userAgent(c)), + admin20240530.UseBaseURL(c.BaseURL), + 
admin20240530.UseDebug(false)} // Initialize the MongoDB Versioned Atlas Client. - sdkv2, err := admin20231115.NewClient(opts...) + sdkv2, err := admin20240530.NewClient(opts...) if err != nil { return nil, err } diff --git a/internal/service/accesslistapikey/data_source_accesslist_api_keys.go b/internal/service/accesslistapikey/data_source_accesslist_api_keys.go index ccc34007c2..0ce79a22d4 100644 --- a/internal/service/accesslistapikey/data_source_accesslist_api_keys.go +++ b/internal/service/accesslistapikey/data_source_accesslist_api_keys.go @@ -10,7 +10,7 @@ import ( "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func PluralDataSource() *schema.Resource { diff --git a/internal/service/accesslistapikey/resource_access_list_api_key.go b/internal/service/accesslistapikey/resource_access_list_api_key.go index 1eaf6751f5..f099ec0e14 100644 --- a/internal/service/accesslistapikey/resource_access_list_api_key.go +++ b/internal/service/accesslistapikey/resource_access_list_api_key.go @@ -13,7 +13,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func Resource() *schema.Resource { diff --git a/internal/service/advancedcluster/data_source_advanced_cluster.go b/internal/service/advancedcluster/data_source_advanced_cluster.go index 0bc3a5555d..082fdc113b 100644 --- a/internal/service/advancedcluster/data_source_advanced_cluster.go +++ b/internal/service/advancedcluster/data_source_advanced_cluster.go @@ -5,7 +5,7 @@ import ( "fmt" "net/http" - admin20231115 "go.mongodb.org/atlas-sdk/v20231115014/admin" + 
admin20240530 "go.mongodb.org/atlas-sdk/v20240530005/admin" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" @@ -251,7 +251,7 @@ func DataSource() *schema.Resource { } func dataSourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { - connV220231115 := meta.(*config.MongoDBClient).AtlasV220231115 + connV220240530 := meta.(*config.MongoDBClient).AtlasV220240530 connV2 := meta.(*config.MongoDBClient).AtlasV2 projectID := d.Get("project_id").(string) @@ -265,13 +265,13 @@ func dataSourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag. } if !useReplicationSpecPerShard { - clusterDescOld, resp, err := connV220231115.ClustersApi.GetCluster(ctx, projectID, clusterName).Execute() + clusterDescOld, resp, err := connV220240530.ClustersApi.GetCluster(ctx, projectID, clusterName).Execute() if err != nil { if resp != nil { if resp.StatusCode == http.StatusNotFound { return nil } - if admin20231115.IsErrorCode(err, "ASYMMETRIC_SHARD_UNSUPPORTED") { + if admin20240530.IsErrorCode(err, "ASYMMETRIC_SHARD_UNSUPPORTED") { return diag.FromErr(fmt.Errorf("please add `use_replication_spec_per_shard = true` to your data source configuration to enable asymmetric shard support. Refer to documentation for more details. %s", err)) } } @@ -314,7 +314,7 @@ func dataSourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag. return diag.FromErr(fmt.Errorf(ErrorClusterAdvancedSetting, "disk_size_gb", clusterName, err)) } - zoneNameToOldReplicationSpecIDs, err := getReplicationSpecIDsFromOldAPI(ctx, projectID, clusterName, connV220231115) + zoneNameToOldReplicationSpecIDs, err := getReplicationSpecIDsFromOldAPI(ctx, projectID, clusterName, connV220240530) if err != nil { return diag.FromErr(err) } @@ -334,7 +334,7 @@ func dataSourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag. 
return diag.FromErr(fmt.Errorf(ErrorClusterAdvancedSetting, "replication_specs", clusterName, err)) } - processArgs, _, err := connV220231115.ClustersApi.GetClusterAdvancedConfiguration(ctx, projectID, clusterName).Execute() + processArgs, _, err := connV220240530.ClustersApi.GetClusterAdvancedConfiguration(ctx, projectID, clusterName).Execute() if err != nil { return diag.FromErr(fmt.Errorf(ErrorAdvancedConfRead, clusterName, err)) } diff --git a/internal/service/advancedcluster/data_source_advanced_clusters.go b/internal/service/advancedcluster/data_source_advanced_clusters.go index 6f26126ac9..edd7b3e869 100644 --- a/internal/service/advancedcluster/data_source_advanced_clusters.go +++ b/internal/service/advancedcluster/data_source_advanced_clusters.go @@ -6,8 +6,8 @@ import ( "log" "net/http" - admin20231115 "go.mongodb.org/atlas-sdk/v20231115014/admin" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + admin20240530 "go.mongodb.org/atlas-sdk/v20240530005/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id" @@ -267,7 +267,7 @@ func PluralDataSource() *schema.Resource { } func dataSourcePluralRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { - connV220231115 := meta.(*config.MongoDBClient).AtlasV220231115 + connV220240530 := meta.(*config.MongoDBClient).AtlasV220240530 connV2 := meta.(*config.MongoDBClient).AtlasV2 projectID := d.Get("project_id").(string) useReplicationSpecPerShard := false @@ -279,14 +279,14 @@ func dataSourcePluralRead(ctx context.Context, d *schema.ResourceData, meta any) } if !useReplicationSpecPerShard { - list, resp, err := connV220231115.ClustersApi.ListClusters(ctx, projectID).Execute() + list, resp, err := connV220240530.ClustersApi.ListClusters(ctx, projectID).Execute() if err != nil { if resp != nil && resp.StatusCode == http.StatusNotFound { return nil } return 
diag.FromErr(fmt.Errorf(errorListRead, projectID, err)) } - results, diags := flattenAdvancedClustersOldSDK(ctx, connV220231115, connV2, list.GetResults(), d) + results, diags := flattenAdvancedClustersOldSDK(ctx, connV220240530, connV2, list.GetResults(), d) if len(diags) > 0 { return diags } @@ -301,7 +301,7 @@ func dataSourcePluralRead(ctx context.Context, d *schema.ResourceData, meta any) } return diag.FromErr(fmt.Errorf(errorListRead, projectID, err)) } - results, diags := flattenAdvancedClusters(ctx, connV220231115, connV2, list.GetResults(), d) + results, diags := flattenAdvancedClusters(ctx, connV220240530, connV2, list.GetResults(), d) if len(diags) > 0 { return diags } @@ -312,16 +312,16 @@ func dataSourcePluralRead(ctx context.Context, d *schema.ResourceData, meta any) return nil } -func flattenAdvancedClusters(ctx context.Context, connV220231115 *admin20231115.APIClient, connV2 *admin.APIClient, clusters []admin.ClusterDescription20250101, d *schema.ResourceData) ([]map[string]any, diag.Diagnostics) { +func flattenAdvancedClusters(ctx context.Context, connV220240530 *admin20240530.APIClient, connV2 *admin.APIClient, clusters []admin.ClusterDescription20240805, d *schema.ResourceData) ([]map[string]any, diag.Diagnostics) { results := make([]map[string]any, 0, len(clusters)) for i := range clusters { cluster := &clusters[i] - processArgs, _, err := connV220231115.ClustersApi.GetClusterAdvancedConfiguration(ctx, cluster.GetGroupId(), cluster.GetName()).Execute() + processArgs, _, err := connV220240530.ClustersApi.GetClusterAdvancedConfiguration(ctx, cluster.GetGroupId(), cluster.GetName()).Execute() if err != nil { log.Printf("[WARN] Error setting `advanced_configuration` for the cluster(%s): %s", cluster.GetId(), err) } - zoneNameToOldReplicationSpecIDs, err := getReplicationSpecIDsFromOldAPI(ctx, cluster.GetGroupId(), cluster.GetName(), connV220231115) + zoneNameToOldReplicationSpecIDs, err := getReplicationSpecIDsFromOldAPI(ctx, cluster.GetGroupId(), 
cluster.GetName(), connV220240530) if err != nil { return nil, diag.FromErr(err) } @@ -359,16 +359,16 @@ func flattenAdvancedClusters(ctx context.Context, connV220231115 *admin20231115. return results, nil } -func flattenAdvancedClustersOldSDK(ctx context.Context, connV220231115 *admin20231115.APIClient, connV2 *admin.APIClient, clusters []admin20231115.AdvancedClusterDescription, d *schema.ResourceData) ([]map[string]any, diag.Diagnostics) { +func flattenAdvancedClustersOldSDK(ctx context.Context, connV20240530 *admin20240530.APIClient, connV2 *admin.APIClient, clusters []admin20240530.AdvancedClusterDescription, d *schema.ResourceData) ([]map[string]any, diag.Diagnostics) { results := make([]map[string]any, 0, len(clusters)) for i := range clusters { cluster := &clusters[i] - processArgs, _, err := connV220231115.ClustersApi.GetClusterAdvancedConfiguration(ctx, cluster.GetGroupId(), cluster.GetName()).Execute() + processArgs, _, err := connV20240530.ClustersApi.GetClusterAdvancedConfiguration(ctx, cluster.GetGroupId(), cluster.GetName()).Execute() if err != nil { log.Printf("[WARN] Error setting `advanced_configuration` for the cluster(%s): %s", cluster.GetId(), err) } - zoneNameToOldReplicationSpecIDs, err := getReplicationSpecIDsFromOldAPI(ctx, cluster.GetGroupId(), cluster.GetName(), connV220231115) + zoneNameToOldReplicationSpecIDs, err := getReplicationSpecIDsFromOldAPI(ctx, cluster.GetGroupId(), cluster.GetName(), connV20240530) if err != nil { return nil, diag.FromErr(err) } diff --git a/internal/service/advancedcluster/model_advanced_cluster.go b/internal/service/advancedcluster/model_advanced_cluster.go index 72d2dd9bd0..c02c7d9f03 100644 --- a/internal/service/advancedcluster/model_advanced_cluster.go +++ b/internal/service/advancedcluster/model_advanced_cluster.go @@ -9,8 +9,8 @@ import ( "slices" "strings" - admin20231115 "go.mongodb.org/atlas-sdk/v20231115014/admin" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + admin20240530 
"go.mongodb.org/atlas-sdk/v20240530005/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" @@ -284,7 +284,7 @@ func IsSharedTier(instanceSize string) bool { // GetDiskSizeGBFromReplicationSpec obtains the diskSizeGB value by looking into the electable spec of the first replication spec. // Independent storage size scaling is not supported (CLOUDP-201331), meaning all electable/analytics/readOnly configs in all replication specs are the same. -func GetDiskSizeGBFromReplicationSpec(cluster *admin.ClusterDescription20250101) float64 { +func GetDiskSizeGBFromReplicationSpec(cluster *admin.ClusterDescription20240805) float64 { specs := cluster.GetReplicationSpecs() if len(specs) < 1 { return 0 @@ -446,7 +446,7 @@ func expandBiConnectorConfig(d *schema.ResourceData) *admin.BiConnector { return nil } -func flattenProcessArgs(p *admin20231115.ClusterDescriptionProcessArgs) []map[string]any { +func flattenProcessArgs(p *admin20240530.ClusterDescriptionProcessArgs) []map[string]any { if p == nil { return nil } @@ -467,28 +467,28 @@ func flattenProcessArgs(p *admin20231115.ClusterDescriptionProcessArgs) []map[st } } -func FlattenAdvancedReplicationSpecsOldSDK(ctx context.Context, apiObjects []admin20231115.ReplicationSpec, zoneNameToZoneIDs map[string]string, rootDiskSizeGB float64, tfMapObjects []any, +func FlattenAdvancedReplicationSpecsOldSDK(ctx context.Context, apiObjects []admin20240530.ReplicationSpec, zoneNameToZoneIDs map[string]string, rootDiskSizeGB float64, tfMapObjects []any, d *schema.ResourceData, connV2 *admin.APIClient) ([]map[string]any, error) { // for flattening old model we need information of value defined at root disk_size_gb so we set the value in new location under hardware specs - replicationSpecFlattener := func(ctx context.Context, sdkModel *admin20231115.ReplicationSpec, tfModel map[string]any, resourceData *schema.ResourceData, 
client *admin.APIClient) (map[string]any, error) { + replicationSpecFlattener := func(ctx context.Context, sdkModel *admin20240530.ReplicationSpec, tfModel map[string]any, resourceData *schema.ResourceData, client *admin.APIClient) (map[string]any, error) { return flattenAdvancedReplicationSpecOldSDK(ctx, sdkModel, zoneNameToZoneIDs, rootDiskSizeGB, tfModel, resourceData, connV2) } - return flattenAdvancedReplicationSpecsLogic[admin20231115.ReplicationSpec](ctx, apiObjects, tfMapObjects, d, + return flattenAdvancedReplicationSpecsLogic[admin20240530.ReplicationSpec](ctx, apiObjects, tfMapObjects, d, doesAdvancedReplicationSpecMatchAPIOldSDK, replicationSpecFlattener, connV2) } -func flattenAdvancedReplicationSpecs(ctx context.Context, apiObjects []admin.ReplicationSpec20250101, zoneNameToOldReplicationSpecIDs map[string]string, tfMapObjects []any, +func flattenAdvancedReplicationSpecs(ctx context.Context, apiObjects []admin.ReplicationSpec20240805, zoneNameToOldReplicationSpecIDs map[string]string, tfMapObjects []any, d *schema.ResourceData, connV2 *admin.APIClient) ([]map[string]any, error) { // for flattening new model we need information of replication spec ids associated to old API to avoid breaking changes for users referencing replication_specs.*.id - replicationSpecFlattener := func(ctx context.Context, sdkModel *admin.ReplicationSpec20250101, tfModel map[string]any, resourceData *schema.ResourceData, client *admin.APIClient) (map[string]any, error) { + replicationSpecFlattener := func(ctx context.Context, sdkModel *admin.ReplicationSpec20240805, tfModel map[string]any, resourceData *schema.ResourceData, client *admin.APIClient) (map[string]any, error) { return flattenAdvancedReplicationSpec(ctx, sdkModel, zoneNameToOldReplicationSpecIDs, tfModel, resourceData, connV2) } - return flattenAdvancedReplicationSpecsLogic[admin.ReplicationSpec20250101](ctx, apiObjects, tfMapObjects, d, + return 
flattenAdvancedReplicationSpecsLogic[admin.ReplicationSpec20240805](ctx, apiObjects, tfMapObjects, d, doesAdvancedReplicationSpecMatchAPI, replicationSpecFlattener, connV2) } type ReplicationSpecSDKModel interface { - admin20231115.ReplicationSpec | admin.ReplicationSpec20250101 + admin20240530.ReplicationSpec | admin.ReplicationSpec20240805 } func flattenAdvancedReplicationSpecsLogic[T ReplicationSpecSDKModel]( @@ -552,15 +552,15 @@ func flattenAdvancedReplicationSpecsLogic[T ReplicationSpecSDKModel]( return tfList, nil } -func doesAdvancedReplicationSpecMatchAPIOldSDK(tfObject map[string]any, apiObject *admin20231115.ReplicationSpec) bool { +func doesAdvancedReplicationSpecMatchAPIOldSDK(tfObject map[string]any, apiObject *admin20240530.ReplicationSpec) bool { return tfObject["id"] == apiObject.GetId() || (tfObject["id"] == nil && tfObject["zone_name"] == apiObject.GetZoneName()) } -func doesAdvancedReplicationSpecMatchAPI(tfObject map[string]any, apiObject *admin.ReplicationSpec20250101) bool { +func doesAdvancedReplicationSpecMatchAPI(tfObject map[string]any, apiObject *admin.ReplicationSpec20240805) bool { return tfObject["external_id"] == apiObject.GetId() } -func flattenAdvancedReplicationSpecRegionConfigs(ctx context.Context, apiObjects []admin.CloudRegionConfig20250101, tfMapObjects []any, +func flattenAdvancedReplicationSpecRegionConfigs(ctx context.Context, apiObjects []admin.CloudRegionConfig20240805, tfMapObjects []any, d *schema.ResourceData, connV2 *admin.APIClient) (tfResult []map[string]any, containersIDs map[string]string, err error) { if len(apiObjects) == 0 { return nil, nil, nil @@ -596,7 +596,7 @@ func flattenAdvancedReplicationSpecRegionConfigs(ctx context.Context, apiObjects return tfList, containerIDs, nil } -func flattenAdvancedReplicationSpecRegionConfig(apiObject *admin.CloudRegionConfig20250101, tfMapObject map[string]any) map[string]any { +func flattenAdvancedReplicationSpecRegionConfig(apiObject *admin.CloudRegionConfig20240805, 
tfMapObject map[string]any) map[string]any { if apiObject == nil { return nil } @@ -634,11 +634,11 @@ func flattenAdvancedReplicationSpecRegionConfig(apiObject *admin.CloudRegionConf return tfMap } -func hwSpecToDedicatedHwSpec(apiObject *admin.HardwareSpec20250101) *admin.DedicatedHardwareSpec20250101 { +func hwSpecToDedicatedHwSpec(apiObject *admin.HardwareSpec20240805) *admin.DedicatedHardwareSpec20240805 { if apiObject == nil { return nil } - return &admin.DedicatedHardwareSpec20250101{ + return &admin.DedicatedHardwareSpec20240805{ NodeCount: apiObject.NodeCount, DiskIOPS: apiObject.DiskIOPS, EbsVolumeType: apiObject.EbsVolumeType, @@ -647,11 +647,11 @@ func hwSpecToDedicatedHwSpec(apiObject *admin.HardwareSpec20250101) *admin.Dedic } } -func dedicatedHwSpecToHwSpec(apiObject *admin.DedicatedHardwareSpec20250101) *admin.HardwareSpec20250101 { +func dedicatedHwSpecToHwSpec(apiObject *admin.DedicatedHardwareSpec20240805) *admin.HardwareSpec20240805 { if apiObject == nil { return nil } - return &admin.HardwareSpec20250101{ + return &admin.HardwareSpec20240805{ DiskSizeGB: apiObject.DiskSizeGB, NodeCount: apiObject.NodeCount, DiskIOPS: apiObject.DiskIOPS, @@ -660,7 +660,7 @@ func dedicatedHwSpecToHwSpec(apiObject *admin.DedicatedHardwareSpec20250101) *ad } } -func flattenAdvancedReplicationSpecRegionConfigSpec(apiObject *admin.DedicatedHardwareSpec20250101, providerName string, tfMapObjects []any) []map[string]any { +func flattenAdvancedReplicationSpecRegionConfigSpec(apiObject *admin.DedicatedHardwareSpec20240805, providerName string, tfMapObjects []any) []map[string]any { if apiObject == nil { return nil } @@ -721,7 +721,7 @@ func flattenAdvancedReplicationSpecAutoScaling(apiObject *admin.AdvancedAutoScal return tfList } -func getAdvancedClusterContainerID(containers []admin.CloudProviderContainer, cluster *admin.CloudRegionConfig20250101) string { +func getAdvancedClusterContainerID(containers []admin.CloudProviderContainer, cluster 
*admin.CloudRegionConfig20240805) string { if len(containers) == 0 { return "" } @@ -738,8 +738,8 @@ func getAdvancedClusterContainerID(containers []admin.CloudProviderContainer, cl return "" } -func expandProcessArgs(d *schema.ResourceData, p map[string]any) admin20231115.ClusterDescriptionProcessArgs { - res := admin20231115.ClusterDescriptionProcessArgs{} +func expandProcessArgs(d *schema.ResourceData, p map[string]any) admin20240530.ClusterDescriptionProcessArgs { + res := admin20240530.ClusterDescriptionProcessArgs{} if _, ok := d.GetOkExists("advanced_configuration.0.default_read_concern"); ok { res.DefaultReadConcern = conversion.StringPtr(cast.ToString(p["default_read_concern"])) @@ -816,8 +816,8 @@ func expandLabelSliceFromSetSchema(d *schema.ResourceData) ([]admin.ComponentLab return res, nil } -func expandAdvancedReplicationSpecs(tfList []any, rootDiskSizeGB *float64) *[]admin.ReplicationSpec20250101 { - var apiObjects []admin.ReplicationSpec20250101 +func expandAdvancedReplicationSpecs(tfList []any, rootDiskSizeGB *float64) *[]admin.ReplicationSpec20240805 { + var apiObjects []admin.ReplicationSpec20240805 for _, tfMapRaw := range tfList { tfMap, ok := tfMapRaw.(map[string]any) if !ok || tfMap == nil { @@ -838,8 +838,8 @@ func expandAdvancedReplicationSpecs(tfList []any, rootDiskSizeGB *float64) *[]ad return &apiObjects } -func expandAdvancedReplicationSpecsOldSDK(tfList []any) *[]admin20231115.ReplicationSpec { - var apiObjects []admin20231115.ReplicationSpec +func expandAdvancedReplicationSpecsOldSDK(tfList []any) *[]admin20240530.ReplicationSpec { + var apiObjects []admin20240530.ReplicationSpec for _, tfMapRaw := range tfList { tfMap, ok := tfMapRaw.(map[string]any) if !ok || tfMap == nil { @@ -854,8 +854,8 @@ func expandAdvancedReplicationSpecsOldSDK(tfList []any) *[]admin20231115.Replica return &apiObjects } -func expandAdvancedReplicationSpec(tfMap map[string]any, rootDiskSizeGB *float64) *admin.ReplicationSpec20250101 { - apiObject := 
&admin.ReplicationSpec20250101{ +func expandAdvancedReplicationSpec(tfMap map[string]any, rootDiskSizeGB *float64) *admin.ReplicationSpec20240805 { + apiObject := &admin.ReplicationSpec20240805{ ZoneName: conversion.StringPtr(tfMap["zone_name"].(string)), RegionConfigs: expandRegionConfigs(tfMap["region_configs"].([]any), rootDiskSizeGB), } @@ -865,8 +865,8 @@ func expandAdvancedReplicationSpec(tfMap map[string]any, rootDiskSizeGB *float64 return apiObject } -func expandAdvancedReplicationSpecOldSDK(tfMap map[string]any) *admin20231115.ReplicationSpec { - apiObject := &admin20231115.ReplicationSpec{ +func expandAdvancedReplicationSpecOldSDK(tfMap map[string]any) *admin20240530.ReplicationSpec { + apiObject := &admin20240530.ReplicationSpec{ NumShards: conversion.Pointer(tfMap["num_shards"].(int)), ZoneName: conversion.StringPtr(tfMap["zone_name"].(string)), RegionConfigs: convertRegionConfigSliceToOldSDK(expandRegionConfigs(tfMap["region_configs"].([]any), nil)), @@ -877,8 +877,8 @@ func expandAdvancedReplicationSpecOldSDK(tfMap map[string]any) *admin20231115.Re return apiObject } -func expandRegionConfigs(tfList []any, rootDiskSizeGB *float64) *[]admin.CloudRegionConfig20250101 { - var apiObjects []admin.CloudRegionConfig20250101 +func expandRegionConfigs(tfList []any, rootDiskSizeGB *float64) *[]admin.CloudRegionConfig20240805 { + var apiObjects []admin.CloudRegionConfig20240805 for _, tfMapRaw := range tfList { tfMap, ok := tfMapRaw.(map[string]any) if !ok || tfMap == nil { @@ -893,9 +893,9 @@ func expandRegionConfigs(tfList []any, rootDiskSizeGB *float64) *[]admin.CloudRe return &apiObjects } -func expandRegionConfig(tfMap map[string]any, rootDiskSizeGB *float64) *admin.CloudRegionConfig20250101 { +func expandRegionConfig(tfMap map[string]any, rootDiskSizeGB *float64) *admin.CloudRegionConfig20240805 { providerName := tfMap["provider_name"].(string) - apiObject := &admin.CloudRegionConfig20250101{ + apiObject := &admin.CloudRegionConfig20240805{ Priority: 
conversion.Pointer(cast.ToInt(tfMap["priority"])), ProviderName: conversion.StringPtr(providerName), RegionName: conversion.StringPtr(tfMap["region_name"].(string)), @@ -922,9 +922,9 @@ func expandRegionConfig(tfMap map[string]any, rootDiskSizeGB *float64) *admin.Cl return apiObject } -func expandRegionConfigSpec(tfList []any, providerName string, rootDiskSizeGB *float64) *admin.DedicatedHardwareSpec20250101 { +func expandRegionConfigSpec(tfList []any, providerName string, rootDiskSizeGB *float64) *admin.DedicatedHardwareSpec20240805 { tfMap, _ := tfList[0].(map[string]any) - apiObject := new(admin.DedicatedHardwareSpec20250101) + apiObject := new(admin.DedicatedHardwareSpec20240805) if providerName == constant.AWS || providerName == constant.AZURE { if v, ok := tfMap["disk_iops"]; ok && v.(int) > 0 { apiObject.DiskIOPS = conversion.Pointer(v.(int)) @@ -985,7 +985,7 @@ func expandRegionConfigAutoScaling(tfList []any) *admin.AdvancedAutoScalingSetti return &settings } -func flattenAdvancedReplicationSpecsDS(ctx context.Context, apiRepSpecs []admin.ReplicationSpec20250101, zoneNameToOldReplicationSpecIDs map[string]string, d *schema.ResourceData, connV2 *admin.APIClient) ([]map[string]any, error) { +func flattenAdvancedReplicationSpecsDS(ctx context.Context, apiRepSpecs []admin.ReplicationSpec20240805, zoneNameToOldReplicationSpecIDs map[string]string, d *schema.ResourceData, connV2 *admin.APIClient) ([]map[string]any, error) { if len(apiRepSpecs) == 0 { return nil, nil } @@ -1002,7 +1002,7 @@ func flattenAdvancedReplicationSpecsDS(ctx context.Context, apiRepSpecs []admin. 
return tfList, nil } -func flattenAdvancedReplicationSpec(ctx context.Context, apiObject *admin.ReplicationSpec20250101, zoneNameToOldReplicationSpecIDs map[string]string, tfMapObject map[string]any, +func flattenAdvancedReplicationSpec(ctx context.Context, apiObject *admin.ReplicationSpec20240805, zoneNameToOldReplicationSpecIDs map[string]string, tfMapObject map[string]any, d *schema.ResourceData, connV2 *admin.APIClient) (map[string]any, error) { if apiObject == nil { return nil, nil @@ -1039,7 +1039,7 @@ func flattenAdvancedReplicationSpec(ctx context.Context, apiObject *admin.Replic return tfMap, nil } -func flattenAdvancedReplicationSpecOldSDK(ctx context.Context, apiObject *admin20231115.ReplicationSpec, zoneNameToZoneIDs map[string]string, rootDiskSizeGB float64, tfMapObject map[string]any, +func flattenAdvancedReplicationSpecOldSDK(ctx context.Context, apiObject *admin20240530.ReplicationSpec, zoneNameToZoneIDs map[string]string, rootDiskSizeGB float64, tfMapObject map[string]any, d *schema.ResourceData, connV2 *admin.APIClient) (map[string]any, error) { if apiObject == nil { return nil, nil diff --git a/internal/service/advancedcluster/model_advanced_cluster_test.go b/internal/service/advancedcluster/model_advanced_cluster_test.go index f9298185d0..5f66222a7f 100644 --- a/internal/service/advancedcluster/model_advanced_cluster_test.go +++ b/internal/service/advancedcluster/model_advanced_cluster_test.go @@ -7,10 +7,10 @@ import ( "net/http" "testing" - admin20231115 "go.mongodb.org/atlas-sdk/v20231115014/admin" + admin20240530 "go.mongodb.org/atlas-sdk/v20240530005/admin" - "go.mongodb.org/atlas-sdk/v20240530002/admin" - "go.mongodb.org/atlas-sdk/v20240530002/mockadmin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" + "go.mongodb.org/atlas-sdk/v20240805001/mockadmin" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/stretchr/testify/assert" @@ -25,7 +25,7 @@ var ( dummyClusterName = "clusterName" dummyProjectID = "projectId" 
errGeneric = errors.New("generic") - advancedClusters = []admin.ClusterDescription20250101{{StateName: conversion.StringPtr("NOT IDLE")}} + advancedClusters = []admin.ClusterDescription20240805{{StateName: conversion.StringPtr("NOT IDLE")}} ) func TestFlattenReplicationSpecs(t *testing.T) { @@ -36,7 +36,7 @@ func TestFlattenReplicationSpecs(t *testing.T) { unexpectedID = "id2" expectedZoneName = "z1" unexpectedZoneName = "z2" - regionConfigAdmin = []admin20231115.CloudRegionConfig{{ + regionConfigAdmin = []admin20240530.CloudRegionConfig{{ ProviderName: &providerName, RegionName: ®ionName, }} @@ -49,8 +49,8 @@ func TestFlattenReplicationSpecs(t *testing.T) { "region_name": regionName, "zone_name": unexpectedZoneName, } - apiSpecExpected = admin20231115.ReplicationSpec{Id: &expectedID, ZoneName: &expectedZoneName, RegionConfigs: ®ionConfigAdmin} - apiSpecDifferent = admin20231115.ReplicationSpec{Id: &unexpectedID, ZoneName: &unexpectedZoneName, RegionConfigs: ®ionConfigAdmin} + apiSpecExpected = admin20240530.ReplicationSpec{Id: &expectedID, ZoneName: &expectedZoneName, RegionConfigs: ®ionConfigAdmin} + apiSpecDifferent = admin20240530.ReplicationSpec{Id: &unexpectedID, ZoneName: &unexpectedZoneName, RegionConfigs: ®ionConfigAdmin} testSchema = map[string]*schema.Schema{ "project_id": {Type: schema.TypeString}, } @@ -80,47 +80,47 @@ func TestFlattenReplicationSpecs(t *testing.T) { } ) testCases := map[string]struct { - adminSpecs []admin20231115.ReplicationSpec + adminSpecs []admin20240530.ReplicationSpec tfInputSpecs []any expectedLen int }{ "empty admin spec should return empty list": { - []admin20231115.ReplicationSpec{}, + []admin20240530.ReplicationSpec{}, []any{tfSameIDSameZone}, 0, }, "existing id, should match admin": { - []admin20231115.ReplicationSpec{apiSpecExpected}, + []admin20240530.ReplicationSpec{apiSpecExpected}, []any{tfSameIDSameZone}, 1, }, "existing different id, should change to admin spec": { - []admin20231115.ReplicationSpec{apiSpecExpected}, 
+ []admin20240530.ReplicationSpec{apiSpecExpected}, []any{tfdiffIDDiffZone}, 1, }, "missing id, should be set when zone_name matches": { - []admin20231115.ReplicationSpec{apiSpecExpected}, + []admin20240530.ReplicationSpec{apiSpecExpected}, []any{tfNoIDSameZone}, 1, }, "missing id and diff zone, should change to admin spec": { - []admin20231115.ReplicationSpec{apiSpecExpected}, + []admin20240530.ReplicationSpec{apiSpecExpected}, []any{tfNoIDDiffZone}, 1, }, "existing id, should match correct api spec using `id` and extra api spec added": { - []admin20231115.ReplicationSpec{apiSpecDifferent, apiSpecExpected}, + []admin20240530.ReplicationSpec{apiSpecDifferent, apiSpecExpected}, []any{tfSameIDSameZone}, 2, }, "missing id, should match correct api spec using `zone_name` and extra api spec added": { - []admin20231115.ReplicationSpec{apiSpecDifferent, apiSpecExpected}, + []admin20240530.ReplicationSpec{apiSpecDifferent, apiSpecExpected}, []any{tfNoIDSameZone}, 2, }, "two matching specs should be set to api specs": { - []admin20231115.ReplicationSpec{apiSpecExpected, apiSpecDifferent}, + []admin20240530.ReplicationSpec{apiSpecExpected, apiSpecDifferent}, []any{tfSameIDSameZone, tfdiffIDDiffZone}, 2, }, @@ -154,14 +154,14 @@ func TestGetDiskSizeGBFromReplicationSpec(t *testing.T) { diskSizeGBValue := 40.0 testCases := map[string]struct { - clusterDescription admin.ClusterDescription20250101 + clusterDescription admin.ClusterDescription20240805 expectedDiskSizeResult float64 }{ "cluster description with disk size gb value at electable spec": { - clusterDescription: admin.ClusterDescription20250101{ - ReplicationSpecs: &[]admin.ReplicationSpec20250101{{ - RegionConfigs: &[]admin.CloudRegionConfig20250101{{ - ElectableSpecs: &admin.HardwareSpec20250101{ + clusterDescription: admin.ClusterDescription20240805{ + ReplicationSpecs: &[]admin.ReplicationSpec20240805{{ + RegionConfigs: &[]admin.CloudRegionConfig20240805{{ + ElectableSpecs: &admin.HardwareSpec20240805{ DiskSizeGB: 
admin.PtrFloat64(diskSizeGBValue), }, }}, @@ -170,15 +170,15 @@ func TestGetDiskSizeGBFromReplicationSpec(t *testing.T) { expectedDiskSizeResult: diskSizeGBValue, }, "cluster description with no electable spec": { - clusterDescription: admin.ClusterDescription20250101{ - ReplicationSpecs: &[]admin.ReplicationSpec20250101{ - {RegionConfigs: &[]admin.CloudRegionConfig20250101{{}}}, + clusterDescription: admin.ClusterDescription20240805{ + ReplicationSpecs: &[]admin.ReplicationSpec20240805{ + {RegionConfigs: &[]admin.CloudRegionConfig20240805{{}}}, }, }, expectedDiskSizeResult: 0, }, "cluster description with no replication spec": { - clusterDescription: admin.ClusterDescription20250101{}, + clusterDescription: admin.ClusterDescription20240805{}, expectedDiskSizeResult: 0, }, } @@ -198,7 +198,7 @@ type Result struct { func TestUpgradeRefreshFunc(t *testing.T) { testCases := []struct { - mockCluster *admin.ClusterDescription20250101 + mockCluster *admin.ClusterDescription20240805 mockResponse *http.Response expectedResult Result mockError error @@ -260,11 +260,11 @@ func TestUpgradeRefreshFunc(t *testing.T) { }, { name: "Successful", - mockCluster: &admin.ClusterDescription20250101{StateName: conversion.StringPtr("stateName")}, + mockCluster: &admin.ClusterDescription20240805{StateName: conversion.StringPtr("stateName")}, mockResponse: &http.Response{StatusCode: 200}, expectedError: false, expectedResult: Result{ - response: &admin.ClusterDescription20250101{StateName: conversion.StringPtr("stateName")}, + response: &admin.ClusterDescription20240805{StateName: conversion.StringPtr("stateName")}, state: "stateName", error: nil, }, @@ -292,7 +292,7 @@ func TestUpgradeRefreshFunc(t *testing.T) { func TestResourceListAdvancedRefreshFunc(t *testing.T) { testCases := []struct { - mockCluster *admin.PaginatedClusterDescription20250101 + mockCluster *admin.PaginatedClusterDescription20240805 mockResponse *http.Response expectedResult Result mockError error @@ -354,7 +354,7 @@ 
func TestResourceListAdvancedRefreshFunc(t *testing.T) { }, { name: "Successful but with at least one cluster not idle", - mockCluster: &admin.PaginatedClusterDescription20250101{Results: &advancedClusters}, + mockCluster: &admin.PaginatedClusterDescription20240805{Results: &advancedClusters}, mockResponse: &http.Response{StatusCode: 200}, expectedError: false, expectedResult: Result{ @@ -365,11 +365,11 @@ func TestResourceListAdvancedRefreshFunc(t *testing.T) { }, { name: "Successful", - mockCluster: &admin.PaginatedClusterDescription20250101{}, + mockCluster: &admin.PaginatedClusterDescription20240805{}, mockResponse: &http.Response{StatusCode: 200}, expectedError: false, expectedResult: Result{ - response: &admin.PaginatedClusterDescription20250101{}, + response: &admin.PaginatedClusterDescription20240805{}, state: "IDLE", error: nil, }, diff --git a/internal/service/advancedcluster/model_sdk_version_conversion.go b/internal/service/advancedcluster/model_sdk_version_conversion.go index 8cafc0357e..dcc3559dec 100644 --- a/internal/service/advancedcluster/model_sdk_version_conversion.go +++ b/internal/service/advancedcluster/model_sdk_version_conversion.go @@ -1,8 +1,8 @@ package advancedcluster import ( - admin20231115 "go.mongodb.org/atlas-sdk/v20231115014/admin" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + admin20240530 "go.mongodb.org/atlas-sdk/v20240530005/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" ) @@ -11,7 +11,7 @@ import ( // - These functions must not contain any business logic. // - All will be removed once we rely on a single API version. 
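The conversion functions below all follow the same logic-free pattern the comment describes: guard against `nil`, then copy fields one-to-one between the two SDK versions' types. A minimal runnable sketch of that pattern, using simplified stand-in structs (`oldTag`/`newTag` are hypothetical placeholders for the real `admin20240530.ResourceTag` and `admin.ResourceTag` types):

```go
package main

import "fmt"

// Simplified stand-ins for the two SDK versions' ResourceTag types
// (hypothetical; the real types live in the go.mongodb.org/atlas-sdk packages).
type oldTag struct{ Key, Value string }
type newTag struct{ Key, Value string }

// convertTagsPtrToLatest mirrors the style of model_sdk_version_conversion.go:
// nil in means nil out, otherwise a field-by-field copy with no business logic.
func convertTagsPtrToLatest(tags *[]oldTag) *[]newTag {
	if tags == nil {
		return nil // preserve "unset" semantics across SDK versions
	}
	src := *tags
	out := make([]newTag, len(src))
	for i := range src {
		out[i] = newTag{Key: src[i].Key, Value: src[i].Value}
	}
	return &out
}

func main() {
	tags := []oldTag{{Key: "env", Value: "dev"}}
	got := convertTagsPtrToLatest(&tags)
	fmt.Println((*got)[0].Key, (*got)[0].Value) // env dev
	fmt.Println(convertTagsPtrToLatest(nil) == nil) // true
}
```

Keeping these helpers free of logic is what makes the planned removal ("once we rely on a single API version") a pure deletion rather than a refactor.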
-func convertTagsPtrToLatest(tags *[]admin20231115.ResourceTag) *[]admin.ResourceTag { +func convertTagsPtrToLatest(tags *[]admin20240530.ResourceTag) *[]admin.ResourceTag { if tags == nil { return nil } @@ -19,15 +19,15 @@ func convertTagsPtrToLatest(tags *[]admin20231115.ResourceTag) *[]admin.Resource return &result } -func convertTagsPtrToOldSDK(tags *[]admin.ResourceTag) *[]admin20231115.ResourceTag { +func convertTagsPtrToOldSDK(tags *[]admin.ResourceTag) *[]admin20240530.ResourceTag { if tags == nil { return nil } tagsSlice := *tags - results := make([]admin20231115.ResourceTag, len(tagsSlice)) + results := make([]admin20240530.ResourceTag, len(tagsSlice)) for i := range len(tagsSlice) { tag := tagsSlice[i] - results[i] = admin20231115.ResourceTag{ + results[i] = admin20240530.ResourceTag{ Key: tag.Key, Value: tag.Value, } @@ -35,7 +35,7 @@ func convertTagsPtrToOldSDK(tags *[]admin.ResourceTag) *[]admin20231115.Resource return &results } -func convertTagsToLatest(tags []admin20231115.ResourceTag) []admin.ResourceTag { +func convertTagsToLatest(tags []admin20240530.ResourceTag) []admin.ResourceTag { results := make([]admin.ResourceTag, len(tags)) for i := range len(tags) { tag := tags[i] @@ -47,24 +47,24 @@ func convertTagsToLatest(tags []admin20231115.ResourceTag) []admin.ResourceTag { return results } -func convertBiConnectToOldSDK(biconnector *admin.BiConnector) *admin20231115.BiConnector { +func convertBiConnectToOldSDK(biconnector *admin.BiConnector) *admin20240530.BiConnector { if biconnector == nil { return nil } - return &admin20231115.BiConnector{ + return &admin20240530.BiConnector{ Enabled: biconnector.Enabled, ReadPreference: biconnector.ReadPreference, } } -func convertBiConnectToLatest(biconnector *admin20231115.BiConnector) *admin.BiConnector { +func convertBiConnectToLatest(biconnector *admin20240530.BiConnector) *admin.BiConnector { return &admin.BiConnector{ Enabled: biconnector.Enabled, ReadPreference: biconnector.ReadPreference, } } -func 
convertConnectionStringToLatest(connStrings *admin20231115.ClusterConnectionStrings) *admin.ClusterConnectionStrings { +func convertConnectionStringToLatest(connStrings *admin20240530.ClusterConnectionStrings) *admin.ClusterConnectionStrings { return &admin.ClusterConnectionStrings{ AwsPrivateLink: connStrings.AwsPrivateLink, AwsPrivateLinkSrv: connStrings.AwsPrivateLinkSrv, @@ -76,7 +76,7 @@ func convertConnectionStringToLatest(connStrings *admin20231115.ClusterConnectio } } -func convertPrivateEndpointToLatest(privateEndpoints *[]admin20231115.ClusterDescriptionConnectionStringsPrivateEndpoint) *[]admin.ClusterDescriptionConnectionStringsPrivateEndpoint { +func convertPrivateEndpointToLatest(privateEndpoints *[]admin20240530.ClusterDescriptionConnectionStringsPrivateEndpoint) *[]admin.ClusterDescriptionConnectionStringsPrivateEndpoint { if privateEndpoints == nil { return nil } @@ -95,7 +95,7 @@ func convertPrivateEndpointToLatest(privateEndpoints *[]admin20231115.ClusterDes return &results } -func convertEndpointsToLatest(privateEndpoints *[]admin20231115.ClusterDescriptionConnectionStringsPrivateEndpointEndpoint) *[]admin.ClusterDescriptionConnectionStringsPrivateEndpointEndpoint { +func convertEndpointsToLatest(privateEndpoints *[]admin20240530.ClusterDescriptionConnectionStringsPrivateEndpointEndpoint) *[]admin.ClusterDescriptionConnectionStringsPrivateEndpointEndpoint { if privateEndpoints == nil { return nil } @@ -112,7 +112,7 @@ func convertEndpointsToLatest(privateEndpoints *[]admin20231115.ClusterDescripti return &results } -func convertLabelsToLatest(labels *[]admin20231115.ComponentLabel) *[]admin.ComponentLabel { +func convertLabelsToLatest(labels *[]admin20240530.ComponentLabel) *[]admin.ComponentLabel { labelSlice := *labels results := make([]admin.ComponentLabel, len(labelSlice)) for i := range len(labelSlice) { @@ -125,14 +125,14 @@ func convertLabelsToLatest(labels *[]admin20231115.ComponentLabel) *[]admin.Comp return &results } -func 
convertLabelSliceToOldSDK(slice []admin.ComponentLabel, err diag.Diagnostics) ([]admin20231115.ComponentLabel, diag.Diagnostics) { +func convertLabelSliceToOldSDK(slice []admin.ComponentLabel, err diag.Diagnostics) ([]admin20240530.ComponentLabel, diag.Diagnostics) { if err != nil { return nil, err } - results := make([]admin20231115.ComponentLabel, len(slice)) + results := make([]admin20240530.ComponentLabel, len(slice)) for i := range len(slice) { label := slice[i] - results[i] = admin20231115.ComponentLabel{ + results[i] = admin20240530.ComponentLabel{ Key: label.Key, Value: label.Value, } @@ -140,15 +140,15 @@ func convertLabelSliceToOldSDK(slice []admin.ComponentLabel, err diag.Diagnostic return results, nil } -func convertRegionConfigSliceToOldSDK(slice *[]admin.CloudRegionConfig20250101) *[]admin20231115.CloudRegionConfig { +func convertRegionConfigSliceToOldSDK(slice *[]admin.CloudRegionConfig20240805) *[]admin20240530.CloudRegionConfig { if slice == nil { return nil } cloudRegionSlice := *slice - results := make([]admin20231115.CloudRegionConfig, len(cloudRegionSlice)) + results := make([]admin20240530.CloudRegionConfig, len(cloudRegionSlice)) for i := range len(cloudRegionSlice) { cloudRegion := cloudRegionSlice[i] - results[i] = admin20231115.CloudRegionConfig{ + results[i] = admin20240530.CloudRegionConfig{ ElectableSpecs: convertHardwareSpecToOldSDK(cloudRegion.ElectableSpecs), Priority: cloudRegion.Priority, ProviderName: cloudRegion.ProviderName, @@ -163,11 +163,11 @@ func convertRegionConfigSliceToOldSDK(slice *[]admin.CloudRegionConfig20250101) return &results } -func convertHardwareSpecToOldSDK(hwspec *admin.HardwareSpec20250101) *admin20231115.HardwareSpec { +func convertHardwareSpecToOldSDK(hwspec *admin.HardwareSpec20240805) *admin20240530.HardwareSpec { if hwspec == nil { return nil } - return &admin20231115.HardwareSpec{ + return &admin20240530.HardwareSpec{ DiskIOPS: hwspec.DiskIOPS, EbsVolumeType: hwspec.EbsVolumeType, InstanceSize: 
hwspec.InstanceSize, @@ -175,21 +175,21 @@ func convertHardwareSpecToOldSDK(hwspec *admin.HardwareSpec20250101) *admin20231 } } -func convertAdvancedAutoScalingSettingsToOldSDK(settings *admin.AdvancedAutoScalingSettings) *admin20231115.AdvancedAutoScalingSettings { +func convertAdvancedAutoScalingSettingsToOldSDK(settings *admin.AdvancedAutoScalingSettings) *admin20240530.AdvancedAutoScalingSettings { if settings == nil { return nil } - return &admin20231115.AdvancedAutoScalingSettings{ + return &admin20240530.AdvancedAutoScalingSettings{ Compute: convertAdvancedComputeAutoScalingToOldSDK(settings.Compute), DiskGB: convertDiskGBAutoScalingToOldSDK(settings.DiskGB), } } -func convertAdvancedComputeAutoScalingToOldSDK(settings *admin.AdvancedComputeAutoScaling) *admin20231115.AdvancedComputeAutoScaling { +func convertAdvancedComputeAutoScalingToOldSDK(settings *admin.AdvancedComputeAutoScaling) *admin20240530.AdvancedComputeAutoScaling { if settings == nil { return nil } - return &admin20231115.AdvancedComputeAutoScaling{ + return &admin20240530.AdvancedComputeAutoScaling{ Enabled: settings.Enabled, MaxInstanceSize: settings.MaxInstanceSize, MinInstanceSize: settings.MinInstanceSize, @@ -197,20 +197,20 @@ func convertAdvancedComputeAutoScalingToOldSDK(settings *admin.AdvancedComputeAu } } -func convertDiskGBAutoScalingToOldSDK(settings *admin.DiskGBAutoScaling) *admin20231115.DiskGBAutoScaling { +func convertDiskGBAutoScalingToOldSDK(settings *admin.DiskGBAutoScaling) *admin20240530.DiskGBAutoScaling { if settings == nil { return nil } - return &admin20231115.DiskGBAutoScaling{ + return &admin20240530.DiskGBAutoScaling{ Enabled: settings.Enabled, } } -func convertDedicatedHardwareSpecToOldSDK(spec *admin.DedicatedHardwareSpec20250101) *admin20231115.DedicatedHardwareSpec { +func convertDedicatedHardwareSpecToOldSDK(spec *admin.DedicatedHardwareSpec20240805) *admin20240530.DedicatedHardwareSpec { if spec == nil { return nil } - return 
&admin20231115.DedicatedHardwareSpec{ + return &admin20240530.DedicatedHardwareSpec{ NodeCount: spec.NodeCount, DiskIOPS: spec.DiskIOPS, EbsVolumeType: spec.EbsVolumeType, @@ -218,11 +218,11 @@ func convertDedicatedHardwareSpecToOldSDK(spec *admin.DedicatedHardwareSpec20250 } } -func convertDedicatedHwSpecToLatest(spec *admin20231115.DedicatedHardwareSpec, rootDiskSizeGB float64) *admin.DedicatedHardwareSpec20250101 { +func convertDedicatedHwSpecToLatest(spec *admin20240530.DedicatedHardwareSpec, rootDiskSizeGB float64) *admin.DedicatedHardwareSpec20240805 { if spec == nil { return nil } - return &admin.DedicatedHardwareSpec20250101{ + return &admin.DedicatedHardwareSpec20240805{ NodeCount: spec.NodeCount, DiskIOPS: spec.DiskIOPS, EbsVolumeType: spec.EbsVolumeType, @@ -231,7 +231,7 @@ func convertDedicatedHwSpecToLatest(spec *admin20231115.DedicatedHardwareSpec, r } } -func convertAdvancedAutoScalingSettingsToLatest(settings *admin20231115.AdvancedAutoScalingSettings) *admin.AdvancedAutoScalingSettings { +func convertAdvancedAutoScalingSettingsToLatest(settings *admin20240530.AdvancedAutoScalingSettings) *admin.AdvancedAutoScalingSettings { if settings == nil { return nil } @@ -241,7 +241,7 @@ func convertAdvancedAutoScalingSettingsToLatest(settings *admin20231115.Advanced } } -func convertAdvancedComputeAutoScalingToLatest(settings *admin20231115.AdvancedComputeAutoScaling) *admin.AdvancedComputeAutoScaling { +func convertAdvancedComputeAutoScalingToLatest(settings *admin20240530.AdvancedComputeAutoScaling) *admin.AdvancedComputeAutoScaling { if settings == nil { return nil } @@ -253,7 +253,7 @@ func convertAdvancedComputeAutoScalingToLatest(settings *admin20231115.AdvancedC } } -func convertDiskGBAutoScalingToLatest(settings *admin20231115.DiskGBAutoScaling) *admin.DiskGBAutoScaling { +func convertDiskGBAutoScalingToLatest(settings *admin20240530.DiskGBAutoScaling) *admin.DiskGBAutoScaling { if settings == nil { return nil } @@ -262,11 +262,11 @@ func 
convertDiskGBAutoScalingToLatest(settings *admin20231115.DiskGBAutoScaling) } } -func convertHardwareSpecToLatest(hwspec *admin20231115.HardwareSpec, rootDiskSizeGB float64) *admin.HardwareSpec20250101 { +func convertHardwareSpecToLatest(hwspec *admin20240530.HardwareSpec, rootDiskSizeGB float64) *admin.HardwareSpec20240805 { if hwspec == nil { return nil } - return &admin.HardwareSpec20250101{ + return &admin.HardwareSpec20240805{ DiskIOPS: hwspec.DiskIOPS, EbsVolumeType: hwspec.EbsVolumeType, InstanceSize: hwspec.InstanceSize, @@ -275,15 +275,15 @@ func convertHardwareSpecToLatest(hwspec *admin20231115.HardwareSpec, rootDiskSiz } } -func convertRegionConfigSliceToLatest(slice *[]admin20231115.CloudRegionConfig, rootDiskSizeGB float64) *[]admin.CloudRegionConfig20250101 { +func convertRegionConfigSliceToLatest(slice *[]admin20240530.CloudRegionConfig, rootDiskSizeGB float64) *[]admin.CloudRegionConfig20240805 { if slice == nil { return nil } cloudRegionSlice := *slice - results := make([]admin.CloudRegionConfig20250101, len(cloudRegionSlice)) + results := make([]admin.CloudRegionConfig20240805, len(cloudRegionSlice)) for i := range len(cloudRegionSlice) { cloudRegion := cloudRegionSlice[i] - results[i] = admin.CloudRegionConfig20250101{ + results[i] = admin.CloudRegionConfig20240805{ ElectableSpecs: convertHardwareSpecToLatest(cloudRegion.ElectableSpecs, rootDiskSizeGB), Priority: cloudRegion.Priority, ProviderName: cloudRegion.ProviderName, @@ -298,8 +298,8 @@ func convertRegionConfigSliceToLatest(slice *[]admin20231115.CloudRegionConfig, return &results } -func convertClusterDescToLatestExcludeRepSpecs(oldClusterDesc *admin20231115.AdvancedClusterDescription) *admin.ClusterDescription20250101 { - return &admin.ClusterDescription20250101{ +func convertClusterDescToLatestExcludeRepSpecs(oldClusterDesc *admin20240530.AdvancedClusterDescription) *admin.ClusterDescription20240805 { + return &admin.ClusterDescription20240805{ BackupEnabled: 
oldClusterDesc.BackupEnabled, AcceptDataRisksAndForceReplicaSetReconfig: oldClusterDesc.AcceptDataRisksAndForceReplicaSetReconfig, ClusterType: oldClusterDesc.ClusterType, diff --git a/internal/service/advancedcluster/resource_advanced_cluster.go b/internal/service/advancedcluster/resource_advanced_cluster.go index a290cf9b0e..c18965b4f5 100644 --- a/internal/service/advancedcluster/resource_advanced_cluster.go +++ b/internal/service/advancedcluster/resource_advanced_cluster.go @@ -12,8 +12,8 @@ import ( "strings" "time" - admin20231115 "go.mongodb.org/atlas-sdk/v20231115014/admin" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + admin20240530 "go.mongodb.org/atlas-sdk/v20240530005/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" @@ -387,7 +387,7 @@ func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag. return diag.FromErr(fmt.Errorf("accept_data_risks_and_force_replica_set_reconfig can not be set in creation, only in update")) } } - connV220231115 := meta.(*config.MongoDBClient).AtlasV220231115 + connV220240530 := meta.(*config.MongoDBClient).AtlasV220240530 connV2 := meta.(*config.MongoDBClient).AtlasV2 projectID := d.Get("project_id").(string) @@ -396,7 +396,7 @@ func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag. rootDiskSizeGB = conversion.Pointer(v.(float64)) } - params := &admin.ClusterDescription20250101{ + params := &admin.ClusterDescription20240805{ Name: conversion.StringPtr(cast.ToString(d.Get("name"))), ClusterType: conversion.StringPtr(cast.ToString(d.Get("cluster_type"))), ReplicationSpecs: expandAdvancedReplicationSpecs(d.Get("replication_specs").([]any), rootDiskSizeGB), @@ -445,8 +445,8 @@ func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag. // Validate oplog_size_mb to show the error before the cluster is created. 
if oplogSizeMB, ok := d.GetOkExists("advanced_configuration.0.oplog_size_mb"); ok { - if cast.ToInt64(oplogSizeMB) <= 0 { - return diag.FromErr(fmt.Errorf("`advanced_configuration.oplog_size_mb` cannot be <= 0")) + if cast.ToInt64(oplogSizeMB) < 0 { + return diag.FromErr(fmt.Errorf("`advanced_configuration.oplog_size_mb` cannot be < 0")) } } @@ -465,7 +465,7 @@ func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag. if ac, ok := d.GetOk("advanced_configuration"); ok { if aclist, ok := ac.([]any); ok && len(aclist) > 0 { params := expandProcessArgs(d, aclist[0].(map[string]any)) - _, _, err := connV220231115.ClustersApi.UpdateClusterAdvancedConfiguration(ctx, projectID, cluster.GetName(), ¶ms).Execute() + _, _, err := connV220240530.ClustersApi.UpdateClusterAdvancedConfiguration(ctx, projectID, cluster.GetName(), ¶ms).Execute() if err != nil { return diag.FromErr(fmt.Errorf(errorConfigUpdate, cluster.GetName(), err)) } @@ -473,7 +473,7 @@ func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag. 
} if v := d.Get("paused").(bool); v { - request := &admin.ClusterDescription20250101{ + request := &admin.ClusterDescription20240805{ Paused: conversion.Pointer(v), } if _, _, err := connV2.ClustersApi.UpdateCluster(ctx, projectID, d.Get("name").(string), request).Execute(); err != nil { @@ -505,17 +505,17 @@ func CreateStateChangeConfig(ctx context.Context, connV2 *admin.APIClient, proje } func resourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { - connV220231115 := meta.(*config.MongoDBClient).AtlasV220231115 + connV220240530 := meta.(*config.MongoDBClient).AtlasV220240530 connV2 := meta.(*config.MongoDBClient).AtlasV2 ids := conversion.DecodeStateID(d.Id()) projectID := ids["project_id"] clusterName := ids["cluster_name"] - var clusterResp *admin.ClusterDescription20250101 + var clusterResp *admin.ClusterDescription20240805 var replicationSpecs []map[string]any if isUsingOldAPISchemaStructure(d) { - clusterOldSDK, resp, err := connV220231115.ClustersApi.GetCluster(ctx, projectID, clusterName).Execute() + clusterOldSDK, resp, err := connV220240530.ClustersApi.GetCluster(ctx, projectID, clusterName).Execute() if err != nil { if resp != nil && resp.StatusCode == http.StatusNotFound { d.SetId("") @@ -554,7 +554,7 @@ func resourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Di return diag.FromErr(fmt.Errorf(ErrorClusterAdvancedSetting, "disk_size_gb", clusterName, err)) } - zoneNameToOldReplicationSpecIDs, err := getReplicationSpecIDsFromOldAPI(ctx, projectID, clusterName, connV220231115) + zoneNameToOldReplicationSpecIDs, err := getReplicationSpecIDsFromOldAPI(ctx, projectID, clusterName, connV220240530) if err != nil { return diag.FromErr(err) } @@ -576,7 +576,7 @@ func resourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Di return diag.FromErr(fmt.Errorf(ErrorClusterAdvancedSetting, "replication_specs", clusterName, err)) } - processArgs, _, err := 
connV220231115.ClustersApi.GetClusterAdvancedConfiguration(ctx, projectID, clusterName).Execute() + processArgs, _, err := connV220240530.ClustersApi.GetClusterAdvancedConfiguration(ctx, projectID, clusterName).Execute() if err != nil { return diag.FromErr(fmt.Errorf(errorConfigRead, clusterName, err)) } @@ -590,9 +590,9 @@ func resourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Di // getReplicationSpecIDsFromOldAPI returns the id values of replication specs coming from old API. This is used to populate old replication_specs.*.id attribute avoiding breaking changes. // In the old API each replications spec has a 1:1 relation with each zone, so ids are returned in a map from zoneName to id. -func getReplicationSpecIDsFromOldAPI(ctx context.Context, projectID, clusterName string, connV220231115 *admin20231115.APIClient) (map[string]string, error) { - clusterOldAPI, _, err := connV220231115.ClustersApi.GetCluster(ctx, projectID, clusterName).Execute() - if apiError, ok := admin20231115.AsError(err); ok { +func getReplicationSpecIDsFromOldAPI(ctx context.Context, projectID, clusterName string, connV220240530 *admin20240530.APIClient) (map[string]string, error) { + clusterOldAPI, _, err := connV220240530.ClustersApi.GetCluster(ctx, projectID, clusterName).Execute() + if apiError, ok := admin20240530.AsError(err); ok { if apiError.GetErrorCode() == "ASYMMETRIC_SHARD_UNSUPPORTED" { return nil, nil // if its the case of an asymmetric shard an error is expected in old API, replication_specs.*.id attribute will not be populated } @@ -621,7 +621,7 @@ func getZoneIDsFromNewAPI(ctx context.Context, projectID, clusterName string, co return result, nil } -func setRootFields(d *schema.ResourceData, cluster *admin.ClusterDescription20250101, isResourceSchema bool) diag.Diagnostics { +func setRootFields(d *schema.ResourceData, cluster *admin.ClusterDescription20240805, isResourceSchema bool) diag.Diagnostics { clusterName := *cluster.Name if isResourceSchema { 
@@ -754,7 +754,7 @@ func resourceUpgrade(ctx context.Context, upgradeRequest *admin.LegacyAtlasTenan } func resourceUpdate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { - connV220231115 := meta.(*config.MongoDBClient).AtlasV220231115 + connV220240530 := meta.(*config.MongoDBClient).AtlasV220240530 connV2 := meta.(*config.MongoDBClient).AtlasV2 ids := conversion.DecodeStateID(d.Id()) projectID := ids["project_id"] @@ -771,9 +771,9 @@ func resourceUpdate(ctx context.Context, d *schema.ResourceData, meta any) diag. if diags != nil { return diags } - clusterChangeDetect := new(admin20231115.AdvancedClusterDescription) + clusterChangeDetect := new(admin20240530.AdvancedClusterDescription) if !reflect.DeepEqual(req, clusterChangeDetect) { - if _, _, err := connV220231115.ClustersApi.UpdateCluster(ctx, projectID, clusterName, req).Execute(); err != nil { + if _, _, err := connV220240530.ClustersApi.UpdateCluster(ctx, projectID, clusterName, req).Execute(); err != nil { return diag.FromErr(fmt.Errorf(errorUpdate, clusterName, err)) } if err := waitForUpdateToFinish(ctx, connV2, projectID, clusterName, timeout); err != nil { @@ -785,7 +785,7 @@ func resourceUpdate(ctx context.Context, d *schema.ResourceData, meta any) diag. if diags != nil { return diags } - clusterChangeDetect := new(admin.ClusterDescription20250101) + clusterChangeDetect := new(admin.ClusterDescription20240805) if !reflect.DeepEqual(req, clusterChangeDetect) { if _, _, err := connV2.ClustersApi.UpdateCluster(ctx, projectID, clusterName, req).Execute(); err != nil { return diag.FromErr(fmt.Errorf(errorUpdate, clusterName, err)) @@ -800,8 +800,8 @@ func resourceUpdate(ctx context.Context, d *schema.ResourceData, meta any) diag. 
ac := d.Get("advanced_configuration") if aclist, ok := ac.([]any); ok && len(aclist) > 0 { params := expandProcessArgs(d, aclist[0].(map[string]any)) - if !reflect.DeepEqual(params, admin20231115.ClusterDescriptionProcessArgs{}) { - _, _, err := connV220231115.ClustersApi.UpdateClusterAdvancedConfiguration(ctx, projectID, clusterName, ¶ms).Execute() + if !reflect.DeepEqual(params, admin20240530.ClusterDescriptionProcessArgs{}) { + _, _, err := connV220240530.ClustersApi.UpdateClusterAdvancedConfiguration(ctx, projectID, clusterName, ¶ms).Execute() if err != nil { return diag.FromErr(fmt.Errorf(errorConfigUpdate, clusterName, err)) } @@ -810,7 +810,7 @@ func resourceUpdate(ctx context.Context, d *schema.ResourceData, meta any) diag. } if d.Get("paused").(bool) { - clusterRequest := &admin.ClusterDescription20250101{ + clusterRequest := &admin.ClusterDescription20240805{ Paused: conversion.Pointer(true), } if _, _, err := connV2.ClustersApi.UpdateCluster(ctx, projectID, clusterName, clusterRequest).Execute(); err != nil { @@ -824,8 +824,8 @@ func resourceUpdate(ctx context.Context, d *schema.ResourceData, meta any) diag. 
return resourceRead(ctx, d, meta) } -func updateRequest(ctx context.Context, d *schema.ResourceData, projectID, clusterName string, connV2 *admin.APIClient) (*admin.ClusterDescription20250101, diag.Diagnostics) { - cluster := new(admin.ClusterDescription20250101) +func updateRequest(ctx context.Context, d *schema.ResourceData, projectID, clusterName string, connV2 *admin.APIClient) (*admin.ClusterDescription20240805, diag.Diagnostics) { + cluster := new(admin.ClusterDescription20240805) if d.HasChange("replication_specs") || d.HasChange("disk_size_gb") { var updatedDiskSizeGB *float64 @@ -915,8 +915,8 @@ func updateRequest(ctx context.Context, d *schema.ResourceData, projectID, clust return cluster, nil } -func updateRequestOldAPI(d *schema.ResourceData, clusterName string) (*admin20231115.AdvancedClusterDescription, diag.Diagnostics) { - cluster := new(admin20231115.AdvancedClusterDescription) +func updateRequestOldAPI(d *schema.ResourceData, clusterName string) (*admin20240530.AdvancedClusterDescription, diag.Diagnostics) { + cluster := new(admin20240530.AdvancedClusterDescription) if d.HasChange("replication_specs") { cluster.ReplicationSpecs = expandAdvancedReplicationSpecsOldSDK(d.Get("replication_specs").([]any)) diff --git a/internal/service/advancedcluster/resource_advanced_cluster_test.go b/internal/service/advancedcluster/resource_advanced_cluster_test.go index 9c378a1088..2ef25cc775 100644 --- a/internal/service/advancedcluster/resource_advanced_cluster_test.go +++ b/internal/service/advancedcluster/resource_advanced_cluster_test.go @@ -7,8 +7,8 @@ import ( "strconv" "testing" - admin20231115 "go.mongodb.org/atlas-sdk/v20231115014/admin" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + admin20240530 "go.mongodb.org/atlas-sdk/v20240530005/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" @@ -239,7 +239,7 @@ func 
TestAccClusterAdvancedCluster_advancedConfig(t *testing.T) { projectID = acc.ProjectIDExecution(t) clusterName = acc.RandomClusterName() clusterNameUpdated = acc.RandomClusterName() - processArgs = &admin20231115.ClusterDescriptionProcessArgs{ + processArgs = &admin20240530.ClusterDescriptionProcessArgs{ DefaultReadConcern: conversion.StringPtr("available"), DefaultWriteConcern: conversion.StringPtr("1"), FailIndexKeyTooLong: conversion.Pointer(false), @@ -251,7 +251,7 @@ func TestAccClusterAdvancedCluster_advancedConfig(t *testing.T) { SampleSizeBIConnector: conversion.Pointer(110), TransactionLifetimeLimitSeconds: conversion.Pointer[int64](300), } - processArgsUpdated = &admin20231115.ClusterDescriptionProcessArgs{ + processArgsUpdated = &admin20240530.ClusterDescriptionProcessArgs{ DefaultReadConcern: conversion.StringPtr("available"), DefaultWriteConcern: conversion.StringPtr("0"), FailIndexKeyTooLong: conversion.Pointer(false), @@ -287,7 +287,7 @@ func TestAccClusterAdvancedCluster_defaultWrite(t *testing.T) { projectID = acc.ProjectIDExecution(t) clusterName = acc.RandomClusterName() clusterNameUpdated = acc.RandomClusterName() - processArgs = &admin20231115.ClusterDescriptionProcessArgs{ + processArgs = &admin20240530.ClusterDescriptionProcessArgs{ DefaultReadConcern: conversion.StringPtr("available"), DefaultWriteConcern: conversion.StringPtr("1"), JavascriptEnabled: conversion.Pointer(true), @@ -297,7 +297,7 @@ func TestAccClusterAdvancedCluster_defaultWrite(t *testing.T) { SampleRefreshIntervalBIConnector: conversion.Pointer(310), SampleSizeBIConnector: conversion.Pointer(110), } - processArgsUpdated = &admin20231115.ClusterDescriptionProcessArgs{ + processArgsUpdated = &admin20240530.ClusterDescriptionProcessArgs{ DefaultReadConcern: conversion.StringPtr("available"), DefaultWriteConcern: conversion.StringPtr("majority"), JavascriptEnabled: conversion.Pointer(true), @@ -1145,7 +1145,7 @@ func checkSingleProviderPaused(name string, paused bool) 
resource.TestCheckFunc "paused": strconv.FormatBool(paused)}) } -func configAdvanced(projectID, clusterName string, p *admin20231115.ClusterDescriptionProcessArgs) string { +func configAdvanced(projectID, clusterName string, p *admin20240530.ClusterDescriptionProcessArgs) string { return fmt.Sprintf(` resource "mongodbatlas_advanced_cluster" "test" { project_id = %[1]q @@ -1211,7 +1211,7 @@ func checkAdvanced(name, tls string) resource.TestCheckFunc { resource.TestCheckResourceAttrSet(dataSourcePluralName, "results.0.name")) } -func configAdvancedDefaultWrite(projectID, clusterName string, p *admin20231115.ClusterDescriptionProcessArgs) string { +func configAdvancedDefaultWrite(projectID, clusterName string, p *admin20240530.ClusterDescriptionProcessArgs) string { return fmt.Sprintf(` resource "mongodbatlas_advanced_cluster" "test" { project_id = %[1]q diff --git a/internal/service/advancedcluster/resource_update_logic.go b/internal/service/advancedcluster/resource_update_logic.go index 2d6a2684c6..146fe729ad 100644 --- a/internal/service/advancedcluster/resource_update_logic.go +++ b/internal/service/advancedcluster/resource_update_logic.go @@ -6,10 +6,10 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) -func noIDsPopulatedInReplicationSpecs(replicationSpecs *[]admin.ReplicationSpec20250101) bool { +func noIDsPopulatedInReplicationSpecs(replicationSpecs *[]admin.ReplicationSpec20240805) bool { if replicationSpecs == nil || len(*replicationSpecs) == 0 { return false } @@ -21,7 +21,7 @@ func noIDsPopulatedInReplicationSpecs(replicationSpecs *[]admin.ReplicationSpec2 return true } -func populateIDValuesUsingNewAPI(ctx context.Context, projectID, clusterName string, connV2ClusterAPI admin.ClustersApi, replicationSpecs *[]admin.ReplicationSpec20250101) 
(*[]admin.ReplicationSpec20250101, diag.Diagnostics) { +func populateIDValuesUsingNewAPI(ctx context.Context, projectID, clusterName string, connV2ClusterAPI admin.ClustersApi, replicationSpecs *[]admin.ReplicationSpec20240805) (*[]admin.ReplicationSpec20240805, diag.Diagnostics) { if replicationSpecs == nil || len(*replicationSpecs) == 0 { return replicationSpecs, nil } @@ -35,7 +35,7 @@ func populateIDValuesUsingNewAPI(ctx context.Context, projectID, clusterName str return &result, nil } -func AddIDsToReplicationSpecs(replicationSpecs []admin.ReplicationSpec20250101, zoneToReplicationSpecsIDs map[string][]string) []admin.ReplicationSpec20250101 { +func AddIDsToReplicationSpecs(replicationSpecs []admin.ReplicationSpec20240805, zoneToReplicationSpecsIDs map[string][]string) []admin.ReplicationSpec20240805 { for zoneName, availableIDs := range zoneToReplicationSpecsIDs { var indexOfIDToUse = 0 for i := range replicationSpecs { @@ -52,7 +52,7 @@ func AddIDsToReplicationSpecs(replicationSpecs []admin.ReplicationSpec20250101, return replicationSpecs } -func groupIDsByZone(specs []admin.ReplicationSpec20250101) map[string][]string { +func groupIDsByZone(specs []admin.ReplicationSpec20240805) map[string][]string { result := make(map[string][]string) for _, spec := range specs { result[spec.GetZoneName()] = append(result[spec.GetZoneName()], spec.GetId()) @@ -64,7 +64,7 @@ func groupIDsByZone(specs []admin.ReplicationSpec20250101) map[string][]string { // - Existing replication specs can have the autoscaling values present in the state with default values even if not defined in the config (case when cluster is imported) // - API expects autoScaling and analyticsAutoScaling aligned cross all region configs in the PATCH request // This function is needed to avoid errors if a new replication spec is added, ensuring the PATCH request will have the auto scaling aligned with other replication specs when not present in config. 
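The ID-matching behavior exercised by `TestAddIDsToReplicationSpecs` (including the "fewer IDs than specs" and "more IDs than specs" cases) can be sketched independently of the SDK types. This is a simplified, hypothetical reimplementation, not the provider's actual `AddIDsToReplicationSpecs`: each zone's available IDs are handed out in order to that zone's specs, and specs are left without an ID once the zone runs out:

```go
package main

import "fmt"

// spec is a simplified stand-in for admin.ReplicationSpec20240805.
type spec struct {
	ZoneName string
	ID       string
}

// addIDsToSpecs assigns available IDs per zone, in order; extra specs keep an
// empty ID, and extra IDs are simply ignored (hypothetical sketch).
func addIDsToSpecs(specs []spec, zoneToIDs map[string][]string) []spec {
	for zone, ids := range zoneToIDs {
		next := 0
		for i := range specs {
			if specs[i].ZoneName == zone && next < len(ids) {
				specs[i].ID = ids[next]
				next++
			}
		}
	}
	return specs
}

func main() {
	specs := []spec{{ZoneName: "Zone 1"}, {ZoneName: "Zone 1"}, {ZoneName: "Zone 2"}}
	out := addIDsToSpecs(specs, map[string][]string{
		"Zone 1": {"z1-id1"},           // fewer IDs than specs: second Zone 1 spec keeps ""
		"Zone 2": {"z2-id1", "z2-id2"}, // more IDs than specs: extra ID is ignored
	})
	fmt.Println(out[0].ID, out[1].ID == "", out[2].ID) // z1-id1 true z2-id1
}
```

This ordering-by-zone approach is what lets the old API's 1:1 zone-to-spec IDs be reattached to the new API's specs without breaking the `replication_specs.*.id` attribute.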
-func SyncAutoScalingConfigs(replicationSpecs *[]admin.ReplicationSpec20250101) { +func SyncAutoScalingConfigs(replicationSpecs *[]admin.ReplicationSpec20240805) { if replicationSpecs == nil || len(*replicationSpecs) == 0 { return } @@ -85,7 +85,7 @@ func SyncAutoScalingConfigs(replicationSpecs *[]admin.ReplicationSpec20250101) { applyDefaultAutoScaling(replicationSpecs, defaultAutoScaling, defaultAnalyticsAutoScaling) } -func applyDefaultAutoScaling(replicationSpecs *[]admin.ReplicationSpec20250101, defaultAutoScaling, defaultAnalyticsAutoScaling *admin.AdvancedAutoScalingSettings) { +func applyDefaultAutoScaling(replicationSpecs *[]admin.ReplicationSpec20240805, defaultAutoScaling, defaultAnalyticsAutoScaling *admin.AdvancedAutoScalingSettings) { for _, spec := range *replicationSpecs { for i := range *spec.RegionConfigs { regionConfig := &(*spec.RegionConfigs)[i] diff --git a/internal/service/advancedcluster/resource_update_logic_test.go b/internal/service/advancedcluster/resource_update_logic_test.go index 0148eb3110..009e51e55d 100644 --- a/internal/service/advancedcluster/resource_update_logic_test.go +++ b/internal/service/advancedcluster/resource_update_logic_test.go @@ -5,17 +5,17 @@ import ( "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/advancedcluster" "github.com/stretchr/testify/assert" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func TestAddIDsToReplicationSpecs(t *testing.T) { testCases := map[string]struct { - ReplicationSpecs []admin.ReplicationSpec20250101 + ReplicationSpecs []admin.ReplicationSpec20240805 ZoneToReplicationSpecsIDs map[string][]string - ExpectedReplicationSpecs []admin.ReplicationSpec20250101 + ExpectedReplicationSpecs []admin.ReplicationSpec20240805 }{ "two zones with same amount of available ids and replication specs to populate": { - ReplicationSpecs: []admin.ReplicationSpec20250101{ + ReplicationSpecs: []admin.ReplicationSpec20240805{ { ZoneName: 
admin.PtrString("Zone 1"), }, @@ -33,7 +33,7 @@ func TestAddIDsToReplicationSpecs(t *testing.T) { "Zone 1": {"zone1-id1", "zone1-id2"}, "Zone 2": {"zone2-id1", "zone2-id2"}, }, - ExpectedReplicationSpecs: []admin.ReplicationSpec20250101{ + ExpectedReplicationSpecs: []admin.ReplicationSpec20240805{ { ZoneName: admin.PtrString("Zone 1"), Id: admin.PtrString("zone1-id1"), @@ -53,7 +53,7 @@ func TestAddIDsToReplicationSpecs(t *testing.T) { }, }, "less available ids than replication specs to populate": { - ReplicationSpecs: []admin.ReplicationSpec20250101{ + ReplicationSpecs: []admin.ReplicationSpec20240805{ { ZoneName: admin.PtrString("Zone 1"), }, @@ -71,7 +71,7 @@ func TestAddIDsToReplicationSpecs(t *testing.T) { "Zone 1": {"zone1-id1"}, "Zone 2": {"zone2-id1"}, }, - ExpectedReplicationSpecs: []admin.ReplicationSpec20250101{ + ExpectedReplicationSpecs: []admin.ReplicationSpec20240805{ { ZoneName: admin.PtrString("Zone 1"), Id: admin.PtrString("zone1-id1"), @@ -91,7 +91,7 @@ func TestAddIDsToReplicationSpecs(t *testing.T) { }, }, "more available ids than replication specs to populate": { - ReplicationSpecs: []admin.ReplicationSpec20250101{ + ReplicationSpecs: []admin.ReplicationSpec20240805{ { ZoneName: admin.PtrString("Zone 1"), }, @@ -103,7 +103,7 @@ func TestAddIDsToReplicationSpecs(t *testing.T) { "Zone 1": {"zone1-id1", "zone1-id2"}, "Zone 2": {"zone2-id1", "zone2-id2"}, }, - ExpectedReplicationSpecs: []admin.ReplicationSpec20250101{ + ExpectedReplicationSpecs: []admin.ReplicationSpec20240805{ { ZoneName: admin.PtrString("Zone 1"), Id: admin.PtrString("zone1-id1"), @@ -126,14 +126,14 @@ func TestAddIDsToReplicationSpecs(t *testing.T) { func TestSyncAutoScalingConfigs(t *testing.T) { testCases := map[string]struct { - ReplicationSpecs []admin.ReplicationSpec20250101 - ExpectedReplicationSpecs []admin.ReplicationSpec20250101 + ReplicationSpecs []admin.ReplicationSpec20240805 + ExpectedReplicationSpecs []admin.ReplicationSpec20240805 }{ "apply same autoscaling 
options for new replication spec which does not have autoscaling defined": { - ReplicationSpecs: []admin.ReplicationSpec20250101{ + ReplicationSpecs: []admin.ReplicationSpec20240805{ { Id: admin.PtrString("id-1"), - RegionConfigs: &[]admin.CloudRegionConfig20250101{ + RegionConfigs: &[]admin.CloudRegionConfig20240805{ { AutoScaling: &admin.AdvancedAutoScalingSettings{ Compute: &admin.AdvancedComputeAutoScaling{ @@ -152,7 +152,7 @@ func TestSyncAutoScalingConfigs(t *testing.T) { }, { Id: admin.PtrString("id-2"), - RegionConfigs: &[]admin.CloudRegionConfig20250101{ + RegionConfigs: &[]admin.CloudRegionConfig20240805{ { AutoScaling: nil, AnalyticsAutoScaling: nil, @@ -160,10 +160,10 @@ func TestSyncAutoScalingConfigs(t *testing.T) { }, }, }, - ExpectedReplicationSpecs: []admin.ReplicationSpec20250101{ + ExpectedReplicationSpecs: []admin.ReplicationSpec20240805{ { Id: admin.PtrString("id-1"), - RegionConfigs: &[]admin.CloudRegionConfig20250101{ + RegionConfigs: &[]admin.CloudRegionConfig20240805{ { AutoScaling: &admin.AdvancedAutoScalingSettings{ Compute: &admin.AdvancedComputeAutoScaling{ @@ -182,7 +182,7 @@ func TestSyncAutoScalingConfigs(t *testing.T) { }, { Id: admin.PtrString("id-2"), - RegionConfigs: &[]admin.CloudRegionConfig20250101{ + RegionConfigs: &[]admin.CloudRegionConfig20240805{ { AutoScaling: &admin.AdvancedAutoScalingSettings{ Compute: &admin.AdvancedComputeAutoScaling{ @@ -203,10 +203,10 @@ func TestSyncAutoScalingConfigs(t *testing.T) { }, // for this case the API will respond with an error and guide the user to align autoscaling options cross all nodes "when different autoscaling options are defined values will not be changed": { - ReplicationSpecs: []admin.ReplicationSpec20250101{ + ReplicationSpecs: []admin.ReplicationSpec20240805{ { Id: admin.PtrString("id-1"), - RegionConfigs: &[]admin.CloudRegionConfig20250101{ + RegionConfigs: &[]admin.CloudRegionConfig20240805{ { AutoScaling: &admin.AdvancedAutoScalingSettings{ Compute: 
&admin.AdvancedComputeAutoScaling{ @@ -225,7 +225,7 @@ func TestSyncAutoScalingConfigs(t *testing.T) { }, { Id: admin.PtrString("id-2"), - RegionConfigs: &[]admin.CloudRegionConfig20250101{ + RegionConfigs: &[]admin.CloudRegionConfig20240805{ { AutoScaling: &admin.AdvancedAutoScalingSettings{ Compute: &admin.AdvancedComputeAutoScaling{ @@ -241,10 +241,10 @@ func TestSyncAutoScalingConfigs(t *testing.T) { }, }, }, - ExpectedReplicationSpecs: []admin.ReplicationSpec20250101{ + ExpectedReplicationSpecs: []admin.ReplicationSpec20240805{ { Id: admin.PtrString("id-1"), - RegionConfigs: &[]admin.CloudRegionConfig20250101{ + RegionConfigs: &[]admin.CloudRegionConfig20240805{ { AutoScaling: &admin.AdvancedAutoScalingSettings{ Compute: &admin.AdvancedComputeAutoScaling{ @@ -263,7 +263,7 @@ func TestSyncAutoScalingConfigs(t *testing.T) { }, { Id: admin.PtrString("id-2"), - RegionConfigs: &[]admin.CloudRegionConfig20250101{ + RegionConfigs: &[]admin.CloudRegionConfig20240805{ { AutoScaling: &admin.AdvancedAutoScalingSettings{ Compute: &admin.AdvancedComputeAutoScaling{ diff --git a/internal/service/alertconfiguration/data_source_alert_configuration.go b/internal/service/alertconfiguration/data_source_alert_configuration.go index 2aebfb60c1..8909a10c18 100644 --- a/internal/service/alertconfiguration/data_source_alert_configuration.go +++ b/internal/service/alertconfiguration/data_source_alert_configuration.go @@ -14,7 +14,7 @@ import ( "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" "github.com/zclconf/go-cty/cty" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) var _ datasource.DataSource = &alertConfigurationDS{} diff --git a/internal/service/alertconfiguration/data_source_alert_configurations.go b/internal/service/alertconfiguration/data_source_alert_configurations.go index 6b178aae06..f3ab30d2bf 100644 --- 
a/internal/service/alertconfiguration/data_source_alert_configurations.go +++ b/internal/service/alertconfiguration/data_source_alert_configurations.go @@ -11,7 +11,7 @@ import ( "github.com/hashicorp/terraform-plugin-framework/types" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) const alertConfigurationsDataSourceName = "alert_configurations" diff --git a/internal/service/alertconfiguration/data_source_alert_configurations_test.go b/internal/service/alertconfiguration/data_source_alert_configurations_test.go index 61bb45de90..e13bab4246 100644 --- a/internal/service/alertconfiguration/data_source_alert_configurations_test.go +++ b/internal/service/alertconfiguration/data_source_alert_configurations_test.go @@ -12,7 +12,6 @@ import ( "github.com/hashicorp/terraform-plugin-testing/terraform" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc" - matlas "go.mongodb.org/atlas/mongodbatlas" ) func TestAccConfigDSAlertConfigurations_basic(t *testing.T) { @@ -141,11 +140,7 @@ func checkCount(resourceName string) resource.TestCheckFunc { ids := conversion.DecodeStateID(rs.Primary.ID) projectID := ids["project_id"] - alertResp, _, err := acc.Conn().AlertConfigurations.List(context.Background(), projectID, &matlas.ListOptions{ - PageNum: 0, - ItemsPerPage: 100, - IncludeCount: true, - }) + alertResp, _, err := acc.ConnV2().AlertConfigurationsApi.ListAlertConfigurations(context.Background(), projectID).Execute() if err != nil { return fmt.Errorf("the Alert Configurations List for project (%s) could not be read", projectID) @@ -157,8 +152,8 @@ func checkCount(resourceName string) resource.TestCheckFunc { return fmt.Errorf("%s results count is somehow not a number %s", 
resourceName, resultsCountAttr) } - if resultsCount != len(alertResp) { - return fmt.Errorf("%s results count (%d) did not match that of current Alert Configurations (%d)", resourceName, resultsCount, len(alertResp)) + if resultsCount != len(alertResp.GetResults()) { + return fmt.Errorf("%s results count (%d) did not match that of current Alert Configurations (%d)", resourceName, resultsCount, len(alertResp.GetResults())) } if totalCountAttr := rs.Primary.Attributes["total_count"]; totalCountAttr != "" { diff --git a/internal/service/alertconfiguration/model_alert_configuration.go b/internal/service/alertconfiguration/model_alert_configuration.go index ef42960051..2c7e6571e5 100644 --- a/internal/service/alertconfiguration/model_alert_configuration.go +++ b/internal/service/alertconfiguration/model_alert_configuration.go @@ -6,7 +6,7 @@ import ( "github.com/hashicorp/terraform-plugin-framework/types" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func NewNotificationList(list []TfNotificationModel) (*[]admin.AlertsNotificationRootForGroup, error) { @@ -200,28 +200,28 @@ func NewTFMetricThresholdConfigModel(t *admin.ServerlessMetricThreshold, currSta return []TfMetricThresholdConfigModel{ { MetricName: conversion.StringNullIfEmpty(t.MetricName), - Operator: conversion.StringNullIfEmpty(*t.Operator), - Threshold: types.Float64Value(*t.Threshold), - Units: conversion.StringNullIfEmpty(*t.Units), - Mode: conversion.StringNullIfEmpty(*t.Mode), + Operator: conversion.StringNullIfEmpty(t.GetOperator()), + Threshold: types.Float64Value(t.GetThreshold()), + Units: conversion.StringNullIfEmpty(t.GetUnits()), + Mode: conversion.StringNullIfEmpty(t.GetMode()), }, } } currState := currStateSlice[0] newState := TfMetricThresholdConfigModel{ - Threshold: types.Float64Value(*t.Threshold), + Threshold: types.Float64Value(t.GetThreshold()), } if 
!currState.MetricName.IsNull() { newState.MetricName = conversion.StringNullIfEmpty(t.MetricName) } if !currState.Operator.IsNull() { - newState.Operator = conversion.StringNullIfEmpty(*t.Operator) + newState.Operator = conversion.StringNullIfEmpty(t.GetOperator()) } if !currState.Units.IsNull() { - newState.Units = conversion.StringNullIfEmpty(*t.Units) + newState.Units = conversion.StringNullIfEmpty(t.GetUnits()) } if !currState.Mode.IsNull() { - newState.Mode = conversion.StringNullIfEmpty(*t.Mode) + newState.Mode = conversion.StringNullIfEmpty(t.GetMode()) } return []TfMetricThresholdConfigModel{newState} } @@ -234,21 +234,21 @@ func NewTFThresholdConfigModel(t *admin.GreaterThanRawThreshold, currStateSlice if len(currStateSlice) == 0 { // threshold was created elsewhere from terraform, or import statement is being called return []TfThresholdConfigModel{ { - Operator: conversion.StringNullIfEmpty(*t.Operator), - Threshold: types.Float64Value(float64(*t.Threshold)), // int in new SDK but keeping float64 for backward compatibility - Units: conversion.StringNullIfEmpty(*t.Units), + Operator: conversion.StringNullIfEmpty(t.GetOperator()), + Threshold: types.Float64Value(float64(t.GetThreshold())), // int in new SDK but keeping float64 for backward compatibility + Units: conversion.StringNullIfEmpty(t.GetUnits()), }, } } currState := currStateSlice[0] newState := TfThresholdConfigModel{} if !currState.Operator.IsNull() { - newState.Operator = conversion.StringNullIfEmpty(*t.Operator) + newState.Operator = conversion.StringNullIfEmpty(t.GetOperator()) } if !currState.Units.IsNull() { - newState.Units = conversion.StringNullIfEmpty(*t.Units) + newState.Units = conversion.StringNullIfEmpty(t.GetUnits()) } - newState.Threshold = types.Float64Value(float64(*t.Threshold)) + newState.Threshold = types.Float64Value(float64(t.GetThreshold())) return []TfThresholdConfigModel{newState} } diff --git a/internal/service/alertconfiguration/model_alert_configuration_test.go 
b/internal/service/alertconfiguration/model_alert_configuration_test.go index 7fa162fd7d..ac63c13e83 100644 --- a/internal/service/alertconfiguration/model_alert_configuration_test.go +++ b/internal/service/alertconfiguration/model_alert_configuration_test.go @@ -7,7 +7,7 @@ import ( "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/alertconfiguration" "github.com/stretchr/testify/assert" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) const ( diff --git a/internal/service/alertconfiguration/resource_alert_configuration.go b/internal/service/alertconfiguration/resource_alert_configuration.go index 29b72498f1..24080129b6 100644 --- a/internal/service/alertconfiguration/resource_alert_configuration.go +++ b/internal/service/alertconfiguration/resource_alert_configuration.go @@ -20,7 +20,7 @@ import ( "github.com/hashicorp/terraform-plugin-framework/types" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) const ( @@ -520,7 +520,7 @@ func (r *alertConfigurationRS) Update(ctx context.Context, req resource.UpdateRe } func (r *alertConfigurationRS) Delete(ctx context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) { - conn := r.Client.Atlas + connV2 := r.Client.AtlasV2 var alertConfigState TfAlertConfigurationRSModel resp.Diagnostics.Append(req.State.Get(ctx, &alertConfigState)...) 
@@ -530,7 +530,7 @@ func (r *alertConfigurationRS) Delete(ctx context.Context, req resource.DeleteRe ids := conversion.DecodeStateID(alertConfigState.ID.ValueString()) - _, err := conn.AlertConfigurations.Delete(ctx, ids[EncodedIDKeyProjectID], ids[EncodedIDKeyAlertID]) + _, err := connV2.AlertConfigurationsApi.DeleteAlertConfiguration(ctx, ids[EncodedIDKeyProjectID], ids[EncodedIDKeyAlertID]).Execute() if err != nil { resp.Diagnostics.AddError(errorReadAlertConf, err.Error()) } diff --git a/internal/service/apikey/data_source_api_keys.go b/internal/service/apikey/data_source_api_keys.go index 85ef1db062..19744e8f27 100644 --- a/internal/service/apikey/data_source_api_keys.go +++ b/internal/service/apikey/data_source_api_keys.go @@ -9,7 +9,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func PluralDataSource() *schema.Resource { diff --git a/internal/service/apikey/resource_api_key.go b/internal/service/apikey/resource_api_key.go index 2bbd1449c9..f6731c64d4 100644 --- a/internal/service/apikey/resource_api_key.go +++ b/internal/service/apikey/resource_api_key.go @@ -12,7 +12,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func Resource() *schema.Resource { diff --git a/internal/service/atlasuser/data_source_atlas_user.go b/internal/service/atlasuser/data_source_atlas_user.go index 5bae40ac96..7a662e55e4 100644 --- a/internal/service/atlasuser/data_source_atlas_user.go +++ b/internal/service/atlasuser/data_source_atlas_user.go 
@@ -12,7 +12,7 @@ import ( "github.com/hashicorp/terraform-plugin-framework/types" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) const ( diff --git a/internal/service/atlasuser/data_source_atlas_user_test.go b/internal/service/atlasuser/data_source_atlas_user_test.go index 42d0a594e4..56ace2bf2d 100644 --- a/internal/service/atlasuser/data_source_atlas_user_test.go +++ b/internal/service/atlasuser/data_source_atlas_user_test.go @@ -10,7 +10,7 @@ import ( "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func TestAccConfigDSAtlasUser_ByUserID(t *testing.T) { diff --git a/internal/service/atlasuser/data_source_atlas_users.go b/internal/service/atlasuser/data_source_atlas_users.go index 70f6973475..e036e942b3 100644 --- a/internal/service/atlasuser/data_source_atlas_users.go +++ b/internal/service/atlasuser/data_source_atlas_users.go @@ -13,7 +13,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) const ( diff --git a/internal/service/atlasuser/data_source_atlas_users_test.go b/internal/service/atlasuser/data_source_atlas_users_test.go index 29f926b319..af0baf55c4 100644 --- a/internal/service/atlasuser/data_source_atlas_users_test.go +++ b/internal/service/atlasuser/data_source_atlas_users_test.go @@ -11,7 +11,7 @@ import ( 
"github.com/hashicorp/terraform-plugin-testing/terraform" "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/atlasuser" "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func TestAccConfigDSAtlasUsers_ByOrgID(t *testing.T) { diff --git a/internal/service/auditing/resource_auditing.go b/internal/service/auditing/resource_auditing.go index bd4024eee5..91cffaf374 100644 --- a/internal/service/auditing/resource_auditing.go +++ b/internal/service/auditing/resource_auditing.go @@ -9,7 +9,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) const ( diff --git a/internal/service/backupcompliancepolicy/resource_backup_compliance_policy.go b/internal/service/backupcompliancepolicy/resource_backup_compliance_policy.go index b542c87056..39448e2024 100644 --- a/internal/service/backupcompliancepolicy/resource_backup_compliance_policy.go +++ b/internal/service/backupcompliancepolicy/resource_backup_compliance_policy.go @@ -8,7 +8,7 @@ import ( "net/http" "strings" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" @@ -263,85 +263,8 @@ func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag. 
connV2 := meta.(*config.MongoDBClient).AtlasV2 projectID := d.Get("project_id").(string) - dataProtectionSettings := &admin.DataProtectionSettings20231001{ - ProjectId: conversion.StringPtr(projectID), - AuthorizedEmail: d.Get("authorized_email").(string), - AuthorizedUserFirstName: d.Get("authorized_user_first_name").(string), - AuthorizedUserLastName: d.Get("authorized_user_last_name").(string), - CopyProtectionEnabled: conversion.Pointer(d.Get("copy_protection_enabled").(bool)), - EncryptionAtRestEnabled: conversion.Pointer(d.Get("encryption_at_rest_enabled").(bool)), - PitEnabled: conversion.Pointer(d.Get("pit_enabled").(bool)), - RestoreWindowDays: conversion.Pointer(cast.ToInt(d.Get("restore_window_days"))), - OnDemandPolicyItem: expandDemandBackupPolicyItem(d), - } + err := updateOrCreateDataProtectionSetting(ctx, d, connV2, projectID) - var backupPoliciesItem []admin.BackupComplianceScheduledPolicyItem - if v, ok := d.GetOk("policy_item_hourly"); ok { - item := v.([]any) - itemObj := item[0].(map[string]any) - backupPoliciesItem = append(backupPoliciesItem, admin.BackupComplianceScheduledPolicyItem{ - FrequencyType: cloudbackupschedule.Hourly, - RetentionUnit: itemObj["retention_unit"].(string), - FrequencyInterval: itemObj["frequency_interval"].(int), - RetentionValue: itemObj["retention_value"].(int), - }) - } - if v, ok := d.GetOk("policy_item_daily"); ok { - item := v.([]any) - itemObj := item[0].(map[string]any) - backupPoliciesItem = append(backupPoliciesItem, admin.BackupComplianceScheduledPolicyItem{ - FrequencyType: cloudbackupschedule.Daily, - RetentionUnit: itemObj["retention_unit"].(string), - FrequencyInterval: itemObj["frequency_interval"].(int), - RetentionValue: itemObj["retention_value"].(int), - }) - } - if v, ok := d.GetOk("policy_item_weekly"); ok { - items := v.([]any) - for _, s := range items { - itemObj := s.(map[string]any) - backupPoliciesItem = append(backupPoliciesItem, admin.BackupComplianceScheduledPolicyItem{ - FrequencyType: 
cloudbackupschedule.Weekly, - RetentionUnit: itemObj["retention_unit"].(string), - FrequencyInterval: itemObj["frequency_interval"].(int), - RetentionValue: itemObj["retention_value"].(int), - }) - } - } - if v, ok := d.GetOk("policy_item_monthly"); ok { - items := v.([]any) - for _, s := range items { - itemObj := s.(map[string]any) - backupPoliciesItem = append(backupPoliciesItem, admin.BackupComplianceScheduledPolicyItem{ - FrequencyType: cloudbackupschedule.Monthly, - RetentionUnit: itemObj["retention_unit"].(string), - FrequencyInterval: itemObj["frequency_interval"].(int), - RetentionValue: itemObj["retention_value"].(int), - }) - } - } - if v, ok := d.GetOk("policy_item_yearly"); ok { - items := v.([]any) - for _, s := range items { - itemObj := s.(map[string]any) - backupPoliciesItem = append(backupPoliciesItem, admin.BackupComplianceScheduledPolicyItem{ - FrequencyType: cloudbackupschedule.Yearly, - RetentionUnit: itemObj["retention_unit"].(string), - FrequencyInterval: itemObj["frequency_interval"].(int), - RetentionValue: itemObj["retention_value"].(int), - }) - } - } - if len(backupPoliciesItem) > 0 { - dataProtectionSettings.ScheduledPolicyItems = &backupPoliciesItem - } - - params := admin.UpdateDataProtectionSettingsApiParams{ - GroupId: projectID, - DataProtectionSettings20231001: dataProtectionSettings, - OverwriteBackupPolicies: conversion.Pointer(false), - } - _, _, err := connV2.CloudBackupsApi.UpdateDataProtectionSettingsWithParams(ctx, &params).Execute() if err != nil { return diag.FromErr(fmt.Errorf(errorBackupPolicyUpdate, projectID, err)) } @@ -444,97 +367,8 @@ func resourceUpdate(ctx context.Context, d *schema.ResourceData, meta any) diag.
ids := conversion.DecodeStateID(d.Id()) projectID := ids["project_id"] - dataProtectionSettings := &admin.DataProtectionSettings20231001{ - ProjectId: conversion.StringPtr(projectID), - AuthorizedEmail: d.Get("authorized_email").(string), - AuthorizedUserFirstName: d.Get("authorized_user_first_name").(string), - AuthorizedUserLastName: d.Get("authorized_user_last_name").(string), - OnDemandPolicyItem: expandDemandBackupPolicyItem(d), - } - - if d.HasChange("copy_protection_enabled") { - dataProtectionSettings.CopyProtectionEnabled = conversion.Pointer(d.Get("copy_protection_enabled").(bool)) - } - - if d.HasChange("encryption_at_rest_enabled") { - dataProtectionSettings.EncryptionAtRestEnabled = conversion.Pointer(d.Get("encryption_at_rest_enabled").(bool)) - } - - if d.HasChange("pit_enabled") { - dataProtectionSettings.PitEnabled = conversion.Pointer(d.Get("pit_enabled").(bool)) - } - - if d.HasChange("restore_window_days") { - dataProtectionSettings.RestoreWindowDays = conversion.Pointer(cast.ToInt(d.Get("restore_window_days"))) - } + err := updateOrCreateDataProtectionSetting(ctx, d, connV2, projectID) - var backupPoliciesItem []admin.BackupComplianceScheduledPolicyItem - if v, ok := d.GetOk("policy_item_hourly"); ok { - item := v.([]any) - itemObj := item[0].(map[string]any) - backupPoliciesItem = append(backupPoliciesItem, admin.BackupComplianceScheduledPolicyItem{ - FrequencyType: cloudbackupschedule.Hourly, - RetentionUnit: itemObj["retention_unit"].(string), - FrequencyInterval: itemObj["frequency_interval"].(int), - RetentionValue: itemObj["retention_value"].(int), - }) - } - if v, ok := d.GetOk("policy_item_daily"); ok { - item := v.([]any) - itemObj := item[0].(map[string]any) - backupPoliciesItem = append(backupPoliciesItem, admin.BackupComplianceScheduledPolicyItem{ - FrequencyType: cloudbackupschedule.Daily, - RetentionUnit: itemObj["retention_unit"].(string), - FrequencyInterval: itemObj["frequency_interval"].(int), - RetentionValue: 
itemObj["retention_value"].(int), - }) - } - if v, ok := d.GetOk("policy_item_weekly"); ok { - items := v.([]any) - for _, s := range items { - itemObj := s.(map[string]any) - backupPoliciesItem = append(backupPoliciesItem, admin.BackupComplianceScheduledPolicyItem{ - FrequencyType: cloudbackupschedule.Weekly, - RetentionUnit: itemObj["retention_unit"].(string), - FrequencyInterval: itemObj["frequency_interval"].(int), - RetentionValue: itemObj["retention_value"].(int), - }) - } - } - if v, ok := d.GetOk("policy_item_monthly"); ok { - items := v.([]any) - for _, s := range items { - itemObj := s.(map[string]any) - backupPoliciesItem = append(backupPoliciesItem, admin.BackupComplianceScheduledPolicyItem{ - FrequencyType: cloudbackupschedule.Monthly, - RetentionUnit: itemObj["retention_unit"].(string), - FrequencyInterval: itemObj["frequency_interval"].(int), - RetentionValue: itemObj["retention_value"].(int), - }) - } - } - if v, ok := d.GetOk("policy_item_yearly"); ok { - items := v.([]any) - for _, s := range items { - itemObj := s.(map[string]any) - backupPoliciesItem = append(backupPoliciesItem, admin.BackupComplianceScheduledPolicyItem{ - FrequencyType: cloudbackupschedule.Yearly, - RetentionUnit: itemObj["retention_unit"].(string), - FrequencyInterval: itemObj["frequency_interval"].(int), - RetentionValue: itemObj["retention_value"].(int), - }) - } - } - if len(backupPoliciesItem) > 0 { - dataProtectionSettings.ScheduledPolicyItems = &backupPoliciesItem - } - - params := admin.UpdateDataProtectionSettingsApiParams{ - GroupId: projectID, - DataProtectionSettings20231001: dataProtectionSettings, - OverwriteBackupPolicies: conversion.Pointer(false), - } - _, _, err := connV2.CloudBackupsApi.UpdateDataProtectionSettingsWithParams(ctx, &params).Execute() if err != nil { return diag.FromErr(fmt.Errorf(errorBackupPolicyUpdate, projectID, err)) } @@ -622,3 +456,86 @@ func flattenBackupPolicyItems(items []admin.BackupComplianceScheduledPolicyItem, } return policyItems } +
+func updateOrCreateDataProtectionSetting(ctx context.Context, d *schema.ResourceData, connV2 *admin.APIClient, projectID string) error { + dataProtectionSettings := &admin.DataProtectionSettings20231001{ + ProjectId: conversion.StringPtr(projectID), + AuthorizedEmail: d.Get("authorized_email").(string), + AuthorizedUserFirstName: d.Get("authorized_user_first_name").(string), + AuthorizedUserLastName: d.Get("authorized_user_last_name").(string), + CopyProtectionEnabled: conversion.Pointer(d.Get("copy_protection_enabled").(bool)), + EncryptionAtRestEnabled: conversion.Pointer(d.Get("encryption_at_rest_enabled").(bool)), + PitEnabled: conversion.Pointer(d.Get("pit_enabled").(bool)), + RestoreWindowDays: conversion.Pointer(cast.ToInt(d.Get("restore_window_days"))), + OnDemandPolicyItem: expandDemandBackupPolicyItem(d), + } + + var backupPoliciesItem []admin.BackupComplianceScheduledPolicyItem + if v, ok := d.GetOk("policy_item_hourly"); ok { + item := v.([]any) + itemObj := item[0].(map[string]any) + backupPoliciesItem = append(backupPoliciesItem, admin.BackupComplianceScheduledPolicyItem{ + FrequencyType: cloudbackupschedule.Hourly, + RetentionUnit: itemObj["retention_unit"].(string), + FrequencyInterval: itemObj["frequency_interval"].(int), + RetentionValue: itemObj["retention_value"].(int), + }) + } + if v, ok := d.GetOk("policy_item_daily"); ok { + item := v.([]any) + itemObj := item[0].(map[string]any) + backupPoliciesItem = append(backupPoliciesItem, admin.BackupComplianceScheduledPolicyItem{ + FrequencyType: cloudbackupschedule.Daily, + RetentionUnit: itemObj["retention_unit"].(string), + FrequencyInterval: itemObj["frequency_interval"].(int), + RetentionValue: itemObj["retention_value"].(int), + }) + } + if v, ok := d.GetOk("policy_item_weekly"); ok { + items := v.([]any) + for _, s := range items { + itemObj := s.(map[string]any) + backupPoliciesItem = append(backupPoliciesItem, admin.BackupComplianceScheduledPolicyItem{ + FrequencyType: 
cloudbackupschedule.Weekly, + RetentionUnit: itemObj["retention_unit"].(string), + FrequencyInterval: itemObj["frequency_interval"].(int), + RetentionValue: itemObj["retention_value"].(int), + }) + } + } + if v, ok := d.GetOk("policy_item_monthly"); ok { + items := v.([]any) + for _, s := range items { + itemObj := s.(map[string]any) + backupPoliciesItem = append(backupPoliciesItem, admin.BackupComplianceScheduledPolicyItem{ + FrequencyType: cloudbackupschedule.Monthly, + RetentionUnit: itemObj["retention_unit"].(string), + FrequencyInterval: itemObj["frequency_interval"].(int), + RetentionValue: itemObj["retention_value"].(int), + }) + } + } + if v, ok := d.GetOk("policy_item_yearly"); ok { + items := v.([]any) + for _, s := range items { + itemObj := s.(map[string]any) + backupPoliciesItem = append(backupPoliciesItem, admin.BackupComplianceScheduledPolicyItem{ + FrequencyType: cloudbackupschedule.Yearly, + RetentionUnit: itemObj["retention_unit"].(string), + FrequencyInterval: itemObj["frequency_interval"].(int), + RetentionValue: itemObj["retention_value"].(int), + }) + } + } + if len(backupPoliciesItem) > 0 { + dataProtectionSettings.ScheduledPolicyItems = &backupPoliciesItem + } + + params := admin.UpdateDataProtectionSettingsApiParams{ + GroupId: projectID, + DataProtectionSettings20231001: dataProtectionSettings, + OverwriteBackupPolicies: conversion.Pointer(false), + } + _, _, err := connV2.CloudBackupsApi.UpdateDataProtectionSettingsWithParams(ctx, &params).Execute() + return err +} diff --git a/internal/service/backupcompliancepolicy/resource_backup_compliance_policy_test.go b/internal/service/backupcompliancepolicy/resource_backup_compliance_policy_test.go index cc709fcdb8..5b1391c8f5 100644 --- a/internal/service/backupcompliancepolicy/resource_backup_compliance_policy_test.go +++ b/internal/service/backupcompliancepolicy/resource_backup_compliance_policy_test.go @@ -8,6 +8,7 @@ import ( "testing"
"github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-plugin-testing/plancheck" "github.com/hashicorp/terraform-plugin-testing/terraform" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc" @@ -69,9 +70,10 @@ func TestAccBackupCompliancePolicy_overwriteBackupPolicies(t *testing.T) { ProjectID: projectIDTerraform, MongoDBMajorVersion: "6.0", CloudBackup: true, + DiskSizeGb: 12, RetainBackupsEnabled: true, ReplicationSpecs: []acc.ReplicationSpecRequest{ - {EbsVolumeType: "STANDARD", AutoScalingDiskGbEnabled: true, NodeCount: 3, DiskSizeGb: 12}, + {EbsVolumeType: "STANDARD", AutoScalingDiskGbEnabled: true, NodeCount: 3}, }, } clusterInfo = acc.GetClusterInfo(t, &req) @@ -115,6 +117,52 @@ func TestAccBackupCompliancePolicy_withoutRestoreWindowDays(t *testing.T) { }) } +func TestAccBackupCompliancePolicy_UpdateSetsAllAttributes(t *testing.T) { + var ( + orgID = os.Getenv("MONGODB_ATLAS_ORG_ID") + projectName = acc.RandomProjectName() // No ProjectIDExecution to avoid conflicts with backup compliance policy + projectOwnerID = os.Getenv("MONGODB_ATLAS_PROJECT_OWNER_ID") + ) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acc.PreCheckBasic(t) }, + ProtoV6ProviderFactories: acc.TestAccProviderV6Factories, + Steps: []resource.TestStep{ + { + Config: configBasicWithOptionalAttributesWithNonDefaultValues(projectName, orgID, projectOwnerID, "7"), + Check: resource.ComposeAggregateTestCheckFunc( + checkExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "authorized_user_first_name", "First"), + resource.TestCheckResourceAttr(resourceName, "authorized_user_last_name", "Last"), + resource.TestCheckResourceAttr(resourceName, "pit_enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "encryption_at_rest_enabled", "false"), + resource.TestCheckResourceAttr(resourceName, 
"copy_protection_enabled", "true"), + ), + }, + { + Config: configBasicWithOptionalAttributesWithNonDefaultValues(projectName, orgID, projectOwnerID, "8"), + Check: resource.ComposeAggregateTestCheckFunc( + checkExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "authorized_user_first_name", "First"), + resource.TestCheckResourceAttr(resourceName, "authorized_user_last_name", "Last"), + resource.TestCheckResourceAttr(resourceName, "pit_enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "encryption_at_rest_enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "copy_protection_enabled", "true"), + ), + }, + { + Config: configBasicWithOptionalAttributesWithNonDefaultValues(projectName, orgID, projectOwnerID, "8"), + ConfigPlanChecks: resource.ConfigPlanChecks{ + PreApply: []plancheck.PlanCheck{ + acc.DebugPlan(), + plancheck.ExpectEmptyPlan(), + }, + }, + }, + }, + }) +} + func basicTestCase(tb testing.TB, useYearly bool) *resource.TestCase { tb.Helper() @@ -418,3 +466,48 @@ func basicChecks() []resource.TestCheckFunc { checks = append(checks, checkExists(resourceName), checkExists(dataSourceName)) return checks } + +func configBasicWithOptionalAttributesWithNonDefaultValues(projectName, orgID, projectOwnerID, restreWindowDays string) string { + return acc.ConfigProjectWithSettings(projectName, orgID, projectOwnerID, false) + + fmt.Sprintf(`resource "mongodbatlas_backup_compliance_policy" "backup_policy_res" { + project_id = mongodbatlas_project.test.id + authorized_email = "test@example.com" + authorized_user_first_name = "First" + authorized_user_last_name = "Last" + copy_protection_enabled = true + pit_enabled = false + encryption_at_rest_enabled = false + + restore_window_days = %[1]s + + on_demand_policy_item { + frequency_interval = 0 + retention_unit = "days" + retention_value = 3 + } + + policy_item_hourly { + frequency_interval = 6 + retention_unit = "days" + retention_value = 7 + } + + policy_item_daily { + 
frequency_interval = 0 + retention_unit = "days" + retention_value = 7 + } + + policy_item_weekly { + frequency_interval = 0 + retention_unit = "weeks" + retention_value = 4 + } + + policy_item_monthly { + frequency_interval = 0 + retention_unit = "months" + retention_value = 12 + } + }`, restreWindowDays) +} diff --git a/internal/service/cloudbackupschedule/data_source_cloud_backup_schedule.go b/internal/service/cloudbackupschedule/data_source_cloud_backup_schedule.go index 9740219100..25510ef617 100644 --- a/internal/service/cloudbackupschedule/data_source_cloud_backup_schedule.go +++ b/internal/service/cloudbackupschedule/data_source_cloud_backup_schedule.go @@ -7,8 +7,8 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - admin20231115 "go.mongodb.org/atlas-sdk/v20231115014/admin" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + admin20240530 "go.mongodb.org/atlas-sdk/v20240530005/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) const ( @@ -260,15 +260,15 @@ func DataSource() *schema.Resource { } func dataSourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { - connV220231115 := meta.(*config.MongoDBClient).AtlasV220231115 + connV220240530 := meta.(*config.MongoDBClient).AtlasV220240530 connV2 := meta.(*config.MongoDBClient).AtlasV2 projectID := d.Get("project_id").(string) clusterName := d.Get("cluster_name").(string) useZoneIDForCopySettings := false - var backupSchedule *admin.DiskBackupSnapshotSchedule20250101 - var backupScheduleOldSDK *admin20231115.DiskBackupSnapshotSchedule + var backupSchedule *admin.DiskBackupSnapshotSchedule20240805 + var backupScheduleOldSDK *admin20240530.DiskBackupSnapshotSchedule var copySettings []map[string]any var err error @@ -277,9 +277,9 @@ func dataSourceRead(ctx context.Context, d *schema.ResourceData, meta any) 
diag. } if !useZoneIDForCopySettings { - backupScheduleOldSDK, _, err = connV220231115.CloudBackupsApi.GetBackupSchedule(ctx, projectID, clusterName).Execute() + backupScheduleOldSDK, _, err = connV220240530.CloudBackupsApi.GetBackupSchedule(ctx, projectID, clusterName).Execute() if err != nil { - if apiError, ok := admin20231115.AsError(err); ok && apiError.GetErrorCode() == AsymmetricShardsUnsupportedAPIError { + if apiError, ok := admin20240530.AsError(err); ok && apiError.GetErrorCode() == AsymmetricShardsUnsupportedAPIError { return diag.Errorf("%s : %s : %s", errorSnapshotBackupScheduleRead, ErrorOperationNotPermitted, AsymmetricShardsUnsupportedActionDS) } return diag.Errorf(errorSnapshotBackupScheduleRead, clusterName, err) diff --git a/internal/service/cloudbackupschedule/model_cloud_backup_schedule.go b/internal/service/cloudbackupschedule/model_cloud_backup_schedule.go index ed649f2e84..bd8747afee 100644 --- a/internal/service/cloudbackupschedule/model_cloud_backup_schedule.go +++ b/internal/service/cloudbackupschedule/model_cloud_backup_schedule.go @@ -1,8 +1,8 @@ package cloudbackupschedule import ( - admin20231115 "go.mongodb.org/atlas-sdk/v20231115014/admin" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + admin20240530 "go.mongodb.org/atlas-sdk/v20240530005/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func FlattenPolicyItem(items []admin.DiskBackupApiPolicyItem, frequencyType string) []map[string]any { @@ -21,9 +21,9 @@ func FlattenPolicyItem(items []admin.DiskBackupApiPolicyItem, frequencyType stri return policyItems } -func FlattenExport(roles *admin.DiskBackupSnapshotSchedule20250101) []map[string]any { +func FlattenExport(roles *admin.DiskBackupSnapshotSchedule20240805) []map[string]any { exportList := make([]map[string]any, 0) - emptyStruct := admin.DiskBackupSnapshotSchedule20250101{} + emptyStruct := admin.DiskBackupSnapshotSchedule20240805{} if emptyStruct.GetExport() != roles.GetExport() { exportList = append(exportList, 
map[string]any{ "frequency_type": roles.Export.GetFrequencyType(), @@ -33,7 +33,7 @@ func FlattenExport(roles *admin.DiskBackupSnapshotSchedule20250101) []map[string return exportList } -func flattenCopySettingsOldSDK(copySettingList []admin20231115.DiskBackupCopySetting) []map[string]any { +func flattenCopySettingsOldSDK(copySettingList []admin20240530.DiskBackupCopySetting) []map[string]any { copySettings := make([]map[string]any, 0) for _, v := range copySettingList { copySettings = append(copySettings, map[string]any{ @@ -47,7 +47,7 @@ func flattenCopySettingsOldSDK(copySettingList []admin20231115.DiskBackupCopySet return copySettings } -func FlattenCopySettings(copySettingList []admin.DiskBackupCopySetting20250101) []map[string]any { +func FlattenCopySettings(copySettingList []admin.DiskBackupCopySetting20240805) []map[string]any { copySettings := make([]map[string]any, 0) for _, v := range copySettingList { copySettings = append(copySettings, map[string]any{ diff --git a/internal/service/cloudbackupschedule/model_cloud_backup_schedule_test.go b/internal/service/cloudbackupschedule/model_cloud_backup_schedule_test.go index 360b7362a0..50304e7d5b 100644 --- a/internal/service/cloudbackupschedule/model_cloud_backup_schedule_test.go +++ b/internal/service/cloudbackupschedule/model_cloud_backup_schedule_test.go @@ -6,7 +6,7 @@ import ( "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/cloudbackupschedule" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func TestFlattenPolicyItem(t *testing.T) { @@ -59,12 +59,12 @@ func TestFlattenPolicyItem(t *testing.T) { func TestFlattenExport(t *testing.T) { testCases := []struct { name string - roles *admin.DiskBackupSnapshotSchedule20250101 + roles *admin.DiskBackupSnapshotSchedule20240805 expected []map[string]any }{ { name: "Non-empty Export", - roles: 
&admin.DiskBackupSnapshotSchedule20250101{ + roles: &admin.DiskBackupSnapshotSchedule20240805{ Export: &admin.AutoExportPolicy{ FrequencyType: conversion.StringPtr("daily"), ExportBucketId: conversion.StringPtr("bucket123"), @@ -89,12 +89,12 @@ func TestFlattenExport(t *testing.T) { func TestFlattenCopySettings(t *testing.T) { testCases := []struct { name string - settings []admin.DiskBackupCopySetting20250101 + settings []admin.DiskBackupCopySetting20240805 expected []map[string]any }{ { name: "Multiple Copy Settings", - settings: []admin.DiskBackupCopySetting20250101{ + settings: []admin.DiskBackupCopySetting20240805{ { CloudProvider: conversion.StringPtr("AWS"), Frequencies: &[]string{"daily", "weekly"}, @@ -117,7 +117,7 @@ func TestFlattenCopySettings(t *testing.T) { }, { name: "Empty Copy Settings List", - settings: []admin.DiskBackupCopySetting20250101{}, + settings: []admin.DiskBackupCopySetting20240805{}, expected: []map[string]any{}, }, } diff --git a/internal/service/cloudbackupschedule/model_sdk_version_conversion.go b/internal/service/cloudbackupschedule/model_sdk_version_conversion.go index 9d219e82b5..7f156507d0 100644 --- a/internal/service/cloudbackupschedule/model_sdk_version_conversion.go +++ b/internal/service/cloudbackupschedule/model_sdk_version_conversion.go @@ -1,23 +1,23 @@ package cloudbackupschedule import ( - admin20231115 "go.mongodb.org/atlas-sdk/v20231115014/admin" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + admin20240530 "go.mongodb.org/atlas-sdk/v20240530005/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) // Conversions from one SDK model version to another are used to avoid duplicating our flatten/expand conversion functions. // - These functions must not contain any business logic. // - All will be removed once we rely on a single API version. 
-func convertPolicyItemsToOldSDK(slice *[]admin.DiskBackupApiPolicyItem) []admin20231115.DiskBackupApiPolicyItem { +func convertPolicyItemsToOldSDK(slice *[]admin.DiskBackupApiPolicyItem) []admin20240530.DiskBackupApiPolicyItem { if slice == nil { return nil } policyItemsSlice := *slice - results := make([]admin20231115.DiskBackupApiPolicyItem, len(policyItemsSlice)) + results := make([]admin20240530.DiskBackupApiPolicyItem, len(policyItemsSlice)) for i := range len(policyItemsSlice) { policyItem := policyItemsSlice[i] - results[i] = admin20231115.DiskBackupApiPolicyItem{ + results[i] = admin20240530.DiskBackupApiPolicyItem{ FrequencyInterval: policyItem.FrequencyInterval, FrequencyType: policyItem.FrequencyType, Id: policyItem.Id, @@ -28,7 +28,7 @@ func convertPolicyItemsToOldSDK(slice *[]admin.DiskBackupApiPolicyItem) []admin2 return results } -func convertPoliciesToLatest(slice *[]admin20231115.AdvancedDiskBackupSnapshotSchedulePolicy) *[]admin.AdvancedDiskBackupSnapshotSchedulePolicy { +func convertPoliciesToLatest(slice *[]admin20240530.AdvancedDiskBackupSnapshotSchedulePolicy) *[]admin.AdvancedDiskBackupSnapshotSchedulePolicy { if slice == nil { return nil } @@ -45,7 +45,7 @@ func convertPoliciesToLatest(slice *[]admin20231115.AdvancedDiskBackupSnapshotSc return &results } -func convertPolicyItemsToLatest(slice *[]admin20231115.DiskBackupApiPolicyItem) *[]admin.DiskBackupApiPolicyItem { +func convertPolicyItemsToLatest(slice *[]admin20240530.DiskBackupApiPolicyItem) *[]admin.DiskBackupApiPolicyItem { if slice == nil { return nil } @@ -64,18 +64,18 @@ func convertPolicyItemsToLatest(slice *[]admin20231115.DiskBackupApiPolicyItem) return &results } -func convertAutoExportPolicyToOldSDK(exportPolicy *admin.AutoExportPolicy) *admin20231115.AutoExportPolicy { +func convertAutoExportPolicyToOldSDK(exportPolicy *admin.AutoExportPolicy) *admin20240530.AutoExportPolicy { if exportPolicy == nil { return nil } - return &admin20231115.AutoExportPolicy{ + return 
&admin20240530.AutoExportPolicy{ ExportBucketId: exportPolicy.ExportBucketId, FrequencyType: exportPolicy.FrequencyType, } } -func convertAutoExportPolicyToLatest(exportPolicy *admin20231115.AutoExportPolicy) *admin.AutoExportPolicy { +func convertAutoExportPolicyToLatest(exportPolicy *admin20240530.AutoExportPolicy) *admin.AutoExportPolicy { if exportPolicy == nil { return nil } @@ -86,10 +86,10 @@ func convertAutoExportPolicyToLatest(exportPolicy *admin20231115.AutoExportPolic } } -func convertBackupScheduleReqToOldSDK(req *admin.DiskBackupSnapshotSchedule20250101, - copySettingsOldSDK *[]admin20231115.DiskBackupCopySetting, - policiesOldSDK *[]admin20231115.AdvancedDiskBackupSnapshotSchedulePolicy) *admin20231115.DiskBackupSnapshotSchedule { - return &admin20231115.DiskBackupSnapshotSchedule{ +func convertBackupScheduleReqToOldSDK(req *admin.DiskBackupSnapshotSchedule20240805, + copySettingsOldSDK *[]admin20240530.DiskBackupCopySetting, + policiesOldSDK *[]admin20240530.AdvancedDiskBackupSnapshotSchedulePolicy) *admin20240530.DiskBackupSnapshotSchedule { + return &admin20240530.DiskBackupSnapshotSchedule{ CopySettings: copySettingsOldSDK, Policies: policiesOldSDK, AutoExportEnabled: req.AutoExportEnabled, @@ -102,8 +102,8 @@ func convertBackupScheduleReqToOldSDK(req *admin.DiskBackupSnapshotSchedule20250 } } -func convertBackupScheduleToLatestExcludeCopySettings(backupSchedule *admin20231115.DiskBackupSnapshotSchedule) *admin.DiskBackupSnapshotSchedule20250101 { - return &admin.DiskBackupSnapshotSchedule20250101{ +func convertBackupScheduleToLatestExcludeCopySettings(backupSchedule *admin20240530.DiskBackupSnapshotSchedule) *admin.DiskBackupSnapshotSchedule20240805 { + return &admin.DiskBackupSnapshotSchedule20240805{ Policies: convertPoliciesToLatest(backupSchedule.Policies), AutoExportEnabled: backupSchedule.AutoExportEnabled, Export: convertAutoExportPolicyToLatest(backupSchedule.Export), diff --git 
a/internal/service/cloudbackupschedule/resource_cloud_backup_schedule.go b/internal/service/cloudbackupschedule/resource_cloud_backup_schedule.go index 63992fa483..c67cc55b20 100644 --- a/internal/service/cloudbackupschedule/resource_cloud_backup_schedule.go +++ b/internal/service/cloudbackupschedule/resource_cloud_backup_schedule.go @@ -13,8 +13,8 @@ import ( "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" "github.com/spf13/cast" - admin20231115 "go.mongodb.org/atlas-sdk/v20231115014/admin" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + admin20240530 "go.mongodb.org/atlas-sdk/v20240530005/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) const ( @@ -321,7 +321,7 @@ func Resource() *schema.Resource { func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { var diags diag.Diagnostics - connV220231115 := meta.(*config.MongoDBClient).AtlasV220231115 + connV220240530 := meta.(*config.MongoDBClient).AtlasV220240530 connV2 := meta.(*config.MongoDBClient).AtlasV2 projectID := d.Get("project_id").(string) clusterName := d.Get("cluster_name").(string) @@ -339,7 +339,7 @@ func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag. diags = append(diags, diagWarning) } - if err := cloudBackupScheduleCreateOrUpdate(ctx, connV220231115, connV2, d, projectID, clusterName, true); err != nil { + if err := cloudBackupScheduleCreateOrUpdate(ctx, connV220240530, connV2, d, projectID, clusterName, true); err != nil { diags = append(diags, diag.Errorf(errorSnapshotBackupScheduleCreate, err)...) return diags } @@ -353,14 +353,14 @@ func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag. 
} func resourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { - connV220231115 := meta.(*config.MongoDBClient).AtlasV220231115 + connV220240530 := meta.(*config.MongoDBClient).AtlasV220240530 connV2 := meta.(*config.MongoDBClient).AtlasV2 ids := conversion.DecodeStateID(d.Id()) projectID := ids["project_id"] clusterName := ids["cluster_name"] - var backupSchedule *admin.DiskBackupSnapshotSchedule20250101 - var backupScheduleOldSDK *admin20231115.DiskBackupSnapshotSchedule + var backupSchedule *admin.DiskBackupSnapshotSchedule20240805 + var backupScheduleOldSDK *admin20240530.DiskBackupSnapshotSchedule var copySettings []map[string]any var resp *http.Response var err error @@ -371,8 +371,8 @@ func resourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Di } if useOldAPI { - backupScheduleOldSDK, resp, err = connV220231115.CloudBackupsApi.GetBackupSchedule(context.Background(), projectID, clusterName).Execute() - if apiError, ok := admin20231115.AsError(err); ok && apiError.GetErrorCode() == AsymmetricShardsUnsupportedAPIError { + backupScheduleOldSDK, resp, err = connV220240530.CloudBackupsApi.GetBackupSchedule(context.Background(), projectID, clusterName).Execute() + if apiError, ok := admin20240530.AsError(err); ok && apiError.GetErrorCode() == AsymmetricShardsUnsupportedAPIError { return diag.Errorf("%s : %s : %s", errorSnapshotBackupScheduleRead, ErrorOperationNotPermitted, AsymmetricShardsUnsupportedAction) } if err != nil { @@ -409,7 +409,7 @@ func resourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Di return nil } -func setSchemaFieldsExceptCopySettings(d *schema.ResourceData, backupPolicy *admin.DiskBackupSnapshotSchedule20250101) diag.Diagnostics { +func setSchemaFieldsExceptCopySettings(d *schema.ResourceData, backupPolicy *admin.DiskBackupSnapshotSchedule20240805) diag.Diagnostics { clusterName := backupPolicy.GetClusterName() if err := d.Set("cluster_id", backupPolicy.GetClusterId()); 
err != nil { return diag.Errorf(errorSnapshotBackupScheduleSetting, "cluster_id", clusterName, err) @@ -470,7 +470,7 @@ func setSchemaFieldsExceptCopySettings(d *schema.ResourceData, backupPolicy *adm } func resourceUpdate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { - connV220231115 := meta.(*config.MongoDBClient).AtlasV220231115 + connV220240530 := meta.(*config.MongoDBClient).AtlasV220240530 connV2 := meta.(*config.MongoDBClient).AtlasV2 ids := conversion.DecodeStateID(d.Id()) @@ -483,7 +483,7 @@ func resourceUpdate(ctx context.Context, d *schema.ResourceData, meta any) diag. } } - err := cloudBackupScheduleCreateOrUpdate(ctx, connV220231115, connV2, d, projectID, clusterName, false) + err := cloudBackupScheduleCreateOrUpdate(ctx, connV220240530, connV2, d, projectID, clusterName, false) if err != nil { return diag.Errorf(errorSnapshotBackupScheduleUpdate, err) } @@ -539,7 +539,7 @@ func resourceImport(ctx context.Context, d *schema.ResourceData, meta any) ([]*s return []*schema.ResourceData{d}, nil } -func cloudBackupScheduleCreateOrUpdate(ctx context.Context, connV220231115 *admin20231115.APIClient, connV2 *admin.APIClient, d *schema.ResourceData, projectID, clusterName string, isCreate bool) error { +func cloudBackupScheduleCreateOrUpdate(ctx context.Context, connV220240530 *admin20240530.APIClient, connV2 *admin.APIClient, d *schema.ResourceData, projectID, clusterName string, isCreate bool) error { var err error copySettings := d.Get("copy_settings") @@ -548,7 +548,7 @@ func cloudBackupScheduleCreateOrUpdate(ctx context.Context, connV220231115 *admi return err } - req := &admin.DiskBackupSnapshotSchedule20250101{} + req := &admin.DiskBackupSnapshotSchedule20240805{} var policiesItem []admin.DiskBackupApiPolicyItem if v, ok := d.GetOk("policy_item_hourly"); ok { @@ -595,14 +595,14 @@ func cloudBackupScheduleCreateOrUpdate(ctx context.Context, connV220231115 *admi } if useOldAPI { - resp, _, err := 
connV220231115.CloudBackupsApi.GetBackupSchedule(ctx, projectID, clusterName).Execute() + resp, _, err := connV220240530.CloudBackupsApi.GetBackupSchedule(ctx, projectID, clusterName).Execute() if err != nil { - if apiError, ok := admin20231115.AsError(err); ok && apiError.GetErrorCode() == AsymmetricShardsUnsupportedAPIError { + if apiError, ok := admin20240530.AsError(err); ok && apiError.GetErrorCode() == AsymmetricShardsUnsupportedAPIError { return fmt.Errorf("%s : %s", ErrorOperationNotPermitted, AsymmetricShardsUnsupportedAction) } return fmt.Errorf("error getting MongoDB Cloud Backup Schedule (%s): %s", clusterName, err) } - var copySettingsOldSDK *[]admin20231115.DiskBackupCopySetting + var copySettingsOldSDK *[]admin20240530.DiskBackupCopySetting if isCopySettingsNonEmptyOrChanged(d) { copySettingsOldSDK = expandCopySettingsOldSDK(copySettings.([]any)) } @@ -610,9 +610,9 @@ func cloudBackupScheduleCreateOrUpdate(ctx context.Context, connV220231115 *admi policiesOldSDK := getRequestPoliciesOldSDK(convertPolicyItemsToOldSDK(&policiesItem), resp.GetPolicies()) reqOld := convertBackupScheduleReqToOldSDK(req, copySettingsOldSDK, policiesOldSDK) - _, _, err = connV220231115.CloudBackupsApi.UpdateBackupSchedule(context.Background(), projectID, clusterName, reqOld).Execute() + _, _, err = connV220240530.CloudBackupsApi.UpdateBackupSchedule(context.Background(), projectID, clusterName, reqOld).Execute() if err != nil { - if apiError, ok := admin20231115.AsError(err); ok && apiError.GetErrorCode() == AsymmetricShardsUnsupportedAPIError { + if apiError, ok := admin20240530.AsError(err); ok && apiError.GetErrorCode() == AsymmetricShardsUnsupportedAPIError { return fmt.Errorf("%s : %s", ErrorOperationNotPermitted, AsymmetricShardsUnsupportedAction) } return err @@ -639,13 +639,13 @@ func cloudBackupScheduleCreateOrUpdate(ctx context.Context, connV220231115 *admi return nil } -func ExpandCopySetting(tfMap map[string]any) *admin.DiskBackupCopySetting20250101 { +func 
ExpandCopySetting(tfMap map[string]any) *admin.DiskBackupCopySetting20240805 { if tfMap == nil { return nil } frequencies := conversion.ExpandStringList(tfMap["frequencies"].(*schema.Set).List()) - copySetting := &admin.DiskBackupCopySetting20250101{ + copySetting := &admin.DiskBackupCopySetting20240805{ CloudProvider: conversion.Pointer(tfMap["cloud_provider"].(string)), Frequencies: &frequencies, RegionName: conversion.Pointer(tfMap["region_name"].(string)), @@ -655,8 +655,8 @@ func ExpandCopySetting(tfMap map[string]any) *admin.DiskBackupCopySetting2025010 return copySetting } -func ExpandCopySettings(tfList []any) *[]admin.DiskBackupCopySetting20250101 { - copySettings := make([]admin.DiskBackupCopySetting20250101, 0) +func ExpandCopySettings(tfList []any) *[]admin.DiskBackupCopySetting20240805 { + copySettings := make([]admin.DiskBackupCopySetting20240805, 0) for _, tfMapRaw := range tfList { tfMap, ok := tfMapRaw.(map[string]any) @@ -669,8 +669,8 @@ func ExpandCopySettings(tfList []any) *[]admin.DiskBackupCopySetting20250101 { return ©Settings } -func expandCopySettingsOldSDK(tfList []any) *[]admin20231115.DiskBackupCopySetting { - copySettings := make([]admin20231115.DiskBackupCopySetting, 0) +func expandCopySettingsOldSDK(tfList []any) *[]admin20240530.DiskBackupCopySetting { + copySettings := make([]admin20240530.DiskBackupCopySetting, 0) for _, tfMapRaw := range tfList { tfMap, ok := tfMapRaw.(map[string]any) @@ -683,13 +683,13 @@ func expandCopySettingsOldSDK(tfList []any) *[]admin20231115.DiskBackupCopySetti return ©Settings } -func expandCopySettingOldSDK(tfMap map[string]any) *admin20231115.DiskBackupCopySetting { +func expandCopySettingOldSDK(tfMap map[string]any) *admin20240530.DiskBackupCopySetting { if tfMap == nil { return nil } frequencies := conversion.ExpandStringList(tfMap["frequencies"].(*schema.Set).List()) - copySetting := &admin20231115.DiskBackupCopySetting{ + copySetting := &admin20240530.DiskBackupCopySetting{ CloudProvider: 
conversion.Pointer(tfMap["cloud_provider"].(string)), Frequencies: &frequencies, RegionName: conversion.Pointer(tfMap["region_name"].(string)), @@ -791,15 +791,15 @@ func CheckCopySettingsToUseOldAPI(tfList []any, isCreate bool) (bool, error) { return false, nil } -func getRequestPoliciesOldSDK(policiesItem []admin20231115.DiskBackupApiPolicyItem, respPolicies []admin20231115.AdvancedDiskBackupSnapshotSchedulePolicy) *[]admin20231115.AdvancedDiskBackupSnapshotSchedulePolicy { +func getRequestPoliciesOldSDK(policiesItem []admin20240530.DiskBackupApiPolicyItem, respPolicies []admin20240530.AdvancedDiskBackupSnapshotSchedulePolicy) *[]admin20240530.AdvancedDiskBackupSnapshotSchedulePolicy { if len(policiesItem) > 0 { - policy := admin20231115.AdvancedDiskBackupSnapshotSchedulePolicy{ + policy := admin20240530.AdvancedDiskBackupSnapshotSchedulePolicy{ PolicyItems: &policiesItem, } if len(respPolicies) == 1 { policy.Id = respPolicies[0].Id } - return &[]admin20231115.AdvancedDiskBackupSnapshotSchedulePolicy{policy} + return &[]admin20240530.AdvancedDiskBackupSnapshotSchedulePolicy{policy} } return nil } diff --git a/internal/service/cloudbackupschedule/resource_cloud_backup_schedule_migration_test.go b/internal/service/cloudbackupschedule/resource_cloud_backup_schedule_migration_test.go index b030c80cc9..8ce9343db3 100644 --- a/internal/service/cloudbackupschedule/resource_cloud_backup_schedule_migration_test.go +++ b/internal/service/cloudbackupschedule/resource_cloud_backup_schedule_migration_test.go @@ -7,14 +7,14 @@ import ( "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc" "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/mig" - admin20231115 "go.mongodb.org/atlas-sdk/v20231115014/admin" + admin20240530 "go.mongodb.org/atlas-sdk/v20240530005/admin" ) func TestMigBackupRSCloudBackupSchedule_basic(t *testing.T) { var ( clusterInfo = 
acc.GetClusterInfo(t, &acc.ClusterRequest{CloudBackup: true}) useYearly = mig.IsProviderVersionAtLeast("1.16.0") // attribute introduced in this version - config = configNewPolicies(&clusterInfo, &admin20231115.DiskBackupSnapshotSchedule{ + config = configNewPolicies(&clusterInfo, &admin20240530.DiskBackupSnapshotSchedule{ ReferenceHourOfDay: conversion.Pointer(0), ReferenceMinuteOfHour: conversion.Pointer(0), RestoreWindowDays: conversion.Pointer(7), @@ -31,7 +31,6 @@ func TestMigBackupRSCloudBackupSchedule_basic(t *testing.T) { Check: resource.ComposeAggregateTestCheckFunc( checkExists(resourceName), resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.Name), - resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.Name), resource.TestCheckResourceAttr(resourceName, "reference_hour_of_day", "0"), resource.TestCheckResourceAttr(resourceName, "reference_minute_of_hour", "0"), resource.TestCheckResourceAttr(resourceName, "restore_window_days", "7"), @@ -60,12 +59,12 @@ func TestMigBackupRSCloudBackupSchedule_copySettings(t *testing.T) { terraformStr = clusterInfo.TerraformStr clusterResourceName = clusterInfo.ResourceName projectID = clusterInfo.ProjectID - copySettingsConfigWithRepSpecID = configCopySettings(terraformStr, projectID, clusterResourceName, false, true, &admin20231115.DiskBackupSnapshotSchedule{ + copySettingsConfigWithRepSpecID = configCopySettings(terraformStr, projectID, clusterResourceName, false, true, &admin20240530.DiskBackupSnapshotSchedule{ ReferenceHourOfDay: conversion.Pointer(3), ReferenceMinuteOfHour: conversion.Pointer(45), RestoreWindowDays: conversion.Pointer(1), }) - copySettingsConfigWithZoneID = configCopySettings(terraformStr, projectID, clusterResourceName, false, false, &admin20231115.DiskBackupSnapshotSchedule{ + copySettingsConfigWithZoneID = configCopySettings(terraformStr, projectID, clusterResourceName, false, false, &admin20240530.DiskBackupSnapshotSchedule{ ReferenceHourOfDay: 
conversion.Pointer(3), ReferenceMinuteOfHour: conversion.Pointer(45), RestoreWindowDays: conversion.Pointer(1), diff --git a/internal/service/cloudbackupschedule/resource_cloud_backup_schedule_test.go b/internal/service/cloudbackupschedule/resource_cloud_backup_schedule_test.go index 5cb663596e..b8ff54e01e 100644 --- a/internal/service/cloudbackupschedule/resource_cloud_backup_schedule_test.go +++ b/internal/service/cloudbackupschedule/resource_cloud_backup_schedule_test.go @@ -11,7 +11,7 @@ import ( "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/cloudbackupschedule" "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc" - admin20231115 "go.mongodb.org/atlas-sdk/v20231115014/admin" + admin20240530 "go.mongodb.org/atlas-sdk/v20240530005/admin" ) var ( @@ -30,7 +30,7 @@ func TestAccBackupRSCloudBackupSchedule_basic(t *testing.T) { CheckDestroy: checkDestroy, Steps: []resource.TestStep{ { - Config: configNoPolicies(&clusterInfo, &admin20231115.DiskBackupSnapshotSchedule{ + Config: configNoPolicies(&clusterInfo, &admin20240530.DiskBackupSnapshotSchedule{ ReferenceHourOfDay: conversion.Pointer(3), ReferenceMinuteOfHour: conversion.Pointer(45), RestoreWindowDays: conversion.Pointer(4), @@ -47,6 +47,7 @@ func TestAccBackupRSCloudBackupSchedule_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "policy_item_monthly.#", "0"), resource.TestCheckResourceAttr(resourceName, "policy_item_yearly.#", "0"), resource.TestCheckResourceAttr(dataSourceName, "cluster_name", clusterInfo.Name), + resource.TestCheckResourceAttr(dataSourceName, "cluster_name", clusterInfo.Name), resource.TestCheckResourceAttrSet(dataSourceName, "reference_hour_of_day"), resource.TestCheckResourceAttrSet(dataSourceName, "reference_minute_of_hour"), resource.TestCheckResourceAttrSet(dataSourceName, "restore_window_days"), @@ -58,7 +59,7 @@ func 
TestAccBackupRSCloudBackupSchedule_basic(t *testing.T) { ), }, { - Config: configNewPolicies(&clusterInfo, &admin20231115.DiskBackupSnapshotSchedule{ + Config: configNewPolicies(&clusterInfo, &admin20240530.DiskBackupSnapshotSchedule{ ReferenceHourOfDay: conversion.Pointer(0), ReferenceMinuteOfHour: conversion.Pointer(0), RestoreWindowDays: conversion.Pointer(7), @@ -95,13 +96,14 @@ func TestAccBackupRSCloudBackupSchedule_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "policy_item_yearly.0.retention_unit", "years"), resource.TestCheckResourceAttr(resourceName, "policy_item_yearly.0.retention_value", "1"), resource.TestCheckResourceAttr(dataSourceName, "cluster_name", clusterInfo.Name), + resource.TestCheckResourceAttr(dataSourceName, "cluster_name", clusterInfo.Name), resource.TestCheckResourceAttrSet(dataSourceName, "reference_hour_of_day"), resource.TestCheckResourceAttrSet(dataSourceName, "reference_minute_of_hour"), resource.TestCheckResourceAttrSet(dataSourceName, "restore_window_days"), ), }, { - Config: configAdvancedPolicies(&clusterInfo, &admin20231115.DiskBackupSnapshotSchedule{ + Config: configAdvancedPolicies(&clusterInfo, &admin20240530.DiskBackupSnapshotSchedule{ ReferenceHourOfDay: conversion.Pointer(0), ReferenceMinuteOfHour: conversion.Pointer(0), RestoreWindowDays: conversion.Pointer(7), @@ -193,7 +195,7 @@ func TestAccBackupRSCloudBackupSchedule_onePolicy(t *testing.T) { CheckDestroy: checkDestroy, Steps: []resource.TestStep{ { - Config: configDefault(&clusterInfo, &admin20231115.DiskBackupSnapshotSchedule{ + Config: configDefault(&clusterInfo, &admin20240530.DiskBackupSnapshotSchedule{ ReferenceHourOfDay: conversion.Pointer(3), ReferenceMinuteOfHour: conversion.Pointer(45), RestoreWindowDays: conversion.Pointer(4), @@ -227,7 +229,7 @@ func TestAccBackupRSCloudBackupSchedule_onePolicy(t *testing.T) { ), }, { - Config: configOnePolicy(&clusterInfo, &admin20231115.DiskBackupSnapshotSchedule{ + Config: 
configOnePolicy(&clusterInfo, &admin20240530.DiskBackupSnapshotSchedule{ ReferenceHourOfDay: conversion.Pointer(0), ReferenceMinuteOfHour: conversion.Pointer(0), RestoreWindowDays: conversion.Pointer(7), @@ -318,7 +320,7 @@ func TestAccBackupRSCloudBackupSchedule_copySettings_repSpecId(t *testing.T) { CheckDestroy: checkDestroy, Steps: []resource.TestStep{ { - Config: configCopySettings(terraformStr, projectID, clusterResourceName, false, true, &admin20231115.DiskBackupSnapshotSchedule{ + Config: configCopySettings(terraformStr, projectID, clusterResourceName, false, true, &admin20240530.DiskBackupSnapshotSchedule{ ReferenceHourOfDay: conversion.Pointer(3), ReferenceMinuteOfHour: conversion.Pointer(45), RestoreWindowDays: conversion.Pointer(1), @@ -326,7 +328,7 @@ func TestAccBackupRSCloudBackupSchedule_copySettings_repSpecId(t *testing.T) { Check: resource.ComposeAggregateTestCheckFunc(checksCreateAll...), }, { - Config: configCopySettings(terraformStr, projectID, clusterResourceName, true, true, &admin20231115.DiskBackupSnapshotSchedule{ + Config: configCopySettings(terraformStr, projectID, clusterResourceName, true, true, &admin20240530.DiskBackupSnapshotSchedule{ ReferenceHourOfDay: conversion.Pointer(3), ReferenceMinuteOfHour: conversion.Pointer(45), RestoreWindowDays: conversion.Pointer(1), @@ -404,7 +406,7 @@ func TestAccBackupRSCloudBackupSchedule_copySettings_zoneId(t *testing.T) { CheckDestroy: checkDestroy, Steps: []resource.TestStep{ { - Config: configCopySettings(terraformStr, projectID, clusterResourceName, false, false, &admin20231115.DiskBackupSnapshotSchedule{ + Config: configCopySettings(terraformStr, projectID, clusterResourceName, false, false, &admin20240530.DiskBackupSnapshotSchedule{ ReferenceHourOfDay: conversion.Pointer(3), ReferenceMinuteOfHour: conversion.Pointer(45), RestoreWindowDays: conversion.Pointer(1), @@ -412,7 +414,7 @@ func TestAccBackupRSCloudBackupSchedule_copySettings_zoneId(t *testing.T) { Check: 
resource.ComposeAggregateTestCheckFunc(checksCreateAll...), }, { - Config: configCopySettings(terraformStr, projectID, clusterResourceName, true, false, &admin20231115.DiskBackupSnapshotSchedule{ + Config: configCopySettings(terraformStr, projectID, clusterResourceName, true, false, &admin20240530.DiskBackupSnapshotSchedule{ ReferenceHourOfDay: conversion.Pointer(3), ReferenceMinuteOfHour: conversion.Pointer(45), RestoreWindowDays: conversion.Pointer(1), @@ -434,7 +436,7 @@ func TestAccBackupRSCloudBackupScheduleImport_basic(t *testing.T) { CheckDestroy: checkDestroy, Steps: []resource.TestStep{ { - Config: configDefault(&clusterInfo, &admin20231115.DiskBackupSnapshotSchedule{ + Config: configDefault(&clusterInfo, &admin20240530.DiskBackupSnapshotSchedule{ ReferenceHourOfDay: conversion.Pointer(3), ReferenceMinuteOfHour: conversion.Pointer(45), RestoreWindowDays: conversion.Pointer(4), @@ -489,7 +491,7 @@ func TestAccBackupRSCloudBackupSchedule_azure(t *testing.T) { CheckDestroy: checkDestroy, Steps: []resource.TestStep{ { - Config: configAzure(&clusterInfo, &admin20231115.DiskBackupApiPolicyItem{ + Config: configAzure(&clusterInfo, &admin20240530.DiskBackupApiPolicyItem{ FrequencyInterval: 1, RetentionUnit: "days", RetentionValue: 1, @@ -502,7 +504,7 @@ func TestAccBackupRSCloudBackupSchedule_azure(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "policy_item_hourly.0.retention_value", "1")), }, { - Config: configAzure(&clusterInfo, &admin20231115.DiskBackupApiPolicyItem{ + Config: configAzure(&clusterInfo, &admin20240530.DiskBackupApiPolicyItem{ FrequencyInterval: 2, RetentionUnit: "days", RetentionValue: 3, @@ -644,7 +646,7 @@ func checkDestroy(s *terraform.State) error { return nil } -func configNoPolicies(info *acc.ClusterInfo, p *admin20231115.DiskBackupSnapshotSchedule) string { +func configNoPolicies(info *acc.ClusterInfo, p *admin20240530.DiskBackupSnapshotSchedule) string { return info.TerraformStr + fmt.Sprintf(` resource 
"mongodbatlas_cloud_backup_schedule" "schedule_test" { cluster_name = %[1]s @@ -662,7 +664,7 @@ func configNoPolicies(info *acc.ClusterInfo, p *admin20231115.DiskBackupSnapshot `, info.TerraformNameRef, info.ProjectID, p.GetReferenceHourOfDay(), p.GetReferenceMinuteOfHour(), p.GetRestoreWindowDays()) } -func configDefault(info *acc.ClusterInfo, p *admin20231115.DiskBackupSnapshotSchedule) string { +func configDefault(info *acc.ClusterInfo, p *admin20240530.DiskBackupSnapshotSchedule) string { return info.TerraformStr + fmt.Sprintf(` resource "mongodbatlas_cloud_backup_schedule" "schedule_test" { cluster_name = %[1]s @@ -706,7 +708,7 @@ func configDefault(info *acc.ClusterInfo, p *admin20231115.DiskBackupSnapshotSch `, info.TerraformNameRef, info.ProjectID, p.GetReferenceHourOfDay(), p.GetReferenceMinuteOfHour(), p.GetRestoreWindowDays()) } -func configCopySettings(terraformStr, projectID, clusterResourceName string, emptyCopySettings, useRepSpecID bool, p *admin20231115.DiskBackupSnapshotSchedule) string { +func configCopySettings(terraformStr, projectID, clusterResourceName string, emptyCopySettings, useRepSpecID bool, p *admin20240530.DiskBackupSnapshotSchedule) string { var copySettings string var dataSourceConfig string @@ -794,7 +796,7 @@ func configCopySettings(terraformStr, projectID, clusterResourceName string, emp `, terraformStr, projectID, clusterResourceName, p.GetReferenceHourOfDay(), p.GetReferenceMinuteOfHour(), p.GetRestoreWindowDays(), copySettings, dataSourceConfig) } -func configOnePolicy(info *acc.ClusterInfo, p *admin20231115.DiskBackupSnapshotSchedule) string { +func configOnePolicy(info *acc.ClusterInfo, p *admin20240530.DiskBackupSnapshotSchedule) string { return info.TerraformStr + fmt.Sprintf(` resource "mongodbatlas_cloud_backup_schedule" "schedule_test" { cluster_name = %[1]s @@ -813,7 +815,7 @@ func configOnePolicy(info *acc.ClusterInfo, p *admin20231115.DiskBackupSnapshotS `, info.TerraformNameRef, info.ProjectID, 
p.GetReferenceHourOfDay(), p.GetReferenceMinuteOfHour(), p.GetRestoreWindowDays()) } -func configNewPolicies(info *acc.ClusterInfo, p *admin20231115.DiskBackupSnapshotSchedule, useYearly bool) string { +func configNewPolicies(info *acc.ClusterInfo, p *admin20240530.DiskBackupSnapshotSchedule, useYearly bool) string { var strYearly string if useYearly { strYearly = ` @@ -864,7 +866,7 @@ func configNewPolicies(info *acc.ClusterInfo, p *admin20231115.DiskBackupSnapsho `, info.TerraformNameRef, info.ProjectID, p.GetReferenceHourOfDay(), p.GetReferenceMinuteOfHour(), p.GetRestoreWindowDays(), strYearly) } -func configAzure(info *acc.ClusterInfo, policy *admin20231115.DiskBackupApiPolicyItem) string { +func configAzure(info *acc.ClusterInfo, policy *admin20240530.DiskBackupApiPolicyItem) string { return info.TerraformStr + fmt.Sprintf(` resource "mongodbatlas_cloud_backup_schedule" "schedule_test" { cluster_name = %[1]s @@ -884,7 +886,7 @@ func configAzure(info *acc.ClusterInfo, policy *admin20231115.DiskBackupApiPolic `, info.TerraformNameRef, info.ProjectID, policy.GetFrequencyInterval(), policy.GetRetentionUnit(), policy.GetRetentionValue()) } -func configAdvancedPolicies(info *acc.ClusterInfo, p *admin20231115.DiskBackupSnapshotSchedule) string { +func configAdvancedPolicies(info *acc.ClusterInfo, p *admin20240530.DiskBackupSnapshotSchedule) string { return info.TerraformStr + fmt.Sprintf(` resource "mongodbatlas_cloud_backup_schedule" "schedule_test" { cluster_name = %[1]s diff --git a/internal/service/cloudbackupsnapshot/data_source_cloud_backup_snapshots.go b/internal/service/cloudbackupsnapshot/data_source_cloud_backup_snapshots.go index c5cc844e62..bf5283b9b2 100644 --- a/internal/service/cloudbackupsnapshot/data_source_cloud_backup_snapshots.go +++ b/internal/service/cloudbackupsnapshot/data_source_cloud_backup_snapshots.go @@ -9,7 +9,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" 
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func PluralDataSource() *schema.Resource { diff --git a/internal/service/cloudbackupsnapshot/model_cloud_backup_snapshot.go b/internal/service/cloudbackupsnapshot/model_cloud_backup_snapshot.go index 6c3539b16f..2f852f50de 100644 --- a/internal/service/cloudbackupsnapshot/model_cloud_backup_snapshot.go +++ b/internal/service/cloudbackupsnapshot/model_cloud_backup_snapshot.go @@ -4,7 +4,7 @@ import ( "errors" "regexp" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func SplitSnapshotImportID(id string) (*admin.GetReplicaSetBackupApiParams, error) { diff --git a/internal/service/cloudbackupsnapshot/model_cloud_backup_snapshot_test.go b/internal/service/cloudbackupsnapshot/model_cloud_backup_snapshot_test.go index 269e98010e..8e2df8d6af 100644 --- a/internal/service/cloudbackupsnapshot/model_cloud_backup_snapshot_test.go +++ b/internal/service/cloudbackupsnapshot/model_cloud_backup_snapshot_test.go @@ -5,7 +5,7 @@ import ( "testing" "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/cloudbackupsnapshot" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func TestSplitSnapshotImportID(t *testing.T) { diff --git a/internal/service/cloudbackupsnapshot/resource_cloud_backup_snapshot.go b/internal/service/cloudbackupsnapshot/resource_cloud_backup_snapshot.go index beb904109f..172f1ad22c 100644 --- a/internal/service/cloudbackupsnapshot/resource_cloud_backup_snapshot.go +++ b/internal/service/cloudbackupsnapshot/resource_cloud_backup_snapshot.go @@ -14,7 +14,7 @@ import ( "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" 
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/cluster" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func Resource() *schema.Resource { diff --git a/internal/service/cloudbackupsnapshotexportbucket/data_source_cloud_backup_snapshot_export_bucket.go b/internal/service/cloudbackupsnapshotexportbucket/data_source_cloud_backup_snapshot_export_bucket.go index 8312b770aa..17cc0a46e1 100644 --- a/internal/service/cloudbackupsnapshotexportbucket/data_source_cloud_backup_snapshot_export_bucket.go +++ b/internal/service/cloudbackupsnapshotexportbucket/data_source_cloud_backup_snapshot_export_bucket.go @@ -40,6 +40,18 @@ func DataSource() *schema.Resource { Type: schema.TypeString, Computed: true, }, + "role_id": { + Type: schema.TypeString, + Computed: true, + }, + "service_url": { + Type: schema.TypeString, + Computed: true, + }, + "tenant_id": { + Type: schema.TypeString, + Computed: true, + }, }, } } @@ -71,6 +83,18 @@ func datasourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag. 
return diag.FromErr(fmt.Errorf("error setting `iam_role_id` for CloudProviderSnapshotExportBuckets (%s): %s", d.Id(), err)) } + if err = d.Set("role_id", bucket.GetRoleId()); err != nil { + return diag.FromErr(fmt.Errorf("error setting `role_id` for CloudProviderSnapshotExportBuckets (%s): %s", d.Id(), err)) + } + + if err = d.Set("service_url", bucket.GetServiceUrl()); err != nil { + return diag.FromErr(fmt.Errorf("error setting `service_url` for CloudProviderSnapshotExportBuckets (%s): %s", d.Id(), err)) + } + + if err = d.Set("tenant_id", bucket.GetTenantId()); err != nil { + return diag.FromErr(fmt.Errorf("error setting `tenant_id` for CloudProviderSnapshotExportBuckets (%s): %s", d.Id(), err)) + } + d.SetId(bucket.GetId()) return nil diff --git a/internal/service/cloudbackupsnapshotexportbucket/data_source_cloud_backup_snapshot_export_buckets.go b/internal/service/cloudbackupsnapshotexportbucket/data_source_cloud_backup_snapshot_export_buckets.go index 8b1b93feef..7b6b5b19f3 100644 --- a/internal/service/cloudbackupsnapshotexportbucket/data_source_cloud_backup_snapshot_export_buckets.go +++ b/internal/service/cloudbackupsnapshotexportbucket/data_source_cloud_backup_snapshot_export_buckets.go @@ -7,7 +7,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func PluralDataSource() *schema.Resource { @@ -47,6 +47,18 @@ func PluralDataSource() *schema.Resource { Type: schema.TypeString, Computed: true, }, + "role_id": { + Type: schema.TypeString, + Computed: true, + }, + "service_url": { + Type: schema.TypeString, + Computed: true, + }, + "tenant_id": { + Type: schema.TypeString, + Computed: true, + }, }, }, }, @@ -98,6 +110,9 @@ func flattenBuckets(buckets []admin.DiskBackupSnapshotExportBucket) []map[string "bucket_name": 
bucket.GetBucketName(), "cloud_provider": bucket.GetCloudProvider(), "iam_role_id": bucket.GetIamRoleId(), + "role_id": bucket.GetRoleId(), + "service_url": bucket.GetServiceUrl(), + "tenant_id": bucket.GetTenantId(), } } diff --git a/internal/service/cloudbackupsnapshotexportbucket/resource_cloud_backup_snapshot_export_bucket.go b/internal/service/cloudbackupsnapshotexportbucket/resource_cloud_backup_snapshot_export_bucket.go index 0da3e4a58f..cfb2fc4f74 100644 --- a/internal/service/cloudbackupsnapshotexportbucket/resource_cloud_backup_snapshot_export_bucket.go +++ b/internal/service/cloudbackupsnapshotexportbucket/resource_cloud_backup_snapshot_export_bucket.go @@ -14,7 +14,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func Resource() *schema.Resource { @@ -56,7 +56,22 @@ func Schema() map[string]*schema.Schema { }, "iam_role_id": { Type: schema.TypeString, - Required: true, + Optional: true, + ForceNew: true, + }, + "role_id": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "service_url": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "tenant_id": { + Type: schema.TypeString, + Optional: true, ForceNew: true, }, } @@ -68,14 +83,14 @@ func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag. 
projectID := d.Get("project_id").(string) cloudProvider := d.Get("cloud_provider").(string) - if cloudProvider != "AWS" { - return diag.Errorf("atlas only supports AWS") - } request := &admin.DiskBackupSnapshotExportBucket{ IamRoleId: conversion.StringPtr(d.Get("iam_role_id").(string)), - BucketName: conversion.StringPtr(d.Get("bucket_name").(string)), - CloudProvider: &cloudProvider, + BucketName: d.Get("bucket_name").(string), + RoleId: conversion.StringPtr(d.Get("role_id").(string)), + ServiceUrl: conversion.StringPtr(d.Get("service_url").(string)), + TenantId: conversion.StringPtr(d.Get("tenant_id").(string)), + CloudProvider: cloudProvider, } bucketResponse, _, err := conn.CloudBackupsApi.CreateExportBucket(ctx, projectID, request).Execute() @@ -129,6 +144,18 @@ func resourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Di return diag.Errorf("error setting `project_id` for snapshot export bucket (%s): %s", d.Id(), err) } + if err := d.Set("service_url", exportBackup.ServiceUrl); err != nil { + return diag.Errorf("error setting `service_url` for snapshot export bucket (%s): %s", d.Id(), err) + } + + if err := d.Set("role_id", exportBackup.RoleId); err != nil { + return diag.Errorf("error setting `role_id` for snapshot export bucket (%s): %s", d.Id(), err) + } + + if err := d.Set("tenant_id", exportBackup.TenantId); err != nil { + return diag.Errorf("error setting `tenant_id` for snapshot export bucket (%s): %s", d.Id(), err) + } + return nil } @@ -162,14 +189,14 @@ func resourceDelete(ctx context.Context, d *schema.ResourceData, meta any) diag. 
} func resourceImportState(ctx context.Context, d *schema.ResourceData, meta any) ([]*schema.ResourceData, error) { - conn := meta.(*config.MongoDBClient).Atlas + conn := meta.(*config.MongoDBClient).AtlasV2 projectID, id, err := splitImportID(d.Id()) if err != nil { return nil, err } - _, _, err = conn.CloudProviderSnapshotExportBuckets.Get(ctx, *projectID, *id) + _, _, err = conn.CloudBackupsApi.GetExportBucket(ctx, *projectID, *id).Execute() if err != nil { return nil, fmt.Errorf("couldn't import snapshot export bucket %s in project %s, error: %s", *id, *projectID, err) } diff --git a/internal/service/cloudbackupsnapshotexportbucket/resource_cloud_backup_snapshot_export_bucket_migration_test.go b/internal/service/cloudbackupsnapshotexportbucket/resource_cloud_backup_snapshot_export_bucket_migration_test.go index dd18e3977e..1e042a8773 100644 --- a/internal/service/cloudbackupsnapshotexportbucket/resource_cloud_backup_snapshot_export_bucket_migration_test.go +++ b/internal/service/cloudbackupsnapshotexportbucket/resource_cloud_backup_snapshot_export_bucket_migration_test.go @@ -7,5 +7,5 @@ import ( ) func TestMigBackupSnapshotExportBucket_basic(t *testing.T) { - mig.CreateTestAndRunUseExternalProviderNonParallel(t, basicTestCase(t), mig.ExternalProvidersWithAWS(), nil) + mig.CreateTestAndRunUseExternalProviderNonParallel(t, basicAWSTestCase(t), mig.ExternalProvidersWithAWS(), nil) } diff --git a/internal/service/cloudbackupsnapshotexportbucket/resource_cloud_backup_snapshot_export_bucket_test.go b/internal/service/cloudbackupsnapshotexportbucket/resource_cloud_backup_snapshot_export_bucket_test.go index e2ba34e1c3..4647c3a975 100644 --- a/internal/service/cloudbackupsnapshotexportbucket/resource_cloud_backup_snapshot_export_bucket_test.go +++ b/internal/service/cloudbackupsnapshotexportbucket/resource_cloud_backup_snapshot_export_bucket_test.go @@ -18,19 +18,44 @@ var ( dataSourcePluralName = "data.mongodbatlas_cloud_backup_snapshot_export_buckets.test" ) -func 
TestAccBackupSnapshotExportBucket_basic(t *testing.T) { - resource.ParallelTest(t, *basicTestCase(t)) +func TestAccBackupSnapshotExportBucket_basicAWS(t *testing.T) { + resource.ParallelTest(t, *basicAWSTestCase(t)) } -func basicTestCase(tb testing.TB) *resource.TestCase { +func TestAccBackupSnapshotExportBucket_basicAzure(t *testing.T) { + resource.ParallelTest(t, *basicAzureTestCase(t)) +} + +func basicAWSTestCase(tb testing.TB) *resource.TestCase { tb.Helper() var ( - projectID = acc.ProjectIDExecution(tb) - bucketName = os.Getenv("AWS_S3_BUCKET") - policyName = acc.RandomName() - roleName = acc.RandomIAMRole() + projectID = acc.ProjectIDExecution(tb) + bucketName = os.Getenv("AWS_S3_BUCKET") + policyName = acc.RandomName() + roleName = acc.RandomIAMRole() + attrMapCheck = map[string]string{ + "project_id": projectID, + "bucket_name": bucketName, + "cloud_provider": "AWS", + } + pluralAttrMapCheck = map[string]string{ + "project_id": projectID, + "results.#": "1", + "results.0.bucket_name": bucketName, + "results.0.cloud_provider": "AWS", + } + attrsSet = []string{ + "iam_role_id", + } ) + checks := []resource.TestCheckFunc{checkExists(resourceName)} + checks = acc.AddAttrChecks(resourceName, checks, attrMapCheck) + checks = acc.AddAttrSetChecks(resourceName, checks, attrsSet...) + checks = acc.AddAttrChecks(dataSourceName, checks, attrMapCheck) + checks = acc.AddAttrSetChecks(dataSourceName, checks, attrsSet...) + checks = acc.AddAttrChecks(dataSourcePluralName, checks, pluralAttrMapCheck) + checks = acc.AddAttrSetChecks(dataSourcePluralName, checks, []string{"results.0.iam_role_id"}...) 
return &resource.TestCase{ PreCheck: func() { acc.PreCheckBasic(tb); acc.PreCheckS3Bucket(tb) }, @@ -39,25 +64,68 @@ func basicTestCase(tb testing.TB) *resource.TestCase { CheckDestroy: checkDestroy, Steps: []resource.TestStep{ { - Config: configBasic(projectID, bucketName, policyName, roleName), - Check: resource.ComposeAggregateTestCheckFunc( - checkExists(resourceName), - resource.TestCheckResourceAttr(resourceName, "project_id", projectID), - resource.TestCheckResourceAttr(resourceName, "bucket_name", bucketName), - resource.TestCheckResourceAttr(resourceName, "cloud_provider", "AWS"), - resource.TestCheckResourceAttrSet(resourceName, "iam_role_id"), - - resource.TestCheckResourceAttr(dataSourceName, "project_id", projectID), - resource.TestCheckResourceAttr(dataSourceName, "bucket_name", bucketName), - resource.TestCheckResourceAttr(dataSourceName, "cloud_provider", "AWS"), - resource.TestCheckResourceAttrSet(dataSourceName, "iam_role_id"), - - resource.TestCheckResourceAttr(dataSourcePluralName, "project_id", projectID), - resource.TestCheckResourceAttr(dataSourcePluralName, "results.#", "1"), - resource.TestCheckResourceAttr(dataSourcePluralName, "results.0.bucket_name", bucketName), - resource.TestCheckResourceAttr(dataSourcePluralName, "results.0.cloud_provider", "AWS"), - resource.TestCheckResourceAttrSet(dataSourcePluralName, "results.0.iam_role_id"), - ), + Config: configAWSBasic(projectID, bucketName, policyName, roleName), + Check: resource.ComposeAggregateTestCheckFunc(checks...), + }, + { + ResourceName: resourceName, + ImportStateIdFunc: importStateIDFunc(resourceName), + ImportState: true, + ImportStateVerify: true, + }, + }, + } +} + +func basicAzureTestCase(t *testing.T) *resource.TestCase { + t.Helper() + + var ( + projectID = acc.ProjectIDExecution(t) + tenantID = os.Getenv("AZURE_TENANT_ID") + bucketName = os.Getenv("AZURE_BLOB_STORAGE_CONTAINER_NAME") + serviceURL = os.Getenv("AZURE_SERVICE_URL") + atlasAzureAppID = 
os.Getenv("AZURE_ATLAS_APP_ID") + servicePrincipalID = os.Getenv("AZURE_SERVICE_PRINCIPAL_ID") + attrMapCheck = map[string]string{ + "project_id": projectID, + "bucket_name": bucketName, + "service_url": serviceURL, + "tenant_id": tenantID, + "cloud_provider": "AZURE", + } + pluralAttrMapCheck = map[string]string{ + "project_id": projectID, + "results.#": "1", + "results.0.bucket_name": bucketName, + "results.0.service_url": serviceURL, + "results.0.tenant_id": tenantID, + "results.0.cloud_provider": "AZURE", + } + attrsSet = []string{ + "role_id", + } + ) + checks := []resource.TestCheckFunc{checkExists(resourceName)} + checks = acc.AddAttrChecks(resourceName, checks, attrMapCheck) + checks = acc.AddAttrSetChecks(resourceName, checks, attrsSet...) + checks = acc.AddAttrChecks(dataSourceName, checks, attrMapCheck) + checks = acc.AddAttrSetChecks(dataSourceName, checks, attrsSet...) + checks = acc.AddAttrChecks(dataSourcePluralName, checks, pluralAttrMapCheck) + checks = acc.AddAttrSetChecks(dataSourcePluralName, checks, []string{"results.0.role_id"}...) 
+ + return &resource.TestCase{ + PreCheck: func() { + acc.PreCheckBasic(t) + acc.PreCheckCloudProviderAccessAzure(t) + acc.PreCheckAzureExportBucket(t) + }, + ProtoV6ProviderFactories: acc.TestAccProviderV6Factories, + CheckDestroy: checkDestroy, + Steps: []resource.TestStep{ + { + Config: configAzureBasic(projectID, atlasAzureAppID, servicePrincipalID, tenantID, bucketName, serviceURL), + Check: resource.ComposeAggregateTestCheckFunc(checks...), }, { ResourceName: resourceName, @@ -112,7 +180,7 @@ func importStateIDFunc(resourceName string) resource.ImportStateIdFunc { } } -func configBasic(projectID, bucketName, policyName, roleName string) string { +func configAWSBasic(projectID, bucketName, policyName, roleName string) string { return fmt.Sprintf(` resource "aws_iam_role_policy" "test_policy" { name = %[3]q @@ -193,3 +261,48 @@ func configBasic(projectID, bucketName, policyName, roleName string) string { } `, projectID, bucketName, policyName, roleName) } + +func configAzureBasic(projectID, atlasAzureAppID, servicePrincipalID, tenantID, bucketName, serviceURL string) string { + return fmt.Sprintf(` + resource "mongodbatlas_cloud_provider_access_setup" "setup_only" { + project_id = %[1]q + provider_name = "AZURE" + azure_config { + atlas_azure_app_id = %[2]q + service_principal_id = %[3]q + tenant_id = %[4]q + } + } + + resource "mongodbatlas_cloud_provider_access_authorization" "auth_role" { + project_id = %[1]q + role_id = mongodbatlas_cloud_provider_access_setup.setup_only.role_id + + azure { + atlas_azure_app_id = %[2]q + service_principal_id = %[3]q + tenant_id = %[4]q + } + } + + + resource "mongodbatlas_cloud_backup_snapshot_export_bucket" "test" { + project_id = %[1]q + bucket_name = %[5]q + cloud_provider = "AZURE" + service_url = %[6]q + role_id = mongodbatlas_cloud_provider_access_authorization.auth_role.role_id + tenant_id = %[4]q + } + + data "mongodbatlas_cloud_backup_snapshot_export_bucket" "test" { + project_id = 
mongodbatlas_cloud_backup_snapshot_export_bucket.test.project_id + export_bucket_id = mongodbatlas_cloud_backup_snapshot_export_bucket.test.export_bucket_id + id = mongodbatlas_cloud_backup_snapshot_export_bucket.test.export_bucket_id + } + + data "mongodbatlas_cloud_backup_snapshot_export_buckets" "test" { + project_id = mongodbatlas_cloud_backup_snapshot_export_bucket.test.project_id + } + `, projectID, atlasAzureAppID, servicePrincipalID, tenantID, bucketName, serviceURL) +} diff --git a/internal/service/cloudbackupsnapshotexportjob/data_source_cloud_backup_snapshot_export_job.go b/internal/service/cloudbackupsnapshotexportjob/data_source_cloud_backup_snapshot_export_job.go index 54262d2a2a..66c6666965 100644 --- a/internal/service/cloudbackupsnapshotexportjob/data_source_cloud_backup_snapshot_export_job.go +++ b/internal/service/cloudbackupsnapshotexportjob/data_source_cloud_backup_snapshot_export_job.go @@ -71,8 +71,9 @@ func DataSource() *schema.Resource { Computed: true, }, "err_msg": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Deprecated: fmt.Sprintf(constant.DeprecationParamByVersion, "1.20.0"), }, "export_bucket_id": { Type: schema.TypeString, diff --git a/internal/service/cloudbackupsnapshotexportjob/data_source_cloud_backup_snapshot_export_jobs.go b/internal/service/cloudbackupsnapshotexportjob/data_source_cloud_backup_snapshot_export_jobs.go index 23b1fda897..a29f13d3a6 100644 --- a/internal/service/cloudbackupsnapshotexportjob/data_source_cloud_backup_snapshot_export_jobs.go +++ b/internal/service/cloudbackupsnapshotexportjob/data_source_cloud_backup_snapshot_export_jobs.go @@ -2,17 +2,20 @@ package cloudbackupsnapshotexportjob import ( "context" + "fmt" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant" + 
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - matlas "go.mongodb.org/atlas/mongodbatlas" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func PluralDataSource() *schema.Resource { return &schema.Resource{ - ReadContext: dataSourceMongoDBAtlasCloudBackupSnapshotsExportJobsRead, + ReadContext: dataSourceRead, Schema: map[string]*schema.Schema{ "project_id": { Type: schema.TypeString, @@ -79,8 +82,9 @@ func PluralDataSource() *schema.Resource { Computed: true, }, "err_msg": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Deprecated: fmt.Sprintf(constant.DeprecationParamByVersion, "1.20.0"), }, "export_bucket_id": { Type: schema.TypeString, @@ -117,28 +121,24 @@ func PluralDataSource() *schema.Resource { } } -func dataSourceMongoDBAtlasCloudBackupSnapshotsExportJobsRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { - // Get client connection. 
- conn := meta.(*config.MongoDBClient).Atlas +func dataSourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { + connV2 := meta.(*config.MongoDBClient).AtlasV2 projectID := d.Get("project_id").(string) clusterName := d.Get("cluster_name").(string) + pageNum := d.Get("page_num").(int) + itemsPerPage := d.Get("items_per_page").(int) - options := &matlas.ListOptions{ - PageNum: d.Get("page_num").(int), - ItemsPerPage: d.Get("items_per_page").(int), - } - - jobs, _, err := conn.CloudProviderSnapshotExportJobs.List(ctx, projectID, clusterName, options) + jobs, _, err := connV2.CloudBackupsApi.ListBackupExportJobs(ctx, projectID, clusterName).PageNum(pageNum).ItemsPerPage(itemsPerPage).Execute() if err != nil { return diag.Errorf("error getting CloudProviderSnapshotExportJobs information: %s", err) } - if err := d.Set("results", flattenCloudBackupSnapshotExportJobs(jobs.Results)); err != nil { + if err := d.Set("results", flattenCloudBackupSnapshotExportJobs(jobs.GetResults())); err != nil { return diag.Errorf("error setting `results`: %s", err) } - if err := d.Set("total_count", jobs.TotalCount); err != nil { + if err := d.Set("total_count", jobs.GetTotalCount()); err != nil { return diag.Errorf("error setting `total_count`: %s", err) } @@ -147,7 +147,7 @@ func dataSourceMongoDBAtlasCloudBackupSnapshotsExportJobsRead(ctx context.Contex return nil } -func flattenCloudBackupSnapshotExportJobs(jobs []*matlas.CloudProviderSnapshotExportJob) []map[string]any { +func flattenCloudBackupSnapshotExportJobs(jobs []admin.DiskBackupExportJob) []map[string]any { var results []map[string]any if len(jobs) == 0 { @@ -158,18 +158,18 @@ func flattenCloudBackupSnapshotExportJobs(jobs []*matlas.CloudProviderSnapshotEx for k, job := range jobs { results[k] = map[string]any{ - "export_job_id": job.ID, - "created_at": job.CreatedAt, - "components": flattenExportJobsComponents(job.Components), - "custom_data": flattenExportJobsCustomData(job.CustomData), - 
"err_msg": job.ErrMsg, - "export_bucket_id": job.ExportBucketID, - "export_status_exported_collections": job.ExportStatus.ExportedCollections, - "export_status_total_collections": job.ExportStatus.TotalCollections, - "finished_at": job.FinishedAt, - "prefix": job.Prefix, - "snapshot_id": job.SnapshotID, - "state": job.State, + "export_job_id": job.GetId(), + "created_at": conversion.TimePtrToStringPtr(job.CreatedAt), + "components": flattenExportJobsComponents(job.GetComponents()), + "custom_data": flattenExportJobsCustomData(job.GetCustomData()), + "export_bucket_id": job.GetExportBucketId(), + "err_msg": "", + "export_status_exported_collections": job.ExportStatus.GetExportedCollections(), + "export_status_total_collections": job.ExportStatus.GetTotalCollections(), + "finished_at": conversion.TimePtrToStringPtr(job.FinishedAt), + "prefix": job.GetPrefix(), + "snapshot_id": job.GetSnapshotId(), + "state": job.GetState(), } } diff --git a/internal/service/cloudbackupsnapshotexportjob/resource_cloud_backup_snapshot_export_job.go b/internal/service/cloudbackupsnapshotexportjob/resource_cloud_backup_snapshot_export_job.go index 8405ac9a1b..8fe4a0d7a3 100644 --- a/internal/service/cloudbackupsnapshotexportjob/resource_cloud_backup_snapshot_export_job.go +++ b/internal/service/cloudbackupsnapshotexportjob/resource_cloud_backup_snapshot_export_job.go @@ -8,18 +8,19 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - matlas "go.mongodb.org/atlas/mongodbatlas" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func Resource() *schema.Resource { return &schema.Resource{ - CreateContext: resourceMongoDBAtlasCloudBackupSnapshotExportJobCreate, - ReadContext: 
resourceMongoDBAtlasCloudBackupSnapshotExportJobRead, + CreateContext: resourceCreate, + ReadContext: resourceRead, DeleteContext: resourceDelete, Importer: &schema.ResourceImporter{ - StateContext: resourceMongoDBAtlasCloudBackupSnapshotExportJobImportState, + StateContext: resourceImportState, }, Schema: returnCloudBackupSnapshotExportJobSchema(), } @@ -94,8 +95,9 @@ func returnCloudBackupSnapshotExportJobSchema() map[string]*schema.Schema { Computed: true, }, "err_msg": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Computed: true, + Deprecated: fmt.Sprintf(constant.DeprecationParamByVersion, "1.20.0"), }, "export_status_exported_collections": { Type: schema.TypeInt, @@ -120,7 +122,7 @@ func returnCloudBackupSnapshotExportJobSchema() map[string]*schema.Schema { } } -func resourceMongoDBAtlasCloudBackupSnapshotExportJobRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { +func resourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { exportJob, err := readExportJob(ctx, meta, d) if err != nil { reset := strings.Contains(err.Error(), "404") && !d.IsNewResource() @@ -135,8 +137,8 @@ func resourceMongoDBAtlasCloudBackupSnapshotExportJobRead(ctx context.Context, d return setExportJobFields(d, exportJob) } -func readExportJob(ctx context.Context, meta any, d *schema.ResourceData) (*matlas.CloudProviderSnapshotExportJob, error) { - conn := meta.(*config.MongoDBClient).Atlas +func readExportJob(ctx context.Context, meta any, d *schema.ResourceData) (*admin.DiskBackupExportJob, error) { + connV2 := meta.(*config.MongoDBClient).AtlasV2 projectID, clusterName, exportID := getRequiredFields(d) if d.Id() != "" && (projectID == "" || clusterName == "" || exportID == "") { ids := conversion.DecodeStateID(d.Id()) @@ -144,12 +146,12 @@ func readExportJob(ctx context.Context, meta any, d *schema.ResourceData) (*matl clusterName = ids["cluster_name"] exportID = ids["export_job_id"] } - exportJob, 
_, err := conn.CloudProviderSnapshotExportJobs.Get(ctx, projectID, clusterName, exportID) + exportJob, _, err := connV2.CloudBackupsApi.GetBackupExportJob(ctx, projectID, clusterName, exportID).Execute() if err == nil { d.SetId(conversion.EncodeStateID(map[string]string{ "project_id": projectID, "cluster_name": clusterName, - "export_job_id": exportJob.ID, + "export_job_id": exportJob.GetId(), })) } return exportJob, err @@ -162,61 +164,61 @@ func getRequiredFields(d *schema.ResourceData) (projectID, clusterName, exportID return projectID, clusterName, exportID } -func setExportJobFields(d *schema.ResourceData, exportJob *matlas.CloudProviderSnapshotExportJob) diag.Diagnostics { - if err := d.Set("export_job_id", exportJob.ID); err != nil { +func setExportJobFields(d *schema.ResourceData, exportJob *admin.DiskBackupExportJob) diag.Diagnostics { + if err := d.Set("export_job_id", exportJob.GetId()); err != nil { return diag.Errorf("error setting `export_job_id` for snapshot export job (%s): %s", d.Id(), err) } - if err := d.Set("snapshot_id", exportJob.SnapshotID); err != nil { + if err := d.Set("snapshot_id", exportJob.GetSnapshotId()); err != nil { return diag.Errorf("error setting `snapshot_id` for snapshot export job (%s): %s", d.Id(), err) } - if err := d.Set("custom_data", flattenExportJobsCustomData(exportJob.CustomData)); err != nil { + if err := d.Set("custom_data", flattenExportJobsCustomData(exportJob.GetCustomData())); err != nil { return diag.Errorf("error setting `custom_data` for snapshot export job (%s): %s", d.Id(), err) } - if err := d.Set("components", flattenExportJobsComponents(exportJob.Components)); err != nil { + if err := d.Set("components", flattenExportJobsComponents(exportJob.GetComponents())); err != nil { return diag.Errorf("error setting `components` for snapshot export job (%s): %s", d.Id(), err) } - if err := d.Set("created_at", exportJob.CreatedAt); err != nil { + if err := d.Set("created_at", 
conversion.TimePtrToStringPtr(exportJob.CreatedAt)); err != nil {
 		return diag.Errorf("error setting `created_at` for snapshot export job (%s): %s", d.Id(), err)
 	}
-	if err := d.Set("err_msg", exportJob.ErrMsg); err != nil {
+	if err := d.Set("err_msg", ""); err != nil {
 		return diag.Errorf("error setting `err_msg` for snapshot export job (%s): %s", d.Id(), err)
 	}
-	if err := d.Set("export_bucket_id", exportJob.ExportBucketID); err != nil {
+	if err := d.Set("export_bucket_id", exportJob.GetExportBucketId()); err != nil {
 		return diag.Errorf("error setting `export_bucket_id` for snapshot export job (%s): %s", d.Id(), err)
 	}
 	if exportJob.ExportStatus != nil {
-		if err := d.Set("export_status_exported_collections", exportJob.ExportStatus.ExportedCollections); err != nil {
+		if err := d.Set("export_status_exported_collections", exportJob.ExportStatus.GetExportedCollections()); err != nil {
 			return diag.Errorf("error setting `export_status_exported_collections` for snapshot export job (%s): %s", d.Id(), err)
 		}
-		if err := d.Set("export_status_total_collections", exportJob.ExportStatus.TotalCollections); err != nil {
+		if err := d.Set("export_status_total_collections", exportJob.ExportStatus.GetTotalCollections()); err != nil {
 			return diag.Errorf("error setting `export_status_total_collections` for snapshot export job (%s): %s", d.Id(), err)
 		}
 	}
-	if err := d.Set("finished_at", exportJob.FinishedAt); err != nil {
+	if err := d.Set("finished_at", conversion.TimePtrToStringPtr(exportJob.FinishedAt)); err != nil {
 		return diag.Errorf("error setting `finished_at` for snapshot export job (%s): %s", d.Id(), err)
 	}
-	if err := d.Set("prefix", exportJob.Prefix); err != nil {
+	if err := d.Set("prefix", exportJob.GetPrefix()); err != nil {
 		return diag.Errorf("error setting `prefix` for snapshot export job (%s): %s", d.Id(), err)
 	}
-	if err := d.Set("state", exportJob.State); err != nil {
+	if err := d.Set("state", exportJob.GetState()); err != nil {
 		return diag.Errorf("error setting 
`state` for snapshot export job (%s): %s", d.Id(), err)
 	}
 	return nil
 }

-func flattenExportJobsComponents(components []*matlas.CloudProviderSnapshotExportJobComponent) []map[string]any {
+func flattenExportJobsComponents(components []admin.DiskBackupExportMember) []map[string]any {
 	if len(components) == 0 {
 		return nil
 	}
@@ -225,15 +227,15 @@ func flattenExportJobsComponents(components []*matlas.CloudProviderSnapshotExpor
 	for i := range components {
 		customData = append(customData, map[string]any{
-			"export_id":        components[i].ExportID,
-			"replica_set_name": components[i].ReplicaSetName,
+			"export_id":        (components)[i].GetExportId(),
+			"replica_set_name": (components)[i].GetReplicaSetName(),
 		})
 	}

 	return customData
 }

-func flattenExportJobsCustomData(data []*matlas.CloudProviderSnapshotExportJobCustomData) []map[string]any {
+func flattenExportJobsCustomData(data []admin.BackupLabel) []map[string]any {
 	if len(data) == 0 {
 		return nil
 	}
@@ -242,53 +244,53 @@ func flattenExportJobsCustomData(data []*matlas.CloudProviderSnapshotExportJobCu
 	for i := range data {
 		customData = append(customData, map[string]any{
-			"key":   data[i].Key,
-			"value": data[i].Value,
+			"key":   data[i].GetKey(),
+			"value": data[i].GetValue(),
 		})
 	}

 	return customData
 }

-func resourceMongoDBAtlasCloudBackupSnapshotExportJobCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
-	conn := meta.(*config.MongoDBClient).Atlas
+func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
+	connV2 := meta.(*config.MongoDBClient).AtlasV2
 	projectID := d.Get("project_id").(string)
 	clusterName := d.Get("cluster_name").(string)

-	request := &matlas.CloudProviderSnapshotExportJob{
-		SnapshotID:     d.Get("snapshot_id").(string),
-		ExportBucketID: d.Get("export_bucket_id").(string),
+	request := &admin.DiskBackupExportJobRequest{
+		SnapshotId:     d.Get("snapshot_id").(string),
+		ExportBucketId: d.Get("export_bucket_id").(string),
 		CustomData: 
expandExportJobCustomData(d), } - jobResponse, _, err := conn.CloudProviderSnapshotExportJobs.Create(ctx, projectID, clusterName, request) + jobResponse, _, err := connV2.CloudBackupsApi.CreateBackupExportJob(ctx, projectID, clusterName, request).Execute() if err != nil { return diag.Errorf("error creating snapshot export job: %s", err) } - if err := d.Set("export_job_id", jobResponse.ID); err != nil { - return diag.Errorf("error setting `export_job_id` for snapshot export job (%s): %s", jobResponse.ID, err) + if err := d.Set("export_job_id", jobResponse.Id); err != nil { + return diag.Errorf("error setting `export_job_id` for snapshot export job (%s): %s", *jobResponse.Id, err) } - return resourceMongoDBAtlasCloudBackupSnapshotExportJobRead(ctx, d, meta) + return resourceRead(ctx, d, meta) } -func expandExportJobCustomData(d *schema.ResourceData) []*matlas.CloudProviderSnapshotExportJobCustomData { +func expandExportJobCustomData(d *schema.ResourceData) *[]admin.BackupLabel { customData := d.Get("custom_data").(*schema.Set) - res := make([]*matlas.CloudProviderSnapshotExportJobCustomData, customData.Len()) + res := make([]admin.BackupLabel, customData.Len()) for i, val := range customData.List() { v := val.(map[string]any) - res[i] = &matlas.CloudProviderSnapshotExportJobCustomData{ - Key: v["key"].(string), - Value: v["value"].(string), + res[i] = admin.BackupLabel{ + Key: conversion.Pointer(v["key"].(string)), + Value: conversion.Pointer(v["value"].(string)), } } - return res + return &res } -func resourceMongoDBAtlasCloudBackupSnapshotExportJobImportState(ctx context.Context, d *schema.ResourceData, meta any) ([]*schema.ResourceData, error) { - conn := meta.(*config.MongoDBClient).Atlas +func resourceImportState(ctx context.Context, d *schema.ResourceData, meta any) ([]*schema.ResourceData, error) { + connV2 := meta.(*config.MongoDBClient).AtlasV2 parts := strings.SplitN(d.Id(), "--", 3) if len(parts) != 3 { @@ -299,7 +301,7 @@ func 
resourceMongoDBAtlasCloudBackupSnapshotExportJobImportState(ctx context.Con clusterName := parts[1] exportID := parts[2] - _, _, err := conn.CloudProviderSnapshotExportJobs.Get(ctx, projectID, clusterName, exportID) + _, _, err := connV2.CloudBackupsApi.GetBackupExportJob(ctx, projectID, clusterName, exportID).Execute() if err != nil { return nil, fmt.Errorf("couldn't import snapshot export job %s in project %s and cluster %s, error: %s", exportID, projectID, clusterName, err) } diff --git a/internal/service/cloudbackupsnapshotexportjob/resource_cloud_backup_snapshot_export_job_test.go b/internal/service/cloudbackupsnapshotexportjob/resource_cloud_backup_snapshot_export_job_test.go index 7ebf7f5694..99125326f6 100644 --- a/internal/service/cloudbackupsnapshotexportjob/resource_cloud_backup_snapshot_export_job_test.go +++ b/internal/service/cloudbackupsnapshotexportjob/resource_cloud_backup_snapshot_export_job_test.go @@ -41,8 +41,9 @@ func basicTestCase(tb testing.TB) *resource.TestCase { "project_id": projectID, } attrsPluralDS = map[string]string{ - "project_id": projectID, - "results.0.custom_data.0.key": "exported by", + "project_id": projectID, + "results.0.custom_data.0.key": "exported by", + "results.0.custom_data.0.value": "tf-acc-test", } ) checks := []resource.TestCheckFunc{checkExists(resourceName)} @@ -81,7 +82,7 @@ func checkExists(resourceName string) resource.TestCheckFunc { if err != nil { return err } - _, _, err = acc.Conn().CloudProviderSnapshotExportJobs.Get(context.Background(), projectID, clusterName, exportJobID) + _, _, err = acc.ConnV2().CloudBackupsApi.GetBackupExportJob(context.Background(), projectID, clusterName, exportJobID).Execute() if err == nil { return nil } diff --git a/internal/service/cloudbackupsnapshotrestorejob/data_source_cloud_backup_snapshot_restore_jobs.go b/internal/service/cloudbackupsnapshotrestorejob/data_source_cloud_backup_snapshot_restore_jobs.go index 78d743e3ab..61a80a6808 100644 --- 
a/internal/service/cloudbackupsnapshotrestorejob/data_source_cloud_backup_snapshot_restore_jobs.go +++ b/internal/service/cloudbackupsnapshotrestorejob/data_source_cloud_backup_snapshot_restore_jobs.go @@ -10,7 +10,7 @@ import ( "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func PluralDataSource() *schema.Resource { diff --git a/internal/service/cloudbackupsnapshotrestorejob/resource_cloud_backup_snapshot_restore_job.go b/internal/service/cloudbackupsnapshotrestorejob/resource_cloud_backup_snapshot_restore_job.go index 682e36a27f..2bb1ffc6a6 100644 --- a/internal/service/cloudbackupsnapshotrestorejob/resource_cloud_backup_snapshot_restore_job.go +++ b/internal/service/cloudbackupsnapshotrestorejob/resource_cloud_backup_snapshot_restore_job.go @@ -13,7 +13,7 @@ import ( "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func Resource() *schema.Resource { diff --git a/internal/service/cloudprovideraccess/resource_cloud_provider_access_authorization.go b/internal/service/cloudprovideraccess/resource_cloud_provider_access_authorization.go index 43c2c06a53..0a0a568687 100644 --- a/internal/service/cloudprovideraccess/resource_cloud_provider_access_authorization.go +++ b/internal/service/cloudprovideraccess/resource_cloud_provider_access_authorization.go @@ -12,7 +12,7 @@ import ( "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant" 
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) /* diff --git a/internal/service/cloudprovideraccess/resource_cloud_provider_access_setup.go b/internal/service/cloudprovideraccess/resource_cloud_provider_access_setup.go index d796f48bdc..dd35fc02ec 100644 --- a/internal/service/cloudprovideraccess/resource_cloud_provider_access_setup.go +++ b/internal/service/cloudprovideraccess/resource_cloud_provider_access_setup.go @@ -6,7 +6,7 @@ import ( "net/http" "regexp" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" diff --git a/internal/service/clusteroutagesimulation/resource_cluster_outage_simulation.go b/internal/service/clusteroutagesimulation/resource_cluster_outage_simulation.go index 284d1d04c2..d9271a7baa 100644 --- a/internal/service/clusteroutagesimulation/resource_cluster_outage_simulation.go +++ b/internal/service/clusteroutagesimulation/resource_cluster_outage_simulation.go @@ -12,7 +12,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) const ( diff --git a/internal/service/controlplaneipaddresses/model.go b/internal/service/controlplaneipaddresses/model.go index a99a367c56..e70ec902c0 100644 --- a/internal/service/controlplaneipaddresses/model.go +++ b/internal/service/controlplaneipaddresses/model.go @@ -6,7 +6,7 @@ import ( "github.com/hashicorp/terraform-plugin-framework/diag" 
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func NewTFControlPlaneIPAddresses(ctx context.Context, apiResp *admin.ControlPlaneIPAddresses) (*TFControlPlaneIpAddressesModel, diag.Diagnostics) { diff --git a/internal/service/controlplaneipaddresses/model_test.go b/internal/service/controlplaneipaddresses/model_test.go index c550719e7f..7a4e2f48ea 100644 --- a/internal/service/controlplaneipaddresses/model_test.go +++ b/internal/service/controlplaneipaddresses/model_test.go @@ -9,7 +9,7 @@ import ( "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/controlplaneipaddresses" "github.com/stretchr/testify/assert" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) type sdkToTFModelTestCase struct { diff --git a/internal/service/customdbrole/data_source_custom_db_roles.go b/internal/service/customdbrole/data_source_custom_db_roles.go index a46c8f9542..3f7492bbc7 100644 --- a/internal/service/customdbrole/data_source_custom_db_roles.go +++ b/internal/service/customdbrole/data_source_custom_db_roles.go @@ -8,7 +8,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func PluralDataSource() *schema.Resource { diff --git a/internal/service/customdbrole/resource_custom_db_role.go b/internal/service/customdbrole/resource_custom_db_role.go index 1ba4bab266..4043f34be5 100644 --- a/internal/service/customdbrole/resource_custom_db_role.go +++ b/internal/service/customdbrole/resource_custom_db_role.go @@ -17,7 +17,7 @@ import ( 
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" "github.com/spf13/cast" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func Resource() *schema.Resource { diff --git a/internal/service/customdbrole/resource_custom_db_role_test.go b/internal/service/customdbrole/resource_custom_db_role_test.go index af2e6282b6..8e9360f71f 100644 --- a/internal/service/customdbrole/resource_custom_db_role_test.go +++ b/internal/service/customdbrole/resource_custom_db_role_test.go @@ -11,7 +11,7 @@ import ( "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc" "github.com/spf13/cast" - matlas "go.mongodb.org/atlas/mongodbatlas" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) const resourceName = "mongodbatlas_custom_db_role.test" @@ -72,64 +72,64 @@ func TestAccConfigRSCustomDBRoles_WithInheritedRoles(t *testing.T) { projectName = acc.RandomProjectName() ) - inheritRole := []matlas.CustomDBRole{ + inheritRole := []admin.UserCustomDBRole{ { RoleName: acc.RandomName(), - Actions: []matlas.Action{{ + Actions: &[]admin.DatabasePrivilegeAction{{ Action: "INSERT", - Resources: []matlas.Resource{{ - DB: conversion.Pointer(acc.RandomClusterName()), + Resources: &[]admin.DatabasePermittedNamespaceResource{{ + Db: acc.RandomClusterName(), }}, }}, }, { RoleName: acc.RandomName(), - Actions: []matlas.Action{{ + Actions: &[]admin.DatabasePrivilegeAction{{ Action: "SERVER_STATUS", - Resources: []matlas.Resource{{ - Cluster: conversion.Pointer(true), + Resources: &[]admin.DatabasePermittedNamespaceResource{{ + Cluster: true, }}, }}, }, } - testRole := &matlas.CustomDBRole{ + testRole := &admin.UserCustomDBRole{ RoleName: acc.RandomName(), - Actions: []matlas.Action{{ + Actions: &[]admin.DatabasePrivilegeAction{{ Action: "UPDATE", - Resources: 
[]matlas.Resource{{ - DB: conversion.Pointer(acc.RandomClusterName()), + Resources: &[]admin.DatabasePermittedNamespaceResource{{ + Db: acc.RandomClusterName(), }}, }}, } - inheritRoleUpdated := []matlas.CustomDBRole{ + inheritRoleUpdated := []admin.UserCustomDBRole{ { RoleName: inheritRole[0].RoleName, - Actions: []matlas.Action{{ + Actions: &[]admin.DatabasePrivilegeAction{{ Action: "FIND", - Resources: []matlas.Resource{{ - DB: conversion.Pointer(acc.RandomClusterName()), + Resources: &[]admin.DatabasePermittedNamespaceResource{{ + Db: acc.RandomClusterName(), }}, }}, }, { RoleName: inheritRole[1].RoleName, - Actions: []matlas.Action{{ + Actions: &[]admin.DatabasePrivilegeAction{{ Action: "CONN_POOL_STATS", - Resources: []matlas.Resource{{ - Cluster: conversion.Pointer(true), + Resources: &[]admin.DatabasePermittedNamespaceResource{{ + Cluster: true, }}, }}, }, } - testRoleUpdated := &matlas.CustomDBRole{ + testRoleUpdated := &admin.UserCustomDBRole{ RoleName: testRole.RoleName, - Actions: []matlas.Action{{ + Actions: &[]admin.DatabasePrivilegeAction{{ Action: "REMOVE", - Resources: []matlas.Resource{{ - DB: conversion.Pointer(acc.RandomClusterName()), + Resources: &[]admin.DatabasePermittedNamespaceResource{{ + Db: acc.RandomClusterName(), }}, }}, } @@ -148,25 +148,25 @@ func TestAccConfigRSCustomDBRoles_WithInheritedRoles(t *testing.T) { checkExists(InheritedRoleResourceNameOne), resource.TestCheckResourceAttrSet(InheritedRoleResourceNameOne, "project_id"), resource.TestCheckResourceAttr(InheritedRoleResourceNameOne, "role_name", inheritRole[0].RoleName), - resource.TestCheckResourceAttr(InheritedRoleResourceNameOne, "actions.#", cast.ToString(len(inheritRole[0].Actions))), - resource.TestCheckResourceAttr(InheritedRoleResourceNameOne, "actions.0.action", inheritRole[0].Actions[0].Action), - resource.TestCheckResourceAttr(InheritedRoleResourceNameOne, "actions.0.resources.#", cast.ToString(len(inheritRole[0].Actions[0].Resources))), + 
resource.TestCheckResourceAttr(InheritedRoleResourceNameOne, "actions.#", cast.ToString(len(inheritRole[0].GetActions()))), + resource.TestCheckResourceAttr(InheritedRoleResourceNameOne, "actions.0.action", inheritRole[0].GetActions()[0].Action), + resource.TestCheckResourceAttr(InheritedRoleResourceNameOne, "actions.0.resources.#", cast.ToString(len(inheritRole[0].GetActions()[0].GetResources()))), // inherited Role [1] checkExists(InheritedRoleResourceNameTwo), resource.TestCheckResourceAttrSet(InheritedRoleResourceNameTwo, "project_id"), resource.TestCheckResourceAttr(InheritedRoleResourceNameTwo, "role_name", inheritRole[1].RoleName), - resource.TestCheckResourceAttr(InheritedRoleResourceNameTwo, "actions.#", cast.ToString(len(inheritRole[1].Actions))), - resource.TestCheckResourceAttr(InheritedRoleResourceNameTwo, "actions.0.action", inheritRole[1].Actions[0].Action), - resource.TestCheckResourceAttr(InheritedRoleResourceNameTwo, "actions.0.resources.#", cast.ToString(len(inheritRole[1].Actions[0].Resources))), + resource.TestCheckResourceAttr(InheritedRoleResourceNameTwo, "actions.#", cast.ToString(len(inheritRole[1].GetActions()))), + resource.TestCheckResourceAttr(InheritedRoleResourceNameTwo, "actions.0.action", inheritRole[1].GetActions()[0].Action), + resource.TestCheckResourceAttr(InheritedRoleResourceNameTwo, "actions.0.resources.#", cast.ToString(len(inheritRole[1].GetActions()[0].GetResources()))), // For Test Role checkExists(testRoleResourceName), resource.TestCheckResourceAttrSet(testRoleResourceName, "project_id"), resource.TestCheckResourceAttr(testRoleResourceName, "role_name", testRole.RoleName), - resource.TestCheckResourceAttr(testRoleResourceName, "actions.#", cast.ToString(len(testRole.Actions))), - resource.TestCheckResourceAttr(testRoleResourceName, "actions.0.action", testRole.Actions[0].Action), - resource.TestCheckResourceAttr(testRoleResourceName, "actions.0.resources.#", cast.ToString(len(testRole.Actions[0].Resources))), + 
resource.TestCheckResourceAttr(testRoleResourceName, "actions.#", cast.ToString(len(testRole.GetActions()))), + resource.TestCheckResourceAttr(testRoleResourceName, "actions.0.action", testRole.GetActions()[0].Action), + resource.TestCheckResourceAttr(testRoleResourceName, "actions.0.resources.#", cast.ToString(len(testRole.GetActions()[0].GetResources()))), resource.TestCheckResourceAttr(testRoleResourceName, "inherited_roles.#", "2"), ), }, @@ -179,25 +179,25 @@ func TestAccConfigRSCustomDBRoles_WithInheritedRoles(t *testing.T) { checkExists(InheritedRoleResourceNameOne), resource.TestCheckResourceAttrSet(InheritedRoleResourceNameOne, "project_id"), resource.TestCheckResourceAttr(InheritedRoleResourceNameOne, "role_name", inheritRoleUpdated[0].RoleName), - resource.TestCheckResourceAttr(InheritedRoleResourceNameOne, "actions.#", cast.ToString(len(inheritRoleUpdated[0].Actions))), - resource.TestCheckResourceAttr(InheritedRoleResourceNameOne, "actions.0.action", inheritRoleUpdated[0].Actions[0].Action), - resource.TestCheckResourceAttr(InheritedRoleResourceNameOne, "actions.0.resources.#", cast.ToString(len(inheritRoleUpdated[0].Actions[0].Resources))), + resource.TestCheckResourceAttr(InheritedRoleResourceNameOne, "actions.#", cast.ToString(len(inheritRoleUpdated[0].GetActions()))), + resource.TestCheckResourceAttr(InheritedRoleResourceNameOne, "actions.0.action", inheritRoleUpdated[0].GetActions()[0].Action), + resource.TestCheckResourceAttr(InheritedRoleResourceNameOne, "actions.0.resources.#", cast.ToString(len(inheritRoleUpdated[0].GetActions()[0].GetResources()))), // inherited Role [1] checkExists(InheritedRoleResourceNameTwo), resource.TestCheckResourceAttrSet(InheritedRoleResourceNameTwo, "project_id"), resource.TestCheckResourceAttr(InheritedRoleResourceNameTwo, "role_name", inheritRoleUpdated[1].RoleName), - resource.TestCheckResourceAttr(InheritedRoleResourceNameTwo, "actions.#", cast.ToString(len(inheritRoleUpdated[1].Actions))), - 
resource.TestCheckResourceAttr(InheritedRoleResourceNameTwo, "actions.0.action", inheritRoleUpdated[1].Actions[0].Action), - resource.TestCheckResourceAttr(InheritedRoleResourceNameTwo, "actions.0.resources.#", cast.ToString(len(inheritRoleUpdated[1].Actions[0].Resources))), + resource.TestCheckResourceAttr(InheritedRoleResourceNameTwo, "actions.#", cast.ToString(len(inheritRoleUpdated[1].GetActions()))), + resource.TestCheckResourceAttr(InheritedRoleResourceNameTwo, "actions.0.action", inheritRoleUpdated[1].GetActions()[0].Action), + resource.TestCheckResourceAttr(InheritedRoleResourceNameTwo, "actions.0.resources.#", cast.ToString(len(inheritRoleUpdated[1].GetActions()[0].GetResources()))), // For Test Role checkExists(testRoleResourceName), resource.TestCheckResourceAttrSet(testRoleResourceName, "project_id"), resource.TestCheckResourceAttr(testRoleResourceName, "role_name", testRoleUpdated.RoleName), - resource.TestCheckResourceAttr(testRoleResourceName, "actions.#", cast.ToString(len(testRoleUpdated.Actions))), - resource.TestCheckResourceAttr(testRoleResourceName, "actions.0.action", testRoleUpdated.Actions[0].Action), - resource.TestCheckResourceAttr(testRoleResourceName, "actions.0.resources.#", cast.ToString(len(testRoleUpdated.Actions[0].Resources))), + resource.TestCheckResourceAttr(testRoleResourceName, "actions.#", cast.ToString(len(testRoleUpdated.GetActions()))), + resource.TestCheckResourceAttr(testRoleResourceName, "actions.0.action", testRoleUpdated.GetActions()[0].Action), + resource.TestCheckResourceAttr(testRoleResourceName, "actions.0.resources.#", cast.ToString(len(testRoleUpdated.GetActions()[0].GetResources()))), resource.TestCheckResourceAttr(testRoleResourceName, "inherited_roles.#", "2"), ), }, @@ -213,55 +213,55 @@ func TestAccConfigRSCustomDBRoles_MultipleCustomRoles(t *testing.T) { projectName = acc.RandomProjectName() ) - inheritRole := &matlas.CustomDBRole{ + inheritRole := &admin.UserCustomDBRole{ RoleName: acc.RandomName(), - 
Actions: []matlas.Action{ + Actions: &[]admin.DatabasePrivilegeAction{ { Action: "REMOVE", - Resources: []matlas.Resource{ + Resources: &[]admin.DatabasePermittedNamespaceResource{ { - DB: conversion.Pointer(acc.RandomClusterName()), + Db: acc.RandomClusterName(), }, { - DB: conversion.Pointer(acc.RandomClusterName()), + Db: acc.RandomClusterName(), }, }, }, { Action: "FIND", - Resources: []matlas.Resource{ + Resources: &[]admin.DatabasePermittedNamespaceResource{ { - DB: conversion.Pointer(acc.RandomClusterName()), + Db: acc.RandomClusterName(), }, }, }, }, } - testRole := &matlas.CustomDBRole{ + testRole := &admin.UserCustomDBRole{ RoleName: acc.RandomName(), - Actions: []matlas.Action{ + Actions: &[]admin.DatabasePrivilegeAction{ { Action: "UPDATE", - Resources: []matlas.Resource{ + Resources: &[]admin.DatabasePermittedNamespaceResource{ { - DB: conversion.Pointer(acc.RandomClusterName()), + Db: acc.RandomClusterName(), }, { - DB: conversion.Pointer(acc.RandomClusterName()), + Db: acc.RandomClusterName(), }, }, }, { Action: "INSERT", - Resources: []matlas.Resource{ + Resources: &[]admin.DatabasePermittedNamespaceResource{ { - DB: conversion.Pointer(acc.RandomClusterName()), + Db: acc.RandomClusterName(), }, }, }, }, - InheritedRoles: []matlas.InheritedRole{ + InheritedRoles: &[]admin.DatabaseInheritedRole{ { Role: inheritRole.RoleName, Db: "admin", @@ -269,55 +269,55 @@ func TestAccConfigRSCustomDBRoles_MultipleCustomRoles(t *testing.T) { }, } - inheritRoleUpdated := &matlas.CustomDBRole{ + inheritRoleUpdated := &admin.UserCustomDBRole{ RoleName: inheritRole.RoleName, - Actions: []matlas.Action{ + Actions: &[]admin.DatabasePrivilegeAction{ { Action: "UPDATE", - Resources: []matlas.Resource{ + Resources: &[]admin.DatabasePermittedNamespaceResource{ { - DB: conversion.Pointer(acc.RandomClusterName()), + Db: acc.RandomClusterName(), }, }, }, { Action: "FIND", - Resources: []matlas.Resource{ + Resources: &[]admin.DatabasePermittedNamespaceResource{ { - DB: 
conversion.Pointer(acc.RandomClusterName()), + Db: acc.RandomClusterName(), }, { - DB: conversion.Pointer(acc.RandomClusterName()), + Db: acc.RandomClusterName(), }, }, }, { Action: "INSERT", - Resources: []matlas.Resource{ + Resources: &[]admin.DatabasePermittedNamespaceResource{ { - DB: conversion.Pointer(acc.RandomClusterName()), + Db: acc.RandomClusterName(), }, { - DB: conversion.Pointer(acc.RandomClusterName()), + Db: acc.RandomClusterName(), }, }, }, }, } - testRoleUpdated := &matlas.CustomDBRole{ + testRoleUpdated := &admin.UserCustomDBRole{ RoleName: testRole.RoleName, - Actions: []matlas.Action{ + Actions: &[]admin.DatabasePrivilegeAction{ { Action: "REMOVE", - Resources: []matlas.Resource{ + Resources: &[]admin.DatabasePermittedNamespaceResource{ { - DB: conversion.Pointer(acc.RandomClusterName()), + Db: acc.RandomClusterName(), }, }, }, }, - InheritedRoles: []matlas.InheritedRole{ + InheritedRoles: &[]admin.DatabaseInheritedRole{ { Role: inheritRole.RoleName, Db: "admin", @@ -338,17 +338,17 @@ func TestAccConfigRSCustomDBRoles_MultipleCustomRoles(t *testing.T) { checkExists(InheritedRoleResourceName), resource.TestCheckResourceAttrSet(InheritedRoleResourceName, "project_id"), resource.TestCheckResourceAttr(InheritedRoleResourceName, "role_name", inheritRole.RoleName), - resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.#", cast.ToString(len(inheritRole.Actions))), - resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.0.action", inheritRole.Actions[0].Action), - resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.0.resources.#", cast.ToString(len(inheritRole.Actions[0].Resources))), + resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.#", cast.ToString(len(inheritRole.GetActions()))), + resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.0.action", inheritRole.GetActions()[0].Action), + resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.0.resources.#", 
cast.ToString(len(inheritRole.GetActions()[0].GetResources()))),
                // For Test Role
                checkExists(testRoleResourceName),
                resource.TestCheckResourceAttrSet(testRoleResourceName, "project_id"),
                resource.TestCheckResourceAttr(testRoleResourceName, "role_name", testRole.RoleName),
-               resource.TestCheckResourceAttr(testRoleResourceName, "actions.#", cast.ToString(len(testRole.Actions))),
-               resource.TestCheckResourceAttr(testRoleResourceName, "actions.0.action", testRole.Actions[0].Action),
-               resource.TestCheckResourceAttr(testRoleResourceName, "actions.0.resources.#", cast.ToString(len(testRole.Actions[0].Resources))),
+               resource.TestCheckResourceAttr(testRoleResourceName, "actions.#", cast.ToString(len(testRole.GetActions()))),
+               resource.TestCheckResourceAttr(testRoleResourceName, "actions.0.action", testRole.GetActions()[0].Action),
+               resource.TestCheckResourceAttr(testRoleResourceName, "actions.0.resources.#", cast.ToString(len(testRole.GetActions()[0].GetResources()))),
            ),
        },
        {
@@ -359,17 +359,17 @@ func TestAccConfigRSCustomDBRoles_MultipleCustomRoles(t *testing.T) {
                checkExists(InheritedRoleResourceName),
                resource.TestCheckResourceAttrSet(InheritedRoleResourceName, "project_id"),
                resource.TestCheckResourceAttr(InheritedRoleResourceName, "role_name", inheritRoleUpdated.RoleName),
-               resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.#", cast.ToString(len(inheritRoleUpdated.Actions))),
-               resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.0.action", inheritRoleUpdated.Actions[0].Action),
-               resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.0.resources.#", cast.ToString(len(inheritRoleUpdated.Actions[0].Resources))),
+               resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.#", cast.ToString(len(inheritRoleUpdated.GetActions()))),
+               resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.0.action", inheritRoleUpdated.GetActions()[0].Action),
+               resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.0.resources.#", cast.ToString(len(inheritRoleUpdated.GetActions()[0].GetResources()))),

                // For Test Role
                checkExists(testRoleResourceName),
                resource.TestCheckResourceAttrSet(testRoleResourceName, "project_id"),
                resource.TestCheckResourceAttr(testRoleResourceName, "role_name", testRoleUpdated.RoleName),
-               resource.TestCheckResourceAttr(testRoleResourceName, "actions.#", cast.ToString(len(testRoleUpdated.Actions))),
-               resource.TestCheckResourceAttr(testRoleResourceName, "actions.0.action", testRoleUpdated.Actions[0].Action),
-               resource.TestCheckResourceAttr(testRoleResourceName, "actions.0.resources.#", cast.ToString(len(testRoleUpdated.Actions[0].Resources))),
+               resource.TestCheckResourceAttr(testRoleResourceName, "actions.#", cast.ToString(len(testRoleUpdated.GetActions()))),
+               resource.TestCheckResourceAttr(testRoleResourceName, "actions.0.action", testRoleUpdated.GetActions()[0].Action),
+               resource.TestCheckResourceAttr(testRoleResourceName, "actions.0.resources.#", cast.ToString(len(testRoleUpdated.GetActions()[0].GetResources()))),
                resource.TestCheckResourceAttr(testRoleResourceName, "inherited_roles.#", "1"),
            ),
        },
@@ -416,70 +416,70 @@ func TestAccConfigRSCustomDBRoles_UpdatedInheritRoles(t *testing.T) {
        projectName = acc.RandomProjectName()
    )

-   inheritRole := &matlas.CustomDBRole{
+   inheritRole := &admin.UserCustomDBRole{
        RoleName: acc.RandomName(),
-       Actions: []matlas.Action{
+       Actions: &[]admin.DatabasePrivilegeAction{
            {
                Action: "REMOVE",
-               Resources: []matlas.Resource{
+               Resources: &[]admin.DatabasePermittedNamespaceResource{
                    {
-                       DB: conversion.Pointer(acc.RandomClusterName()),
+                       Db: acc.RandomClusterName(),
                    },
                    {
-                       DB: conversion.Pointer(acc.RandomClusterName()),
+                       Db: acc.RandomClusterName(),
                    },
                },
            },
            {
                Action: "FIND",
-               Resources: []matlas.Resource{
+               Resources: &[]admin.DatabasePermittedNamespaceResource{
                    {
-                       DB: conversion.Pointer(acc.RandomClusterName()),
+                       Db: acc.RandomClusterName(),
                    },
                },
            },
        },
    }

-   inheritRoleUpdated := &matlas.CustomDBRole{
+   inheritRoleUpdated := &admin.UserCustomDBRole{
        RoleName: inheritRole.RoleName,
-       Actions: []matlas.Action{
+       Actions: &[]admin.DatabasePrivilegeAction{
            {
                Action: "UPDATE",
-               Resources: []matlas.Resource{
+               Resources: &[]admin.DatabasePermittedNamespaceResource{
                    {
-                       DB: conversion.Pointer(acc.RandomClusterName()),
+                       Db: acc.RandomClusterName(),
                    },
                },
            },
            {
                Action: "FIND",
-               Resources: []matlas.Resource{
+               Resources: &[]admin.DatabasePermittedNamespaceResource{
                    {
-                       DB: conversion.Pointer(acc.RandomClusterName()),
+                       Db: acc.RandomClusterName(),
                    },
                    {
-                       DB: conversion.Pointer(acc.RandomClusterName()),
+                       Db: acc.RandomClusterName(),
                    },
                },
            },
            {
                Action: "INSERT",
-               Resources: []matlas.Resource{
+               Resources: &[]admin.DatabasePermittedNamespaceResource{
                    {
-                       DB: conversion.Pointer(acc.RandomClusterName()),
+                       Db: acc.RandomClusterName(),
                    },
                    {
-                       DB: conversion.Pointer(acc.RandomClusterName()),
+                       Db: acc.RandomClusterName(),
                    },
                },
            },
        },
    }

-   testRole := &matlas.CustomDBRole{
+   testRole := &admin.UserCustomDBRole{
        RoleName: acc.RandomName(),
-       InheritedRoles: []matlas.InheritedRole{
+       InheritedRoles: &[]admin.DatabaseInheritedRole{
            {
                Role: inheritRole.RoleName,
                Db:   "admin",
@@ -500,9 +500,9 @@ func TestAccConfigRSCustomDBRoles_UpdatedInheritRoles(t *testing.T) {
                checkExists(InheritedRoleResourceName),
                resource.TestCheckResourceAttrSet(InheritedRoleResourceName, "project_id"),
                resource.TestCheckResourceAttr(InheritedRoleResourceName, "role_name", inheritRole.RoleName),
-               resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.#", cast.ToString(len(inheritRole.Actions))),
-               resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.0.action", inheritRole.Actions[0].Action),
-               resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.0.resources.#", cast.ToString(len(inheritRole.Actions[0].Resources))),
+               resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.#", cast.ToString(len(inheritRole.GetActions()))),
+               resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.0.action", inheritRole.GetActions()[0].Action),
+               resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.0.resources.#", cast.ToString(len(inheritRole.GetActions()[0].GetResources()))),

                // For Test Role
                checkExists(testRoleResourceName),
@@ -520,9 +520,9 @@ func TestAccConfigRSCustomDBRoles_UpdatedInheritRoles(t *testing.T) {
                checkExists(InheritedRoleResourceName),
                resource.TestCheckResourceAttrSet(InheritedRoleResourceName, "project_id"),
                resource.TestCheckResourceAttr(InheritedRoleResourceName, "role_name", inheritRoleUpdated.RoleName),
-               resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.#", cast.ToString(len(inheritRoleUpdated.Actions))),
-               resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.0.action", inheritRoleUpdated.Actions[0].Action),
-               resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.0.resources.#", cast.ToString(len(inheritRoleUpdated.Actions[0].Resources))),
+               resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.#", cast.ToString(len(inheritRoleUpdated.GetActions()))),
+               resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.0.action", inheritRoleUpdated.GetActions()[0].Action),
+               resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.0.resources.#", cast.ToString(len(inheritRoleUpdated.GetActions()[0].GetResources()))),

                // For Test Role
                checkExists(testRoleResourceName),
@@ -599,7 +599,7 @@ func configBasic(orgID, projectName, roleName, action, databaseName string) stri
    `, orgID, projectName, roleName, action, databaseName)
 }

-func configWithInheritedRoles(orgID, projectName string, inheritedRole []matlas.CustomDBRole, testRole *matlas.CustomDBRole) string {
+func configWithInheritedRoles(orgID, projectName string, inheritedRole []admin.UserCustomDBRole, testRole *admin.UserCustomDBRole) string {
    return fmt.Sprintf(`

    resource "mongodbatlas_project" "test" {
@@ -654,30 +654,30 @@ func configWithInheritedRoles(orgID, projectName string, inheritedRole []matlas.
        }
    }
    `, orgID, projectName,
-       inheritedRole[0].RoleName, inheritedRole[0].Actions[0].Action, *inheritedRole[0].Actions[0].Resources[0].DB,
-       inheritedRole[1].RoleName, inheritedRole[1].Actions[0].Action, *inheritedRole[1].Actions[0].Resources[0].Cluster,
-       testRole.RoleName, testRole.Actions[0].Action, *testRole.Actions[0].Resources[0].DB,
+       inheritedRole[0].RoleName, inheritedRole[0].GetActions()[0].Action, inheritedRole[0].GetActions()[0].GetResources()[0].Db,
+       inheritedRole[1].RoleName, inheritedRole[1].GetActions()[0].Action, inheritedRole[1].GetActions()[0].GetResources()[0].Cluster,
+       testRole.RoleName, testRole.GetActions()[0].Action, testRole.GetActions()[0].GetResources()[0].Db,
    )
 }

-func configWithMultiple(orgID, projectName string, inheritedRole, testRole *matlas.CustomDBRole) string {
-   getCustomRoleFields := func(customRole *matlas.CustomDBRole) map[string]string {
+func configWithMultiple(orgID, projectName string, inheritedRole, testRole *admin.UserCustomDBRole) string {
+   getCustomRoleFields := func(customRole *admin.UserCustomDBRole) map[string]string {
        var (
            actions        string
            inheritedRoles string
        )

-       for _, a := range customRole.Actions {
+       for _, a := range customRole.GetActions() {
            var resources string

            // get the resources
-           for _, r := range a.Resources {
+           for _, r := range a.GetResources() {
                resources += fmt.Sprintf(`
                resources {
                    collection_name = ""
                    database_name   = "%s"
                }
-           `, *r.DB)
+           `, r.Db)
            }

            // get the actions and set the resources
@@ -689,7 +689,7 @@ func configWithMultiple(orgID, projectName string, inheritedRole, testRole *matl
            `, a.Action, resources)
        }

-       for _, in := range customRole.InheritedRoles {
+       for _, in := range customRole.GetInheritedRoles() {
            inheritedRoles += fmt.Sprintf(`
            inherited_roles {
                role_name = "%s"
diff --git a/internal/service/customdnsconfigurationclusteraws/resource_custom_dns_configuration_cluster_aws.go b/internal/service/customdnsconfigurationclusteraws/resource_custom_dns_configuration_cluster_aws.go
index 5ce4f48c4e..8fea87b8d2 100644
--- a/internal/service/customdnsconfigurationclusteraws/resource_custom_dns_configuration_cluster_aws.go
+++ b/internal/service/customdnsconfigurationclusteraws/resource_custom_dns_configuration_cluster_aws.go
@@ -9,7 +9,7 @@ import (
    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
-   "go.mongodb.org/atlas-sdk/v20240530002/admin"
+   "go.mongodb.org/atlas-sdk/v20240805001/admin"
 )

 const (
diff --git a/internal/service/databaseuser/model_database_user.go b/internal/service/databaseuser/model_database_user.go
index 113f31f4e5..a27b018149 100644
--- a/internal/service/databaseuser/model_database_user.go
+++ b/internal/service/databaseuser/model_database_user.go
@@ -8,7 +8,7 @@ import (
    "github.com/hashicorp/terraform-plugin-framework/types"
    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
-   "go.mongodb.org/atlas-sdk/v20240530002/admin"
+   "go.mongodb.org/atlas-sdk/v20240805001/admin"
 )

 func NewMongoDBDatabaseUser(ctx context.Context, statePasswordValue types.String, dbUserModel *TfDatabaseUserModel) (*admin.CloudDatabaseUser, diag.Diagnostics) {
diff --git a/internal/service/databaseuser/model_database_user_test.go b/internal/service/databaseuser/model_database_user_test.go
index 4ba4f849cb..c829481f22 100644
--- a/internal/service/databaseuser/model_database_user_test.go
+++ b/internal/service/databaseuser/model_database_user_test.go
@@ -9,7 +9,7 @@ import (
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/databaseuser"
    "github.com/stretchr/testify/assert"
-   "go.mongodb.org/atlas-sdk/v20240530002/admin"
+   "go.mongodb.org/atlas-sdk/v20240805001/admin"
 )

 var (
diff --git a/internal/service/databaseuser/resource_database_user_migration_test.go b/internal/service/databaseuser/resource_database_user_migration_test.go
index 081a6f8212..6d37e4c860 100644
--- a/internal/service/databaseuser/resource_database_user_migration_test.go
+++ b/internal/service/databaseuser/resource_database_user_migration_test.go
@@ -3,7 +3,7 @@ package databaseuser_test
 import (
    "testing"

-   "go.mongodb.org/atlas-sdk/v20240530002/admin"
+   "go.mongodb.org/atlas-sdk/v20240805001/admin"

    "github.com/hashicorp/terraform-plugin-testing/helper/resource"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
diff --git a/internal/service/databaseuser/resource_database_user_test.go b/internal/service/databaseuser/resource_database_user_test.go
index c384b94de3..1e614f5ec4 100644
--- a/internal/service/databaseuser/resource_database_user_test.go
+++ b/internal/service/databaseuser/resource_database_user_test.go
@@ -11,7 +11,7 @@ import (
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/databaseuser"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
-   "go.mongodb.org/atlas-sdk/v20240530002/admin"
+   "go.mongodb.org/atlas-sdk/v20240805001/admin"
 )

 const (
diff --git a/internal/service/datalakepipeline/data_source_data_lake_pipeline_run.go b/internal/service/datalakepipeline/data_source_data_lake_pipeline_run.go
index e772c39cf6..25bdf48651 100644
--- a/internal/service/datalakepipeline/data_source_data_lake_pipeline_run.go
+++ b/internal/service/datalakepipeline/data_source_data_lake_pipeline_run.go
@@ -9,7 +9,7 @@ import (
    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
-   "go.mongodb.org/atlas-sdk/v20240530002/admin"
+   "go.mongodb.org/atlas-sdk/v20240805001/admin"
 )

 const errorDataLakePipelineRunRead = "error reading MongoDB Atlas DataLake Run (%s): %s"
diff --git a/internal/service/datalakepipeline/data_source_data_lake_pipeline_runs.go b/internal/service/datalakepipeline/data_source_data_lake_pipeline_runs.go
index ef548c46b9..c11ba3ae90 100644
--- a/internal/service/datalakepipeline/data_source_data_lake_pipeline_runs.go
+++ b/internal/service/datalakepipeline/data_source_data_lake_pipeline_runs.go
@@ -9,7 +9,7 @@ import (
    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
-   "go.mongodb.org/atlas-sdk/v20240530002/admin"
+   "go.mongodb.org/atlas-sdk/v20240805001/admin"
 )

 const errorDataLakePipelineRunList = "error reading MongoDB Atlas DataLake Runs (%s): %s"
diff --git a/internal/service/datalakepipeline/data_source_data_lake_pipelines.go b/internal/service/datalakepipeline/data_source_data_lake_pipelines.go
index 41adab2c44..fb4dfffbe9 100644
--- a/internal/service/datalakepipeline/data_source_data_lake_pipelines.go
+++ b/internal/service/datalakepipeline/data_source_data_lake_pipelines.go
@@ -9,7 +9,7 @@ import (
    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
-   "go.mongodb.org/atlas-sdk/v20240530002/admin"
+   "go.mongodb.org/atlas-sdk/v20240805001/admin"
 )

 const errorDataLakePipelineList = "error creating MongoDB Atlas DataLake Pipelines: %s"
diff --git a/internal/service/datalakepipeline/resource_data_lake_pipeline.go b/internal/service/datalakepipeline/resource_data_lake_pipeline.go
index 9d76b99053..dcb97f9268 100644
--- a/internal/service/datalakepipeline/resource_data_lake_pipeline.go
+++ b/internal/service/datalakepipeline/resource_data_lake_pipeline.go
@@ -11,7 +11,7 @@ import (
    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
-   "go.mongodb.org/atlas-sdk/v20240530002/admin"
+   "go.mongodb.org/atlas-sdk/v20240805001/admin"
 )

 const (
diff --git a/internal/service/encryptionatrest/model_encryption_at_rest.go b/internal/service/encryptionatrest/model_encryption_at_rest.go
index d52e8ada5b..0e40129e11 100644
--- a/internal/service/encryptionatrest/model_encryption_at_rest.go
+++ b/internal/service/encryptionatrest/model_encryption_at_rest.go
@@ -5,7 +5,7 @@ import (
    "github.com/hashicorp/terraform-plugin-framework/types"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
-   "go.mongodb.org/atlas-sdk/v20240530002/admin"
+   "go.mongodb.org/atlas-sdk/v20240805001/admin"
 )

 func NewTfEncryptionAtRestRSModel(ctx context.Context, projectID string, encryptionResp *admin.EncryptionAtRest) *TfEncryptionAtRestRSModel {
diff --git a/internal/service/encryptionatrest/model_encryption_at_rest_test.go b/internal/service/encryptionatrest/model_encryption_at_rest_test.go
index e451e85c9c..ea426bc1a8 100644
--- a/internal/service/encryptionatrest/model_encryption_at_rest_test.go
+++ b/internal/service/encryptionatrest/model_encryption_at_rest_test.go
@@ -7,7 +7,7 @@ import (
    "github.com/hashicorp/terraform-plugin-framework/types"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/encryptionatrest"
    "github.com/stretchr/testify/assert"
-   "go.mongodb.org/atlas-sdk/v20240530002/admin"
+   "go.mongodb.org/atlas-sdk/v20240805001/admin"
 )

 var (
diff --git a/internal/service/encryptionatrest/resource_encryption_at_rest.go b/internal/service/encryptionatrest/resource_encryption_at_rest.go
index 8ba9b7de6b..010cc03f3a 100644
--- a/internal/service/encryptionatrest/resource_encryption_at_rest.go
+++ b/internal/service/encryptionatrest/resource_encryption_at_rest.go
@@ -24,7 +24,7 @@ import (
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/validate"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/project"
-   "go.mongodb.org/atlas-sdk/v20240530002/admin"
+   "go.mongodb.org/atlas-sdk/v20240805001/admin"
 )

 const (
diff --git a/internal/service/encryptionatrest/resource_encryption_at_rest_migration_test.go b/internal/service/encryptionatrest/resource_encryption_at_rest_migration_test.go
index 279738d987..0c5f638c7a 100644
--- a/internal/service/encryptionatrest/resource_encryption_at_rest_migration_test.go
+++ b/internal/service/encryptionatrest/resource_encryption_at_rest_migration_test.go
@@ -9,7 +9,7 @@ import (
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/mig"
-   "go.mongodb.org/atlas-sdk/v20240530002/admin"
+   "go.mongodb.org/atlas-sdk/v20240805001/admin"
 )

 func TestMigEncryptionAtRest_basicAWS(t *testing.T) {
diff --git a/internal/service/encryptionatrest/resource_encryption_at_rest_test.go b/internal/service/encryptionatrest/resource_encryption_at_rest_test.go
index d44e941a0b..0b9980e92c 100644
--- a/internal/service/encryptionatrest/resource_encryption_at_rest_test.go
+++ b/internal/service/encryptionatrest/resource_encryption_at_rest_test.go
@@ -16,8 +16,8 @@ import (
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/mock"
-   "go.mongodb.org/atlas-sdk/v20240530002/admin"
-   "go.mongodb.org/atlas-sdk/v20240530002/mockadmin"
+   "go.mongodb.org/atlas-sdk/v20240805001/admin"
+   "go.mongodb.org/atlas-sdk/v20240805001/mockadmin"
 )

 const (
diff --git a/internal/service/federateddatabaseinstance/data_source_federated_database_instance_test.go b/internal/service/federateddatabaseinstance/data_source_federated_database_instance_test.go
index 8a8158e399..1f91f587f2 100644
--- a/internal/service/federateddatabaseinstance/data_source_federated_database_instance_test.go
+++ b/internal/service/federateddatabaseinstance/data_source_federated_database_instance_test.go
@@ -11,7 +11,7 @@ import (
    "github.com/hashicorp/terraform-plugin-testing/terraform"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
-   "go.mongodb.org/atlas-sdk/v20240530002/admin"
+   "go.mongodb.org/atlas-sdk/v20240805001/admin"
 )

 func TestAccFederatedDatabaseInstanceDS_s3Bucket(t *testing.T) {
diff --git a/internal/service/federateddatabaseinstance/data_source_federated_database_instances.go b/internal/service/federateddatabaseinstance/data_source_federated_database_instances.go
index 327ec41abe..aa29744694 100644
--- a/internal/service/federateddatabaseinstance/data_source_federated_database_instances.go
+++ b/internal/service/federateddatabaseinstance/data_source_federated_database_instances.go
@@ -4,7 +4,7 @@ import (
    "context"
    "fmt"

-   "go.mongodb.org/atlas-sdk/v20240530002/admin"
+   "go.mongodb.org/atlas-sdk/v20240805001/admin"

    "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id"
diff --git a/internal/service/federateddatabaseinstance/resource_federated_database_instance.go b/internal/service/federateddatabaseinstance/resource_federated_database_instance.go
index f637f53a24..647b7629a8 100644
--- a/internal/service/federateddatabaseinstance/resource_federated_database_instance.go
+++ b/internal/service/federateddatabaseinstance/resource_federated_database_instance.go
@@ -7,7 +7,7 @@ import (
    "net/http"
    "strings"

-   "go.mongodb.org/atlas-sdk/v20240530002/admin"
+   "go.mongodb.org/atlas-sdk/v20240805001/admin"

    "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
@@ -714,9 +714,9 @@ func newUrls(urlsFromConfig []any) *[]string {

 func newCloudProviderConfig(d *schema.ResourceData) *admin.DataLakeCloudProviderConfig {
    if cloudProvider, ok := d.Get("cloud_provider_config").([]any); ok && len(cloudProvider) == 1 {
-       cloudProviderConfig := admin.DataLakeCloudProviderConfig{}
-       cloudProviderConfig.Aws = newAWSConfig(cloudProvider)
-       return &cloudProviderConfig
+       return &admin.DataLakeCloudProviderConfig{
+           Aws: newAWSConfig(cloudProvider),
+       }
    }

    return nil
diff --git a/internal/service/federatedquerylimit/data_source_federated_query_limits.go b/internal/service/federatedquerylimit/data_source_federated_query_limits.go
index 20b8257250..c270ed8c99 100644
--- a/internal/service/federatedquerylimit/data_source_federated_query_limits.go
+++ b/internal/service/federatedquerylimit/data_source_federated_query_limits.go
@@ -9,7 +9,7 @@ import (
    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
-   "go.mongodb.org/atlas-sdk/v20240530002/admin"
+   "go.mongodb.org/atlas-sdk/v20240805001/admin"
 )

 func PluralDataSource() *schema.Resource {
diff --git a/internal/service/federatedquerylimit/resource_federated_query_limit.go b/internal/service/federatedquerylimit/resource_federated_query_limit.go
index 58ceb1f7d5..9e8c744a26 100644
--- a/internal/service/federatedquerylimit/resource_federated_query_limit.go
+++ b/internal/service/federatedquerylimit/resource_federated_query_limit.go
@@ -11,7 +11,7 @@ import (
    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
-   "go.mongodb.org/atlas-sdk/v20240530002/admin"
+   "go.mongodb.org/atlas-sdk/v20240805001/admin"
 )

 const (
diff --git a/internal/service/federatedsettingsidentityprovider/data_source_federated_settings_identity_providers.go b/internal/service/federatedsettingsidentityprovider/data_source_federated_settings_identity_providers.go
index 73645c947a..67eaee4feb 100644
--- a/internal/service/federatedsettingsidentityprovider/data_source_federated_settings_identity_providers.go
+++ b/internal/service/federatedsettingsidentityprovider/data_source_federated_settings_identity_providers.go
@@ -5,7 +5,7 @@ import (
    "errors"
    "fmt"

-   "go.mongodb.org/atlas-sdk/v20240530002/admin"
+   "go.mongodb.org/atlas-sdk/v20240805001/admin"

    "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
diff --git a/internal/service/federatedsettingsidentityprovider/model_federated_settings_identity_provider.go b/internal/service/federatedsettingsidentityprovider/model_federated_settings_identity_provider.go
index dfddcbcec5..a307e73983 100644
--- a/internal/service/federatedsettingsidentityprovider/model_federated_settings_identity_provider.go
+++ b/internal/service/federatedsettingsidentityprovider/model_federated_settings_identity_provider.go
@@ -4,7 +4,7 @@ import (
    "sort"
    "strings"

-   "go.mongodb.org/atlas-sdk/v20240530002/admin"
+   "go.mongodb.org/atlas-sdk/v20240805001/admin"

    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
diff --git a/internal/service/federatedsettingsidentityprovider/model_federated_settings_identity_provider_test.go b/internal/service/federatedsettingsidentityprovider/model_federated_settings_identity_provider_test.go
index a4a8f9b261..a1505b9d89 100644
--- a/internal/service/federatedsettingsidentityprovider/model_federated_settings_identity_provider_test.go
+++ b/internal/service/federatedsettingsidentityprovider/model_federated_settings_identity_provider_test.go
@@ -4,7 +4,7 @@ import (
    "testing"
    "time"

-   "go.mongodb.org/atlas-sdk/v20240530002/admin"
+   "go.mongodb.org/atlas-sdk/v20240805001/admin"

    "github.com/stretchr/testify/assert"
diff --git a/internal/service/federatedsettingsorgconfig/data_source_federated_settings.go b/internal/service/federatedsettingsorgconfig/data_source_federated_settings.go
index 62d6ce0ba4..e930171af6 100644
--- a/internal/service/federatedsettingsorgconfig/data_source_federated_settings.go
+++ b/internal/service/federatedsettingsorgconfig/data_source_federated_settings.go
@@ -8,7 +8,7 @@ import (
    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
-   "go.mongodb.org/atlas-sdk/v20240530002/admin"
+   "go.mongodb.org/atlas-sdk/v20240805001/admin"
 )

 func DataSourceSettings() *schema.Resource {
diff --git a/internal/service/federatedsettingsorgconfig/data_source_federated_settings_connected_orgs.go b/internal/service/federatedsettingsorgconfig/data_source_federated_settings_connected_orgs.go
index d9a948215f..0aca97e00f 100644
--- a/internal/service/federatedsettingsorgconfig/data_source_federated_settings_connected_orgs.go
+++ b/internal/service/federatedsettingsorgconfig/data_source_federated_settings_connected_orgs.go
@@ -8,7 +8,7 @@ import (
    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
-   "go.mongodb.org/atlas-sdk/v20240530002/admin"
+   "go.mongodb.org/atlas-sdk/v20240805001/admin"
 )

 func PluralDataSource() *schema.Resource {
diff --git a/internal/service/federatedsettingsorgconfig/model_federated_settings_connected_orgs.go b/internal/service/federatedsettingsorgconfig/model_federated_settings_connected_orgs.go
index fdc06ffc07..d9a8ab937d 100644
--- a/internal/service/federatedsettingsorgconfig/model_federated_settings_connected_orgs.go
+++ b/internal/service/federatedsettingsorgconfig/model_federated_settings_connected_orgs.go
@@ -4,7 +4,7 @@ import (
    "sort"
    "strings"

-   "go.mongodb.org/atlas-sdk/v20240530002/admin"
+   "go.mongodb.org/atlas-sdk/v20240805001/admin"
 )

 type roleMappingsByGroupName []admin.AuthFederationRoleMapping
diff --git a/internal/service/federatedsettingsorgrolemapping/data_source_federated_settings_org_role_mappings.go b/internal/service/federatedsettingsorgrolemapping/data_source_federated_settings_org_role_mappings.go
index f8371255ff..ae8241e996 100644
--- a/internal/service/federatedsettingsorgrolemapping/data_source_federated_settings_org_role_mappings.go
+++ b/internal/service/federatedsettingsorgrolemapping/data_source_federated_settings_org_role_mappings.go
@@ -8,7 +8,7 @@ import (
    "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
-   "go.mongodb.org/atlas-sdk/v20240530002/admin"
+   "go.mongodb.org/atlas-sdk/v20240805001/admin"
 )

 func PluralDataSource() *schema.Resource {
diff --git a/internal/service/federatedsettingsorgrolemapping/model_federated_settings_org_role_mapping.go b/internal/service/federatedsettingsorgrolemapping/model_federated_settings_org_role_mapping.go
index bd411c53fd..5a0208f843 100644
--- a/internal/service/federatedsettingsorgrolemapping/model_federated_settings_org_role_mapping.go
+++ b/internal/service/federatedsettingsorgrolemapping/model_federated_settings_org_role_mapping.go
@@ -6,7 +6,7 @@ import (
    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
-   "go.mongodb.org/atlas-sdk/v20240530002/admin"
+   "go.mongodb.org/atlas-sdk/v20240805001/admin"
 )

 type mRoleAssignment []admin.RoleAssignment
diff --git a/internal/service/federatedsettingsorgrolemapping/resource_federated_settings_org_role_mapping.go b/internal/service/federatedsettingsorgrolemapping/resource_federated_settings_org_role_mapping.go
index f9e9df91bd..fb5512dd1d 100644
--- a/internal/service/federatedsettingsorgrolemapping/resource_federated_settings_org_role_mapping.go
+++ b/internal/service/federatedsettingsorgrolemapping/resource_federated_settings_org_role_mapping.go
@@ -11,7 +11,7 @@ import (
    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
-   "go.mongodb.org/atlas-sdk/v20240530002/admin"
+   "go.mongodb.org/atlas-sdk/v20240805001/admin"
 )

 func Resource() *schema.Resource {
diff --git a/internal/service/globalclusterconfig/data_source_global_cluster_config.go b/internal/service/globalclusterconfig/data_source_global_cluster_config.go
index 0672c005e3..099c4af659 100644
--- a/internal/service/globalclusterconfig/data_source_global_cluster_config.go
+++ b/internal/service/globalclusterconfig/data_source_global_cluster_config.go
@@ -62,11 +62,11 @@ func DataSource() *schema.Resource {
 }

 func dataSourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
-   connV2 := meta.(*config.MongoDBClient).AtlasV2
+   connV220240530 := meta.(*config.MongoDBClient).AtlasV220240530
    projectID := d.Get("project_id").(string)
    clusterName := d.Get("cluster_name").(string)

-   globalCluster, resp, err := connV2.GlobalClustersApi.GetManagedNamespace(ctx, projectID, clusterName).Execute()
+   globalCluster, resp, err := connV220240530.GlobalClustersApi.GetManagedNamespace(ctx, projectID, clusterName).Execute()
    if err != nil {
        if resp != nil && resp.StatusCode == http.StatusNotFound {
            return nil
diff --git a/internal/service/globalclusterconfig/resource_global_cluster_config.go b/internal/service/globalclusterconfig/resource_global_cluster_config.go
index edcbd33111..ff1286d8e5 100644
--- a/internal/service/globalclusterconfig/resource_global_cluster_config.go
+++ b/internal/service/globalclusterconfig/resource_global_cluster_config.go
@@ -13,7 +13,7 @@ import (
    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
    "github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
-   "go.mongodb.org/atlas-sdk/v20240530002/admin"
+   admin20240530 "go.mongodb.org/atlas-sdk/v20240530005/admin" // fixed to old API due to CLOUDP-263795
 )

 const (
@@ -101,7 +101,7 @@ func Resource() *schema.Resource {
 }

 func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
-   connV2 := meta.(*config.MongoDBClient).AtlasV2
+   connV220240530 := meta.(*config.MongoDBClient).AtlasV220240530
    projectID := d.Get("project_id").(string)
    clusterName := d.Get("cluster_name").(string)

@@ -109,7 +109,7 @@ func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.
        for _, m := range v.(*schema.Set).List() {
            mn := m.(map[string]any)

-           addManagedNamespace := &admin.ManagedNamespace{
+           addManagedNamespace := &admin20240530.ManagedNamespace{
                Collection:     conversion.StringPtr(mn["collection"].(string)),
                Db:             conversion.StringPtr(mn["db"].(string)),
                CustomShardKey: conversion.StringPtr(mn["custom_shard_key"].(string)),
@@ -124,10 +124,10 @@ func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.
            }

            err := retry.RetryContext(ctx, 2*time.Minute, func() *retry.RetryError {
-               _, _, err := connV2.GlobalClustersApi.CreateManagedNamespace(ctx, projectID, clusterName, addManagedNamespace).Execute()
+               _, _, err := connV220240530.GlobalClustersApi.CreateManagedNamespace(ctx, projectID, clusterName, addManagedNamespace).Execute()
                if err != nil {
-                   if admin.IsErrorCode(err, "DUPLICATE_MANAGED_NAMESPACE") {
-                       if err := removeManagedNamespaces(ctx, connV2, v.(*schema.Set).List(), projectID, clusterName); err != nil {
+                   if admin20240530.IsErrorCode(err, "DUPLICATE_MANAGED_NAMESPACE") {
+                       if err := removeManagedNamespaces(ctx, connV220240530, v.(*schema.Set).List(), projectID, clusterName); err != nil {
                            return retry.NonRetryableError(fmt.Errorf(errorGlobalClusterCreate, err))
                        }
                        return retry.RetryableError(err)
@@ -143,13 +143,13 @@ func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.
    }

    if v, ok := d.GetOk("custom_zone_mappings"); ok {
-       _, _, err := connV2.GlobalClustersApi.CreateCustomZoneMapping(ctx, projectID, clusterName, &admin.CustomZoneMappings{
+       _, _, err := connV220240530.GlobalClustersApi.CreateCustomZoneMapping(ctx, projectID, clusterName, &admin20240530.CustomZoneMappings{
            CustomZoneMappings: newCustomZoneMappings(v.(*schema.Set).List()),
        }).Execute()

        if err != nil {
            if v2, ok2 := d.GetOk("managed_namespaces"); ok2 {
-               if err := removeManagedNamespaces(ctx, connV2, v2.(*schema.Set).List(), projectID, clusterName); err != nil {
+               if err := removeManagedNamespaces(ctx, connV220240530, v2.(*schema.Set).List(), projectID, clusterName); err != nil {
                    return diag.FromErr(fmt.Errorf(errorGlobalClusterCreate, err))
                }
            }
@@ -166,12 +166,12 @@ func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.
 }

 func resourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
-   connV2 := meta.(*config.MongoDBClient).AtlasV2
+   connV220240530 := meta.(*config.MongoDBClient).AtlasV220240530 // fixed to old API due to CLOUDP-263795
    ids := conversion.DecodeStateID(d.Id())
    projectID := ids["project_id"]
    clusterName := ids["cluster_name"]

-   globalCluster, resp, err := connV2.GlobalClustersApi.GetManagedNamespace(ctx, projectID, clusterName).Execute()
+   globalCluster, resp, err := connV220240530.GlobalClustersApi.GetManagedNamespace(ctx, projectID, clusterName).Execute()
    if err != nil {
        if resp != nil && resp.StatusCode == http.StatusNotFound {
            d.SetId("")
@@ -199,20 +199,20 @@ func resourceUpdate(ctx context.Context, d *schema.ResourceData, meta any) diag.
 }

 func resourceDelete(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
-   connV2 := meta.(*config.MongoDBClient).AtlasV2
+   connV220240530 := meta.(*config.MongoDBClient).AtlasV220240530
    ids := conversion.DecodeStateID(d.Id())
    projectID := ids["project_id"]
    clusterName := ids["cluster_name"]

    if v, ok := d.GetOk("managed_namespaces"); ok {
-       if err := removeManagedNamespaces(ctx, connV2, v.(*schema.Set).List(), projectID, clusterName); err != nil {
+       if err := removeManagedNamespaces(ctx, connV220240530, v.(*schema.Set).List(), projectID, clusterName); err != nil {
            return diag.FromErr(fmt.Errorf(errorGlobalClusterDelete, clusterName, err))
        }
    }

    if v, ok := d.GetOk("custom_zone_mappings"); ok {
        if v.(*schema.Set).Len() > 0 {
-           if _, _, err := connV2.GlobalClustersApi.DeleteAllCustomZoneMappings(ctx, projectID, clusterName).Execute(); err != nil {
+           if _, _, err := connV220240530.GlobalClustersApi.DeleteAllCustomZoneMappings(ctx, projectID, clusterName).Execute(); err != nil {
                return diag.FromErr(fmt.Errorf(errorGlobalClusterDelete, clusterName, err))
            }
        }
@@ -221,7 +221,7 @@
    return nil
 }

-func flattenManagedNamespaces(managedNamespaces []admin.ManagedNamespaces) []map[string]any {
+func flattenManagedNamespaces(managedNamespaces []admin20240530.ManagedNamespaces) []map[string]any {
    var results []map[string]any

    if len(managedNamespaces) > 0 {
@@ -265,17 +265,17 @@ func resourceImport(ctx context.Context, d *schema.ResourceData, meta any) ([]*s
    return []*schema.ResourceData{d}, nil
 }

-func removeManagedNamespaces(ctx context.Context, connV2 *admin.APIClient, remove []any, projectID, clusterName string) error {
+func removeManagedNamespaces(ctx context.Context, connV220240530 *admin20240530.APIClient, remove []any, projectID, clusterName string) error {
    for _, m := range remove {
        mn := m.(map[string]any)
-       managedNamespace := &admin.DeleteManagedNamespaceApiParams{
+       managedNamespace := &admin20240530.DeleteManagedNamespaceApiParams{
            Collection:  conversion.StringPtr(mn["collection"].(string)),
            Db:          conversion.StringPtr(mn["db"].(string)),
            ClusterName: clusterName,
            GroupId:     projectID,
        }
-       _, _, err := connV2.GlobalClustersApi.DeleteManagedNamespaceWithParams(ctx, managedNamespace).Execute()
+       _, _, err := connV220240530.GlobalClustersApi.DeleteManagedNamespaceWithParams(ctx, managedNamespace).Execute()

        if err != nil {
            return err
@@ -284,12 +284,12 @@ func removeManagedNamespaces(ctx context.Context, connV2 *admin.APIClient, remov
    return nil
 }

-func newCustomZoneMapping(tfMap map[string]any) *admin.ZoneMapping {
+func newCustomZoneMapping(tfMap map[string]any) *admin20240530.ZoneMapping {
    if tfMap == nil {
        return nil
    }

-   apiObject := &admin.ZoneMapping{
+   apiObject := &admin20240530.ZoneMapping{
        Location: tfMap["location"].(string),
        Zone:     tfMap["zone"].(string),
    }
@@ -297,12 +297,12 @@ func newCustomZoneMapping(tfMap map[string]any) *admin.ZoneMapping {
    return apiObject
 }

-func newCustomZoneMappings(tfList []any) *[]admin.ZoneMapping {
+func newCustomZoneMappings(tfList []any) *[]admin20240530.ZoneMapping {
    if len(tfList) == 0 {
        return nil
    }
- apiObjects := make([]admin.ZoneMapping, len(tfList)) + apiObjects := make([]admin20240530.ZoneMapping, len(tfList)) if len(tfList) > 0 { for i, tfMapRaw := range tfList { if tfMap, ok := tfMapRaw.(map[string]any); ok { diff --git a/internal/service/ldapconfiguration/resource_ldap_configuration.go b/internal/service/ldapconfiguration/resource_ldap_configuration.go index a64c54b400..9182281009 100644 --- a/internal/service/ldapconfiguration/resource_ldap_configuration.go +++ b/internal/service/ldapconfiguration/resource_ldap_configuration.go @@ -9,7 +9,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) const ( diff --git a/internal/service/ldapverify/resource_ldap_verify.go b/internal/service/ldapverify/resource_ldap_verify.go index a8ad9cf9a1..e199c63e97 100644 --- a/internal/service/ldapverify/resource_ldap_verify.go +++ b/internal/service/ldapverify/resource_ldap_verify.go @@ -13,7 +13,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) const ( diff --git a/internal/service/maintenancewindow/resource_maintenance_window.go b/internal/service/maintenancewindow/resource_maintenance_window.go index 85ff7891b6..ca60b6cce1 100644 --- a/internal/service/maintenancewindow/resource_maintenance_window.go +++ b/internal/service/maintenancewindow/resource_maintenance_window.go @@ -10,7 +10,7 @@ import ( "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" 
"github.com/mongodb/terraform-provider-mongodbatlas/internal/config" "github.com/spf13/cast" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) const ( diff --git a/internal/service/networkcontainer/data_source_network_containers.go b/internal/service/networkcontainer/data_source_network_containers.go index 871928b474..ad5218c2cf 100644 --- a/internal/service/networkcontainer/data_source_network_containers.go +++ b/internal/service/networkcontainer/data_source_network_containers.go @@ -8,7 +8,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func PluralDataSource() *schema.Resource { diff --git a/internal/service/networkcontainer/resource_network_container.go b/internal/service/networkcontainer/resource_network_container.go index b185391b36..e404ff7df1 100644 --- a/internal/service/networkcontainer/resource_network_container.go +++ b/internal/service/networkcontainer/resource_network_container.go @@ -17,7 +17,7 @@ import ( "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" "github.com/spf13/cast" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) const ( diff --git a/internal/service/networkpeering/data_source_network_peering.go b/internal/service/networkpeering/data_source_network_peering.go index 74ac732407..f596831578 100644 --- a/internal/service/networkpeering/data_source_network_peering.go +++ b/internal/service/networkpeering/data_source_network_peering.go @@ -9,7 +9,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" 
"github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func DataSource() *schema.Resource { diff --git a/internal/service/networkpeering/data_source_network_peerings.go b/internal/service/networkpeering/data_source_network_peerings.go index 3dc1967aa0..5412234217 100644 --- a/internal/service/networkpeering/data_source_network_peerings.go +++ b/internal/service/networkpeering/data_source_network_peerings.go @@ -9,7 +9,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func PluralDataSource() *schema.Resource { diff --git a/internal/service/networkpeering/resource_network_peering.go b/internal/service/networkpeering/resource_network_peering.go index 1058774dd2..23efb04908 100644 --- a/internal/service/networkpeering/resource_network_peering.go +++ b/internal/service/networkpeering/resource_network_peering.go @@ -16,7 +16,7 @@ import ( "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/networkcontainer" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) const ( diff --git a/internal/service/onlinearchive/resource_online_archive.go b/internal/service/onlinearchive/resource_online_archive.go index 5f9b17b12b..d93371f089 100644 --- a/internal/service/onlinearchive/resource_online_archive.go +++ b/internal/service/onlinearchive/resource_online_archive.go @@ -15,7 +15,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" 
"github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) const ( @@ -633,8 +633,6 @@ func mapCriteria(d *schema.ResourceData) admin.Criteria { } func mapSchedule(d *schema.ResourceData) *admin.OnlineArchiveSchedule { - // scheduleInput := &matlas.OnlineArchiveSchedule{ - // We have to provide schedule.type="DEFAULT" when the schedule block is not provided or removed scheduleInput := &admin.OnlineArchiveSchedule{ Type: scheduleTypeDefault, diff --git a/internal/service/organization/data_source_organization.go b/internal/service/organization/data_source_organization.go index 6f08f3791c..9ff52aa828 100644 --- a/internal/service/organization/data_source_organization.go +++ b/internal/service/organization/data_source_organization.go @@ -13,7 +13,7 @@ import ( func DataSource() *schema.Resource { return &schema.Resource{ - ReadContext: dataSourceMongoDBAtlasOrganizationRead, + ReadContext: dataSourceRead, Schema: map[string]*schema.Schema{ "org_id": { Type: schema.TypeString, @@ -59,8 +59,7 @@ func DataSource() *schema.Resource { } } -func dataSourceMongoDBAtlasOrganizationRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { - // Get client connection. 
+func dataSourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { conn := meta.(*config.MongoDBClient).AtlasV2 orgID := d.Get("org_id").(string) diff --git a/internal/service/organization/data_source_organization_test.go b/internal/service/organization/data_source_organization_test.go index 482b915cb3..e7926e8c26 100644 --- a/internal/service/organization/data_source_organization_test.go +++ b/internal/service/organization/data_source_organization_test.go @@ -19,7 +19,7 @@ func TestAccConfigDSOrganization_basic(t *testing.T) { ProtoV6ProviderFactories: acc.TestAccProviderV6Factories, Steps: []resource.TestStep{ { - Config: testAccMongoDBAtlasOrganizationConfigWithDS(orgID), + Config: configWithDS(orgID), Check: resource.ComposeAggregateTestCheckFunc( resource.TestCheckResourceAttrSet(datasourceName, "name"), resource.TestCheckResourceAttrSet(datasourceName, "id"), @@ -31,7 +31,7 @@ func TestAccConfigDSOrganization_basic(t *testing.T) { }, }) } -func testAccMongoDBAtlasOrganizationConfigWithDS(orgID string) string { +func configWithDS(orgID string) string { config := fmt.Sprintf(` data "mongodbatlas_organization" "test" { diff --git a/internal/service/organization/data_source_organizations.go b/internal/service/organization/data_source_organizations.go index b1d209ef46..484dab350a 100644 --- a/internal/service/organization/data_source_organizations.go +++ b/internal/service/organization/data_source_organizations.go @@ -5,7 +5,7 @@ import ( "fmt" "log" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id" @@ -16,7 +16,7 @@ import ( func PluralDataSource() *schema.Resource { return &schema.Resource{ - ReadContext: dataSourceMongoDBAtlasOrganizationsRead, + ReadContext: pluralDataSourceRead, Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -86,8 +86,7 @@ func 
PluralDataSource() *schema.Resource { } } -func dataSourceMongoDBAtlasOrganizationsRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { - // Get client connection. +func pluralDataSourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { conn := meta.(*config.MongoDBClient).AtlasV2 organizationOptions := &admin.ListOrganizationsApiParams{ diff --git a/internal/service/organization/data_source_organizations_test.go b/internal/service/organization/data_source_organizations_test.go index 5cd9e3a23a..9894031f63 100644 --- a/internal/service/organization/data_source_organizations_test.go +++ b/internal/service/organization/data_source_organizations_test.go @@ -17,7 +17,7 @@ func TestAccConfigDSOrganizations_basic(t *testing.T) { ProtoV6ProviderFactories: acc.TestAccProviderV6Factories, Steps: []resource.TestStep{ { - Config: testAccMongoDBAtlasOrganizationsConfigWithDS(), + Config: configWithPluralDS(), Check: resource.ComposeAggregateTestCheckFunc( resource.TestCheckResourceAttrSet(datasourceName, "results.#"), resource.TestCheckResourceAttrSet(datasourceName, "results.0.name"), @@ -39,7 +39,7 @@ func TestAccConfigDSOrganizations_withPagination(t *testing.T) { ProtoV6ProviderFactories: acc.TestAccProviderV6Factories, Steps: []resource.TestStep{ { - Config: testAccMongoDBAtlasOrganizationsConfigWithPagination(2, 5), + Config: configWithPagination(2, 5), Check: resource.ComposeAggregateTestCheckFunc( resource.TestCheckResourceAttrSet(datasourceName, "results.#"), ), @@ -48,14 +48,14 @@ func TestAccConfigDSOrganizations_withPagination(t *testing.T) { }) } -func testAccMongoDBAtlasOrganizationsConfigWithDS() string { +func configWithPluralDS() string { return ` data "mongodbatlas_organizations" "test" { } ` } -func testAccMongoDBAtlasOrganizationsConfigWithPagination(pageNum, itemPage int) string { +func configWithPagination(pageNum, itemPage int) string { return fmt.Sprintf(` data "mongodbatlas_organizations" "test" { 
page_num = %d diff --git a/internal/service/organization/resource_organization.go b/internal/service/organization/resource_organization.go index 6a7c38fc34..dbeaa71c81 100644 --- a/internal/service/organization/resource_organization.go +++ b/internal/service/organization/resource_organization.go @@ -6,7 +6,7 @@ import ( "log" "net/http" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" @@ -18,10 +18,10 @@ import ( func Resource() *schema.Resource { return &schema.Resource{ - CreateContext: resourceMongoDBAtlasOrganizationCreate, - ReadContext: resourceMongoDBAtlasOrganizationRead, - UpdateContext: resourceMongoDBAtlasOrganizationUpdate, - DeleteContext: resourceMongoDBAtlasOrganizationDelete, + CreateContext: resourceCreate, + ReadContext: resourceRead, + UpdateContext: resourceUpdate, + DeleteContext: resourceDelete, Importer: nil, // import is not supported. 
See CLOUDP-215155 Schema: map[string]*schema.Schema{ "org_owner_id": { @@ -80,7 +80,7 @@ func Resource() *schema.Resource { } } -func resourceMongoDBAtlasOrganizationCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { +func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { if err := ValidateAPIKeyIsOrgOwner(conversion.ExpandStringList(d.Get("role_names").(*schema.Set).List())); err != nil { return diag.FromErr(err) } @@ -104,7 +104,7 @@ func resourceMongoDBAtlasOrganizationCreate(ctx context.Context, d *schema.Resou PublicKey: *organization.ApiKey.PublicKey, PrivateKey: *organization.ApiKey.PrivateKey, BaseURL: meta.(*config.MongoDBClient).Config.BaseURL, - TerraformVersion: meta.(*config.Config).TerraformVersion, + TerraformVersion: meta.(*config.MongoDBClient).Config.TerraformVersion, } clients, _ := cfg.NewClient(ctx) @@ -136,16 +136,16 @@ func resourceMongoDBAtlasOrganizationCreate(ctx context.Context, d *schema.Resou "org_id": organization.Organization.GetId(), })) - return resourceMongoDBAtlasOrganizationRead(ctx, d, meta) + return resourceRead(ctx, d, meta) } -func resourceMongoDBAtlasOrganizationRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { +func resourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { // Get client connection. 
cfg := config.Config{ PublicKey: d.Get("public_key").(string), PrivateKey: d.Get("private_key").(string), BaseURL: meta.(*config.MongoDBClient).Config.BaseURL, - TerraformVersion: meta.(*config.Config).TerraformVersion, + TerraformVersion: meta.(*config.MongoDBClient).Config.TerraformVersion, } clients, _ := cfg.NewClient(ctx) @@ -189,13 +189,13 @@ func resourceMongoDBAtlasOrganizationRead(ctx context.Context, d *schema.Resourc return nil } -func resourceMongoDBAtlasOrganizationUpdate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { +func resourceUpdate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { // Get client connection. cfg := config.Config{ PublicKey: d.Get("public_key").(string), PrivateKey: d.Get("private_key").(string), BaseURL: meta.(*config.MongoDBClient).Config.BaseURL, - TerraformVersion: meta.(*config.Config).TerraformVersion, + TerraformVersion: meta.(*config.MongoDBClient).Config.TerraformVersion, } clients, _ := cfg.NewClient(ctx) @@ -218,16 +218,16 @@ func resourceMongoDBAtlasOrganizationUpdate(ctx context.Context, d *schema.Resou } } - return resourceMongoDBAtlasOrganizationRead(ctx, d, meta) + return resourceRead(ctx, d, meta) } -func resourceMongoDBAtlasOrganizationDelete(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { +func resourceDelete(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { // Get client connection. 
cfg := config.Config{ PublicKey: d.Get("public_key").(string), PrivateKey: d.Get("private_key").(string), BaseURL: meta.(*config.MongoDBClient).Config.BaseURL, - TerraformVersion: meta.(*config.Config).TerraformVersion, + TerraformVersion: meta.(*config.MongoDBClient).Config.TerraformVersion, } clients, _ := cfg.NewClient(ctx) diff --git a/internal/service/organization/resource_organization_migration_test.go b/internal/service/organization/resource_organization_migration_test.go index 469b3311be..03c771fadf 100644 --- a/internal/service/organization/resource_organization_migration_test.go +++ b/internal/service/organization/resource_organization_migration_test.go @@ -26,7 +26,7 @@ func TestMigConfigRSOrganization_Basic(t *testing.T) { Steps: []resource.TestStep{ { ExternalProviders: mig.ExternalProviders(), - Config: testAccMongoDBAtlasOrganizationConfigBasic(orgOwnerID, name, description, roleName), + Config: configBasic(orgOwnerID, name, description, roleName), Check: resource.ComposeAggregateTestCheckFunc( resource.TestCheckResourceAttrSet(resourceName, "org_id"), resource.TestCheckResourceAttrSet(resourceName, "description"), @@ -35,7 +35,7 @@ func TestMigConfigRSOrganization_Basic(t *testing.T) { }, { ProtoV6ProviderFactories: acc.TestAccProviderV6Factories, - Config: testAccMongoDBAtlasOrganizationConfigBasic(orgOwnerID, name, description, roleName), + Config: configBasic(orgOwnerID, name, description, roleName), ConfigPlanChecks: resource.ConfigPlanChecks{ PreApply: []plancheck.PlanCheck{ acc.DebugPlan(), diff --git a/internal/service/organization/resource_organization_test.go b/internal/service/organization/resource_organization_test.go index 22095111ab..7b65af7ec8 100644 --- a/internal/service/organization/resource_organization_test.go +++ b/internal/service/organization/resource_organization_test.go @@ -7,7 +7,7 @@ import ( "regexp" "testing" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" 
"github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" @@ -33,12 +33,12 @@ func TestAccConfigRSOrganization_Basic(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { acc.PreCheck(t) }, ProtoV6ProviderFactories: acc.TestAccProviderV6Factories, - CheckDestroy: testAccCheckMongoDBAtlasOrganizationDestroy, + CheckDestroy: checkDestroy, Steps: []resource.TestStep{ { - Config: testAccMongoDBAtlasOrganizationConfigBasic(orgOwnerID, name, description, roleName), + Config: configBasic(orgOwnerID, name, description, roleName), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckMongoDBAtlasOrganizationExists(resourceName), + checkExists(resourceName), resource.TestCheckResourceAttrSet(resourceName, "org_id"), resource.TestCheckResourceAttr(resourceName, "description", description), resource.TestCheckResourceAttr(resourceName, "api_access_list_required", "false"), @@ -47,9 +47,9 @@ func TestAccConfigRSOrganization_Basic(t *testing.T) { ), }, { - Config: testAccMongoDBAtlasOrganizationConfigBasic(orgOwnerID, updatedName, description, roleName), + Config: configBasic(orgOwnerID, updatedName, description, roleName), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckMongoDBAtlasOrganizationExists(resourceName), + checkExists(resourceName), resource.TestCheckResourceAttrSet(resourceName, "org_id"), resource.TestCheckResourceAttr(resourceName, "description", description), resource.TestCheckResourceAttr(resourceName, "api_access_list_required", "false"), @@ -74,10 +74,10 @@ func TestAccConfigRSOrganization_BasicAccess(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { acc.PreCheck(t) }, ProtoV6ProviderFactories: acc.TestAccProviderV6Factories, - CheckDestroy: testAccCheckMongoDBAtlasOrganizationDestroy, + CheckDestroy: checkDestroy, Steps: []resource.TestStep{ { - Config: testAccMongoDBAtlasOrganizationConfigBasic(orgOwnerID, name, description, roleName), + 
Config: configBasic(orgOwnerID, name, description, roleName), ExpectError: regexp.MustCompile("API Key must have the ORG_OWNER role"), }, }, @@ -106,12 +106,12 @@ func TestAccConfigRSOrganization_Settings(t *testing.T) { resource.Test(t, resource.TestCase{ PreCheck: func() { acc.PreCheck(t) }, ProtoV6ProviderFactories: acc.TestAccProviderV6Factories, - CheckDestroy: testAccCheckMongoDBAtlasOrganizationDestroy, + CheckDestroy: checkDestroy, Steps: []resource.TestStep{ { - Config: testAccMongoDBAtlasOrganizationConfigWithSettings(orgOwnerID, name, description, roleName, settingsConfig), + Config: configWithSettings(orgOwnerID, name, description, roleName, settingsConfig), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckMongoDBAtlasOrganizationExists(resourceName), + checkExists(resourceName), resource.TestCheckResourceAttrSet(resourceName, "org_id"), resource.TestCheckResourceAttr(resourceName, "description", description), resource.TestCheckResourceAttr(resourceName, "api_access_list_required", "false"), @@ -120,9 +120,9 @@ func TestAccConfigRSOrganization_Settings(t *testing.T) { ), }, { - Config: testAccMongoDBAtlasOrganizationConfigWithSettings(orgOwnerID, name, description, roleName, settingsConfigUpdated), + Config: configWithSettings(orgOwnerID, name, description, roleName, settingsConfigUpdated), Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckMongoDBAtlasOrganizationExists(resourceName), + checkExists(resourceName), resource.TestCheckResourceAttrSet(resourceName, "org_id"), resource.TestCheckResourceAttr(resourceName, "description", description), resource.TestCheckResourceAttr(resourceName, "api_access_list_required", "false"), @@ -131,9 +131,9 @@ func TestAccConfigRSOrganization_Settings(t *testing.T) { ), }, { - Config: testAccMongoDBAtlasOrganizationConfigBasic(orgOwnerID, "org-name-updated", description, roleName), + Config: configBasic(orgOwnerID, "org-name-updated", description, roleName), Check: 
resource.ComposeAggregateTestCheckFunc( - testAccCheckMongoDBAtlasOrganizationExists(resourceName), + checkExists(resourceName), resource.TestCheckResourceAttrSet(resourceName, "org_id"), resource.TestCheckResourceAttrSet(resourceName, "description"), resource.TestCheckResourceAttr(resourceName, "description", description), @@ -143,7 +143,7 @@ func TestAccConfigRSOrganization_Settings(t *testing.T) { }) } -func testAccCheckMongoDBAtlasOrganizationExists(resourceName string) resource.TestCheckFunc { +func checkExists(resourceName string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[resourceName] if !ok { @@ -174,7 +174,7 @@ func testAccCheckMongoDBAtlasOrganizationExists(resourceName string) resource.Te } } -func testAccCheckMongoDBAtlasOrganizationDestroy(s *terraform.State) error { +func checkDestroy(s *terraform.State) error { for _, rs := range s.RootModule().Resources { if rs.Type != "mongodbatlas_organization" { continue @@ -200,7 +200,7 @@ func testAccCheckMongoDBAtlasOrganizationDestroy(s *terraform.State) error { return nil } -func testAccMongoDBAtlasOrganizationConfigBasic(orgOwnerID, name, description, roleNames string) string { +func configBasic(orgOwnerID, name, description, roleNames string) string { return fmt.Sprintf(` resource "mongodbatlas_organization" "test" { org_owner_id = "%s" @@ -211,7 +211,7 @@ func testAccMongoDBAtlasOrganizationConfigBasic(orgOwnerID, name, description, r `, orgOwnerID, name, description, roleNames) } -func testAccMongoDBAtlasOrganizationConfigWithSettings(orgOwnerID, name, description, roleNames, settingsConfig string) string { +func configWithSettings(orgOwnerID, name, description, roleNames, settingsConfig string) string { return fmt.Sprintf(` resource "mongodbatlas_organization" "test" { org_owner_id = "%s" diff --git a/internal/service/orginvitation/resource_org_invitation.go b/internal/service/orginvitation/resource_org_invitation.go index fb64f43946..bcdc8c7c16 
100644 --- a/internal/service/orginvitation/resource_org_invitation.go +++ b/internal/service/orginvitation/resource_org_invitation.go @@ -10,7 +10,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func Resource() *schema.Resource { diff --git a/internal/service/privateendpointregionalmode/resource_private_endpoint_regional_mode.go b/internal/service/privateendpointregionalmode/resource_private_endpoint_regional_mode.go index f3eb5f7c95..af79008157 100644 --- a/internal/service/privateendpointregionalmode/resource_private_endpoint_regional_mode.go +++ b/internal/service/privateendpointregionalmode/resource_private_endpoint_regional_mode.go @@ -12,7 +12,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/advancedcluster" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) type permCtxKey string diff --git a/internal/service/privateendpointregionalmode/resource_private_endpoint_regional_mode_test.go b/internal/service/privateendpointregionalmode/resource_private_endpoint_regional_mode_test.go index 343f9671ca..27c490da81 100644 --- a/internal/service/privateendpointregionalmode/resource_private_endpoint_regional_mode_test.go +++ b/internal/service/privateendpointregionalmode/resource_private_endpoint_regional_mode_test.go @@ -17,6 +17,7 @@ func TestAccPrivateEndpointRegionalMode_basic(t *testing.T) { } func TestAccPrivateEndpointRegionalMode_conn(t *testing.T) { + acc.SkipTestForCI(t) // needs AWS configuration var ( endpointResourceSuffix = "atlasple" resourceSuffix = "atlasrm" @@ -26,9 +27,9 @@ 
func TestAccPrivateEndpointRegionalMode_conn(t *testing.T) { providerName = "AWS" region = os.Getenv("AWS_REGION_LOWERCASE") privatelinkEndpointServiceResourceName = fmt.Sprintf("mongodbatlas_privatelink_endpoint_service.%s", endpointResourceSuffix) - spec1 = acc.ReplicationSpecRequest{Region: os.Getenv("AWS_REGION_UPPERCASE"), ProviderName: providerName, ZoneName: "Zone 1", DiskSizeGb: 80} - spec2 = acc.ReplicationSpecRequest{Region: "US_WEST_2", ProviderName: providerName, ZoneName: "Zone 2", DiskSizeGb: 80} - clusterInfo = acc.GetClusterInfo(t, &acc.ClusterRequest{Geosharded: true, ReplicationSpecs: []acc.ReplicationSpecRequest{spec1, spec2}}) + spec1 = acc.ReplicationSpecRequest{Region: os.Getenv("AWS_REGION_UPPERCASE"), ProviderName: providerName, ZoneName: "Zone 1"} + spec2 = acc.ReplicationSpecRequest{Region: "US_WEST_2", ProviderName: providerName, ZoneName: "Zone 2"} + clusterInfo = acc.GetClusterInfo(t, &acc.ClusterRequest{Geosharded: true, DiskSizeGb: 80, ReplicationSpecs: []acc.ReplicationSpecRequest{spec1, spec2}}) projectID = clusterInfo.ProjectID clusterResourceName = clusterInfo.ResourceName clusterDataName = "data.mongodbatlas_advanced_cluster.test" @@ -168,7 +169,7 @@ func checkExists(resourceName string) resource.TestCheckFunc { return fmt.Errorf("no ID is set") } projectID := rs.Primary.ID - _, _, err := acc.Conn().PrivateEndpoints.GetRegionalizedPrivateEndpointSetting(context.Background(), projectID) + _, _, err := acc.ConnV2().PrivateEndpointServicesApi.GetRegionalizedPrivateEndpointSetting(context.Background(), projectID).Execute() if err == nil { return nil } @@ -181,7 +182,7 @@ func checkDestroy(s *terraform.State) error { if rs.Type != "mongodbatlas_private_endpoint_regional_mode" { continue } - setting, _, _ := acc.Conn().PrivateEndpoints.GetRegionalizedPrivateEndpointSetting(context.Background(), rs.Primary.ID) + setting, _, _ := acc.ConnV2().PrivateEndpointServicesApi.GetRegionalizedPrivateEndpointSetting(context.Background(), 
rs.Primary.ID).Execute() if setting != nil && setting.Enabled != false { return fmt.Errorf("Regionalized private endpoint setting for project %q was not properly disabled", rs.Primary.ID) } diff --git a/internal/service/privatelinkendpoint/resource_privatelink_endpoint.go b/internal/service/privatelinkendpoint/resource_privatelink_endpoint.go index 168bcbe263..261638f6a8 100644 --- a/internal/service/privatelinkendpoint/resource_privatelink_endpoint.go +++ b/internal/service/privatelinkendpoint/resource_privatelink_endpoint.go @@ -15,7 +15,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) const ( diff --git a/internal/service/privatelinkendpointserverless/resource_privatelink_endpoint_serverless.go b/internal/service/privatelinkendpointserverless/resource_privatelink_endpoint_serverless.go index 729332facc..cf58e60062 100644 --- a/internal/service/privatelinkendpointserverless/resource_privatelink_endpoint_serverless.go +++ b/internal/service/privatelinkendpointserverless/resource_privatelink_endpoint_serverless.go @@ -8,7 +8,7 @@ import ( "strings" "time" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" diff --git a/internal/service/privatelinkendpointservice/resource_privatelink_endpoint_service.go b/internal/service/privatelinkendpointservice/resource_privatelink_endpoint_service.go index 7c0ccbd942..4e8269d0bb 100644 --- a/internal/service/privatelinkendpointservice/resource_privatelink_endpoint_service.go +++ b/internal/service/privatelinkendpointservice/resource_privatelink_endpoint_service.go @@ -17,7 +17,7 @@ import ( 
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/advancedcluster" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) const ( diff --git a/internal/service/privatelinkendpointservicedatafederationonlinearchive/data_source_privatelink_endpoint_service_data_federation_online_archives.go b/internal/service/privatelinkendpointservicedatafederationonlinearchive/data_source_privatelink_endpoint_service_data_federation_online_archives.go index ac86d494a2..e7df9475d4 100644 --- a/internal/service/privatelinkendpointservicedatafederationonlinearchive/data_source_privatelink_endpoint_service_data_federation_online_archives.go +++ b/internal/service/privatelinkendpointservicedatafederationonlinearchive/data_source_privatelink_endpoint_service_data_federation_online_archives.go @@ -9,7 +9,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/datalakepipeline" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) const errorPrivateEndpointServiceDataFederationOnlineArchiveList = "error reading Private Endpoings for projectId %s: %s" diff --git a/internal/service/privatelinkendpointservicedatafederationonlinearchive/resource_privatelink_endpoint_service_data_federation_online_archive.go b/internal/service/privatelinkendpointservicedatafederationonlinearchive/resource_privatelink_endpoint_service_data_federation_online_archive.go index 70a41e734f..0ae4f26ef8 100644 --- a/internal/service/privatelinkendpointservicedatafederationonlinearchive/resource_privatelink_endpoint_service_data_federation_online_archive.go +++ 
b/internal/service/privatelinkendpointservicedatafederationonlinearchive/resource_privatelink_endpoint_service_data_federation_online_archive.go @@ -8,7 +8,7 @@ import ( "strings" "time" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" diff --git a/internal/service/privatelinkendpointserviceserverless/data_source_privatelink_endpoints_service_serverless.go b/internal/service/privatelinkendpointserviceserverless/data_source_privatelink_endpoints_service_serverless.go index 58bc37361f..c162674bb5 100644 --- a/internal/service/privatelinkendpointserviceserverless/data_source_privatelink_endpoints_service_serverless.go +++ b/internal/service/privatelinkendpointserviceserverless/data_source_privatelink_endpoints_service_serverless.go @@ -7,7 +7,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func PluralDataSource() *schema.Resource { diff --git a/internal/service/privatelinkendpointserviceserverless/resource_privatelink_endpoint_service_serverless.go b/internal/service/privatelinkendpointserviceserverless/resource_privatelink_endpoint_service_serverless.go index ed335d3a18..4f87ceecfc 100644 --- a/internal/service/privatelinkendpointserviceserverless/resource_privatelink_endpoint_service_serverless.go +++ b/internal/service/privatelinkendpointserviceserverless/resource_privatelink_endpoint_service_serverless.go @@ -8,7 +8,7 @@ import ( "strings" "time" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" diff --git 
a/internal/service/project/data_source_project.go b/internal/service/project/data_source_project.go index 2cbc63330f..30bd51cdf8 100644 --- a/internal/service/project/data_source_project.go +++ b/internal/service/project/data_source_project.go @@ -4,7 +4,7 @@ import ( "context" "fmt" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" "github.com/hashicorp/terraform-plugin-framework-validators/stringvalidator" "github.com/hashicorp/terraform-plugin-framework/datasource" diff --git a/internal/service/project/data_source_projects.go b/internal/service/project/data_source_projects.go index eff493d60c..e2d17eb7a4 100644 --- a/internal/service/project/data_source_projects.go +++ b/internal/service/project/data_source_projects.go @@ -11,7 +11,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) const projectsDataSourceName = "projects" diff --git a/internal/service/project/model_project.go b/internal/service/project/model_project.go index 628bbb687e..2a1ffd8e3b 100644 --- a/internal/service/project/model_project.go +++ b/internal/service/project/model_project.go @@ -3,7 +3,7 @@ package project import ( "context" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" "github.com/hashicorp/terraform-plugin-framework/attr" "github.com/hashicorp/terraform-plugin-framework/diag" diff --git a/internal/service/project/model_project_test.go b/internal/service/project/model_project_test.go index 81dca2d660..ec139f2309 100644 --- a/internal/service/project/model_project_test.go +++ b/internal/service/project/model_project_test.go @@ -4,7 +4,7 @@ import ( "context" "testing" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + 
"go.mongodb.org/atlas-sdk/v20240805001/admin" "github.com/hashicorp/terraform-plugin-framework/attr" "github.com/hashicorp/terraform-plugin-framework/types" diff --git a/internal/service/project/resource_project.go b/internal/service/project/resource_project.go index 3a1fd19035..bc7671807d 100644 --- a/internal/service/project/resource_project.go +++ b/internal/service/project/resource_project.go @@ -9,7 +9,7 @@ import ( "sort" "time" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" "github.com/hashicorp/terraform-plugin-framework/attr" "github.com/hashicorp/terraform-plugin-framework/path" @@ -130,7 +130,7 @@ var TfLimitObjectType = types.ObjectType{AttrTypes: map[string]attr.Type{ // Resources that need to be cleaned up before a project can be deleted type AtlasProjectDependants struct { - AdvancedClusters *admin.PaginatedClusterDescription20250101 + AdvancedClusters *admin.PaginatedClusterDescription20240805 } func (r *projectRS) Schema(ctx context.Context, req resource.SchemaRequest, resp *resource.SchemaResponse) { diff --git a/internal/service/project/resource_project_migration_test.go b/internal/service/project/resource_project_migration_test.go index ec7ed63e1c..76b5042f63 100644 --- a/internal/service/project/resource_project_migration_test.go +++ b/internal/service/project/resource_project_migration_test.go @@ -7,7 +7,7 @@ import ( "strings" "testing" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc" diff --git a/internal/service/project/resource_project_test.go b/internal/service/project/resource_project_test.go index 3e22e5b985..ef63f2852b 100644 --- a/internal/service/project/resource_project_test.go +++ b/internal/service/project/resource_project_test.go @@ -11,8 +11,8 @@ import ( "strings" "testing" - 
"go.mongodb.org/atlas-sdk/v20240530002/admin" - "go.mongodb.org/atlas-sdk/v20240530002/mockadmin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" + "go.mongodb.org/atlas-sdk/v20240805001/mockadmin" "github.com/hashicorp/terraform-plugin-framework/types" "github.com/hashicorp/terraform-plugin-testing/helper/resource" @@ -451,7 +451,7 @@ func TestResourceProjectDependentsDeletingRefreshFunc(t *testing.T) { { name: "Error not from the API", mockResponses: AdvancedClusterDescriptionResponse{ - AdvancedClusterDescription: &admin.PaginatedClusterDescription20250101{}, + AdvancedClusterDescription: &admin.PaginatedClusterDescription20240805{}, Err: errors.New("Non-API error"), }, expectedError: true, @@ -459,7 +459,7 @@ func TestResourceProjectDependentsDeletingRefreshFunc(t *testing.T) { { name: "Error from the API", mockResponses: AdvancedClusterDescriptionResponse{ - AdvancedClusterDescription: &admin.PaginatedClusterDescription20250101{}, + AdvancedClusterDescription: &admin.PaginatedClusterDescription20240805{}, Err: &admin.GenericOpenAPIError{}, }, expectedError: true, @@ -467,9 +467,9 @@ func TestResourceProjectDependentsDeletingRefreshFunc(t *testing.T) { { name: "Successful API call", mockResponses: AdvancedClusterDescriptionResponse{ - AdvancedClusterDescription: &admin.PaginatedClusterDescription20250101{ + AdvancedClusterDescription: &admin.PaginatedClusterDescription20240805{ TotalCount: conversion.IntPtr(2), - Results: &[]admin.ClusterDescription20250101{ + Results: &[]admin.ClusterDescription20240805{ {StateName: conversion.StringPtr("IDLE")}, {StateName: conversion.StringPtr("DELETING")}, }, @@ -1259,7 +1259,7 @@ type DeleteProjectLimitResponse struct { Err error } type AdvancedClusterDescriptionResponse struct { - AdvancedClusterDescription *admin.PaginatedClusterDescription20250101 + AdvancedClusterDescription *admin.PaginatedClusterDescription20240805 HTTPResponse *http.Response Err error } diff --git 
a/internal/service/projectapikey/data_source_project_api_key.go b/internal/service/projectapikey/data_source_project_api_key.go
index 29c8aa410e..eb335115a5 100644
--- a/internal/service/projectapikey/data_source_project_api_key.go
+++ b/internal/service/projectapikey/data_source_project_api_key.go
@@ -12,7 +12,7 @@ import (
 func DataSource() *schema.Resource {
 	return &schema.Resource{
-		ReadContext: dataSourceMongoDBAtlasProjectAPIKeyRead,
+		ReadContext: dataSourceRead,
 		Schema: map[string]*schema.Schema{
 			"project_id": {
 				Type: schema.TypeString,
@@ -57,35 +57,34 @@ func DataSource() *schema.Resource {
 	}
 }
 
-func dataSourceMongoDBAtlasProjectAPIKeyRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
-	// Get client connection.
-	conn := meta.(*config.MongoDBClient).Atlas
+func dataSourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
+	connV2 := meta.(*config.MongoDBClient).AtlasV2
 	projectID := d.Get("project_id").(string)
 	apiKeyID := d.Get("api_key_id").(string)
 
-	projectAPIKeys, _, err := conn.ProjectAPIKeys.List(ctx, projectID, nil)
+	projectAPIKeys, _, err := connV2.ProgrammaticAPIKeysApi.ListProjectApiKeys(ctx, projectID).Execute()
 	if err != nil {
 		return diag.FromErr(fmt.Errorf("error getting api key information: %s", err))
 	}
 
-	for _, val := range projectAPIKeys {
-		if val.ID != apiKeyID {
+	for _, val := range projectAPIKeys.GetResults() {
+		if val.GetId() != apiKeyID {
 			continue
 		}
-		if err := d.Set("description", val.Desc); err != nil {
+		if err := d.Set("description", val.GetDesc()); err != nil {
 			return diag.FromErr(fmt.Errorf("error setting `description`: %s", err))
 		}
-		if err := d.Set("public_key", val.PublicKey); err != nil {
+		if err := d.Set("public_key", val.GetPublicKey()); err != nil {
 			return diag.FromErr(fmt.Errorf("error setting `public_key`: %s", err))
 		}
-		if err := d.Set("private_key", val.PrivateKey); err != nil {
+		if err := d.Set("private_key", val.GetPrivateKey()); err != nil {
 			return
diag.FromErr(fmt.Errorf("error setting `private_key`: %s", err)) } - if projectAssignments, err := newProjectAssignment(ctx, conn, apiKeyID); err == nil { + if projectAssignments, err := newProjectAssignment(ctx, connV2, apiKeyID); err == nil { if err := d.Set("project_assignment", projectAssignments); err != nil { return diag.Errorf(ErrorProjectSetting, `project_assignment`, projectID, err) } diff --git a/internal/service/projectapikey/data_source_project_api_keys.go b/internal/service/projectapikey/data_source_project_api_keys.go index 175839363c..55af1f551b 100644 --- a/internal/service/projectapikey/data_source_project_api_keys.go +++ b/internal/service/projectapikey/data_source_project_api_keys.go @@ -8,13 +8,12 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - - matlas "go.mongodb.org/atlas/mongodbatlas" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func PluralDataSource() *schema.Resource { return &schema.Resource{ - ReadContext: dataSourceMongoDBAtlasProjectAPIKeysRead, + ReadContext: pluralDataSourceRead, Schema: map[string]*schema.Schema{ "project_id": { Type: schema.TypeString, @@ -75,22 +74,19 @@ func PluralDataSource() *schema.Resource { } } -func dataSourceMongoDBAtlasProjectAPIKeysRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { - // Get client connection. 
- conn := meta.(*config.MongoDBClient).Atlas - options := &matlas.ListOptions{ - PageNum: d.Get("page_num").(int), - ItemsPerPage: d.Get("items_per_page").(int), - } +func pluralDataSourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { + connV2 := meta.(*config.MongoDBClient).AtlasV2 + pageNum := d.Get("page_num").(int) + itemsPerPage := d.Get("items_per_page").(int) projectID := d.Get("project_id").(string) - apiKeys, _, err := conn.ProjectAPIKeys.List(ctx, projectID, options) + apiKeys, _, err := connV2.ProgrammaticAPIKeysApi.ListProjectApiKeys(ctx, projectID).PageNum(pageNum).ItemsPerPage(itemsPerPage).Execute() if err != nil { return diag.FromErr(fmt.Errorf("error getting api keys information: %s", err)) } - results, err := flattenProjectAPIKeys(ctx, conn, projectID, apiKeys) + results, err := flattenProjectAPIKeys(ctx, connV2, apiKeys.GetResults()) if err != nil { diag.FromErr(fmt.Errorf("error setting `results`: %s", err)) } @@ -104,7 +100,7 @@ func dataSourceMongoDBAtlasProjectAPIKeysRead(ctx context.Context, d *schema.Res return nil } -func flattenProjectAPIKeys(ctx context.Context, conn *matlas.Client, projectID string, apiKeys []matlas.APIKey) ([]map[string]any, error) { +func flattenProjectAPIKeys(ctx context.Context, connV2 *admin.APIClient, apiKeys []admin.ApiKeyUserDetails) ([]map[string]any, error) { var results []map[string]any if len(apiKeys) == 0 { @@ -114,13 +110,13 @@ func flattenProjectAPIKeys(ctx context.Context, conn *matlas.Client, projectID s results = make([]map[string]any, len(apiKeys)) for k, apiKey := range apiKeys { results[k] = map[string]any{ - "api_key_id": apiKey.ID, - "description": apiKey.Desc, - "public_key": apiKey.PublicKey, - "private_key": apiKey.PrivateKey, + "api_key_id": apiKey.GetId(), + "description": apiKey.GetDesc(), + "public_key": apiKey.GetPublicKey(), + "private_key": apiKey.GetPrivateKey(), } - projectAssignment, err := newProjectAssignment(ctx, conn, apiKey.ID) + projectAssignment, 
err := newProjectAssignment(ctx, connV2, apiKey.GetId()) if err != nil { return nil, err } diff --git a/internal/service/projectapikey/resource_project_api_key.go b/internal/service/projectapikey/resource_project_api_key.go index 518733b8f4..f4a6b12c1d 100644 --- a/internal/service/projectapikey/resource_project_api_key.go +++ b/internal/service/projectapikey/resource_project_api_key.go @@ -11,8 +11,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" - matlas "go.mongodb.org/atlas/mongodbatlas" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) const ( @@ -21,12 +20,12 @@ const ( func Resource() *schema.Resource { return &schema.Resource{ - CreateContext: resourceMongoDBAtlasProjectAPIKeyCreate, - ReadContext: resourceMongoDBAtlasProjectAPIKeyRead, - UpdateContext: resourceMongoDBAtlasProjectAPIKeyUpdate, - DeleteContext: resourceMongoDBAtlasProjectAPIKeyDelete, + CreateContext: resourceCreate, + ReadContext: resourceRead, + UpdateContext: resourceUpdate, + DeleteContext: resourceDelete, Importer: &schema.ResourceImporter{ - StateContext: resourceMongoDBAtlasProjectAPIKeyImportState, + StateContext: resourceImportState, }, Schema: map[string]*schema.Schema{ "api_key_id": { @@ -77,37 +76,36 @@ type APIProjectAssignmentKeyInput struct { const errorNoProjectAssignmentDefined = "could not obtain a project id as no assignments are defined" -func resourceMongoDBAtlasProjectAPIKeyCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { - conn := meta.(*config.MongoDBClient).Atlas +func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { + connV2 := meta.(*config.MongoDBClient).AtlasV2 - var apiKey *matlas.APIKey + var apiKey *admin.ApiKeyUserDetails + var resp *http.Response var err error - var 
resp *matlas.Response - createRequest := new(matlas.APIKeyInput) - createRequest.Desc = d.Get("description").(string) + createRequest := &admin.CreateAtlasProjectApiKey{ + Desc: d.Get("description").(string), + } + if projectAssignments, ok := d.GetOk("project_assignment"); ok { projectAssignmentList := ExpandProjectAssignmentSet(projectAssignments.(*schema.Set)) // creates api key using project id of first defined project assignment firstAssignment := projectAssignmentList[0] createRequest.Roles = firstAssignment.RoleNames - apiKey, resp, err = conn.ProjectAPIKeys.Create(ctx, firstAssignment.ProjectID, createRequest) + apiKey, resp, err = connV2.ProgrammaticAPIKeysApi.CreateProjectApiKey(ctx, firstAssignment.ProjectID, createRequest).Execute() if err != nil { if resp != nil && resp.StatusCode == http.StatusNotFound { d.SetId("") return nil } - return diag.FromErr(err) } // assign created api key to remaining project assignments for _, apiKeyList := range projectAssignmentList[1:] { - createRequest.Roles = apiKeyList.RoleNames - _, err := conn.ProjectAPIKeys.Assign(ctx, apiKeyList.ProjectID, apiKey.ID, &matlas.AssignAPIKey{ - Roles: createRequest.Roles, - }) + assignment := []admin.UserAccessRoleAssignment{{Roles: &apiKeyList.RoleNames}} + _, _, err := connV2.ProgrammaticAPIKeysApi.AddProjectApiKey(ctx, apiKeyList.ProjectID, apiKey.GetId(), &assignment).Execute() if err != nil { if resp != nil && resp.StatusCode == http.StatusNotFound { d.SetId("") @@ -117,24 +115,23 @@ func resourceMongoDBAtlasProjectAPIKeyCreate(ctx context.Context, d *schema.Reso } } - if err := d.Set("public_key", apiKey.PublicKey); err != nil { + if err := d.Set("public_key", apiKey.GetPublicKey()); err != nil { return diag.FromErr(fmt.Errorf("error setting `public_key`: %s", err)) } - if err := d.Set("private_key", apiKey.PrivateKey); err != nil { + if err := d.Set("private_key", apiKey.GetPrivateKey()); err != nil { return diag.FromErr(fmt.Errorf("error setting `private_key`: %s", err)) } 
d.SetId(conversion.EncodeStateID(map[string]string{ - "api_key_id": apiKey.ID, + "api_key_id": apiKey.GetId(), })) - return resourceMongoDBAtlasProjectAPIKeyRead(ctx, d, meta) + return resourceRead(ctx, d, meta) } -func resourceMongoDBAtlasProjectAPIKeyRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { - // Get client connection. - conn := meta.(*config.MongoDBClient).Atlas +func resourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { + connV2 := meta.(*config.MongoDBClient).AtlasV2 ids := conversion.DecodeStateID(d.Id()) apiKeyID := ids["api_key_id"] @@ -143,30 +140,30 @@ func resourceMongoDBAtlasProjectAPIKeyRead(ctx context.Context, d *schema.Resour return diag.FromErr(fmt.Errorf("could not obtain a project id from state: %s", err)) } - projectAPIKeys, _, err := conn.ProjectAPIKeys.List(ctx, *firstProjectID, nil) + projectAPIKeys, _, err := connV2.ProgrammaticAPIKeysApi.ListProjectApiKeys(ctx, *firstProjectID).Execute() if err != nil { return diag.FromErr(fmt.Errorf("error getting api key information: %s", err)) } apiKeyIsPresent := false - for _, val := range projectAPIKeys { - if val.ID != apiKeyID { + for _, val := range projectAPIKeys.GetResults() { + if val.GetId() != apiKeyID { continue } apiKeyIsPresent = true - if err := d.Set("api_key_id", val.ID); err != nil { + if err := d.Set("api_key_id", val.GetId()); err != nil { return diag.FromErr(fmt.Errorf("error setting `api_key_id`: %s", err)) } - if err := d.Set("description", val.Desc); err != nil { + if err := d.Set("description", val.GetDesc()); err != nil { return diag.FromErr(fmt.Errorf("error setting `description`: %s", err)) } - if err := d.Set("public_key", val.PublicKey); err != nil { + if err := d.Set("public_key", val.GetPublicKey()); err != nil { return diag.FromErr(fmt.Errorf("error setting `public_key`: %s", err)) } - if projectAssignments, err := newProjectAssignment(ctx, conn, apiKeyID); err == nil { + if projectAssignments, err := 
newProjectAssignment(ctx, connV2, apiKeyID); err == nil { if err := d.Set("project_assignment", projectAssignments); err != nil { return diag.Errorf("error setting `project_assignment` : %s", err) } @@ -181,8 +178,7 @@ func resourceMongoDBAtlasProjectAPIKeyRead(ctx context.Context, d *schema.Resour return nil } -func resourceMongoDBAtlasProjectAPIKeyUpdate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { - conn := meta.(*config.MongoDBClient).Atlas +func resourceUpdate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { connV2 := meta.(*config.MongoDBClient).AtlasV2 ids := conversion.DecodeStateID(d.Id()) @@ -197,9 +193,8 @@ func resourceMongoDBAtlasProjectAPIKeyUpdate(ctx context.Context, d *schema.Reso for _, apiKey := range newAssignments { projectID := apiKey.(map[string]any)["project_id"].(string) roles := conversion.ExpandStringList(apiKey.(map[string]any)["role_names"].(*schema.Set).List()) - _, err := conn.ProjectAPIKeys.Assign(ctx, projectID, apiKeyID, &matlas.AssignAPIKey{ - Roles: roles, - }) + assignment := []admin.UserAccessRoleAssignment{{Roles: &roles}} + _, _, err := connV2.ProgrammaticAPIKeysApi.AddProjectApiKey(ctx, projectID, apiKeyID, &assignment).Execute() if err != nil { return diag.Errorf("error assigning api_keys into the project(%s): %s", projectID, err) } @@ -209,7 +204,7 @@ func resourceMongoDBAtlasProjectAPIKeyUpdate(ctx context.Context, d *schema.Reso // Removing projects assignments for _, apiKey := range removedAssignments { projectID := apiKey.(map[string]any)["project_id"].(string) - _, err := conn.ProjectAPIKeys.Unassign(ctx, projectID, apiKeyID) + _, _, err := connV2.ProgrammaticAPIKeysApi.RemoveProjectApiKey(ctx, projectID, apiKeyID).Execute() if err != nil && strings.Contains(err.Error(), "GROUP_NOT_FOUND") { continue // allows removing assignment for a project that has been deleted } @@ -222,9 +217,8 @@ func resourceMongoDBAtlasProjectAPIKeyUpdate(ctx context.Context, d 
*schema.Reso for _, apiKey := range changedAssignments { projectID := apiKey.(map[string]any)["project_id"].(string) roles := conversion.ExpandStringList(apiKey.(map[string]any)["role_names"].(*schema.Set).List()) - _, err := conn.ProjectAPIKeys.Assign(ctx, projectID, apiKeyID, &matlas.AssignAPIKey{ - Roles: roles, - }) + assignment := []admin.UserAccessRoleAssignment{{Roles: &roles}} + _, _, err := connV2.ProgrammaticAPIKeysApi.AddProjectApiKey(ctx, projectID, apiKeyID, &assignment).Execute() if err != nil { return diag.Errorf("error updating role names for the api_key(%s): %s", apiKey, err) } @@ -245,11 +239,11 @@ func resourceMongoDBAtlasProjectAPIKeyUpdate(ctx context.Context, d *schema.Reso } } - return resourceMongoDBAtlasProjectAPIKeyRead(ctx, d, meta) + return resourceRead(ctx, d, meta) } -func resourceMongoDBAtlasProjectAPIKeyDelete(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { - conn := meta.(*config.MongoDBClient).Atlas +func resourceDelete(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { + connV2 := meta.(*config.MongoDBClient).AtlasV2 ids := conversion.DecodeStateID(d.Id()) apiKeyID := ids["api_key_id"] var orgID string @@ -259,42 +253,40 @@ func resourceMongoDBAtlasProjectAPIKeyDelete(ctx context.Context, d *schema.Reso return diag.FromErr(fmt.Errorf("could not obtain a project id from state: %s", err)) } - projectAPIKeys, _, err := conn.ProjectAPIKeys.List(ctx, *firstProjectID, nil) + projectAPIKeys, _, err := connV2.ProgrammaticAPIKeysApi.ListProjectApiKeys(ctx, *firstProjectID).Execute() if err != nil { return diag.FromErr(fmt.Errorf("error getting api key information: %s", err)) } - for _, val := range projectAPIKeys { - if val.ID == apiKeyID { - for i, role := range val.Roles { - if strings.HasPrefix(role.RoleName, "ORG_") { - orgID = val.Roles[i].OrgID + for _, val := range projectAPIKeys.GetResults() { + if val.GetId() == apiKeyID { + for i, role := range val.GetRoles() { + if 
strings.HasPrefix(role.GetRoleName(), "ORG_") { + orgID = val.GetRoles()[i].GetOrgId() } } } } - options := &matlas.ListOptions{} - - apiKeyOrgList, _, err := conn.Root.List(ctx, options) + apiKeyOrgList, _, err := connV2.RootApi.GetSystemStatus(ctx).Execute() if err != nil { return diag.FromErr(fmt.Errorf("error getting api key information: %s", err)) } - projectAssignments, err := getAPIProjectAssignments(ctx, conn, apiKeyOrgList, apiKeyID) + projectAssignments, err := getAPIProjectAssignments(ctx, connV2, apiKeyOrgList, apiKeyID) if err != nil { return diag.FromErr(fmt.Errorf("error getting api key information: %s", err)) } for _, apiKey := range projectAssignments { - _, err = conn.ProjectAPIKeys.Unassign(ctx, apiKey.ProjectID, apiKeyID) + _, _, err = connV2.ProgrammaticAPIKeysApi.RemoveProjectApiKey(ctx, apiKey.ProjectID, apiKeyID).Execute() if err != nil { return diag.FromErr(fmt.Errorf("error deleting project api key: %s", err)) } } if orgID != "" { - if _, err = conn.APIKeys.Delete(ctx, orgID, apiKeyID); err != nil { + if _, _, err = connV2.ProgrammaticAPIKeysApi.DeleteApiKey(ctx, orgID, apiKeyID).Execute(); err != nil { return diag.FromErr(fmt.Errorf("error unable to delete Key (%s): %s", apiKeyID, err)) } } @@ -303,8 +295,8 @@ func resourceMongoDBAtlasProjectAPIKeyDelete(ctx context.Context, d *schema.Reso return nil } -func resourceMongoDBAtlasProjectAPIKeyImportState(ctx context.Context, d *schema.ResourceData, meta any) ([]*schema.ResourceData, error) { - conn := meta.(*config.MongoDBClient).Atlas +func resourceImportState(ctx context.Context, d *schema.ResourceData, meta any) ([]*schema.ResourceData, error) { + connV2 := meta.(*config.MongoDBClient).AtlasV2 parts := strings.SplitN(d.Id(), "-", 2) if len(parts) != 2 { @@ -314,28 +306,28 @@ func resourceMongoDBAtlasProjectAPIKeyImportState(ctx context.Context, d *schema projectID := parts[0] apiKeyID := parts[1] - projectAPIKeys, _, err := conn.ProjectAPIKeys.List(ctx, projectID, nil) + projectAPIKeys, 
_, err := connV2.ProgrammaticAPIKeysApi.ListProjectApiKeys(ctx, projectID).Execute() if err != nil { return nil, fmt.Errorf("couldn't import api key %s in project %s, error: %s", projectID, apiKeyID, err) } - for _, val := range projectAPIKeys { - if val.ID == apiKeyID { - if err := d.Set("description", val.Desc); err != nil { + for _, val := range projectAPIKeys.GetResults() { + if val.GetId() == apiKeyID { + if err := d.Set("description", val.GetDesc()); err != nil { return nil, fmt.Errorf("error setting `description`: %s", err) } - if err := d.Set("public_key", val.PublicKey); err != nil { + if err := d.Set("public_key", val.GetPublicKey()); err != nil { return nil, fmt.Errorf("error setting `public_key`: %s", err) } - if projectAssignments, err := newProjectAssignment(ctx, conn, apiKeyID); err == nil { + if projectAssignments, err := newProjectAssignment(ctx, connV2, apiKeyID); err == nil { if err := d.Set("project_assignment", projectAssignments); err != nil { return nil, fmt.Errorf("error setting `project_assignment`: %s", err) } } d.SetId(conversion.EncodeStateID(map[string]string{ - "api_key_id": val.ID, + "api_key_id": val.GetId(), })) } } @@ -353,7 +345,7 @@ func getFirstProjectIDFromAssignments(d *schema.ResourceData) (*string, error) { return nil, errors.New(errorNoProjectAssignmentDefined) } -func flattenProjectAPIKeyRoles(projectID string, apiKeyRoles []matlas.AtlasRole) []string { +func flattenProjectAPIKeyRoles(projectID string, apiKeyRoles []admin.CloudAccessRoleAssignment) []string { if len(apiKeyRoles) == 0 { return nil } @@ -361,8 +353,8 @@ func flattenProjectAPIKeyRoles(projectID string, apiKeyRoles []matlas.AtlasRole) flattenedOrgRoles := []string{} for _, role := range apiKeyRoles { - if strings.HasPrefix(role.RoleName, "GROUP_") && role.GroupID == projectID { - flattenedOrgRoles = append(flattenedOrgRoles, role.RoleName) + if strings.HasPrefix(role.GetRoleName(), "GROUP_") && role.GetGroupId() == projectID { + flattenedOrgRoles = 
append(flattenedOrgRoles, role.GetRoleName()) } } @@ -383,26 +375,28 @@ func ExpandProjectAssignmentSet(projectAssignments *schema.Set) []*APIProjectAss return res } -func newProjectAssignment(ctx context.Context, conn *matlas.Client, apiKeyID string) ([]map[string]any, error) { - apiKeyOrgList, _, err := conn.Root.List(ctx, nil) +func newProjectAssignment(ctx context.Context, connV2 *admin.APIClient, apiKeyID string) ([]map[string]any, error) { + apiKeyOrgList, _, err := connV2.RootApi.GetSystemStatus(ctx).Execute() if err != nil { return nil, fmt.Errorf("error getting api key information: %s", err) } - projectAssignments, err := getAPIProjectAssignments(ctx, conn, apiKeyOrgList, apiKeyID) + projectAssignments, err := getAPIProjectAssignments(ctx, connV2, apiKeyOrgList, apiKeyID) if err != nil { return nil, fmt.Errorf("error getting api key information: %s", err) } var results []map[string]any - var atlasRoles []matlas.AtlasRole - var atlasRole matlas.AtlasRole + var atlasRoles []admin.CloudAccessRoleAssignment if len(projectAssignments) > 0 { results = make([]map[string]any, len(projectAssignments)) for k, apiKey := range projectAssignments { for _, roleName := range apiKey.RoleNames { - atlasRole.GroupID = apiKey.ProjectID - atlasRole.RoleName = roleName + atlasRole := admin.CloudAccessRoleAssignment{ + GroupId: &apiKey.ProjectID, + RoleName: &roleName, + } + atlasRoles = append(atlasRoles, atlasRole) } results[k] = map[string]any{ @@ -442,31 +436,32 @@ func getStateProjectAssignmentAPIKeys(d *schema.ResourceData) (newAssignments, c return } -func getAPIProjectAssignments(ctx context.Context, conn *matlas.Client, apiKeyOrgList *matlas.Root, apiKeyID string) ([]APIProjectAssignmentKeyInput, error) { +func getAPIProjectAssignments(ctx context.Context, connV2 *admin.APIClient, apiKeyOrgList *admin.SystemStatus, apiKeyID string) ([]APIProjectAssignmentKeyInput, error) { projectAssignments := []APIProjectAssignmentKeyInput{} - for idx, role := range 
apiKeyOrgList.APIKey.Roles { - if strings.HasPrefix(role.RoleName, "ORG_") { - orgKeys, _, err := conn.APIKeys.List(ctx, apiKeyOrgList.APIKey.Roles[idx].OrgID, nil) - if err != nil { - return nil, fmt.Errorf("error getting api key information: %s", err) - } - for _, val := range orgKeys { - if val.ID == apiKeyID { - for _, r := range val.Roles { - temp := new(APIProjectAssignmentKeyInput) - if strings.HasPrefix(r.RoleName, "GROUP_") { - temp.ProjectID = r.GroupID - for _, l := range val.Roles { - if l.GroupID == temp.ProjectID { - temp.RoleNames = append(temp.RoleNames, l.RoleName) - } + for idx, role := range apiKeyOrgList.ApiKey.GetRoles() { + if !strings.HasPrefix(*role.RoleName, "ORG_") { + continue + } + roles := apiKeyOrgList.ApiKey.GetRoles() + orgKeys, _, err := connV2.ProgrammaticAPIKeysApi.ListApiKeys(ctx, *roles[idx].OrgId).Execute() + if err != nil { + return nil, fmt.Errorf("error getting api key information: %s", err) + } + for _, val := range orgKeys.GetResults() { + if val.GetId() == apiKeyID { + for _, r := range val.GetRoles() { + temp := new(APIProjectAssignmentKeyInput) + if strings.HasPrefix(r.GetRoleName(), "GROUP_") { + temp.ProjectID = r.GetGroupId() + for _, l := range val.GetRoles() { + if l.GetGroupId() == temp.ProjectID { + temp.RoleNames = append(temp.RoleNames, l.GetRoleName()) } - projectAssignments = append(projectAssignments, *temp) } + projectAssignments = append(projectAssignments, *temp) } } } - break } } return projectAssignments, nil diff --git a/internal/service/projectapikey/resource_project_api_key_test.go b/internal/service/projectapikey/resource_project_api_key_test.go index 8654cfcce7..481b3c89c8 100644 --- a/internal/service/projectapikey/resource_project_api_key_test.go +++ b/internal/service/projectapikey/resource_project_api_key_test.go @@ -12,7 +12,6 @@ import ( "github.com/hashicorp/terraform-plugin-testing/terraform" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" 
"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc" - matlas "go.mongodb.org/atlas/mongodbatlas" ) const ( @@ -236,13 +235,13 @@ func TestAccProjectAPIKey_invalidRole(t *testing.T) { } func deleteAPIKeyManually(orgID, descriptionPrefix string) error { - list, _, err := acc.Conn().APIKeys.List(context.Background(), orgID, &matlas.ListOptions{}) + list, _, err := acc.ConnV2().ProgrammaticAPIKeysApi.ListApiKeys(context.Background(), orgID).Execute() if err != nil { return err } - for _, key := range list { - if strings.HasPrefix(key.Desc, descriptionPrefix) { - if _, err := acc.Conn().APIKeys.Delete(context.Background(), orgID, key.ID); err != nil { + for _, key := range list.GetResults() { + if strings.HasPrefix(key.GetDesc(), descriptionPrefix) { + if _, _, err := acc.ConnV2().ProgrammaticAPIKeysApi.DeleteApiKey(context.Background(), orgID, key.GetId()).Execute(); err != nil { return err } } @@ -256,13 +255,13 @@ func checkDestroy(projectID string) resource.TestCheckFunc { if rs.Type != "mongodbatlas_project_api_key" { continue } - projectAPIKeys, _, err := acc.Conn().ProjectAPIKeys.List(context.Background(), projectID, nil) + projectAPIKeys, _, err := acc.ConnV2().ProgrammaticAPIKeysApi.ListProjectApiKeys(context.Background(), projectID).Execute() if err != nil { return nil } ids := conversion.DecodeStateID(rs.Primary.ID) - for _, val := range projectAPIKeys { - if val.ID == ids["api_key_id"] { + for _, val := range projectAPIKeys.GetResults() { + if val.GetId() == ids["api_key_id"] { return fmt.Errorf("Project API Key (%s) still exists", ids["role_name"]) } } diff --git a/internal/service/projectinvitation/resource_project_invitation.go b/internal/service/projectinvitation/resource_project_invitation.go index 4172d0081e..8ca1b7c199 100644 --- a/internal/service/projectinvitation/resource_project_invitation.go +++ b/internal/service/projectinvitation/resource_project_invitation.go @@ -11,7 +11,7 @@ import ( 
"github.com/mongodb/terraform-provider-mongodbatlas/internal/config" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func Resource() *schema.Resource { diff --git a/internal/service/projectipaccesslist/model_project_ip_access_list.go b/internal/service/projectipaccesslist/model_project_ip_access_list.go index a33e77c34f..12c7bae998 100644 --- a/internal/service/projectipaccesslist/model_project_ip_access_list.go +++ b/internal/service/projectipaccesslist/model_project_ip_access_list.go @@ -6,7 +6,7 @@ import ( "github.com/hashicorp/terraform-plugin-framework/diag" "github.com/hashicorp/terraform-plugin-framework/types" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func NewMongoDBProjectIPAccessList(projectIPAccessListModel *TfProjectIPAccessListModel) *[]admin.NetworkPermissionEntry { diff --git a/internal/service/projectipaccesslist/model_project_ip_access_list_test.go b/internal/service/projectipaccesslist/model_project_ip_access_list_test.go index 282939b0a6..e51f4a4787 100644 --- a/internal/service/projectipaccesslist/model_project_ip_access_list_test.go +++ b/internal/service/projectipaccesslist/model_project_ip_access_list_test.go @@ -9,7 +9,7 @@ import ( "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/projectipaccesslist" "github.com/stretchr/testify/assert" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) var ( diff --git a/internal/service/projectipaccesslist/resource_project_ip_access_list.go b/internal/service/projectipaccesslist/resource_project_ip_access_list.go index 07b91ffdfc..144d587ce1 100644 --- 
a/internal/service/projectipaccesslist/resource_project_ip_access_list.go +++ b/internal/service/projectipaccesslist/resource_project_ip_access_list.go @@ -7,7 +7,7 @@ import ( "strings" "time" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" "github.com/hashicorp/terraform-plugin-framework-timeouts/resource/timeouts" "github.com/hashicorp/terraform-plugin-framework-validators/stringvalidator" diff --git a/internal/service/pushbasedlogexport/model.go b/internal/service/pushbasedlogexport/model.go index c32f52e514..0238196c0b 100644 --- a/internal/service/pushbasedlogexport/model.go +++ b/internal/service/pushbasedlogexport/model.go @@ -3,7 +3,7 @@ package pushbasedlogexport import ( "context" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" "github.com/hashicorp/terraform-plugin-framework-timeouts/resource/timeouts" "github.com/hashicorp/terraform-plugin-framework/diag" diff --git a/internal/service/pushbasedlogexport/model_test.go b/internal/service/pushbasedlogexport/model_test.go index 10e1678d18..c0523a6c00 100644 --- a/internal/service/pushbasedlogexport/model_test.go +++ b/internal/service/pushbasedlogexport/model_test.go @@ -5,7 +5,7 @@ import ( "testing" "time" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" "github.com/hashicorp/terraform-plugin-framework-timeouts/resource/timeouts" "github.com/hashicorp/terraform-plugin-framework/types" diff --git a/internal/service/pushbasedlogexport/resource.go b/internal/service/pushbasedlogexport/resource.go index aad810d34c..dfebae9189 100644 --- a/internal/service/pushbasedlogexport/resource.go +++ b/internal/service/pushbasedlogexport/resource.go @@ -7,7 +7,7 @@ import ( "slices" "time" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" "github.com/hashicorp/terraform-plugin-framework/path" 
"github.com/hashicorp/terraform-plugin-framework/resource" diff --git a/internal/service/pushbasedlogexport/state_transition.go b/internal/service/pushbasedlogexport/state_transition.go index 3286736b13..e8c1283339 100644 --- a/internal/service/pushbasedlogexport/state_transition.go +++ b/internal/service/pushbasedlogexport/state_transition.go @@ -5,7 +5,7 @@ import ( "errors" "fmt" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" "github.com/hashicorp/terraform-plugin-log/tflog" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" diff --git a/internal/service/pushbasedlogexport/state_transition_test.go b/internal/service/pushbasedlogexport/state_transition_test.go index 137d774d6e..d49f0757b3 100644 --- a/internal/service/pushbasedlogexport/state_transition_test.go +++ b/internal/service/pushbasedlogexport/state_transition_test.go @@ -7,8 +7,8 @@ import ( "testing" "time" - "go.mongodb.org/atlas-sdk/v20240530002/admin" - "go.mongodb.org/atlas-sdk/v20240530002/mockadmin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" + "go.mongodb.org/atlas-sdk/v20240805001/mockadmin" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/mock" diff --git a/internal/service/searchdeployment/model_search_deployment.go b/internal/service/searchdeployment/model_search_deployment.go index c6f80c5f1f..8548aacf19 100644 --- a/internal/service/searchdeployment/model_search_deployment.go +++ b/internal/service/searchdeployment/model_search_deployment.go @@ -6,7 +6,7 @@ import ( "github.com/hashicorp/terraform-plugin-framework-timeouts/resource/timeouts" "github.com/hashicorp/terraform-plugin-framework/diag" "github.com/hashicorp/terraform-plugin-framework/types" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func NewSearchDeploymentReq(ctx context.Context, searchDeploymentPlan *TFSearchDeploymentRSModel) admin.ApiSearchDeploymentRequest { diff --git 
a/internal/service/searchdeployment/model_search_deployment_test.go b/internal/service/searchdeployment/model_search_deployment_test.go index 643c3dd458..e82b8a6ff7 100644 --- a/internal/service/searchdeployment/model_search_deployment_test.go +++ b/internal/service/searchdeployment/model_search_deployment_test.go @@ -8,7 +8,7 @@ import ( "github.com/hashicorp/terraform-plugin-framework/types" "github.com/hashicorp/terraform-plugin-framework/types/basetypes" "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/searchdeployment" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) type sdkToTFModelTestCase struct { diff --git a/internal/service/searchdeployment/state_transition_search_deployment.go b/internal/service/searchdeployment/state_transition_search_deployment.go index 3ba981c451..98c992be4c 100644 --- a/internal/service/searchdeployment/state_transition_search_deployment.go +++ b/internal/service/searchdeployment/state_transition_search_deployment.go @@ -10,7 +10,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/retrystrategy" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) const SearchDeploymentDoesNotExistsError = "ATLAS_SEARCH_DEPLOYMENT_DOES_NOT_EXIST" diff --git a/internal/service/searchdeployment/state_transition_search_deployment_test.go b/internal/service/searchdeployment/state_transition_search_deployment_test.go index 21511e0d95..a004a1e4eb 100644 --- a/internal/service/searchdeployment/state_transition_search_deployment_test.go +++ b/internal/service/searchdeployment/state_transition_search_deployment_test.go @@ -12,8 +12,8 @@ import ( "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/searchdeployment" "github.com/stretchr/testify/assert" 
"github.com/stretchr/testify/mock" - "go.mongodb.org/atlas-sdk/v20240530002/admin" - "go.mongodb.org/atlas-sdk/v20240530002/mockadmin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" + "go.mongodb.org/atlas-sdk/v20240805001/mockadmin" ) var ( diff --git a/internal/service/searchindex/data_source_search_indexes.go b/internal/service/searchindex/data_source_search_indexes.go index 3cfd89f617..d3bd55bc8f 100644 --- a/internal/service/searchindex/data_source_search_indexes.go +++ b/internal/service/searchindex/data_source_search_indexes.go @@ -8,7 +8,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func PluralDataSource() *schema.Resource { diff --git a/internal/service/searchindex/model_search_index.go b/internal/service/searchindex/model_search_index.go index 6b5adfbbb4..40f7fb4d8c 100644 --- a/internal/service/searchindex/model_search_index.go +++ b/internal/service/searchindex/model_search_index.go @@ -12,7 +12,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func flattenSearchIndexSynonyms(synonyms []admin.SearchSynonymMappingDefinition) []map[string]any { diff --git a/internal/service/searchindex/resource_search_index.go b/internal/service/searchindex/resource_search_index.go index 0139101588..559202413b 100644 --- a/internal/service/searchindex/resource_search_index.go +++ b/internal/service/searchindex/resource_search_index.go @@ -13,7 +13,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" 
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) const ( diff --git a/internal/service/serverlessinstance/data_source_serverless_instances.go b/internal/service/serverlessinstance/data_source_serverless_instances.go index a55498593a..52f089258e 100644 --- a/internal/service/serverlessinstance/data_source_serverless_instances.go +++ b/internal/service/serverlessinstance/data_source_serverless_instances.go @@ -9,7 +9,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func PluralDataSource() *schema.Resource { diff --git a/internal/service/serverlessinstance/resource_serverless_instance.go b/internal/service/serverlessinstance/resource_serverless_instance.go index 2f7a525db2..828a7eaa03 100644 --- a/internal/service/serverlessinstance/resource_serverless_instance.go +++ b/internal/service/serverlessinstance/resource_serverless_instance.go @@ -15,7 +15,7 @@ import ( "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/advancedcluster" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) const ( diff --git a/internal/service/serverlessinstance/resource_serverless_instance_test.go b/internal/service/serverlessinstance/resource_serverless_instance_test.go index c7602623d9..a527d70629 100644 --- a/internal/service/serverlessinstance/resource_serverless_instance_test.go +++ 
b/internal/service/serverlessinstance/resource_serverless_instance_test.go @@ -9,7 +9,7 @@ import ( "github.com/hashicorp/terraform-plugin-testing/terraform" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) const ( diff --git a/internal/service/sharedtier/data_source_cloud_shared_tier_restore_jobs.go b/internal/service/sharedtier/data_source_cloud_shared_tier_restore_jobs.go index ac5219b683..112ecf1086 100644 --- a/internal/service/sharedtier/data_source_cloud_shared_tier_restore_jobs.go +++ b/internal/service/sharedtier/data_source_cloud_shared_tier_restore_jobs.go @@ -4,7 +4,7 @@ import ( "context" "fmt" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id" diff --git a/internal/service/sharedtier/data_source_shared_tier_snapshots.go b/internal/service/sharedtier/data_source_shared_tier_snapshots.go index ff83218e5e..7654136b99 100644 --- a/internal/service/sharedtier/data_source_shared_tier_snapshots.go +++ b/internal/service/sharedtier/data_source_shared_tier_snapshots.go @@ -4,7 +4,7 @@ import ( "context" "fmt" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id" diff --git a/internal/service/streamconnection/data_source_stream_connections.go b/internal/service/streamconnection/data_source_stream_connections.go index 5b4835dd4b..3800fc1052 100644 --- a/internal/service/streamconnection/data_source_stream_connections.go +++ b/internal/service/streamconnection/data_source_stream_connections.go @@ -10,7 +10,7 @@ import ( 
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/dsschema" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) var _ datasource.DataSource = &streamConnectionsDS{} diff --git a/internal/service/streamconnection/data_source_stream_connections_test.go b/internal/service/streamconnection/data_source_stream_connections_test.go index ca480ae389..47af9736cc 100644 --- a/internal/service/streamconnection/data_source_stream_connections_test.go +++ b/internal/service/streamconnection/data_source_stream_connections_test.go @@ -6,7 +6,7 @@ import ( "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func TestAccStreamDSStreamConnections_basic(t *testing.T) { diff --git a/internal/service/streamconnection/model_stream_connection.go b/internal/service/streamconnection/model_stream_connection.go index 0c2a0ece7d..142efd7146 100644 --- a/internal/service/streamconnection/model_stream_connection.go +++ b/internal/service/streamconnection/model_stream_connection.go @@ -9,7 +9,7 @@ import ( "github.com/hashicorp/terraform-plugin-framework/types/basetypes" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func NewStreamConnectionReq(ctx context.Context, plan *TFStreamConnectionModel) (*admin.StreamsConnection, diag.Diagnostics) { diff --git a/internal/service/streamconnection/model_stream_connection_test.go b/internal/service/streamconnection/model_stream_connection_test.go index 16ef34747d..c60e122983 100644 
--- a/internal/service/streamconnection/model_stream_connection_test.go +++ b/internal/service/streamconnection/model_stream_connection_test.go @@ -8,7 +8,7 @@ import ( "github.com/hashicorp/terraform-plugin-framework/types" "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/streamconnection" "github.com/stretchr/testify/assert" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) const ( diff --git a/internal/service/streaminstance/data_source_stream_instances.go b/internal/service/streaminstance/data_source_stream_instances.go index b2cff18b7b..898ffc3ae3 100644 --- a/internal/service/streaminstance/data_source_stream_instances.go +++ b/internal/service/streaminstance/data_source_stream_instances.go @@ -10,7 +10,7 @@ import ( "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/dsschema" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) var _ datasource.DataSource = &streamInstancesDS{} diff --git a/internal/service/streaminstance/data_source_stream_instances_test.go b/internal/service/streaminstance/data_source_stream_instances_test.go index 9ea31f3118..37b952ad9b 100644 --- a/internal/service/streaminstance/data_source_stream_instances_test.go +++ b/internal/service/streaminstance/data_source_stream_instances_test.go @@ -6,7 +6,7 @@ import ( "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func TestAccStreamDSStreamInstances_basic(t *testing.T) { diff --git a/internal/service/streaminstance/model_stream_instance.go b/internal/service/streaminstance/model_stream_instance.go index a50a3253ec..e11f7f3c06 
100644 --- a/internal/service/streaminstance/model_stream_instance.go +++ b/internal/service/streaminstance/model_stream_instance.go @@ -8,7 +8,7 @@ import ( "github.com/hashicorp/terraform-plugin-framework/types/basetypes" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func NewStreamInstanceCreateReq(ctx context.Context, plan *TFStreamInstanceModel) (*admin.StreamsTenant, diag.Diagnostics) { diff --git a/internal/service/streaminstance/model_stream_instance_test.go b/internal/service/streaminstance/model_stream_instance_test.go index 126baeb093..94d69cb194 100644 --- a/internal/service/streaminstance/model_stream_instance_test.go +++ b/internal/service/streaminstance/model_stream_instance_test.go @@ -7,7 +7,7 @@ import ( "github.com/hashicorp/terraform-plugin-framework/types" "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/streaminstance" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) const ( diff --git a/internal/service/team/data_source_team.go b/internal/service/team/data_source_team.go index 99017170f2..6ac8288b76 100644 --- a/internal/service/team/data_source_team.go +++ b/internal/service/team/data_source_team.go @@ -11,7 +11,7 @@ import ( "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func DataSource() *schema.Resource { diff --git a/internal/service/team/resource_team.go b/internal/service/team/resource_team.go index a9c423e629..3fd175f037 100644 --- a/internal/service/team/resource_team.go +++ b/internal/service/team/resource_team.go @@ -15,7 +15,7 @@ import ( 
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) const ( diff --git a/internal/service/thirdpartyintegration/data_source_third_party_integrations.go b/internal/service/thirdpartyintegration/data_source_third_party_integrations.go index daf79ed180..a4d5fcca9f 100644 --- a/internal/service/thirdpartyintegration/data_source_third_party_integrations.go +++ b/internal/service/thirdpartyintegration/data_source_third_party_integrations.go @@ -9,7 +9,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func PluralDataSource() *schema.Resource { diff --git a/internal/service/thirdpartyintegration/resource_third_party_integration_test.go b/internal/service/thirdpartyintegration/resource_third_party_integration_test.go index a4f9f78105..c73f0f534c 100644 --- a/internal/service/thirdpartyintegration/resource_third_party_integration_test.go +++ b/internal/service/thirdpartyintegration/resource_third_party_integration_test.go @@ -342,7 +342,7 @@ func checkDestroy(s *terraform.State) error { if attrs["type"] == "" { return fmt.Errorf("no type is set") } - _, _, err := acc.Conn().Integrations.Get(context.Background(), attrs["project_id"], attrs["type"]) + _, _, err := acc.ConnV2().ThirdPartyIntegrationsApi.GetThirdPartyIntegration(context.Background(), attrs["project_id"], attrs["type"]).Execute() if err == nil { return fmt.Errorf("third party integration service (%s) still exists", attrs["type"]) } @@ -496,7 +496,7 @@ func 
checkExists(resourceName string) resource.TestCheckFunc { if attrs["type"] == "" { return fmt.Errorf("no type is set") } - if _, _, err := acc.Conn().Integrations.Get(context.Background(), attrs["project_id"], attrs["type"]); err == nil { + if _, _, err := acc.ConnV2().ThirdPartyIntegrationsApi.GetThirdPartyIntegration(context.Background(), attrs["project_id"], attrs["type"]).Execute(); err == nil { return nil } return fmt.Errorf("third party integration (%s) does not exist", attrs["project_id"]) diff --git a/internal/service/x509authenticationdatabaseuser/resource_x509_authentication_database_user.go b/internal/service/x509authenticationdatabaseuser/resource_x509_authentication_database_user.go index 7b734cb61c..0044516409 100644 --- a/internal/service/x509authenticationdatabaseuser/resource_x509_authentication_database_user.go +++ b/internal/service/x509authenticationdatabaseuser/resource_x509_authentication_database_user.go @@ -11,7 +11,7 @@ import ( "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" "github.com/spf13/cast" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) const ( diff --git a/internal/testutil/acc/advanced_cluster.go b/internal/testutil/acc/advanced_cluster.go index 136a84430c..95897cef6d 100644 --- a/internal/testutil/acc/advanced_cluster.go +++ b/internal/testutil/acc/advanced_cluster.go @@ -8,7 +8,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) var ( diff --git a/internal/testutil/acc/atlas.go b/internal/testutil/acc/atlas.go index 9e3c821d7f..f75fde0ee5 100644 --- a/internal/testutil/acc/atlas.go +++ b/internal/testutil/acc/atlas.go @@ -10,7 +10,7 @@ 
import ( "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant" "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/advancedcluster" "github.com/stretchr/testify/require" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func createProject(tb testing.TB, name string) string { @@ -57,19 +57,19 @@ func deleteCluster(projectID, name string) { } } -func clusterReq(name, projectID string) admin.ClusterDescription20250101 { - return admin.ClusterDescription20250101{ +func clusterReq(name, projectID string) admin.ClusterDescription20240805 { + return admin.ClusterDescription20240805{ Name: admin.PtrString(name), GroupId: admin.PtrString(projectID), ClusterType: admin.PtrString("REPLICASET"), - ReplicationSpecs: &[]admin.ReplicationSpec20250101{ + ReplicationSpecs: &[]admin.ReplicationSpec20240805{ { - RegionConfigs: &[]admin.CloudRegionConfig20250101{ + RegionConfigs: &[]admin.CloudRegionConfig20240805{ { ProviderName: admin.PtrString(constant.AWS), RegionName: admin.PtrString(constant.UsWest2), Priority: admin.PtrInt(7), - ElectableSpecs: &admin.HardwareSpec20250101{ + ElectableSpecs: &admin.HardwareSpec20240805{ InstanceSize: admin.PtrString(constant.M10), NodeCount: admin.PtrInt(3), }, diff --git a/internal/testutil/acc/cluster.go b/internal/testutil/acc/cluster.go index d6b1d0c49e..615a5430a1 100644 --- a/internal/testutil/acc/cluster.go +++ b/internal/testutil/acc/cluster.go @@ -7,7 +7,7 @@ import ( "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) // ClusterRequest contains configuration for a cluster where all fields are optional and AddDefaults is used for required fields. 
@@ -21,6 +21,7 @@ type ClusterRequest struct { ClusterName string MongoDBMajorVersion string ReplicationSpecs []ReplicationSpecRequest + DiskSizeGb int CloudBackup bool Geosharded bool RetainBackupsEnabled bool @@ -111,7 +112,6 @@ type ReplicationSpecRequest struct { NodeCount int NodeCountReadOnly int Priority int - DiskSizeGb int AutoScalingDiskGbEnabled bool } @@ -136,9 +136,9 @@ func (r *ReplicationSpecRequest) AddDefaults() { } } -func (r *ReplicationSpecRequest) AllRegionConfigs() []admin.CloudRegionConfig20250101 { +func (r *ReplicationSpecRequest) AllRegionConfigs() []admin.CloudRegionConfig20240805 { config := cloudRegionConfig(*r) - configs := []admin.CloudRegionConfig20250101{config} + configs := []admin.CloudRegionConfig20240805{config} for i := range r.ExtraRegionConfigs { extra := r.ExtraRegionConfigs[i] configs = append(configs, cloudRegionConfig(extra)) @@ -146,42 +146,37 @@ func (r *ReplicationSpecRequest) AllRegionConfigs() []admin.CloudRegionConfig202 return configs } -func replicationSpec(req *ReplicationSpecRequest) admin.ReplicationSpec20250101 { +func replicationSpec(req *ReplicationSpecRequest) admin.ReplicationSpec20240805 { if req == nil { req = new(ReplicationSpecRequest) } req.AddDefaults() regionConfigs := req.AllRegionConfigs() - return admin.ReplicationSpec20250101{ + return admin.ReplicationSpec20240805{ ZoneName: &req.ZoneName, RegionConfigs: ®ionConfigs, } } -func cloudRegionConfig(req ReplicationSpecRequest) admin.CloudRegionConfig20250101 { +func cloudRegionConfig(req ReplicationSpecRequest) admin.CloudRegionConfig20240805 { req.AddDefaults() - var readOnly admin.DedicatedHardwareSpec20250101 + var readOnly admin.DedicatedHardwareSpec20240805 if req.NodeCountReadOnly != 0 { - readOnly = admin.DedicatedHardwareSpec20250101{ + readOnly = admin.DedicatedHardwareSpec20240805{ NodeCount: &req.NodeCountReadOnly, InstanceSize: &req.InstanceSize, } } - electableSpec := admin.HardwareSpec20250101{ - InstanceSize: &req.InstanceSize, - 
NodeCount: &req.NodeCount, - EbsVolumeType: conversion.StringPtr(req.EbsVolumeType), - } - if req.DiskSizeGb != 0 { - diskSizeGb := float64(req.DiskSizeGb) - electableSpec.DiskSizeGB = &diskSizeGb - } - return admin.CloudRegionConfig20250101{ - RegionName: &req.Region, - Priority: &req.Priority, - ProviderName: &req.ProviderName, - ElectableSpecs: &electableSpec, - ReadOnlySpecs: &readOnly, + return admin.CloudRegionConfig20240805{ + RegionName: &req.Region, + Priority: &req.Priority, + ProviderName: &req.ProviderName, + ElectableSpecs: &admin.HardwareSpec20240805{ + InstanceSize: &req.InstanceSize, + NodeCount: &req.NodeCount, + EbsVolumeType: conversion.StringPtr(req.EbsVolumeType), + }, + ReadOnlySpecs: &readOnly, AutoScaling: &admin.AdvancedAutoScalingSettings{ DiskGB: &admin.DiskGBAutoScaling{Enabled: &req.AutoScalingDiskGbEnabled}, }, diff --git a/internal/testutil/acc/config_cluster.go b/internal/testutil/acc/config_cluster.go index 227bbe11e1..c21501224d 100644 --- a/internal/testutil/acc/config_cluster.go +++ b/internal/testutil/acc/config_cluster.go @@ -7,7 +7,7 @@ import ( "github.com/hashicorp/hcl/v2/hclwrite" "github.com/zclconf/go-cty/cty" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func ClusterDatasourceHcl(req *ClusterRequest) (configStr, clusterName, resourceName string, err error) { @@ -45,7 +45,7 @@ func ClusterResourceHcl(req *ClusterRequest) (configStr, clusterName, resourceNa projectID := req.ProjectID req.AddDefaults() specRequests := req.ReplicationSpecs - specs := make([]admin.ReplicationSpec20250101, len(specRequests)) + specs := make([]admin.ReplicationSpec20240805, len(specRequests)) for i := range specRequests { specRequest := specRequests[i] specs[i] = replicationSpec(&specRequest) @@ -73,6 +73,9 @@ func ClusterResourceHcl(req *ClusterRequest) (configStr, clusterName, resourceNa } else { clusterRootAttributes["project_id"] = projectID } + if req.DiskSizeGb != 0 { + 
clusterRootAttributes["disk_size_gb"] = req.DiskSizeGb + } if req.RetainBackupsEnabled { clusterRootAttributes["retain_backups_enabled"] = req.RetainBackupsEnabled } @@ -116,7 +119,7 @@ func ClusterResourceHcl(req *ClusterRequest) (configStr, clusterName, resourceNa return "\n" + string(f.Bytes()), clusterName, clusterResourceName, err } -func writeReplicationSpec(cluster *hclwrite.Body, spec admin.ReplicationSpec20250101) error { +func writeReplicationSpec(cluster *hclwrite.Body, spec admin.ReplicationSpec20240805) error { replicationBlock := cluster.AppendNewBlock("replication_specs", nil).Body() err := addPrimitiveAttributesViaJSON(replicationBlock, spec) if err != nil { diff --git a/internal/testutil/acc/config_cluster_test.go b/internal/testutil/acc/config_cluster_test.go index 1f00e7b395..306c0fc15d 100644 --- a/internal/testutil/acc/config_cluster_test.go +++ b/internal/testutil/acc/config_cluster_test.go @@ -61,7 +61,6 @@ resource "mongodbatlas_advanced_cluster" "cluster_info" { disk_gb_enabled = false } electable_specs { - disk_size_gb = 16 ebs_volume_type = "STANDARD" instance_size = "M30" node_count = 30 @@ -315,15 +314,7 @@ func Test_ClusterResourceHcl(t *testing.T) { MongoDBMajorVersion: "6.0", RetainBackupsEnabled: true, ReplicationSpecs: []acc.ReplicationSpecRequest{ - { - Region: "MY_REGION_1", - ZoneName: "Zone X", - InstanceSize: "M30", - NodeCount: 30, - ProviderName: constant.AZURE, - EbsVolumeType: "STANDARD", - DiskSizeGb: 16, - }, + {Region: "MY_REGION_1", ZoneName: "Zone X", InstanceSize: "M30", NodeCount: 30, ProviderName: constant.AZURE, EbsVolumeType: "STANDARD"}, }, PitEnabled: true, AdvancedConfiguration: map[string]any{ diff --git a/internal/testutil/acc/database_user.go b/internal/testutil/acc/database_user.go index 7710bb1333..4189186e73 100644 --- a/internal/testutil/acc/database_user.go +++ b/internal/testutil/acc/database_user.go @@ -3,7 +3,7 @@ package acc import ( "fmt" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + 
"go.mongodb.org/atlas-sdk/v20240805001/admin" ) func ConfigDatabaseUserBasic(projectID, username, roleName, keyLabel, valueLabel string) string { diff --git a/internal/testutil/acc/factory.go b/internal/testutil/acc/factory.go index 80b3fb63ea..669d82b929 100644 --- a/internal/testutil/acc/factory.go +++ b/internal/testutil/acc/factory.go @@ -9,7 +9,7 @@ import ( "github.com/hashicorp/terraform-plugin-go/tfprotov6" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" "github.com/mongodb/terraform-provider-mongodbatlas/internal/provider" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) const ( diff --git a/internal/testutil/acc/pre_check.go b/internal/testutil/acc/pre_check.go index 339f092e05..b0fba2919a 100644 --- a/internal/testutil/acc/pre_check.go +++ b/internal/testutil/acc/pre_check.go @@ -296,3 +296,11 @@ func PreCheckS3Bucket(tb testing.TB) { tb.Fatal("`AWS_S3_BUCKET` must be set ") } } + +func PreCheckAzureExportBucket(tb testing.TB) { + tb.Helper() + if os.Getenv("AZURE_SERVICE_URL") == "" || + os.Getenv("AZURE_BLOB_STORAGE_CONTAINER_NAME") == "" { + tb.Fatal("`AZURE_SERVICE_URL` and `AZURE_BLOB_STORAGE_CONTAINER_NAME` must be set for Cloud Backup Snapshot Export Bucket acceptance testing") + } +} diff --git a/internal/testutil/acc/project.go b/internal/testutil/acc/project.go index 46e9bd01b7..1f4b1bbe35 100644 --- a/internal/testutil/acc/project.go +++ b/internal/testutil/acc/project.go @@ -6,7 +6,7 @@ import ( "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func CheckDestroyProject(s *terraform.State) error { diff --git a/internal/testutil/acc/serverless.go b/internal/testutil/acc/serverless.go index d9c6501970..0453b5af57 100644 --- a/internal/testutil/acc/serverless.go +++ b/internal/testutil/acc/serverless.go @@ -3,7 +3,7 @@ package
acc import ( "fmt" - "go.mongodb.org/atlas-sdk/v20240530002/admin" + "go.mongodb.org/atlas-sdk/v20240805001/admin" ) func ConfigServerlessInstance(projectID, name string, ignoreConnectionStrings bool, autoIndexing *bool, tags []admin.ResourceTag) string { diff --git a/templates/data-source.md.tmpl b/templates/data-source.md.tmpl index 32b76776d1..45b3c38584 100644 --- a/templates/data-source.md.tmpl +++ b/templates/data-source.md.tmpl @@ -28,6 +28,8 @@ {{ tffile "examples/mongodbatlas_federated_settings_org_role_mapping/main.tf" }} {{ else if eq .Name "mongodbatlas_cloud_backup_snapshot" }} {{ tffile "examples/mongodbatlas_cloud_backup_snapshot_export_job/main.tf" }} + {{ else if eq .Name "mongodbatlas_cloud_backup_snapshot_export_bucket" }} + {{ tffile "examples/mongodbatlas_cloud_backup_snapshot_export_bucket/aws/main.tf" }} {{ else if eq .Name "mongodbatlas_api_key" }} {{ tffile (printf "examples/%s/create-and-assign-pak/main.tf" .Name )}} {{ else if eq .Name "mongodbatlas_backup_compliance_policy" }} diff --git a/templates/resources.md.tmpl b/templates/resources.md.tmpl index 8b86768a70..ed9ba98760 100644 --- a/templates/resources.md.tmpl +++ b/templates/resources.md.tmpl @@ -30,6 +30,8 @@ {{ tffile "examples/mongodbatlas_cloud_backup_snapshot_export_job/main.tf" }} {{ else if eq .Name "mongodbatlas_api_key" }} {{ tffile (printf "examples/%s/create-and-assign-pak/main.tf" .Name )}} + {{ else if eq .Name "mongodbatlas_cloud_backup_snapshot_export_bucket" }} + {{ tffile "examples/mongodbatlas_cloud_backup_snapshot_export_bucket/aws/main.tf" }} {{ else if eq .Name "mongodbatlas_backup_compliance_policy" }} {{ else if eq .Name "mongodbatlas_event_trigger" }} {{ else if eq .Name "mongodbatlas_access_list_api_key" }}
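
A recurring pattern across these hunks is replacing the legacy client's direct struct-field access (`key.Desc`, `val.ID`, `r.GroupID`) with the new atlas-sdk's nil-safe `Get*` accessors over pointer-typed fields (`key.GetDesc()`, `val.GetId()`, `r.GetGroupId()`). A minimal self-contained sketch of that accessor pattern — the `Role` type, `ptr` helper, and field set here are hypothetical stand-ins for the generated SDK models, not the real ones:

```go
package main

import (
	"fmt"
	"strings"
)

// Role mimics a generated SDK model: optional fields are pointers,
// and nil-safe Get* accessors replace direct field access.
type Role struct {
	RoleName *string
	GroupId  *string
}

// GetRoleName returns the role name, or "" if the receiver or field is nil,
// so callers never have to dereference a possibly-nil pointer themselves.
func (r *Role) GetRoleName() string {
	if r == nil || r.RoleName == nil {
		return ""
	}
	return *r.RoleName
}

// GetGroupId returns the project (group) ID, or "" when unset.
func (r *Role) GetGroupId() string {
	if r == nil || r.GroupId == nil {
		return ""
	}
	return *r.GroupId
}

func ptr(s string) *string { return &s }

func main() {
	roles := []Role{
		{RoleName: ptr("ORG_OWNER")},
		{RoleName: ptr("GROUP_READ_ONLY"), GroupId: ptr("abc123")},
		{}, // zero value: getters return "" instead of panicking
	}
	for _, r := range roles {
		// Same shape as the migrated loop: filter on GetRoleName,
		// then read GetGroupId without any nil checks at the call site.
		if strings.HasPrefix(r.GetRoleName(), "GROUP_") {
			fmt.Println(r.GetGroupId()) // prints "abc123"
		}
	}
}
```

This is why the migrated call sites in the diff can drop explicit nil handling: the generated accessors absorb it, at the cost of one method call per field read.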