From b609837aca51220541f4545ad2838a86127700b1 Mon Sep 17 00:00:00 2001
From: Espen Albert
Date: Fri, 19 Jul 2024 09:33:17 +0100
Subject: [PATCH] chore: Merges master into dev (#2443)

* doc: Updates `mongodbatlas_global_cluster_config` doc about self-managed sharding clusters (#2372)
* update doc
* add link
* test: Unifies Azure and GCP networking tests (#2371)
* unify Azure and GCP tests
* TEMPORARY no update
* Revert "TEMPORARY no update"
This reverts commit ab60d67dece8f53272b2fad4a68b60b890e7636c.
* run in parallel
* chore: Updates examples link in index.html.markdown for v1.17.3 release
* chore: Updates CHANGELOG.md header for v1.17.3 release
* doc: Updates Terraform Compatibility Matrix documentation (#2370)
Co-authored-by: maastha <122359335+maastha@users.noreply.github.com>
* use ComposeAggregateTestCheckFunc (#2375)
* chore: Updates asdf to TF 1.9.0 and compatibility matrix body (#2376)
* update asdf to TF 1.9.0
* update compatibility message
* Update .github/workflows/update_tf_compatibility_matrix.yml
Co-authored-by: maastha <122359335+maastha@users.noreply.github.com>
* Fix actionlint
---------
Co-authored-by: maastha <122359335+maastha@users.noreply.github.com>
* fix: stale.yaml gh action (#2379)
* doc: Updates alert-config examples (#2378)
* doc: Update alert-config examples
* doc: Removes other references to GROUP_CHARTS_ADMIN
* chore: align table
* chore: Updates Atlas Go SDK (#2380)
* build(deps): bump go.mongodb.org/atlas-sdk
* rename DiskBackupSnapshotAWSExportBucket to DiskBackupSnapshotExportBucket
* add param to DeleteAtlasSearchDeployment
* add LatestDefinition
* more LatestDefinition and start using SearchIndexCreateRequest
* HasElementsSliceOrMap
* update
* ToAnySlicePointer
* fix update
---------
Co-authored-by: lantoli <430982+lantoli@users.noreply.github.com>
* chore: Bump github.com/aws/aws-sdk-go from 1.54.8 to 1.54.13 (#2383)
Bumps [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go) from 1.54.8 to 1.54.13.
- [Release notes](https://github.com/aws/aws-sdk-go/releases)
- [Commits](https://github.com/aws/aws-sdk-go/compare/v1.54.8...v1.54.13)
---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go
  dependency-type: direct:production
  update-type: version-update:semver-patch
...
Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* chore: Bump amannn/action-semantic-pull-request from 5.5.2 to 5.5.3 (#2382)
Bumps [amannn/action-semantic-pull-request](https://github.com/amannn/action-semantic-pull-request) from 5.5.2 to 5.5.3.
- [Release notes](https://github.com/amannn/action-semantic-pull-request/releases)
- [Changelog](https://github.com/amannn/action-semantic-pull-request/blob/main/CHANGELOG.md)
- [Commits](https://github.com/amannn/action-semantic-pull-request/compare/cfb60706e18bc85e8aec535e3c577abe8f70378e...0723387faaf9b38adef4775cd42cfd5155ed6017)
---
updated-dependencies:
- dependency-name: amannn/action-semantic-pull-request
  dependency-type: direct:production
  update-type: version-update:semver-patch
...
Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* test: Improves tests for mongodbatlas_search_index (#2384)
* checkVector
* checkBasic
* checkWithMapping
* checkWithSynonyms
* checkAdditional
* checkAdditionalAnalyzers and checkAdditionalMappingsFields
* remove addAttrChecks and addAttrSetChecks
* use commonChecks in all checks
* test checks cleanup
* chore: Updates nightly tests to TF 1.9.x (#2386)
* update nightly tests to TF 1.9.x
* use TF var
* keep until 1.3.x
* Update .github/workflows/update_tf_compatibility_matrix.yml
Co-authored-by: maastha <122359335+maastha@users.noreply.github.com>
---------
Co-authored-by: maastha <122359335+maastha@users.noreply.github.com>
* fix: Emptying cloud_backup_schedule "copy_settings" (#2387)
* test: add test to reproduce GitHub Issue
* fix: update copy_settings on changes (even when empty)
* docs: Add changelog entry
* chore: fix changelog entry
* apply review comments
* chore: Updates CHANGELOG.md for #2387
* chore: Updates delete logic for `mongodbatlas_search_deployment` (#2389)
* update delete logic
* update unit test
* refactor: use advanced_cluster instead of cluster (#2392)
* fix: Returns error if the analyzers attribute contains unknown fields. (#2394)
* fix: Returns error if the analyzers attribute contains unknown fields.
* adds changelog file.
* Update .changelog/2394.txt
Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com>
---------
Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com>
* chore: Updates CHANGELOG.md for #2394
* chore: Bump github.com/aws/aws-sdk-go from 1.54.13 to 1.54.17 (#2401)
Bumps [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go) from 1.54.13 to 1.54.17.
- [Release notes](https://github.com/aws/aws-sdk-go/releases)
- [Commits](https://github.com/aws/aws-sdk-go/compare/v1.54.13...v1.54.17)
---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go
  dependency-type: direct:production
  update-type: version-update:semver-patch
...
Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* chore: Bump github.com/hashicorp/terraform-plugin-testing (#2400)
Bumps [github.com/hashicorp/terraform-plugin-testing](https://github.com/hashicorp/terraform-plugin-testing) from 1.8.0 to 1.9.0.
- [Release notes](https://github.com/hashicorp/terraform-plugin-testing/releases)
- [Changelog](https://github.com/hashicorp/terraform-plugin-testing/blob/main/CHANGELOG.md)
- [Commits](https://github.com/hashicorp/terraform-plugin-testing/compare/v1.8.0...v1.9.0)
---
updated-dependencies:
- dependency-name: github.com/hashicorp/terraform-plugin-testing
  dependency-type: direct:production
  update-type: version-update:semver-minor
...
Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* chore: Bump github.com/hashicorp/terraform-plugin-framework (#2398)
Bumps [github.com/hashicorp/terraform-plugin-framework](https://github.com/hashicorp/terraform-plugin-framework) from 1.9.0 to 1.10.0.
- [Release notes](https://github.com/hashicorp/terraform-plugin-framework/releases)
- [Changelog](https://github.com/hashicorp/terraform-plugin-framework/blob/main/CHANGELOG.md)
- [Commits](https://github.com/hashicorp/terraform-plugin-framework/compare/v1.9.0...v1.10.0)
---
updated-dependencies:
- dependency-name: github.com/hashicorp/terraform-plugin-framework
  dependency-type: direct:production
  update-type: version-update:semver-minor
...
Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* chore: Bump github.com/hashicorp/terraform-plugin-framework-validators (#2399)
Bumps [github.com/hashicorp/terraform-plugin-framework-validators](https://github.com/hashicorp/terraform-plugin-framework-validators) from 0.12.0 to 0.13.0.
- [Release notes](https://github.com/hashicorp/terraform-plugin-framework-validators/releases)
- [Changelog](https://github.com/hashicorp/terraform-plugin-framework-validators/blob/main/CHANGELOG.md)
- [Commits](https://github.com/hashicorp/terraform-plugin-framework-validators/compare/v0.12.0...v0.13.0)
---
updated-dependencies:
- dependency-name: github.com/hashicorp/terraform-plugin-framework-validators
  dependency-type: direct:production
  update-type: version-update:semver-minor
...
Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* test: Uses hclwrite to generate the cluster for GetClusterInfo (#2404)
* test: Use hclwrite to generate the cluster for GetClusterInfo
* test: fix unit test
* refactor: minor improvements
* refactor: use Zone 1 as the default ZoneName to make tests pass
* refactor: remove num_shards in request and add more tests
* fix: use same default region as before
* test: Support disk_size_gb for ClusterInfo and add test case for multiple dependencies
* refactor: move replication specs to ClusterRequest
* test: add support for CloudRegionConfig
* add: suggestions from PR comments
* refactor: use acc.ReplicationSpecRequest instead of admin.ReplicationSpec
* fix: Fixes `disk_iops` attribute for Azure cloud provider in `mongodbatlas_advanced_cluster` resource (#2396)
* fix disk_iops in Azure
* expand
* tests for disk_iops
* chore: Updates CHANGELOG.md for #2396
* test: Refactors `mongodbatlas_private_endpoint_regional_mode` to use cluster info (#2403)
* test: refactor to use cluster info
* test: enable test in CI and fix duplicate zone name
* test: use AWS_REGION_UPPERCASE and add pre-checks
* fix: use clusterResourceName
* test: fix GetClusterInfo call
* fix: pre check call
* fix: add UPPERCASE/LOWERCASE to network test suite
* test: Skip in CI since it is slow and use new GetClusterInfo api
* test: Fix the broken test and simplify assert statements
* test: enable in CI, after refactorings ~1230s
* test: Refactors resource tests to use GetClusterInfo `online_archive` (#2409)
* feat: adds support for Tags & AutoScalingDiskGbEnabled
* feat: refactor tests to use GetClusterInfo & new SDK
* chore: formatting fix
* test: make unit test deterministic
* test: onlinearchive force us_east_1
* spelling in comment
* test: fix migration test to use package clusterRequest (with correct region)
* update .tool-versions (#2417)
* feat: Adds `stored_source` attribute to `mongodbatlas_search_index` resource and corresponding data sources (#2388)
* fix ds schemas
* add changelog
* add storedSource to configBasic and checkBasic
* update doc about index_id
* update boolean test
* first implementation of stored_source as string
* create model file
* marshal
* don't allow update
* test for objects in stored_source
* TestAccSearchIndex_withStoredSourceUpdate
* update StoredSource
* fix merge
* tests for storedSource updates
* swap test names
* doc
* chore: Updates CHANGELOG.md for #2388
* doc: Improves Guides menu (#2408)
* add 0.8.2 metadata
* update old category and remove unneeded headers
* update page_title
* fix titles
* remove old guide
* test: Refactors resource tests to use GetClusterInfo `ldap_configuration` (#2411)
* test: Refactors resource tests to use GetClusterInfo ldap_configuration
* test: Fix depends_on clause
* test: remove unused clusterName and align fields
* test: Refactors resource tests to use GetClusterInfo `cloud_backup_snapshot_restore_job` (#2413)
* test: Refactors resource tests to use GetClusterInfo `cloud_backup_snapshot_restore_job`
* test: fix reference to clusterResourceName
* doc: Clarify usage of maintenance window resource (#2418)
* test: Refactors resource tests to use GetClusterInfo `cloud_backup_schedule` (#2414)
* test: Cluster support PitEnabled
* test: Refactors resource tests to use GetClusterInfo `mongodbatlas_cloud_backup_schedule`
* apply PR suggestions
* test: fix broken test after merging
* test: Refactors resource tests to use GetClusterInfo `federated_database_instance` (#2412)
* test: Support getting cluster info with project
* test: Refactors resource tests to use GetClusterInfo `federated_database_instance`
* test: refactor, use a single GetClusterInfo and support AddDefaults
* test: use renamed argument in test
* doc: Removes docs headers as they are not needed (#2422)
* remove unneeded YAML frontmatter headers
* small adjustments
* change root files
* remove from templates
* use Deprecated category
* apply feedback
* test: Refactors resource tests to use GetClusterInfo `backup_compliance_policy` (#2415)
* test: Support AdvancedConfiguration, MongoDBMajorVersion, RetainBackupsEnabled, EbsVolumeType in cluster
* test: refactor test to use GetClusterInfo
* test: Refactors resource tests to use GetClusterInfo `cluster_outage_simulation` (#2423)
* test: support Priority and NodeCountReadOnly
* test: Refactors resource tests to use GetClusterInfo `cluster_outage_simulation`
* test: reuse test case in migration test
* chore: increase timeout to ensure test is passing
* test: avoid global variables to ensure no duplicate cluster names
* revert delete timeout change
* test: Fixes DUPLICATE_CLUSTER_NAME failures (#2424)
* test: fix DUPLICATE_CLUSTER_NAME online_archive
* test: fix DUPLICATE_CLUSTER_NAME backup_snapshot_restore_job
* test: Refactors GetClusterInfo (#2426)
* test: support creating a datasource when using GetClusterInfo
* test: Add documentation for cluster methods
* refactor: move out config_cluster to its own file
* refactor: move configClusterGlobal to the only usage file
* refactor: remove ProjectIDStr field
* test: update references for cluster_info fields
* chore: missing whitespace
* test: fix missing quotes around projectID
* Update internal/testutil/acc/cluster.go
Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com>
---------
Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com>
* doc: Updates to new Terraform doc structure (#2425)
* move to root doc folder
* rename ds and resource folders
* change file extension to .md
* update doc links
* gitignore
* releasing instructions
* git hook
* codeowners
* workflow template
* gha workflows
* scripts
* remove website-lint
* update references to html.markdown
* fix compatibility script matrix
* rename rest of files
* fix generate doc script using docs-out folder to temporarily generate all files and copying only to docs folder the specified resource files
* fix typo
* chore: Bump github.com/zclconf/go-cty from 1.14.4 to 1.15.0 (#2433)
Bumps [github.com/zclconf/go-cty](https://github.com/zclconf/go-cty) from 1.14.4 to 1.15.0.
- [Release notes](https://github.com/zclconf/go-cty/releases)
- [Changelog](https://github.com/zclconf/go-cty/blob/main/CHANGELOG.md)
- [Commits](https://github.com/zclconf/go-cty/compare/v1.14.4...v1.15.0)
---
updated-dependencies:
- dependency-name: github.com/zclconf/go-cty
  dependency-type: direct:production
  update-type: version-update:semver-minor
...
Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* chore: Bump github.com/aws/aws-sdk-go from 1.54.17 to 1.54.19 (#2432)
Bumps [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go) from 1.54.17 to 1.54.19.
- [Release notes](https://github.com/aws/aws-sdk-go/releases)
- [Commits](https://github.com/aws/aws-sdk-go/compare/v1.54.17...v1.54.19)
---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go
  dependency-type: direct:production
  update-type: version-update:semver-patch
...
Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* chore: Bump actions/setup-go from 5.0.1 to 5.0.2 (#2431)
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 5.0.1 to 5.0.2.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/cdcb36043654635271a94b9a6d1392de5bb323a7...0a12ed9d6a96ab950c8f026ed9f722fe0da7ef32)
---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-patch
...
Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* chore: Bump tj-actions/verify-changed-files (#2430)
Bumps [tj-actions/verify-changed-files](https://github.com/tj-actions/verify-changed-files) from 11ea2b36f98609331b8dc9c5ad9071ee317c6d28 to 79f398ac63ab46f7f820470c821d830e5c340ef9.
- [Release notes](https://github.com/tj-actions/verify-changed-files/releases)
- [Changelog](https://github.com/tj-actions/verify-changed-files/blob/main/HISTORY.md)
- [Commits](https://github.com/tj-actions/verify-changed-files/compare/11ea2b36f98609331b8dc9c5ad9071ee317c6d28...79f398ac63ab46f7f820470c821d830e5c340ef9)
---
updated-dependencies:
- dependency-name: tj-actions/verify-changed-files
  dependency-type: direct:production
...
Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* refactor: avoid usage of github.com/go-test/deep (use `reflect.DeepEqual` instead) (#2427)
* chore: Deletes modules folder (#2435)
* remove modules folder
* gitignore
* chore: Makes sure doc generation is up-to-date (#2441)
* generate doc
* split in runs
* detect changes
* TEMPORARY: change 3 files to trigger doc failures
* rename
* Revert "TEMPORARY: change 3 files to trigger doc failures"
This reverts commit cc36481d9682f46792203662db610806d6593d89.
* chore: Enables GitHub Action linter errors in GitHub (#2440)
* TEMPORARY: make action linter fail
* problem matcher
* Revert "TEMPORARY: make action linter fail"
This reverts commit 2ea3cd5fee4836f9275f59d5daaf72213e78aabe.
* update version (#2439)
* doc: Updates examples & docs that use replicaSet clusters (#2428)
* update basic examples
* fix linter
* fix tf-validate
* update tflint version
* fix validate
* remove tf linter exceptions
* make linter fail
* simplify and show linter errors in GH
* tflint problem matcher
* problem matcher
* minimum severity warning
* fix linter
* make tf-validate logic easier to be run in local
* less verbose tf init
* fix /mongodbatlas_network_peering/aws
* doc for backup_compliance_policy
* fix container_id reference
* fix mongodbatlas_network_peering/azure
* use temp folder
* fix examples/mongodbatlas_network_peering/gcp
* remaining examples
* fix mongodbatlas_clusters
* fix adv_cluster doc
* remaining doc changes
* fix typo
* fix examples with deprecated arguments
* get the first value for container_id
* container_id in doc
* address feedback
* test: fix cluster config generation without num_shards
* test: fix usage of replication_spec.id -> replication_spec.external_id
* test: attempt fixing TestAccClusterAdvancedCluster_singleShardedMultiCloud
* Revert "test: attempt fixing TestAccClusterAdvancedCluster_singleShardedMultiCloud"
This reverts commit 7006935409521c6ed4bac80750331921f91f7943.
* Revert "test: fix usage of replication_spec.id -> replication_spec.external_id"
.id and .external_id are actually different and won't work, more context in: CLOUDP-262014
This reverts commit 2b730dbf667d5e52484c3ca3a8798d8d9a2b80c8.
* test: add extra checks missed by merge conflict for checkSingleShardedMultiCloud
* test: skip failing tests with a reference to the ticket
* test: avoid deprecation warning to fail the test
---------
Signed-off-by: dependabot[bot]
Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com>
Co-authored-by: svc-apix-bot
Co-authored-by: svc-apix-Bot <142542575+svc-apix-Bot@users.noreply.github.com>
Co-authored-by: maastha <122359335+maastha@users.noreply.github.com>
Co-authored-by: Andrea Angiolillo
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Marco Suma
Co-authored-by: Agustin Bettati
---
 .changelog/2388.txt | 11 +
 .changelog/2394.txt | 3 +
 .changelog/2396.txt | 3 +
 .githooks/pre-commit | 6 -
 .github/CODEOWNERS | 3 +-
 .github/ISSUE_TEMPLATE/Bug_Report.md | 2 +-
 .github/actionlint-matcher.json | 18 +
 .github/tflint-matcher.json | 19 +
 .github/workflows/acceptance-tests-runner.yml | 42 +-
 .../workflows/check-changelog-entry-file.yml | 2 +-
 .github/workflows/check-migration-guide.yml | 2 +-
 .github/workflows/code-health.yml | 46 ++-
 .github/workflows/examples.yml | 37 +-
 .github/workflows/jira-release-version.yml | 2 +-
 .github/workflows/notify-docs-team.yml | 2 +-
 .github/workflows/release.yml | 6 +-
 .github/workflows/run-script-and-commit.yml | 2 +-
 .github/workflows/update-sdk.yml | 4 +-
 .../update_tf_compatibility_matrix.yml | 2 +-
 .gitignore | 8 +-
 .tool-versions | 4 +-
 CHANGELOG.md | 8 +
 GNUmakefile | 18 +-
 README.md | 2 +-
 RELEASING.md | 4 +-
 contributing/documentation.md | 2 +-
 .../data-sources/access_list_api_key.md | 8 -
 .../data-sources/access_list_api_keys.md | 8 -
 .../data-sources/advanced_cluster.md | 8 -
 .../data-sources/advanced_clusters.md | 18 +-
 .../data-sources/alert_configuration.md | 8 -
 .../data-sources/alert_configurations.md | 8 -
 .../data-sources/api_key.md | 8 -
 .../data-sources/api_keys.md | 8 -
 .../data-sources/atlas_user.md | 8 -
 .../data-sources/atlas_users.md | 8 -
 .../data-sources/auditing.md | 8 -
 .../data-sources/backup_compliance_policy.md | 38 +-
 .../data-sources/cloud_backup_schedule.md | 38 +-
 .../data-sources/cloud_backup_snapshot.md | 8 -
 .../cloud_backup_snapshot_export_bucket.md | 9 +-
 .../cloud_backup_snapshot_export_buckets.md | 9 +-
 .../cloud_backup_snapshot_export_job.md | 9 +-
 .../cloud_backup_snapshot_export_jobs.md | 9 +-
 .../cloud_backup_snapshot_restore_job.md | 8 -
 .../cloud_backup_snapshot_restore_jobs.md | 8 -
 .../data-sources/cloud_backup_snapshots.md | 8 -
 .../cloud_provider_access_setup.md | 8 -
 .../cloud_provider_shared_tier_restore_job.md | 9 -
 ...cloud_provider_shared_tier_restore_jobs.md | 8 -
 .../cloud_provider_shared_tier_snapshot.md | 8 -
 .../cloud_provider_shared_tier_snapshots.md | 8 -
 .../data-sources/cloud_provider_snapshot.md | 6 +-
 .../cloud_provider_snapshot_backup_policy.md | 47 ++-
 .../cloud_provider_snapshot_restore_job.md | 6 +-
 .../cloud_provider_snapshot_restore_jobs.md | 6 +-
 .../data-sources/cloud_provider_snapshots.md | 6 +-
 .../data-sources/cluster.md | 8 -
 .../data-sources/cluster_outage_simulation.md | 8 -
 .../data-sources/clusters.md | 8 -
 .../control_plane_ip_addresses.md | 11 +-
 .../data-sources/custom_db_role.md | 10 +-
 .../data-sources/custom_db_roles.md | 10 +-
 .../custom_dns_configuration_cluster_aws.md | 8 -
 .../data-sources/data_lake_pipeline.md | 35 +-
 .../data-sources/data_lake_pipeline_run.md | 12 +-
 .../data-sources/data_lake_pipeline_runs.md | 12 +-
 .../data-sources/data_lake_pipelines.md | 8 -
 .../data-sources/database_user.md | 10 +-
 .../data-sources/database_users.md | 10 +-
 .../data-sources/event_trigger.md | 10 +-
 .../data-sources/event_triggers.md | 10 +-
 .../federated_database_instance.md | 10 +-
 .../federated_database_instances.md | 8 -
 .../data-sources/federated_query_limit.md | 10 +-
 .../data-sources/federated_query_limits.md | 8 -
 .../data-sources/federated_settings.md | 8 -
 .../federated_settings_identity_provider.md | 8 -
 .../federated_settings_identity_providers.md | 8 -
 .../federated_settings_org_config.md | 8 -
 .../federated_settings_org_configs.md | 8 -
 .../federated_settings_org_role_mapping.md | 8 -
 .../federated_settings_org_role_mappings.md | 8 -
 .../data-sources/global_cluster_config.md | 8 -
 .../data-sources/ldap_configuration.md | 8 -
 .../data-sources/ldap_verify.md | 36 +-
 .../data-sources/maintenance_window.md | 8 -
 .../data-sources/network_container.md | 8 -
 .../data-sources/network_containers.md | 8 -
 .../data-sources/network_peering.md | 8 -
 .../data-sources/network_peerings.md | 8 -
 .../data-sources/online_archive.md | 8 -
 .../data-sources/online_archives.md | 8 -
 .../data-sources/org_invitation.md | 8 -
 .../data-sources/organization.md | 10 +-
 .../data-sources/organizations.md | 10 +-
 .../private_endpoint_regional_mode.md | 10 +-
 .../data-sources/privatelink_endpoint.md | 10 +-
 .../privatelink_endpoint_service.md | 10 +-
 ..._service_data_federation_online_archive.md | 8 -
 ...service_data_federation_online_archives.md | 8 -
 ...privatelink_endpoint_service_serverless.md | 11 +-
 .../privatelink_endpoints_service_adl.md | 10 +-
 ...rivatelink_endpoints_service_serverless.md | 11 +-
 .../data-sources/project.md | 8 -
 .../data-sources/project_api_key.md | 8 -
 .../data-sources/project_api_keys.md | 10 +-
 .../data-sources/project_invitation.md | 8 -
 .../data-sources/project_ip_access_list.md | 8 -
 .../data-sources/projects.md | 10 +-
 .../data-sources/push_based_log_export.md | 13 +-
 .../data-sources/roles_org_id.md | 8 -
 .../data-sources/search_deployment.md | 9 -
 .../data-sources/search_index.md | 14 +-
 .../data-sources/search_indexes.md | 15 +-
 .../data-sources/serverless_instance.md | 10 +-
 .../data-sources/serverless_instances.md | 10 +-
 .../data-sources/stream_connection.md | 8 -
 .../data-sources/stream_connections.md | 8 -
 .../data-sources/stream_instance.md | 8 -
 .../data-sources/stream_instances.md | 8 -
 .../data-sources/team.md | 8 -
 .../data-sources/teams.md | 8 +-
 .../data-sources/third_party_integration.md | 10 +-
 .../data-sources/third_party_integrations.md | 10 +-
 .../x509_authentication_database_user.md | 10 +-
 .../guides/0.6.0-upgrade-guide.md | 8 +-
 .../guides/0.8.0-upgrade-guide.md | 8 +-
 .../guides/0.8.2-upgrade-guide.md | 9 +-
 .../guides/0.9.0-upgrade-guide.md | 8 +-
 .../guides/0.9.1-upgrade-guide.md | 7 +-
 .../guides/1.0.0-upgrade-guide.md | 7 +-
 .../guides/1.0.1-upgrade-guide.md | 7 +-
 .../guides/1.1.0-upgrade-guide.md | 7 +-
 .../guides/1.10.0-upgrade-guide.md | 7 +-
 .../guides/1.11.0-upgrade-guide.md | 6 +-
 .../guides/1.12.0-upgrade-guide.md | 6 +-
 .../guides/1.13.0-upgrade-guide.md | 6 +-
 .../guides/1.14.0-upgrade-guide.md | 6 +-
 .../guides/1.15.0-upgrade-guide.md | 6 +-
 .../guides/1.16.0-upgrade-guide.md | 6 +-
 .../guides/1.17.0-upgrade-guide.md | 6 +-
 .../guides/1.2.0-upgrade-guide.md | 7 +-
 .../guides/1.3.0-upgrade-guide.md | 7 +-
 .../guides/1.4.0-upgrade-guide.md | 7 +-
 .../guides/1.5.0-upgrade-guide.md | 7 +-
 .../guides/1.6.0-upgrade-guide.md | 7 +-
 .../guides/1.7.0-upgrade-guide.md | 11 +-
 .../guides/1.8.0-upgrade-guide.md | 7 +-
 .../guides/1.9.0-upgrade-guide.md | 7 +-
 ...ogrammatic-API-Key-upgrade-guide-1.10.0.md | 7 +-
 .../docs/index.html.markdown => docs/index.md | 8 -
 .../resources/access_list_api_key.md | 8 -
 .../resources/advanced_cluster.md | 16 +-
 .../resources/alert_configuration.md | 8 -
 .../resources/api_key.md | 8 -
 .../resources/auditing.md | 8 -
 .../resources/backup_compliance_policy.md | 42 +-
 .../resources/cloud_backup_schedule.md | 132 +++---
 .../resources/cloud_backup_snapshot.md | 64 +--
 .../cloud_backup_snapshot_export_bucket.md | 11 +-
 .../cloud_backup_snapshot_export_job.md | 11 +-
 .../cloud_backup_snapshot_restore_job.md | 154 +++----
 .../resources/cloud_provider_access.md | 8 -
 .../resources/cloud_provider_snapshot.md | 64 +--
 .../cloud_provider_snapshot_backup_policy.md | 130 +++---
 .../cloud_provider_snapshot_restore_job.md | 124 +++---
 .../resources/cluster.md | 8 -
 .../resources/cluster_outage_simulation.md | 8 -
 .../resources/custom_db_role.md | 8 -
 .../custom_dns_configuration_cluster_aws.md | 8 -
 .../resources/data_lake_pipeline.md | 35 +-
 .../resources/database_user.md | 8 -
 .../resources/encryption_at_rest.md | 41 +-
 .../resources/event_trigger.md | 8 -
 .../resources/federated_database_instance.md | 8 -
 .../resources/federated_query_limit.md | 8 -
 .../federated_settings_identity_provider.md | 8 -
 .../federated_settings_org_config.md | 9 -
 .../federated_settings_org_role_mapping.md | 8 -
 .../resources/global_cluster_config.md | 9 -
 .../resources/ldap_configuration.md | 10 +-
 .../resources/ldap_verify.md | 36 +-
 .../resources/maintenance_window.md | 12 +-
 .../resources/network_container.md | 8 -
 .../resources/network_peering.md | 175 ++++----
 .../resources/online_archive.md | 8 -
 .../resources/org_invitation.md | 8 -
 .../resources/organization.md | 8 -
 .../private_endpoint_regional_mode.md | 8 -
 .../resources/privatelink_endpoint.md | 8 -
 .../privatelink_endpoint_serverless.md | 9 -
 .../resources/privatelink_endpoint_service.md | 8 -
 ..._service_data_federation_online_archive.md | 8 -
 ...privatelink_endpoint_service_serverless.md | 9 -
 .../resources/project.md | 8 -
 .../resources/project_api_key.md | 8 -
 .../resources/project_invitation.md | 8 -
 .../resources/project_ip_access_list.md | 8 -
 .../resources/push_based_log_export.md | 20 +-
 .../resources/search_deployment.md | 9 -
 .../resources/search_index.md | 69 ++--
 .../resources/serverless_instance.md | 8 -
 .../resources/stream_connection.md | 8 -
 .../resources/stream_instance.md | 8 -
 .../resources/team.md | 8 -
 .../resources/teams.md | 8 +-
 .../resources/third_party_integration.md | 8 -
 .../x509_authentication_database_user.md | 8 -
 .../troubleshooting.md | 8 -
 .../global-cluster/README.md | 2 +-
 .../multi-cloud/README.md | 2 +-
 .../versions.tf | 2 +-
 .../create-and-assign-pak/versions.tf | 2 +-
 .../versions.tf | 2 +-
 .../main.tf | 30 +-
 .../point-in-time/main.tf | 38 +-
 .../atlas_cluster.tf | 34 +-
 examples/mongodbatlas_database_user/main.tf | 2 +-
 .../aws/atlas-cluster/README.md | 10 +-
 .../aws/atlas-cluster/main.tf | 19 +-
 .../aws/multi-region-cluster/README.MD | 2 +-
 .../azure/atlas.tf | 31 +-
 .../azure/outputs.tf | 2 +-
 .../mongodbatlas_network_peering/aws/main.tf | 34 +-
 .../azure/atlas.tf | 35 +-
 .../azure/variables.tf | 3 -
 .../gcp/cluster.tf | 52 +--
 examples/mongodbatlas_online_archive/main.tf | 11 +-
 .../aws/cluster/README.md | 4 +-
 .../aws/cluster/atlas-cluster.tf | 32 +-
 .../aws/cluster/output.tf | 2 +-
 .../azure/main.tf | 12 +-
 .../azure/main.tf | 12 +-
 .../versions.tf | 4 +-
 .../third-party-integration.tf | 1 -
 examples/starter/Readme.md | 4 +-
 examples/starter/atlas_cluster.tf | 33 +-
 examples/starter/variables.tf | 4 -
 go.mod | 27 +-
 go.sum | 55 +--
 .../common/conversion/encode_state_test.go | 10 +-
 .../advancedcluster/model_advanced_cluster.go | 8 +-
 ...ce_advanced_cluster_state_upgrader_test.go | 4 +-
 .../resource_advanced_cluster_test.go | 10 +-
 .../resource_backup_compliance_policy_test.go | 104 ++---
 ...ce_cloud_backup_schedule_migration_test.go | 4 +-
 .../resource_cloud_backup_schedule_test.go | 146 +++----
 .../model_cloud_backup_snapshot_test.go | 6 +-
 ...ce_cloud_backup_snapshot_migration_test.go | 4 +-
 .../resource_cloud_backup_snapshot_test.go | 20 +-
 ...e_cloud_backup_snapshot_export_job_test.go | 2 +-
 ..._cloud_backup_snapshot_restore_job_test.go | 72 ++--
 .../service/cluster/resource_cluster_test.go | 47 ++-
 ...luster_outage_simulation_migration_test.go | 54 +--
 ...resource_cluster_outage_simulation_test.go | 115 +++---
 .../eventtrigger/resource_event_trigger.go | 8 +-
 ...source_federated_database_instance_test.go | 71 ++--
 ...ce_global_cluster_config_migration_test.go | 2 +-
 .../resource_global_cluster_config_test.go | 69 +---
 .../resource_ldap_configuration_test.go | 55 ++-
 .../resource_online_archive_migration_test.go | 19 +-
 .../resource_online_archive_test.go | 229 +++++------
 ...rce_private_endpoint_regional_mode_test.go | 93 ++---
 .../searchindex/data_source_search_index.go | 42 +-
 .../searchindex/data_source_search_indexes.go | 10 +-
 .../service/searchindex/model_search_index.go | 150 +++++++
 .../searchindex/resource_search_index.go | 164 ++------
 .../resource_search_index_migration_test.go | 1 +
 .../searchindex/resource_search_index_test.go | 146 ++++++-
 internal/testutil/acc/advanced_cluster.go | 45 --
 internal/testutil/acc/cluster.go | 216 +++++---
 internal/testutil/acc/config_cluster.go | 160 ++++++++
 internal/testutil/acc/config_cluster_test.go | 387 ++++++++++++++++++
 internal/testutil/acc/config_formatter.go | 112 ++++-
 internal/testutil/acc/pre_check.go | 8 +
 modules/examples/atlas-basic/main.tf | 23 --
 modules/examples/atlas-basic/versions.tf | 10 -
 modules/examples/sagemaker/main.tf | 26 --
 modules/examples/sagemaker/versions.tf | 10 -
 .../README.md | 21 -
 .../outputs.tf | 10 -
 .../sagemaker.tf | 280 -------------
 .../variables.tf | 75 ----
 .../versions.tf | 13 -
 .../terraform-mongodbatlas-basic/README.md | 42 --
 .../terraform-mongodbatlas-basic/aws-vpc.tf | 59 ---
 modules/terraform-mongodbatlas-basic/main.tf | 114 ------
 .../terraform-mongodbatlas-basic/outputs.tf | 1 -
 .../terraform-mongodbatlas-basic/variables.tf | 217 ----------
 .../terraform-mongodbatlas-basic/versions.tf | 14 -
 scripts/check-upgrade-guide-exists.sh | 2 +-
 scripts/generate-doc.sh | 30 +-
 scripts/tf-validate.sh | 37 +-
 scripts/tflint.sh | 34 --
 scripts/update-examples-reference-in-docs.sh | 2 +-
 scripts/update-tf-compatibility-matrix.sh | 2 +-
 templates/data-source.md.tmpl | 10 -
 .../control_plane_ip_addresses.md.tmpl | 11 +-
 .../push_based_log_export.md.tmpl | 11 +-
 .../data-sources/search_deployment.md.tmpl | 11 +-
 templates/resources.md.tmpl | 9 -
 .../resources/push_based_log_export.md.tmpl | 11 +-
 templates/resources/search_deployment.md.tmpl | 11 +-
 website/docs/guides/howto-guide.html.markdown | 107 -----
 305 files changed, 2866 insertions(+), 4386 deletions(-)
 create mode 100644 .changelog/2388.txt
 create mode 100644 .changelog/2394.txt
 create mode 100644 .changelog/2396.txt
 create mode 100644 .github/actionlint-matcher.json
 create mode 100644 .github/tflint-matcher.json
 rename website/docs/d/access_list_api_key.html.markdown => docs/data-sources/access_list_api_key.md (92%)
 rename website/docs/d/access_list_api_keys.html.markdown => docs/data-sources/access_list_api_keys.md (91%)
 rename website/docs/d/advanced_cluster.html.markdown => docs/data-sources/advanced_cluster.md (98%)
 rename website/docs/d/advanced_clusters.html.markdown => docs/data-sources/advanced_clusters.md (97%)
 rename website/docs/d/alert_configuration.html.markdown => docs/data-sources/alert_configuration.md (98%)
 rename website/docs/d/alert_configurations.html.markdown => docs/data-sources/alert_configurations.md (98%)
 rename website/docs/d/api_key.html.markdown => docs/data-sources/api_key.md (92%)
 rename website/docs/d/api_keys.html.markdown => docs/data-sources/api_keys.md (93%)
 rename website/docs/d/atlas_user.html.markdown => docs/data-sources/atlas_user.md (94%)
 rename website/docs/d/atlas_users.html.markdown => docs/data-sources/atlas_users.md (95%)
 rename website/docs/d/auditing.html.markdown => docs/data-sources/auditing.md (90%)
 rename website/docs/d/backup_compliance_policy.html.markdown => docs/data-sources/backup_compliance_policy.md (94%)
 rename website/docs/d/cloud_backup_schedule.html.markdown => docs/data-sources/cloud_backup_schedule.md (93%)
 rename website/docs/d/cloud_backup_snapshot.html.markdown => docs/data-sources/cloud_backup_snapshot.md (93%)
 rename website/docs/d/cloud_backup_snapshot_export_bucket.html.markdown => docs/data-sources/cloud_backup_snapshot_export_bucket.md (85%)
 rename website/docs/d/cloud_backup_snapshot_export_buckets.html.markdown => docs/data-sources/cloud_backup_snapshot_export_buckets.md (89%)
 rename website/docs/d/cloud_backup_snapshot_export_job.html.markdown => docs/data-sources/cloud_backup_snapshot_export_job.md (93%)
 rename website/docs/d/cloud_backup_snapshot_export_jobs.html.markdown => docs/data-sources/cloud_backup_snapshot_export_jobs.md (94%)
 rename website/docs/d/cloud_backup_snapshot_restore_job.html.markdown => docs/data-sources/cloud_backup_snapshot_restore_job.md (93%)
 rename website/docs/d/cloud_backup_snapshot_restore_jobs.html.markdown => docs/data-sources/cloud_backup_snapshot_restore_jobs.md (93%)
 rename website/docs/d/cloud_backup_snapshots.html.markdown => docs/data-sources/cloud_backup_snapshots.md (93%)
 rename website/docs/d/cloud_provider_access_setup.markdown => docs/data-sources/cloud_provider_access_setup.md (92%)
 rename website/docs/d/cloud_provider_shared_tier_restore_job.html.markdown => docs/data-sources/cloud_provider_shared_tier_restore_job.md (91%)
 rename website/docs/d/cloud_provider_shared_tier_restore_jobs.html.markdown => docs/data-sources/cloud_provider_shared_tier_restore_jobs.md (92%)
 rename website/docs/d/cloud_provider_shared_tier_snapshot.html.markdown => docs/data-sources/cloud_provider_shared_tier_snapshot.md (89%)
 rename website/docs/d/cloud_provider_shared_tier_snapshots.html.markdown => docs/data-sources/cloud_provider_shared_tier_snapshots.md (90%)
 rename website/docs/d/cloud_provider_snapshot.html.markdown => docs/data-sources/cloud_provider_snapshot.md (92%)
 rename website/docs/d/cloud_provider_snapshot_backup_policy.html.markdown => docs/data-sources/cloud_provider_snapshot_backup_policy.md (73%)
 rename website/docs/d/cloud_provider_snapshot_restore_job.html.markdown => docs/data-sources/cloud_provider_snapshot_restore_job.md (93%)
 rename website/docs/d/cloud_provider_snapshot_restore_jobs.html.markdown => docs/data-sources/cloud_provider_snapshot_restore_jobs.md (94%)
 rename website/docs/d/cloud_provider_snapshots.html.markdown => docs/data-sources/cloud_provider_snapshots.md (92%)
 rename website/docs/d/cluster.html.markdown => docs/data-sources/cluster.md (99%)
 rename
website/docs/d/cluster_outage_simulation.html.markdown => docs/data-sources/cluster_outage_simulation.md (92%) rename website/docs/d/clusters.html.markdown => docs/data-sources/clusters.md (99%) rename website/docs/d/control_plane_ip_addresses.html.markdown => docs/data-sources/control_plane_ip_addresses.md (88%) rename website/docs/d/custom_db_role.html.markdown => docs/data-sources/custom_db_role.md (90%) rename website/docs/d/custom_db_roles.html.markdown => docs/data-sources/custom_db_roles.md (89%) rename website/docs/d/custom_dns_configuration_cluster_aws.html.markdown => docs/data-sources/custom_dns_configuration_cluster_aws.md (80%) rename website/docs/d/data_lake_pipeline.html.markdown => docs/data-sources/data_lake_pipeline.md (90%) rename website/docs/d/data_lake_pipeline_run.html.markdown => docs/data-sources/data_lake_pipeline_run.md (89%) rename website/docs/d/data_lake_pipeline_runs.html.markdown => docs/data-sources/data_lake_pipeline_runs.md (88%) rename website/docs/d/data_lake_pipelines.html.markdown => docs/data-sources/data_lake_pipelines.md (96%) rename website/docs/d/database_user.html.markdown => docs/data-sources/database_user.md (94%) rename website/docs/d/database_users.html.markdown => docs/data-sources/database_users.md (94%) rename website/docs/d/event_trigger.html.markdown => docs/data-sources/event_trigger.md (94%) rename website/docs/d/event_triggers.html.markdown => docs/data-sources/event_triggers.md (93%) rename website/docs/d/federated_database_instance.html.markdown => docs/data-sources/federated_database_instance.md (97%) rename website/docs/d/federated_database_instances.html.markdown => docs/data-sources/federated_database_instances.md (97%) rename website/docs/d/federated_query_limit.html.markdown => docs/data-sources/federated_query_limit.md (89%) rename website/docs/d/federated_query_limits.html.markdown => docs/data-sources/federated_query_limits.md (89%) rename website/docs/d/federated_settings.html.markdown => 
docs/data-sources/federated_settings.md (85%) rename website/docs/d/federated_settings_identity_provider.html.markdown => docs/data-sources/federated_settings_identity_provider.md (95%) rename website/docs/d/federated_settings_identity_providers.html.markdown => docs/data-sources/federated_settings_identity_providers.md (95%) rename website/docs/d/federated_settings_org_config.html.markdown => docs/data-sources/federated_settings_org_config.md (94%) rename website/docs/d/federated_settings_org_configs.html.markdown => docs/data-sources/federated_settings_org_configs.md (94%) rename website/docs/d/federated_settings_org_role_mapping.html.markdown => docs/data-sources/federated_settings_org_role_mapping.md (90%) rename website/docs/d/federated_settings_org_role_mappings.html.markdown => docs/data-sources/federated_settings_org_role_mappings.md (91%) rename website/docs/d/global_cluster_config.html.markdown => docs/data-sources/global_cluster_config.md (94%) rename website/docs/d/ldap_configuration.html.markdown => docs/data-sources/ldap_configuration.md (92%) rename website/docs/d/ldap_verify.html.markdown => docs/data-sources/ldap_verify.md (79%) rename website/docs/d/maintenance_window.html.markdown => docs/data-sources/maintenance_window.md (90%) rename website/docs/d/network_container.html.markdown => docs/data-sources/network_container.md (93%) rename website/docs/d/network_containers.html.markdown => docs/data-sources/network_containers.md (92%) rename website/docs/d/network_peering.html.markdown => docs/data-sources/network_peering.md (94%) rename website/docs/d/network_peerings.html.markdown => docs/data-sources/network_peerings.md (93%) rename website/docs/d/online_archive.html.markdown => docs/data-sources/online_archive.md (96%) rename website/docs/d/online_archives.html.markdown => docs/data-sources/online_archives.md (96%) rename website/docs/d/org_invitation.html.markdown => docs/data-sources/org_invitation.md (90%) rename 
website/docs/d/organization.html.markdown => docs/data-sources/organization.md (84%) rename website/docs/d/organizations.html.markdown => docs/data-sources/organizations.md (86%) rename website/docs/d/private_endpoint_regional_mode.html.markdown => docs/data-sources/private_endpoint_regional_mode.md (71%) rename website/docs/d/privatelink_endpoint.html.markdown => docs/data-sources/privatelink_endpoint.md (89%) rename website/docs/d/privatelink_endpoint_service.html.markdown => docs/data-sources/privatelink_endpoint_service.md (94%) rename website/docs/d/privatelink_endpoint_service_data_federation_online_archive.html.markdown => docs/data-sources/privatelink_endpoint_service_data_federation_online_archive.md (88%) rename website/docs/d/privatelink_endpoint_service_data_federation_online_archives.html.markdown => docs/data-sources/privatelink_endpoint_service_data_federation_online_archives.md (88%) rename website/docs/d/privatelink_endpoint_service_serverless.html.markdown => docs/data-sources/privatelink_endpoint_service_serverless.md (92%) rename website/docs/d/privatelink_endpoints_service_adl.html.markdown => docs/data-sources/privatelink_endpoints_service_adl.md (86%) rename website/docs/d/privatelink_endpoints_service_serverless.html.markdown => docs/data-sources/privatelink_endpoints_service_serverless.md (91%) rename website/docs/d/project.html.markdown => docs/data-sources/project.md (97%) rename website/docs/d/project_api_key.html.markdown => docs/data-sources/project_api_key.md (92%) rename website/docs/d/project_api_keys.html.markdown => docs/data-sources/project_api_keys.md (90%) rename website/docs/d/project_invitation.html.markdown => docs/data-sources/project_invitation.md (90%) rename website/docs/d/project_ip_access_list.html.markdown => docs/data-sources/project_ip_access_list.md (94%) rename website/docs/d/projects.html.markdown => docs/data-sources/projects.md (95%) rename website/docs/d/push_based_log_export.html.markdown => 
docs/data-sources/push_based_log_export.md (90%) rename website/docs/d/roles_org_id.html.markdown => docs/data-sources/roles_org_id.md (80%) rename website/docs/d/search_deployment.html.markdown => docs/data-sources/search_deployment.md (92%) rename website/docs/d/search_index.html.markdown => docs/data-sources/search_index.md (83%) rename website/docs/d/search_indexes.html.markdown => docs/data-sources/search_indexes.md (83%) rename website/docs/d/serverless_instance.html.markdown => docs/data-sources/serverless_instance.md (92%) rename website/docs/d/serverless_instances.html.markdown => docs/data-sources/serverless_instances.md (89%) rename website/docs/d/stream_connection.html.markdown => docs/data-sources/stream_connection.md (93%) rename website/docs/d/stream_connections.html.markdown => docs/data-sources/stream_connections.md (93%) rename website/docs/d/stream_instance.html.markdown => docs/data-sources/stream_instance.md (91%) rename website/docs/d/stream_instances.html.markdown => docs/data-sources/stream_instances.md (93%) rename website/docs/d/team.html.markdown => docs/data-sources/team.md (91%) rename website/docs/d/teams.html.markdown => docs/data-sources/teams.md (71%) rename website/docs/d/third_party_integration.markdown => docs/data-sources/third_party_integration.md (88%) rename website/docs/d/third_party_integrations.markdown => docs/data-sources/third_party_integrations.md (88%) rename website/docs/d/x509_authentication_database_user.html.markdown => docs/data-sources/x509_authentication_database_user.md (91%) rename website/docs/guides/0.6.0-upgrade-guide.html.markdown => docs/guides/0.6.0-upgrade-guide.md (94%) rename website/docs/guides/0.8.0-upgrade-guide.html.markdown => docs/guides/0.8.0-upgrade-guide.md (97%) rename website/docs/guides/0.8.2-upgrade-guide.html.markdown => docs/guides/0.8.2-upgrade-guide.md (93%) rename website/docs/guides/0.9.0-upgrade-guide.html.markdown => docs/guides/0.9.0-upgrade-guide.md (83%) rename 
website/docs/guides/0.9.1-upgrade-guide.html.markdown => docs/guides/0.9.1-upgrade-guide.md (94%) rename website/docs/guides/1.0.0-upgrade-guide.html.markdown => docs/guides/1.0.0-upgrade-guide.md (98%) rename website/docs/guides/1.0.1-upgrade-guide.html.markdown => docs/guides/1.0.1-upgrade-guide.md (89%) rename website/docs/guides/1.1.0-upgrade-guide.html.markdown => docs/guides/1.1.0-upgrade-guide.md (96%) rename website/docs/guides/1.10.0-upgrade-guide.html.markdown => docs/guides/1.10.0-upgrade-guide.md (97%) rename website/docs/guides/1.11.0-upgrade-guide.html.markdown => docs/guides/1.11.0-upgrade-guide.md (90%) rename website/docs/guides/1.12.0-upgrade-guide.html.markdown => docs/guides/1.12.0-upgrade-guide.md (92%) rename website/docs/guides/1.13.0-upgrade-guide.html.markdown => docs/guides/1.13.0-upgrade-guide.md (88%) rename website/docs/guides/1.14.0-upgrade-guide.html.markdown => docs/guides/1.14.0-upgrade-guide.md (93%) rename website/docs/guides/1.15.0-upgrade-guide.html.markdown => docs/guides/1.15.0-upgrade-guide.md (96%) rename website/docs/guides/1.16.0-upgrade-guide.html.markdown => docs/guides/1.16.0-upgrade-guide.md (93%) rename website/docs/guides/1.17.0-upgrade-guide.html.markdown => docs/guides/1.17.0-upgrade-guide.md (94%) rename website/docs/guides/1.2.0-upgrade-guide.html.markdown => docs/guides/1.2.0-upgrade-guide.md (85%) rename website/docs/guides/1.3.0-upgrade-guide.html.markdown => docs/guides/1.3.0-upgrade-guide.md (84%) rename website/docs/guides/1.4.0-upgrade-guide.html.markdown => docs/guides/1.4.0-upgrade-guide.md (91%) rename website/docs/guides/1.5.0-upgrade-guide.html.markdown => docs/guides/1.5.0-upgrade-guide.md (88%) rename website/docs/guides/1.6.0-upgrade-guide.html.markdown => docs/guides/1.6.0-upgrade-guide.md (86%) rename website/docs/guides/1.7.0-upgrade-guide.html.markdown => docs/guides/1.7.0-upgrade-guide.md (71%) rename website/docs/guides/1.8.0-upgrade-guide.html.markdown => docs/guides/1.8.0-upgrade-guide.md 
(95%) rename website/docs/guides/1.9.0-upgrade-guide.html.markdown => docs/guides/1.9.0-upgrade-guide.md (85%) rename website/docs/guides/Programmatic-API-Key-upgrade-guide-1.10.0.html.markdown => docs/guides/Programmatic-API-Key-upgrade-guide-1.10.0.md (96%) rename website/docs/index.html.markdown => docs/index.md (97%) rename website/docs/r/access_list_api_key.html.markdown => docs/resources/access_list_api_key.md (92%) rename website/docs/r/advanced_cluster.html.markdown => docs/resources/advanced_cluster.md (98%) rename website/docs/r/alert_configuration.html.markdown => docs/resources/alert_configuration.md (98%) rename website/docs/r/api_key.html.markdown => docs/resources/api_key.md (91%) rename website/docs/r/auditing.html.markdown => docs/resources/auditing.md (92%) rename website/docs/r/backup_compliance_policy.html.markdown => docs/resources/backup_compliance_policy.md (95%) rename website/docs/r/cloud_backup_schedule.html.markdown => docs/resources/cloud_backup_schedule.md (84%) rename website/docs/r/cloud_backup_snapshot.html.markdown => docs/resources/cloud_backup_snapshot.md (77%) rename website/docs/r/cloud_backup_snapshot_export_bucket.html.markdown => docs/resources/cloud_backup_snapshot_export_bucket.md (80%) rename website/docs/r/cloud_backup_snapshot_export_job.html.markdown => docs/resources/cloud_backup_snapshot_export_job.md (94%) rename website/docs/r/cloud_backup_snapshot_restore_job.html.markdown => docs/resources/cloud_backup_snapshot_restore_job.md (73%) rename website/docs/r/cloud_provider_access.markdown => docs/resources/cloud_provider_access.md (96%) rename website/docs/r/cloud_provider_snapshot.html.markdown => docs/resources/cloud_provider_snapshot.md (71%) rename website/docs/r/cloud_provider_snapshot_backup_policy.html.markdown => docs/resources/cloud_provider_snapshot_backup_policy.md (62%) rename website/docs/r/cloud_provider_snapshot_restore_job.html.markdown => docs/resources/cloud_provider_snapshot_restore_job.md (69%) 
rename website/docs/r/cluster.html.markdown => docs/resources/cluster.md (99%) rename website/docs/r/cluster_outage_simulation.html.markdown => docs/resources/cluster_outage_simulation.md (94%) rename website/docs/r/custom_db_role.html.markdown => docs/resources/custom_db_role.md (96%) rename website/docs/r/custom_dns_configuration_cluster_aws.markdown => docs/resources/custom_dns_configuration_cluster_aws.md (84%) rename website/docs/r/data_lake_pipeline.html.markdown => docs/resources/data_lake_pipeline.md (90%) rename website/docs/r/database_user.html.markdown => docs/resources/database_user.md (98%) rename website/docs/r/encryption_at_rest.html.markdown => docs/resources/encryption_at_rest.md (92%) rename website/docs/r/event_trigger.html.markdown => docs/resources/event_trigger.md (97%) rename website/docs/r/federated_database_instance.html.markdown => docs/resources/federated_database_instance.md (98%) rename website/docs/r/federated_query_limit.html.markdown => docs/resources/federated_query_limit.md (92%) rename website/docs/r/federated_settings_identity_provider.html.markdown => docs/resources/federated_settings_identity_provider.md (94%) rename website/docs/r/federated_settings_org_config.html.markdown => docs/resources/federated_settings_org_config.md (93%) rename website/docs/r/federated_settings_org_role_mapping.html.markdown => docs/resources/federated_settings_org_role_mapping.md (90%) rename website/docs/r/global_cluster_config.html.markdown => docs/resources/global_cluster_config.md (95%) rename website/docs/r/ldap_configuration.html.markdown => docs/resources/ldap_configuration.md (89%) rename website/docs/r/ldap_verify.html.markdown => docs/resources/ldap_verify.md (83%) rename website/docs/r/maintenance_window.html.markdown => docs/resources/maintenance_window.md (89%) rename website/docs/r/network_container.html.markdown => docs/resources/network_container.md (96%) rename website/docs/r/network_peering.html.markdown => 
docs/resources/network_peering.md (81%) rename website/docs/r/online_archive.html.markdown => docs/resources/online_archive.md (97%) rename website/docs/r/org_invitation.html.markdown => docs/resources/org_invitation.md (93%) rename website/docs/r/organization.html.markdown => docs/resources/organization.md (95%) rename website/docs/r/private_endpoint_regional_mode.html.markdown => docs/resources/private_endpoint_regional_mode.md (96%) rename website/docs/r/privatelink_endpoint.html.markdown => docs/resources/privatelink_endpoint.md (96%) rename website/docs/r/privatelink_endpoint_serverless.html.markdown => docs/resources/privatelink_endpoint_serverless.md (92%) rename website/docs/r/privatelink_endpoint_service.html.markdown => docs/resources/privatelink_endpoint_service.md (97%) rename website/docs/r/privatelink_endpoint_service_data_federation_online_archive.html.markdown => docs/resources/privatelink_endpoint_service_data_federation_online_archive.md (91%) rename website/docs/r/privatelink_endpoint_service_serverless.html.markdown => docs/resources/privatelink_endpoint_service_serverless.md (96%) rename website/docs/r/project.html.markdown => docs/resources/project.md (98%) rename website/docs/r/project_api_key.html.markdown => docs/resources/project_api_key.md (88%) rename website/docs/r/project_invitation.html.markdown => docs/resources/project_invitation.md (93%) rename website/docs/r/project_ip_access_list.html.markdown => docs/resources/project_ip_access_list.md (94%) rename website/docs/r/push_based_log_export.html.markdown => docs/resources/push_based_log_export.md (94%) rename website/docs/r/search_deployment.html.markdown => docs/resources/search_deployment.md (95%) rename website/docs/r/search_index.html.markdown => docs/resources/search_index.md (95%) rename website/docs/r/serverless_instance.html.markdown => docs/resources/serverless_instance.md (96%) rename website/docs/r/stream_connection.html.markdown => docs/resources/stream_connection.md (95%) 
rename website/docs/r/stream_instance.html.markdown => docs/resources/stream_instance.md (93%) rename website/docs/r/team.html.markdown => docs/resources/team.md (91%) rename website/docs/r/teams.html.markdown => docs/resources/teams.md (61%) rename website/docs/r/third_party_integration.markdown => docs/resources/third_party_integration.md (92%) rename website/docs/r/x509_authentication_database_user.html.markdown => docs/resources/x509_authentication_database_user.md (95%) rename website/docs/troubleshooting.html.markdown => docs/troubleshooting.md (84%) create mode 100644 internal/service/searchindex/model_search_index.go create mode 100644 internal/testutil/acc/config_cluster.go create mode 100644 internal/testutil/acc/config_cluster_test.go delete mode 100644 modules/examples/atlas-basic/main.tf delete mode 100644 modules/examples/atlas-basic/versions.tf delete mode 100644 modules/examples/sagemaker/main.tf delete mode 100644 modules/examples/sagemaker/versions.tf delete mode 100644 modules/terraform-mongodbatlas-amazon-sagemaker-integration/README.md delete mode 100644 modules/terraform-mongodbatlas-amazon-sagemaker-integration/outputs.tf delete mode 100644 modules/terraform-mongodbatlas-amazon-sagemaker-integration/sagemaker.tf delete mode 100644 modules/terraform-mongodbatlas-amazon-sagemaker-integration/variables.tf delete mode 100644 modules/terraform-mongodbatlas-amazon-sagemaker-integration/versions.tf delete mode 100644 modules/terraform-mongodbatlas-basic/README.md delete mode 100644 modules/terraform-mongodbatlas-basic/aws-vpc.tf delete mode 100644 modules/terraform-mongodbatlas-basic/main.tf delete mode 100644 modules/terraform-mongodbatlas-basic/outputs.tf delete mode 100644 modules/terraform-mongodbatlas-basic/variables.tf delete mode 100644 modules/terraform-mongodbatlas-basic/versions.tf delete mode 100755 scripts/tflint.sh delete mode 100644 website/docs/guides/howto-guide.html.markdown diff --git a/.changelog/2388.txt b/.changelog/2388.txt new 
file mode 100644 index 0000000000..14807c8714 --- /dev/null +++ b/.changelog/2388.txt @@ -0,0 +1,11 @@ +```release-note:enhancement +resource/mongodbatlas_search_index: Adds attribute `stored_source` +``` + +```release-note:enhancement +data-source/mongodbatlas_search_index: Adds attribute `stored_source` +``` + +```release-note:enhancement +data-source/mongodbatlas_search_indexes: Adds attribute `stored_source` +``` diff --git a/.changelog/2394.txt b/.changelog/2394.txt new file mode 100644 index 0000000000..6afb5599ae --- /dev/null +++ b/.changelog/2394.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/mongodbatlas_search_index: Returns error if the `analyzers` attribute contains unknown fields +``` diff --git a/.changelog/2396.txt b/.changelog/2396.txt new file mode 100644 index 0000000000..5bb53f7fda --- /dev/null +++ b/.changelog/2396.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/mongodbatlas_advanced_cluster: Fixes `disk_iops` attribute for Azure cloud provider +``` diff --git a/.githooks/pre-commit b/.githooks/pre-commit index 44f4adf85a..56888b51ce 100755 --- a/.githooks/pre-commit +++ b/.githooks/pre-commit @@ -26,9 +26,3 @@ if [ -n "$STAGED_TF_FILES" ]; then echo "Checking the format of Terraform files" make tflint fi - -STAGED_WEBSITES_FILES=$(git diff --cached --name-only | grep "website/") -if [ -n "$STAGED_WEBSITES_FILES" ]; then - echo "Checking the format of website files" - make website-lint -fi diff --git a/.github/CODEOWNERS b/.github/CODEOWNERS index 6a9258efdb..7690854475 100644 --- a/.github/CODEOWNERS +++ b/.github/CODEOWNERS @@ -1,7 +1,6 @@ # Maintained by the MongoDB APIx-Integrations team * @mongodb/APIx-Integrations - # Changelog entries reviewed by Docs Cloud Team /.changelog/ @mongodb/docs-cloud-team -/website/ @mongodb/docs-cloud-team +/docs/ @mongodb/docs-cloud-team diff --git a/.github/ISSUE_TEMPLATE/Bug_Report.md b/.github/ISSUE_TEMPLATE/Bug_Report.md index d165962905..367d472c18 100644 --- 
a/.github/ISSUE_TEMPLATE/Bug_Report.md +++ b/.github/ISSUE_TEMPLATE/Bug_Report.md @@ -19,7 +19,7 @@ Our support will prioritise issues that contain all the required information tha ### Terraform CLI and Terraform MongoDB Atlas Provider Version -Please ensure your issue is reproducible on a supported Terraform version. You may review our [Terraform version compatibility matrix](https://github.com/mongodb/terraform-provider-mongodbatlas/blob/master/website/docs/index.html.markdown#hashicorp-terraform-version-compatibility-matrix) to know more. +Please ensure your issue is reproducible on a supported Terraform version. You may review our [Terraform version compatibility matrix](https://github.com/mongodb/terraform-provider-mongodbatlas/blob/master/docs/index.md#hashicorp-terraform-version-compatibility-matrix) to know more. diff --git a/website/docs/d/roles_org_id.html.markdown b/docs/data-sources/roles_org_id.md similarity index 80% rename from website/docs/d/roles_org_id.html.markdown rename to docs/data-sources/roles_org_id.md index 899e582696..be4e87b7de 100644 --- a/website/docs/d/roles_org_id.html.markdown +++ b/docs/data-sources/roles_org_id.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: roles_org_id" -sidebar_current: "docs-mongodbatlas-datasource-roles-org-id" -description: |- - Describes a Roles Org ID. ---- - # Data Source: mongodbatlas_roles_org_id `mongodbatlas_roles_org_id` describes a MongoDB Atlas Roles Org ID. This represents a Roles Org ID. 
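For context on the `mongodbatlas_roles_org_id` data source documented in the hunk above, a minimal usage sketch (resource and output names here are illustrative, not taken from the patch):

```terraform
# Look up the organization ID associated with the configured API keys.
data "mongodbatlas_roles_org_id" "test" {}

output "org_id" {
  value = data.mongodbatlas_roles_org_id.test.org_id
}
```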
diff --git a/website/docs/d/search_deployment.html.markdown b/docs/data-sources/search_deployment.md similarity index 92% rename from website/docs/d/search_deployment.html.markdown rename to docs/data-sources/search_deployment.md index 568211492b..92e24e3b98 100644 --- a/website/docs/d/search_deployment.html.markdown +++ b/docs/data-sources/search_deployment.md @@ -1,14 +1,5 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: mongodbatlas_search_deployment" -sidebar_current: "docs-mongodbatlas-datasource-search-deployment" -description: |- - "Provides a Search Deployment data source." ---- - # Data Source: mongodbatlas_search_deployment - `mongodbatlas_search_deployment` describes a search node deployment. ## Example Usages diff --git a/website/docs/d/search_index.html.markdown b/docs/data-sources/search_index.md similarity index 83% rename from website/docs/d/search_index.html.markdown rename to docs/data-sources/search_index.md index ebe0b7fc81..cd3bf0255f 100644 --- a/website/docs/d/search_index.html.markdown +++ b/docs/data-sources/search_index.md @@ -1,14 +1,6 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: search index" -sidebar_current: "docs-mongodbatlas-datasource-search-index" -description: |- -Describes a Search Index. ---- - # Data Source: mongodbatlas_search_index -`mongodbatlas_search_index` describe a single search indexes. This represents a single search index that have been created. +`mongodbatlas_search_index` describes a single search index. This represents a single search index that has been created. > **NOTE:** Groups and projects are synonymous terms. You may find `groupId` in the official documentation. @@ -45,8 +37,6 @@ data "mongodbatlas_search_index" "test" { * `synonyms.#.name` - Name of the [synonym mapping definition](https://docs.atlas.mongodb.com/reference/atlas-search/synonyms/#std-label-synonyms-ref). * `synonyms.#.source_collection` - Name of the source MongoDB collection for the synonyms. 
* `synonyms.#.analyzer` - Name of the [analyzer](https://docs.atlas.mongodb.com/reference/atlas-search/analyzers/#std-label-analyzers-ref) to use with this synonym mapping. - - - +* `stored_source` - String that can be "true" (store all fields), "false" (default, don't store any field), or a JSON string that contains the list of fields to store (include) or not store (exclude) on Atlas Search. To learn more, see [Stored Source Fields](https://www.mongodb.com/docs/atlas/atlas-search/stored-source-definition/). For more information see: [MongoDB Atlas API Reference.](https://docs.atlas.mongodb.com/atlas-search/) - [and MongoDB Atlas API - Search](https://docs.atlas.mongodb.com/reference/api/atlas-search/) Documentation for more information. diff --git a/website/docs/d/search_indexes.html.markdown b/docs/data-sources/search_indexes.md similarity index 83% rename from website/docs/d/search_indexes.html.markdown rename to docs/data-sources/search_indexes.md index 6f31eca1f1..abc56a6e0d 100644 --- a/website/docs/d/search_indexes.html.markdown +++ b/docs/data-sources/search_indexes.md @@ -1,14 +1,6 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: search indexes" -sidebar_current: "docs-mongodbatlas-datasource-search-indexes" -description: |- -Describes a Search Indexes. ---- - # Data Source: mongodbatlas_search_indexes -`mongodbatlas_search_indexes` describe all search indexes. This represents search indexes that have been created. +`mongodbatlas_search_indexes` describes all search indexes. This represents search indexes that have been created. > **NOTE:** Groups and projects are synonymous terms. You may find `groupId` in the official documentation. @@ -37,6 +29,7 @@ data "mongodbatlas_search_indexes" "test" { ### Results +* `index_id` - The unique identifier of the Atlas Search index. * `name` - Name of the index. * `status` - Current status of the index. 
* `analyzer` - [Analyzer](https://docs.atlas.mongodb.com/reference/atlas-search/analyzers/#std-label-analyzers-ref) to use when creating the index. @@ -50,8 +43,6 @@ data "mongodbatlas_search_indexes" "test" { * `synonyms.#.name` - Name of the [synonym mapping definition](https://docs.atlas.mongodb.com/reference/atlas-search/synonyms/#std-label-synonyms-ref). * `synonyms.#.source_collection` - Name of the source MongoDB collection for the synonyms. * `synonyms.#.analyzer` - Name of the [analyzer](https://docs.atlas.mongodb.com/reference/atlas-search/analyzers/#std-label-analyzers-ref) to use with this synonym mapping. - - - +* `stored_source` - String that can be "true" (store all fields), "false" (default, don't store any field), or a JSON string that contains the list of fields to store (include) or not store (exclude) on Atlas Search. To learn more, see [Stored Source Fields](https://www.mongodb.com/docs/atlas/atlas-search/stored-source-definition/). For more information see: [MongoDB Atlas API Reference.](https://docs.atlas.mongodb.com/atlas-search/) - [and MongoDB Atlas API - Search](https://docs.atlas.mongodb.com/reference/api/atlas-search/) Documentation for more information. diff --git a/website/docs/d/serverless_instance.html.markdown b/docs/data-sources/serverless_instance.md similarity index 92% rename from website/docs/d/serverless_instance.html.markdown rename to docs/data-sources/serverless_instance.md index de3257683d..48a0be84d7 100644 --- a/website/docs/d/serverless_instance.html.markdown +++ b/docs/data-sources/serverless_instance.md @@ -1,14 +1,6 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: serverless instance" -sidebar_current: "docs-mongodbatlas-datasource-serverless-instance" -description: |- -Provides a Serverless Instance. ---- - # Data Source: mongodbatlas_serverless_instance -`mongodbatlas_serverless_instance` describe a single serverless instance. This represents a single serverless instance that have been created. 
+`mongodbatlas_serverless_instance` describes a single serverless instance. This represents a single serverless instance that has been created. > **NOTE:** Serverless instances do not support some Atlas features at this time. For a full list of unsupported features, see [Serverless Instance Limitations](https://docs.atlas.mongodb.com/reference/serverless-instance-limitations/). diff --git a/website/docs/d/serverless_instances.html.markdown b/docs/data-sources/serverless_instances.md similarity index 89% rename from website/docs/d/serverless_instances.html.markdown rename to docs/data-sources/serverless_instances.md index 403a1a94f5..5dfb38816f 100644 --- a/website/docs/d/serverless_instances.html.markdown +++ b/docs/data-sources/serverless_instances.md @@ -1,14 +1,6 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: serverless instances" -sidebar_current: "docs-mongodbatlas-datasource-serverless-instances" -description: |- -Describes a Serverless Instances. ---- - # Data Source: mongodbatlas_serverless_instances -`mongodbatlas_serverless_instances` describe all serverless instances. This represents serverless instances that have been created for the specified group id. +`mongodbatlas_serverless_instances` describes all serverless instances. This represents serverless instances that have been created for the specified group id. > **NOTE:** Serverless instances do not support some Atlas features at this time. For a full list of unsupported features, see [Serverless Instance Limitations](https://docs.atlas.mongodb.com/reference/serverless-instance-limitations/). 
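The `stored_source` attribute added to the search index docs above accepts `"true"`, `"false"`, or a JSON object listing fields to include or exclude. A minimal sketch of how it might be set on the resource (project, cluster, and collection names are illustrative, not from this patch):

```terraform
resource "mongodbatlas_search_index" "example" {
  project_id       = var.project_id
  cluster_name     = "example-cluster"
  database         = "sample_mflix"
  collection_name  = "movies"
  name             = "example-index"
  mappings_dynamic = true

  # Store only selected fields on Atlas Search; the plain strings
  # "true" (store all) and "false" (store none) are also accepted.
  stored_source = jsonencode({
    include = ["title", "year"]
  })
}
```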
diff --git a/website/docs/d/stream_connection.html.markdown b/docs/data-sources/stream_connection.md similarity index 93% rename from website/docs/d/stream_connection.html.markdown rename to docs/data-sources/stream_connection.md index 33221592d3..242837f186 100644 --- a/website/docs/d/stream_connection.html.markdown +++ b/docs/data-sources/stream_connection.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: stream connection" -sidebar_current: "docs-mongodbatlas-datasource-stream-connection" -description: |- - Describes an Atlas Stream Processing connection. ---- - # Data Source: mongodbatlas_stream_connection `mongodbatlas_stream_connection` describes a stream connection. diff --git a/website/docs/d/stream_connections.html.markdown b/docs/data-sources/stream_connections.md similarity index 93% rename from website/docs/d/stream_connections.html.markdown rename to docs/data-sources/stream_connections.md index b80a31d912..6bdbfe2261 100644 --- a/website/docs/d/stream_connections.html.markdown +++ b/docs/data-sources/stream_connections.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: stream connections" -sidebar_current: "docs-mongodbatlas-datasource-stream-connections" -description: |- - Describes all connections of the Atlas Stream Processing instance for the specified project. ---- - # Data Source: mongodbatlas_stream_connections `mongodbatlas_stream_connections` describes all connections of a stream instance for the specified project. 
diff --git a/website/docs/d/stream_instance.html.markdown b/docs/data-sources/stream_instance.md similarity index 91% rename from website/docs/d/stream_instance.html.markdown rename to docs/data-sources/stream_instance.md index a848fc46f4..8da78e5110 100644 --- a/website/docs/d/stream_instance.html.markdown +++ b/docs/data-sources/stream_instance.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: stream instance" -sidebar_current: "docs-mongodbatlas-datasource-stream-instance" -description: |- - Describes a Stream Instance. ---- - # Data Source: mongodbatlas_stream_instance `mongodbatlas_stream_instance` describes a stream instance. diff --git a/website/docs/d/stream_instances.html.markdown b/docs/data-sources/stream_instances.md similarity index 93% rename from website/docs/d/stream_instances.html.markdown rename to docs/data-sources/stream_instances.md index 0c9197aa8a..f02a878763 100644 --- a/website/docs/d/stream_instances.html.markdown +++ b/docs/data-sources/stream_instances.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: stream instances" -sidebar_current: "docs-mongodbatlas-datasource-stream-instances" -description: |- - Describes stream instances of a project. ---- - # Data Source: mongodbatlas_stream_instances `mongodbatlas_stream_instances` describes the stream instances defined in a project. diff --git a/website/docs/d/team.html.markdown b/docs/data-sources/team.md similarity index 91% rename from website/docs/d/team.html.markdown rename to docs/data-sources/team.md index 96547da436..a15e880541 100644 --- a/website/docs/d/team.html.markdown +++ b/docs/data-sources/team.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: team" -sidebar_current: "docs-mongodbatlas-datasource-team" -description: |- - Describes a Team. ---- - # Data Source: mongodbatlas_team `mongodbatlas_team` describes a Team. The resource requires your Organization ID, Project ID and Team ID. 
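The `mongodbatlas_team` page above notes that the data source requires your Organization ID and Team ID; a minimal, hypothetical sketch (placeholder IDs, to be checked against the data source's argument reference) would be:

```terraform
# Hypothetical lookup of a team by organization ID and team ID.
data "mongodbatlas_team" "example" {
  org_id  = "<ORG_ID>"  # placeholder: your Atlas organization ID
  team_id = "<TEAM_ID>" # placeholder: the team's ID
}

# Expose the resolved team name for use elsewhere in the configuration.
output "team_name" {
  value = data.mongodbatlas_team.example.name
}
```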
diff --git a/website/docs/d/teams.html.markdown b/docs/data-sources/teams.md similarity index 71% rename from website/docs/d/teams.html.markdown rename to docs/data-sources/teams.md index 0aa4ede9c8..139c3ff5f0 100644 --- a/website/docs/d/teams.html.markdown +++ b/docs/data-sources/teams.md @@ -1,11 +1,9 @@ --- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: teams" -sidebar_current: "docs-mongodbatlas-datasource-teams" -description: |- - Describes a Team. +subcategory: "Deprecated" --- +**WARNING:** This data source is deprecated. Use `mongodbatlas_team` instead. + # Data Source: mongodbatlas_teams This data source is deprecated. Please transition to using `mongodbatlas_team`, which defines the same underlying implementation, aligning the name of the data source with the implementation which fetches a single team. diff --git a/website/docs/d/third_party_integration.markdown b/docs/data-sources/third_party_integration.md similarity index 88% rename from website/docs/d/third_party_integration.markdown rename to docs/data-sources/third_party_integration.md index 0d4894f046..bd2c9c25aa 100644 --- a/website/docs/d/third_party_integration.markdown +++ b/docs/data-sources/third_party_integration.md @@ -1,14 +1,6 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: third_party_integration" -sidebar_current: "docs-mongodbatlas-datasource-third-party-integration" -description: |- - Describes all Third-Party Integration Settings in the project. ---- - # Data Source: mongodbatlas_third_party_integration -`mongodbatlas_third_party_integration` describe a Third-Party Integration Settings for the given type. +`mongodbatlas_third_party_integration` describes the Third-Party Integration Settings for the given type. -> **NOTE:** Groups and projects are synonymous terms. You may find `groupId` in the official documentation.
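Since the `mongodbatlas_third_party_integration` page above says the data source fetches settings "for the given type", a hypothetical sketch of such a lookup (placeholder project ID; `DATADOG` is one of the types named in these docs) might look like:

```terraform
# Hypothetical lookup of the Datadog integration settings for a project.
data "mongodbatlas_third_party_integration" "example" {
  project_id = "<PROJECT_ID>" # placeholder: your Atlas project ID
  type       = "DATADOG"      # integration type, as named in the doc above
}
```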
diff --git a/website/docs/d/third_party_integrations.markdown b/docs/data-sources/third_party_integrations.md similarity index 88% rename from website/docs/d/third_party_integrations.markdown rename to docs/data-sources/third_party_integrations.md index caccd881bc..c177cc3490 100644 --- a/website/docs/d/third_party_integrations.markdown +++ b/docs/data-sources/third_party_integrations.md @@ -1,14 +1,6 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: third_party_integrations" -sidebar_current: "docs-mongodbatlas-datasource-third-party-integrations" -description: |- - Describes all Third-Party Integration Settings in the project. ---- - # Data Source: mongodbatlas_third_party_integrations -`mongodbatlas_third_party_integrations` describe all Third-Party Integration Settings. This represents two Third-Party services `PAGER_DUTY` and `DATADOG` +`mongodbatlas_third_party_integrations` describes all Third-Party Integration Settings. This represents two Third-Party services `PAGER_DUTY` and `DATADOG` applied across the project. -> **NOTE:** Groups and projects are synonymous terms. You may find `groupId` in the official documentation. diff --git a/website/docs/d/x509_authentication_database_user.html.markdown b/docs/data-sources/x509_authentication_database_user.md similarity index 91% rename from website/docs/d/x509_authentication_database_user.html.markdown rename to docs/data-sources/x509_authentication_database_user.md index a08f1eb1a5..e3a9509289 100644 --- a/website/docs/d/x509_authentication_database_user.html.markdown +++ b/docs/data-sources/x509_authentication_database_user.md @@ -1,14 +1,6 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: x509_authentication_database_user" -sidebar_current: "docs-mongodbatlas-datasource-x509-authentication-database-user" -description: |- - Describes a Custom DB Role. 
---- - # Data Source: mongodbatlas_x509_authentication_database_user -`mongodbatlas_x509_authentication_database_user` describe a X509 Authentication Database User. This represents a X509 Authentication Database User. +`mongodbatlas_x509_authentication_database_user` describes an X509 Authentication Database User. This represents an X509 Authentication Database User. -> **NOTE:** Groups and projects are synonymous terms. You may find group_id in the official documentation. diff --git a/website/docs/guides/0.6.0-upgrade-guide.html.markdown b/docs/guides/0.6.0-upgrade-guide.md similarity index 94% rename from website/docs/guides/0.6.0-upgrade-guide.html.markdown rename to docs/guides/0.6.0-upgrade-guide.md index 2dd24ae9d9..f8afc7ba5b 100644 --- a/website/docs/guides/0.6.0-upgrade-guide.html.markdown +++ b/docs/guides/0.6.0-upgrade-guide.md @@ -1,10 +1,6 @@ --- -layout: "mongodbatlas" -page_title: "MongoDB Atlas Provider 0.6.0: Upgrade Guide" -sidebar_current: "docs-mongodbatlas-guides-060-upgrade-guide" -description: |- - MongoDB Atlas Provider 0.6.0: Upgrade Guide - +page_title: "Upgrade Guide 0.6.0" +subcategory: "Older Guides" --- # MongoDB Atlas Provider 0.6.0: Upgrade Guide diff --git a/website/docs/guides/0.8.0-upgrade-guide.html.markdown b/docs/guides/0.8.0-upgrade-guide.md similarity index 97% rename from website/docs/guides/0.8.0-upgrade-guide.html.markdown rename to docs/guides/0.8.0-upgrade-guide.md index fd782f6830..4ed1e4974f 100644 --- a/website/docs/guides/0.8.0-upgrade-guide.html.markdown +++ b/docs/guides/0.8.0-upgrade-guide.md @@ -1,10 +1,6 @@ --- -layout: "mongodbatlas" -page_title: "MongoDB Atlas Provider 0.8.0: Upgrade and Information Guide" -sidebar_current: "docs-mongodbatlas-guides-080-upgrade-guide" -description: |- - MongoDB Atlas Provider 0.8.0: Upgrade and Information Guide - +page_title: "Upgrade Guide 0.8.0" +subcategory: "Older Guides" --- # MongoDB Atlas Provider v0.8.0: Upgrade and Information Guide diff --git
a/website/docs/guides/0.8.2-upgrade-guide.html.markdown b/docs/guides/0.8.2-upgrade-guide.md similarity index 93% rename from website/docs/guides/0.8.2-upgrade-guide.html.markdown rename to docs/guides/0.8.2-upgrade-guide.md index d0741793ed..3b152e9d4f 100644 --- a/website/docs/guides/0.8.2-upgrade-guide.html.markdown +++ b/docs/guides/0.8.2-upgrade-guide.md @@ -1,4 +1,11 @@ -## 0.8.2 Upgrade Guide for Privatelink users +--- +page_title: "Upgrade Guide 0.8.2" +subcategory: "Older Guides" +--- + +# MongoDB Atlas Provider v0.8.2: Upgrade and Information Guide + +## Upgrade Guide for Privatelink users ### Resources are impacted that were created with versions ***v0.8.0/v0.8.1*** ### Fixed in [#398](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/398) diff --git a/website/docs/guides/0.9.0-upgrade-guide.html.markdown b/docs/guides/0.9.0-upgrade-guide.md similarity index 83% rename from website/docs/guides/0.9.0-upgrade-guide.html.markdown rename to docs/guides/0.9.0-upgrade-guide.md index 2afa51cd85..9337516895 100644 --- a/website/docs/guides/0.9.0-upgrade-guide.html.markdown +++ b/docs/guides/0.9.0-upgrade-guide.md @@ -1,10 +1,6 @@ --- -layout: "mongodbatlas" -page_title: "MongoDB Atlas Provider 0.9.0: Upgrade and Information Guide" -sidebar_current: "docs-mongodbatlas-guides-090-upgrade-guide" -description: |- - MongoDB Atlas Provider 0.9.0: Upgrade and Information Guide - +page_title: "Upgrade Guide 0.9.0" +subcategory: "Older Guides" --- # MongoDB Atlas Provider v0.9.0: Upgrade and Information Guide diff --git a/website/docs/guides/0.9.1-upgrade-guide.html.markdown b/docs/guides/0.9.1-upgrade-guide.md similarity index 94% rename from website/docs/guides/0.9.1-upgrade-guide.html.markdown rename to docs/guides/0.9.1-upgrade-guide.md index 7d6dfcb342..093bd52671 100644 --- a/website/docs/guides/0.9.1-upgrade-guide.html.markdown +++ b/docs/guides/0.9.1-upgrade-guide.md @@ -1,9 +1,6 @@ --- -layout: "mongodbatlas" -page_title: "MongoDB Atlas Provider 
0.9.1: Upgrade and Information Guide" -sidebar_current: "docs-mongodbatlas-guides-091-upgrade-guide" -description: |- - MongoDB Atlas Provider 0.9.1: Upgrade and Information Guide +page_title: "Upgrade Guide 0.9.1" +subcategory: "Older Guides" --- # MongoDB Atlas Provider v0.9.1: Upgrade and Information Guide diff --git a/website/docs/guides/1.0.0-upgrade-guide.html.markdown b/docs/guides/1.0.0-upgrade-guide.md similarity index 98% rename from website/docs/guides/1.0.0-upgrade-guide.html.markdown rename to docs/guides/1.0.0-upgrade-guide.md index 76daa8ce30..e87b53d822 100644 --- a/website/docs/guides/1.0.0-upgrade-guide.html.markdown +++ b/docs/guides/1.0.0-upgrade-guide.md @@ -1,9 +1,6 @@ --- -layout: "mongodbatlas" -page_title: "MongoDB Atlas Provider 1.0.0: Upgrade and Information Guide" -sidebar_current: "docs-mongodbatlas-guides-100-upgrade-guide" -description: |- -MongoDB Atlas Provider 0.1.0: Upgrade and Information Guide +page_title: "Upgrade Guide 1.0.0" +subcategory: "Older Guides" --- # MongoDB Atlas Provider 1.0.0: Upgrade and Information Guide diff --git a/website/docs/guides/1.0.1-upgrade-guide.html.markdown b/docs/guides/1.0.1-upgrade-guide.md similarity index 89% rename from website/docs/guides/1.0.1-upgrade-guide.html.markdown rename to docs/guides/1.0.1-upgrade-guide.md index 077af1caaa..9f50f47b53 100644 --- a/website/docs/guides/1.0.1-upgrade-guide.html.markdown +++ b/docs/guides/1.0.1-upgrade-guide.md @@ -1,9 +1,6 @@ --- -layout: "mongodbatlas" -page_title: "MongoDB Atlas Provider 1.0.1: Upgrade and Information Guide" -sidebar_current: "docs-mongodbatlas-guides-101-upgrade-guide" -description: |- -MongoDB Atlas Provider 1.0.1: Upgrade and Information Guide +page_title: "Upgrade Guide 1.0.1" +subcategory: "Older Guides" --- # MongoDB Atlas Provider v1.0.1: Upgrade and Information Guide diff --git a/website/docs/guides/1.1.0-upgrade-guide.html.markdown b/docs/guides/1.1.0-upgrade-guide.md similarity index 96% rename from 
website/docs/guides/1.1.0-upgrade-guide.html.markdown rename to docs/guides/1.1.0-upgrade-guide.md index 0e479426d0..fdc673dc0d 100644 --- a/website/docs/guides/1.1.0-upgrade-guide.html.markdown +++ b/docs/guides/1.1.0-upgrade-guide.md @@ -1,9 +1,6 @@ --- -layout: "mongodbatlas" -page_title: "MongoDB Atlas Provider 1.1.0: Upgrade and Information Guide" -sidebar_current: "docs-mongodbatlas-guides-110-upgrade-guide" -description: |- -MongoDB Atlas Provider 1.1.0: Upgrade and Information Guide +page_title: "Upgrade Guide 1.1.0" +subcategory: "Older Guides" --- # MongoDB Atlas Provider 1.1.0/1.1.1: Upgrade and Information Guide diff --git a/website/docs/guides/1.10.0-upgrade-guide.html.markdown b/docs/guides/1.10.0-upgrade-guide.md similarity index 97% rename from website/docs/guides/1.10.0-upgrade-guide.html.markdown rename to docs/guides/1.10.0-upgrade-guide.md index 74c6ca645b..a35cfaeee3 100644 --- a/website/docs/guides/1.10.0-upgrade-guide.html.markdown +++ b/docs/guides/1.10.0-upgrade-guide.md @@ -1,9 +1,6 @@ --- -layout: "mongodbatlas" -page_title: "MongoDB Atlas Provider 1.10.0: Upgrade and Information Guide" -sidebar_current: "docs-mongodbatlas-guides-1100-upgrade-guide" -description: |- -MongoDB Atlas Provider 1.10.0: Upgrade and Information Guide +page_title: "Upgrade Guide 1.10.0" +subcategory: "Older Guides" --- # MongoDB Atlas Provider 1.10.0: Upgrade and Information Guide diff --git a/website/docs/guides/1.11.0-upgrade-guide.html.markdown b/docs/guides/1.11.0-upgrade-guide.md similarity index 90% rename from website/docs/guides/1.11.0-upgrade-guide.html.markdown rename to docs/guides/1.11.0-upgrade-guide.md index 5437cfb71e..83597a9f8d 100644 --- a/website/docs/guides/1.11.0-upgrade-guide.html.markdown +++ b/docs/guides/1.11.0-upgrade-guide.md @@ -1,9 +1,5 @@ --- -layout: "mongodbatlas" -page_title: "MongoDB Atlas Provider 1.11.0: Upgrade and Information Guide" -sidebar_current: "docs-mongodbatlas-guides-1110-upgrade-guide" -description: |- -MongoDB 
Atlas Provider 1.11.0: Upgrade and Information Guide +page_title: "Upgrade Guide 1.11.0" --- # MongoDB Atlas Provider 1.11.0: Upgrade and Information Guide diff --git a/website/docs/guides/1.12.0-upgrade-guide.html.markdown b/docs/guides/1.12.0-upgrade-guide.md similarity index 92% rename from website/docs/guides/1.12.0-upgrade-guide.html.markdown rename to docs/guides/1.12.0-upgrade-guide.md index ebccb06159..98f63a3aea 100644 --- a/website/docs/guides/1.12.0-upgrade-guide.html.markdown +++ b/docs/guides/1.12.0-upgrade-guide.md @@ -1,9 +1,5 @@ --- -layout: "mongodbatlas" -page_title: "MongoDB Atlas Provider 1.12.0: Upgrade and Information Guide" -sidebar_current: "docs-mongodbatlas-guides-1120-upgrade-guide" -description: |- -MongoDB Atlas Provider 1.12.0: Upgrade and Information Guide +page_title: "Upgrade Guide 1.12.0" --- # MongoDB Atlas Provider 1.12.0: Upgrade and Information Guide diff --git a/website/docs/guides/1.13.0-upgrade-guide.html.markdown b/docs/guides/1.13.0-upgrade-guide.md similarity index 88% rename from website/docs/guides/1.13.0-upgrade-guide.html.markdown rename to docs/guides/1.13.0-upgrade-guide.md index db59bab421..fcd697d4f1 100644 --- a/website/docs/guides/1.13.0-upgrade-guide.html.markdown +++ b/docs/guides/1.13.0-upgrade-guide.md @@ -1,9 +1,5 @@ --- -layout: "mongodbatlas" -page_title: "MongoDB Atlas Provider 1.13.0: Upgrade and Information Guide" -sidebar_current: "docs-mongodbatlas-guides-1130-upgrade-guide" -description: |- -MongoDB Atlas Provider 1.13.0: Upgrade and Information Guide +page_title: "Upgrade Guide 1.13.0" --- # MongoDB Atlas Provider 1.13.0: Upgrade and Information Guide diff --git a/website/docs/guides/1.14.0-upgrade-guide.html.markdown b/docs/guides/1.14.0-upgrade-guide.md similarity index 93% rename from website/docs/guides/1.14.0-upgrade-guide.html.markdown rename to docs/guides/1.14.0-upgrade-guide.md index b584817d21..eb1422dd59 100644 --- a/website/docs/guides/1.14.0-upgrade-guide.html.markdown +++ 
b/docs/guides/1.14.0-upgrade-guide.md @@ -1,9 +1,5 @@ --- -layout: "mongodbatlas" -page_title: "MongoDB Atlas Provider 1.14.0: Upgrade and Information Guide" -sidebar_current: "docs-mongodbatlas-guides-1140-upgrade-guide" -description: |- -MongoDB Atlas Provider 1.14.0: Upgrade and Information Guide +page_title: "Upgrade Guide 1.14.0" --- # MongoDB Atlas Provider 1.14.0: Upgrade and Information Guide diff --git a/website/docs/guides/1.15.0-upgrade-guide.html.markdown b/docs/guides/1.15.0-upgrade-guide.md similarity index 96% rename from website/docs/guides/1.15.0-upgrade-guide.html.markdown rename to docs/guides/1.15.0-upgrade-guide.md index 12a9cf8a59..95dd886a1b 100644 --- a/website/docs/guides/1.15.0-upgrade-guide.html.markdown +++ b/docs/guides/1.15.0-upgrade-guide.md @@ -1,9 +1,5 @@ --- -layout: "mongodbatlas" -page_title: "MongoDB Atlas Provider 1.15.0: Upgrade and Information Guide" -sidebar_current: "docs-mongodbatlas-guides-1150-upgrade-guide" -description: |- -MongoDB Atlas Provider 1.15.0: Upgrade and Information Guide +page_title: "Upgrade Guide 1.15.0" --- # MongoDB Atlas Provider 1.15.0: Upgrade and Information Guide diff --git a/website/docs/guides/1.16.0-upgrade-guide.html.markdown b/docs/guides/1.16.0-upgrade-guide.md similarity index 93% rename from website/docs/guides/1.16.0-upgrade-guide.html.markdown rename to docs/guides/1.16.0-upgrade-guide.md index 9702d35a43..e93e7ddbb8 100644 --- a/website/docs/guides/1.16.0-upgrade-guide.html.markdown +++ b/docs/guides/1.16.0-upgrade-guide.md @@ -1,9 +1,5 @@ --- -layout: "mongodbatlas" -page_title: "MongoDB Atlas Provider 1.16.0: Upgrade and Information Guide" -sidebar_current: "docs-mongodbatlas-guides-1160-upgrade-guide" -description: |- -MongoDB Atlas Provider 1.16.0: Upgrade and Information Guide +page_title: "Upgrade Guide 1.16.0" --- # MongoDB Atlas Provider 1.16.0: Upgrade and Information Guide diff --git a/website/docs/guides/1.17.0-upgrade-guide.html.markdown b/docs/guides/1.17.0-upgrade-guide.md 
similarity index 94% rename from website/docs/guides/1.17.0-upgrade-guide.html.markdown rename to docs/guides/1.17.0-upgrade-guide.md index 40536dc7f5..56e931c4b6 100644 --- a/website/docs/guides/1.17.0-upgrade-guide.html.markdown +++ b/docs/guides/1.17.0-upgrade-guide.md @@ -1,9 +1,5 @@ --- -layout: "mongodbatlas" -page_title: "MongoDB Atlas Provider 1.17.0: Upgrade and Information Guide" -sidebar_current: "docs-mongodbatlas-guides-1170-upgrade-guide" -description: |- -MongoDB Atlas Provider 1.17.0: Upgrade and Information Guide +page_title: "Upgrade Guide 1.17.0" --- # MongoDB Atlas Provider 1.17.0: Upgrade and Information Guide diff --git a/website/docs/guides/1.2.0-upgrade-guide.html.markdown b/docs/guides/1.2.0-upgrade-guide.md similarity index 85% rename from website/docs/guides/1.2.0-upgrade-guide.html.markdown rename to docs/guides/1.2.0-upgrade-guide.md index 7c349a0f7f..f46f18af2d 100644 --- a/website/docs/guides/1.2.0-upgrade-guide.html.markdown +++ b/docs/guides/1.2.0-upgrade-guide.md @@ -1,9 +1,6 @@ --- -layout: "mongodbatlas" -page_title: "MongoDB Atlas Provider 1.2.0: Upgrade and Information Guide" -sidebar_current: "docs-mongodbatlas-guides-120-upgrade-guide" -description: |- -MongoDB Atlas Provider 1.2.0: Upgrade and Information Guide +page_title: "Upgrade Guide 1.2.0" +subcategory: "Older Guides" --- # MongoDB Atlas Provider 1.2.0: Upgrade and Information Guide diff --git a/website/docs/guides/1.3.0-upgrade-guide.html.markdown b/docs/guides/1.3.0-upgrade-guide.md similarity index 84% rename from website/docs/guides/1.3.0-upgrade-guide.html.markdown rename to docs/guides/1.3.0-upgrade-guide.md index a15108703c..34e4d6b692 100644 --- a/website/docs/guides/1.3.0-upgrade-guide.html.markdown +++ b/docs/guides/1.3.0-upgrade-guide.md @@ -1,9 +1,6 @@ --- -layout: "mongodbatlas" -page_title: "MongoDB Atlas Provider 1.3.0: Upgrade and Information Guide" -sidebar_current: "docs-mongodbatlas-guides-130-upgrade-guide" -description: |- -MongoDB Atlas Provider 
1.3.0: Upgrade and Information Guide +page_title: "Upgrade Guide 1.3.0" +subcategory: "Older Guides" --- # MongoDB Atlas Provider 1.3.0: Upgrade and Information Guide diff --git a/website/docs/guides/1.4.0-upgrade-guide.html.markdown b/docs/guides/1.4.0-upgrade-guide.md similarity index 91% rename from website/docs/guides/1.4.0-upgrade-guide.html.markdown rename to docs/guides/1.4.0-upgrade-guide.md index f39efd2a41..ac38a87f62 100644 --- a/website/docs/guides/1.4.0-upgrade-guide.html.markdown +++ b/docs/guides/1.4.0-upgrade-guide.md @@ -1,9 +1,6 @@ --- -layout: "mongodbatlas" -page_title: "MongoDB Atlas Provider 1.4.0: Upgrade and Information Guide" -sidebar_current: "docs-mongodbatlas-guides-140-upgrade-guide" -description: |- -MongoDB Atlas Provider 1.4.0: Upgrade and Information Guide +page_title: "Upgrade Guide 1.4.0" +subcategory: "Older Guides" --- # MongoDB Atlas Provider 1.4.0: Upgrade and Information Guide diff --git a/website/docs/guides/1.5.0-upgrade-guide.html.markdown b/docs/guides/1.5.0-upgrade-guide.md similarity index 88% rename from website/docs/guides/1.5.0-upgrade-guide.html.markdown rename to docs/guides/1.5.0-upgrade-guide.md index f9d14e12ac..2e305bd48f 100644 --- a/website/docs/guides/1.5.0-upgrade-guide.html.markdown +++ b/docs/guides/1.5.0-upgrade-guide.md @@ -1,9 +1,6 @@ --- -layout: "mongodbatlas" -page_title: "MongoDB Atlas Provider 1.5.0: Upgrade and Information Guide" -sidebar_current: "docs-mongodbatlas-guides-150-upgrade-guide" -description: |- -MongoDB Atlas Provider 1.5.0: Upgrade and Information Guide +page_title: "Upgrade Guide 1.5.0" +subcategory: "Older Guides" --- # MongoDB Atlas Provider 1.5.0: Upgrade and Information Guide diff --git a/website/docs/guides/1.6.0-upgrade-guide.html.markdown b/docs/guides/1.6.0-upgrade-guide.md similarity index 86% rename from website/docs/guides/1.6.0-upgrade-guide.html.markdown rename to docs/guides/1.6.0-upgrade-guide.md index bead7c2ba5..57dd04b2c2 100644 --- 
a/website/docs/guides/1.6.0-upgrade-guide.html.markdown +++ b/docs/guides/1.6.0-upgrade-guide.md @@ -1,9 +1,6 @@ --- -layout: "mongodbatlas" -page_title: "MongoDB Atlas Provider 1.6.0: Upgrade and Information Guide" -sidebar_current: "docs-mongodbatlas-guides-160-upgrade-guide" -description: |- -MongoDB Atlas Provider 1.6.0: Upgrade and Information Guide +page_title: "Upgrade Guide 1.6.0" +subcategory: "Older Guides" --- # MongoDB Atlas Provider 1.6.0: Upgrade and Information Guide diff --git a/website/docs/guides/1.7.0-upgrade-guide.html.markdown b/docs/guides/1.7.0-upgrade-guide.md similarity index 71% rename from website/docs/guides/1.7.0-upgrade-guide.html.markdown rename to docs/guides/1.7.0-upgrade-guide.md index 1526910f07..ee9988f593 100644 --- a/website/docs/guides/1.7.0-upgrade-guide.html.markdown +++ b/docs/guides/1.7.0-upgrade-guide.md @@ -1,9 +1,6 @@ --- -layout: "mongodbatlas" -page_title: "MongoDB Atlas Provider 1.7.0: Upgrade and Information Guide" -sidebar_current: "docs-mongodbatlas-guides-170-upgrade-guide" -description: |- -MongoDB Atlas Provider 1.7.0: Upgrade and Information Guide +page_title: "Upgrade Guide 1.7.0" +subcategory: "Older Guides" --- # MongoDB Atlas Provider 1.7.0: Upgrade and Information Guide @@ -11,12 +8,10 @@ MongoDB Atlas Provider 1.7.0: Upgrade and Information Guide The Terraform MongoDB Atlas Provider version 1.7.0 has one main new and exciting feature. New Features: -* You can now [`authenticate with AWS Secrets Manager (AWS SM)`](https://github.com/mongodb/terraform-provider-mongodbatlas/blob/master/website/docs/index.html.markdown#aws-secrets-manager) - +* You can now [`authenticate with AWS Secrets Manager (AWS SM)`](https://github.com/mongodb/terraform-provider-mongodbatlas/blob/master/docs/index.md#aws-secrets-manager) See the [CHANGELOG](https://github.com/mongodb/terraform-provider-mongodbatlas/blob/master/CHANGELOG.md) for more details. 
- ### Helpful Links * [Report bugs](https://github.com/mongodb/terraform-provider-mongodbatlas/issues) diff --git a/website/docs/guides/1.8.0-upgrade-guide.html.markdown b/docs/guides/1.8.0-upgrade-guide.md similarity index 95% rename from website/docs/guides/1.8.0-upgrade-guide.html.markdown rename to docs/guides/1.8.0-upgrade-guide.md index 1313cda516..a10c0ac787 100644 --- a/website/docs/guides/1.8.0-upgrade-guide.html.markdown +++ b/docs/guides/1.8.0-upgrade-guide.md @@ -1,9 +1,6 @@ --- -layout: "mongodbatlas" -page_title: "MongoDB Atlas Provider 1.8.0: Upgrade and Information Guide" -sidebar_current: "docs-mongodbatlas-guides-180-upgrade-guide" -description: |- -MongoDB Atlas Provider 1.8.0: Upgrade and Information Guide +page_title: "Upgrade Guide 1.8.0" +subcategory: "Older Guides" --- # MongoDB Atlas Provider 1.8.0: Upgrade and Information Guide diff --git a/website/docs/guides/1.9.0-upgrade-guide.html.markdown b/docs/guides/1.9.0-upgrade-guide.md similarity index 85% rename from website/docs/guides/1.9.0-upgrade-guide.html.markdown rename to docs/guides/1.9.0-upgrade-guide.md index 508f708560..cd5133a922 100644 --- a/website/docs/guides/1.9.0-upgrade-guide.html.markdown +++ b/docs/guides/1.9.0-upgrade-guide.md @@ -1,9 +1,6 @@ --- -layout: "mongodbatlas" -page_title: "MongoDB Atlas Provider 1.9.0: Upgrade and Information Guide" -sidebar_current: "docs-mongodbatlas-guides-190-upgrade-guide" -description: |- -MongoDB Atlas Provider 1.9.0: Upgrade and Information Guide +page_title: "Upgrade Guide 1.9.0" +subcategory: "Older Guides" --- # MongoDB Atlas Provider 1.9.0: Upgrade and Information Guide diff --git a/website/docs/guides/Programmatic-API-Key-upgrade-guide-1.10.0.html.markdown b/docs/guides/Programmatic-API-Key-upgrade-guide-1.10.0.md similarity index 96% rename from website/docs/guides/Programmatic-API-Key-upgrade-guide-1.10.0.html.markdown rename to docs/guides/Programmatic-API-Key-upgrade-guide-1.10.0.md index 38a206791f..eec249e566 100644 --- 
a/website/docs/guides/Programmatic-API-Key-upgrade-guide-1.10.0.html.markdown +++ b/docs/guides/Programmatic-API-Key-upgrade-guide-1.10.0.md @@ -1,9 +1,6 @@ --- -layout: "mongodbatlas" -page_title: "Upgrade Guide for Terraform MongoDB Atlas Provider Programmatic API Key Resource in v1.10.0" -sidebar_current: "docs-mongodbatlas-guides-Programmatic-API-Key-upgrade-guide" -description: |- -MongoDB Atlas Provider : Upgrade and Information Guide +page_title: "Upgrade Guide 1.10.0 for Programmatic API Key" +subcategory: "Older Guides" --- # MongoDB Atlas Provider: Programmatic API Key Upgrade Guide in v1.10.0 diff --git a/website/docs/index.html.markdown b/docs/index.md similarity index 97% rename from website/docs/index.html.markdown rename to docs/index.md index fcf4c541a7..7bef25fda4 100644 --- a/website/docs/index.html.markdown +++ b/docs/index.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "Provider: MongoDB Atlas" -sidebar_current: "docs-mongodbatlas-index" -description: |- - The MongoDB Atlas provider is used to interact with the resources supported by MongoDB Atlas. The provider needs to be configured with the proper credentials before it can be used. ---- - # MongoDB Atlas Provider You can use the MongoDB Atlas provider to interact with the resources supported by [MongoDB Atlas](https://www.mongodb.com/cloud/atlas). diff --git a/website/docs/r/access_list_api_key.html.markdown b/docs/resources/access_list_api_key.md similarity index 92% rename from website/docs/r/access_list_api_key.html.markdown rename to docs/resources/access_list_api_key.md index e096b1de7e..9a80ac4eb6 100644 --- a/website/docs/r/access_list_api_key.html.markdown +++ b/docs/resources/access_list_api_key.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: access_list_api_key" -sidebar_current: "docs-mongodbatlas-resource-access_list-api-key" -description: |- - Creates the access list entries for the specified Atlas Organization API Key. 
---- - # Resource: mongodbatlas_access_list_api_key `mongodbatlas_access_list_api_key` provides an IP Access List entry resource. The access list grants access from IPs, CIDRs or AWS Security Groups (if VPC Peering is enabled) to clusters within the Project. diff --git a/website/docs/r/advanced_cluster.html.markdown b/docs/resources/advanced_cluster.md similarity index 98% rename from website/docs/r/advanced_cluster.html.markdown rename to docs/resources/advanced_cluster.md index ee313e0f53..a7f21cc844 100644 --- a/website/docs/r/advanced_cluster.html.markdown +++ b/docs/resources/advanced_cluster.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: advanced_cluster" -sidebar_current: "docs-mongodbatlas-resource-advanced-cluster" -description: |- - Provides an Advanced Cluster resource. ---- - # Resource: mongodbatlas_advanced_cluster `mongodbatlas_advanced_cluster` provides an Advanced Cluster resource. The resource lets you create, edit and delete advanced clusters. The resource requires your Project ID. 
@@ -311,21 +303,21 @@ resource "mongodbatlas_advanced_cluster" "cluster" { Standard ```terraform output "standard" { - value = mongodbatlas_cluster.cluster-test.connection_strings[0].standard + value = mongodbatlas_advanced_cluster.cluster.connection_strings[0].standard } # Example return string: standard = "mongodb://cluster-atlas-shard-00-00.ygo1m.mongodb.net:27017,cluster-atlas-shard-00-01.ygo1m.mongodb.net:27017,cluster-atlas-shard-00-02.ygo1m.mongodb.net:27017/?ssl=true&authSource=admin&replicaSet=atlas-12diht-shard-0" ``` Standard srv ```terraform output "standard_srv" { - value = mongodbatlas_cluster.cluster-test.connection_strings[0].standard_srv + value = mongodbatlas_advanced_cluster.cluster.connection_strings[0].standard_srv } # Example return string: standard_srv = "mongodb+srv://cluster-atlas.ygo1m.mongodb.net" ``` Private with Network peering and Custom DNS AWS enabled ```terraform output "private" { - value = mongodbatlas_cluster.cluster-test.connection_strings[0].private + value = mongodbatlas_advanced_cluster.cluster.connection_strings[0].private } # Example return string: private = "mongodb://cluster-atlas-shard-00-00-pri.ygo1m.mongodb.net:27017,cluster-atlas-shard-00-01-pri.ygo1m.mongodb.net:27017,cluster-atlas-shard-00-02-pri.ygo1m.mongodb.net:27017/?ssl=true&authSource=admin&replicaSet=atlas-12diht-shard-0" private = "mongodb+srv://cluster-atlas-pri.ygo1m.mongodb.net" @@ -333,7 +325,7 @@ private = "mongodb+srv://cluster-atlas-pri.ygo1m.mongodb.net" Private srv with Network peering and Custom DNS AWS enabled ```terraform output "private_srv" { - value = mongodbatlas_cluster.cluster-test.connection_strings[0].private_srv + value = mongodbatlas_advanced_cluster.cluster.connection_strings[0].private_srv } # Example return string: private_srv = "mongodb+srv://cluster-atlas-pri.ygo1m.mongodb.net" ``` diff --git a/website/docs/r/alert_configuration.html.markdown b/docs/resources/alert_configuration.md similarity index 98% rename from 
website/docs/r/alert_configuration.html.markdown rename to docs/resources/alert_configuration.md index 7dc7373cf8..fe3df3f8d6 100644 --- a/website/docs/r/alert_configuration.html.markdown +++ b/docs/resources/alert_configuration.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: alert_configuration" -sidebar_current: "docs-mongodbatlas-resource-alert-configuration" -description: |- - Provides an Alert Configuration resource. ---- - # Resource: mongodbatlas_alert_configuration `mongodbatlas_alert_configuration` provides an Alert Configuration resource to define the conditions that trigger an alert and the methods of notification within a MongoDB Atlas project. diff --git a/website/docs/r/api_key.html.markdown b/docs/resources/api_key.md similarity index 91% rename from website/docs/r/api_key.html.markdown rename to docs/resources/api_key.md index 415953a1fd..13c5d7b555 100644 --- a/website/docs/r/api_key.html.markdown +++ b/docs/resources/api_key.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: api_key" -sidebar_current: "docs-mongodbatlas-resource-api-key" -description: |- - Provides a API Key resource. ---- - # Resource: mongodbatlas_api_key `mongodbatlas_api_key` provides an Organization API key resource. This allows an Organization API key to be created. diff --git a/website/docs/r/auditing.html.markdown b/docs/resources/auditing.md similarity index 92% rename from website/docs/r/auditing.html.markdown rename to docs/resources/auditing.md index e0a1c816ee..444375770d 100644 --- a/website/docs/r/auditing.html.markdown +++ b/docs/resources/auditing.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: auditing" -sidebar_current: "docs-mongodbatlas-resource-auditing" -description: |- - Provides a Auditing resource. ---- - # Resource: mongodbatlas_auditing `mongodbatlas_auditing` provides an Auditing resource. This allows auditing to be created.
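A sketch of a project-level auditing configuration (the project ID is a placeholder and the filter is illustrative; `audit_filter` takes a stringified audit filter document):

```terraform
resource "mongodbatlas_auditing" "example" {
  project_id                  = "<YOUR-PROJECT-ID>"
  audit_filter                = "{ 'atype': 'authenticate' }"
  audit_authorization_success = false
  enabled                     = true
}
```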
diff --git a/website/docs/r/backup_compliance_policy.html.markdown b/docs/resources/backup_compliance_policy.md similarity index 95% rename from website/docs/r/backup_compliance_policy.html.markdown rename to docs/resources/backup_compliance_policy.md index 06c8f6b5e8..cc2320df87 100644 --- a/website/docs/r/backup_compliance_policy.html.markdown +++ b/docs/resources/backup_compliance_policy.md @@ -1,10 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: backup_compliance_policy" -sidebar_current: "docs-mongodbatlas-resource-backup-compliance-policy" -description: |- - Provides a Backup Compliance Policy resource. ---- # Resource: mongodbatlas_backup_compliance_policy `mongodbatlas_backup_compliance_policy` provides a resource that enables you to set up a Backup Compliance Policy resource. [Backup Compliance Policy ](https://www.mongodb.com/docs/atlas/backup/cloud-backup/backup-compliance-policy) prevents any user, regardless of role, from modifying or deleting specific cluster settings, backups, and backup configurations. When enabled, the Backup Compliance Policy will be applied as the minimum policy for all clusters and backups in the project. It can only be disabled by contacting MongoDB support. This feature is only supported for cluster tiers M10+. @@ -25,8 +18,8 @@ We first suggest disabling `mongodbatlas_backup_compliance_policy` resource, whi * For example, replace: ``` resource "mongodbatlas_cloud_backup_schedule" "test" { - project_id = mongodbatlas_cluster.my_cluster.project_id - cluster_name = mongodbatlas_cluster.my_cluster.name + project_id = mongodbatlas_advanced_cluster.my_cluster.project_id + cluster_name = mongodbatlas_advanced_cluster.my_cluster.name ... 
} ``` @@ -44,21 +37,28 @@ We first suggest disabling `mongodbatlas_backup_compliance_policy` resource, whi ## Example Usage ```terraform -resource "mongodbatlas_cluster" "my_cluster" { - project_id = "" - name = "clusterTest" - - - //Provider Settings "block" - provider_name = "AWS" - provider_region_name = "EU_CENTRAL_1" - provider_instance_size_name = "M10" - cloud_backup = true // enable cloud backup snapshots +resource "mongodbatlas_advanced_cluster" "my_cluster" { + project_id = "" + name = "clusterTest" + cluster_type = "REPLICASET" + backup_enabled = true # enable cloud backup snapshots + + replication_specs { + region_configs { + priority = 7 + provider_name = "AWS" + region_name = var.region + electable_specs { + instance_size = "M10" + node_count = 3 + } + } + } } resource "mongodbatlas_cloud_backup_schedule" "test" { - project_id = mongodbatlas_cluster.my_cluster.project_id - cluster_name = mongodbatlas_cluster.my_cluster.name + project_id = mongodbatlas_advanced_cluster.my_cluster.project_id + cluster_name = mongodbatlas_advanced_cluster.my_cluster.name reference_hour_of_day = 3 reference_minute_of_hour = 45 diff --git a/website/docs/r/cloud_backup_schedule.html.markdown b/docs/resources/cloud_backup_schedule.md similarity index 84% rename from website/docs/r/cloud_backup_schedule.html.markdown rename to docs/resources/cloud_backup_schedule.md index 65fad8bd4a..4643b4c524 100644 --- a/website/docs/r/cloud_backup_schedule.html.markdown +++ b/docs/resources/cloud_backup_schedule.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: cloud_backup_schedule" -sidebar_current: "docs-mongodbatlas-resource-cloud-backup-schedule" -description: |- - Provides a Cloud Backup Schedule resource. ---- - # Resource: mongodbatlas_cloud_backup_schedule `mongodbatlas_cloud_backup_schedule` provides a cloud backup schedule resource. The resource lets you create, read, update and delete a cloud backup schedule. 
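As the NOTE below explains, a backup schedule must either reference the cluster resource's attributes or declare an explicit dependency on it. The `depends_on` variant looks roughly like this (a sketch; the cluster name and schedule values are placeholders):

```terraform
resource "mongodbatlas_cloud_backup_schedule" "example" {
  project_id   = "<YOUR-PROJECT-ID>"
  cluster_name = "clusterTest"

  reference_hour_of_day    = 3
  reference_minute_of_hour = 45

  # ensure the cluster exists before the schedule is applied
  depends_on = [mongodbatlas_advanced_cluster.my_cluster]
}
```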
@@ -14,7 +6,7 @@ description: |- -> **NOTE:** If Backup Compliance Policy is enabled for the project for which this backup schedule is defined, you cannot modify the backup schedule for an individual cluster below the minimum requirements set in the Backup Compliance Policy. See [Backup Compliance Policy Prohibited Actions and Considerations](https://www.mongodb.com/docs/atlas/backup/cloud-backup/backup-compliance-policy/#configure-a-backup-compliance-policy). --> **NOTE:** When creating a backup schedule you **must either** use the `depends_on` clause to indicate the cluster to which it refers **or** specify the values of `project_id` and `cluster_name` as reference of the cluster resource (e.g. `cluster_name = mongodbatlas_cluster.my_cluster.name` - see the example below). Failure in doing so will result in an error when executing the plan. +-> **NOTE:** When creating a backup schedule you **must either** use the `depends_on` clause to indicate the cluster to which it refers **or** specify the values of `project_id` and `cluster_name` as a reference to the cluster resource (e.g. `cluster_name = mongodbatlas_advanced_cluster.my_cluster.name` - see the example below). Failure to do so will result in an error when executing the plan. In the Terraform MongoDB Atlas Provider 1.0.0 we have re-architected the way in which Cloud Backup Policies are managed with Terraform to significantly reduce the complexity. Due to this change, we've provided multiple examples below to show how this new resource functions. @@ -24,20 +16,28 @@ In the Terraform MongoDB Atlas Provider 1.0.0 we have re-architected the way in You can use this example to create a new cluster with `cloud_backup` enabled and, at the same time, overwrite the default cloud backup policy that Atlas creates.
```terraform -resource "mongodbatlas_cluster" "my_cluster" { - project_id = "" - name = "clusterTest" - - //Provider Settings "block" - provider_name = "AWS" - provider_region_name = "EU_CENTRAL_1" - provider_instance_size_name = "M10" - cloud_backup = true // must be enabled in order to use cloud_backup_schedule resource +resource "mongodbatlas_advanced_cluster" "my_cluster" { + project_id = "" + name = "clusterTest" + cluster_type = "REPLICASET" + backup_enabled = true # must be enabled in order to use cloud_backup_schedule resource + + replication_specs { + region_configs { + priority = 7 + provider_name = "AWS" + region_name = "EU_CENTRAL_1" + electable_specs { + instance_size = "M10" + node_count = 3 + } + } + } } resource "mongodbatlas_cloud_backup_schedule" "test" { - project_id = mongodbatlas_cluster.my_cluster.project_id - cluster_name = mongodbatlas_cluster.my_cluster.name + project_id = mongodbatlas_advanced_cluster.my_cluster.project_id + cluster_name = mongodbatlas_advanced_cluster.my_cluster.name reference_hour_of_day = 3 reference_minute_of_hour = 45 @@ -63,20 +63,28 @@ resource "mongodbatlas_cloud_backup_schedule" "test" { You can enable `cloud_backup` in the Cluster resource and then use the `cloud_backup_schedule` resource with no policy items to remove the default policy that Atlas creates when you enable Cloud Backup. This allows you to then create a policy when you are ready to via Terraform. 
```terraform -resource "mongodbatlas_cluster" "my_cluster" { - project_id = "" - name = "clusterTest" - - //Provider Settings "block" - provider_name = "AWS" - provider_region_name = "EU_CENTRAL_1" - provider_instance_size_name = "M10" - cloud_backup = true // must be enabled in order to use cloud_backup_schedule resource +resource "mongodbatlas_advanced_cluster" "my_cluster" { + project_id = "" + name = "clusterTest" + cluster_type = "REPLICASET" + backup_enabled = true # must be enabled in order to use cloud_backup_schedule resource + + replication_specs { + region_configs { + priority = 7 + provider_name = "AWS" + region_name = "EU_CENTRAL_1" + electable_specs { + instance_size = "M10" + node_count = 3 + } + } + } } resource "mongodbatlas_cloud_backup_schedule" "test" { - project_id = mongodbatlas_cluster.my_cluster.project_id - cluster_name = mongodbatlas_cluster.my_cluster.name + project_id = mongodbatlas_advanced_cluster.my_cluster.project_id + cluster_name = mongodbatlas_advanced_cluster.my_cluster.name reference_hour_of_day = 3 reference_minute_of_hour = 45 @@ -91,20 +99,28 @@ If you followed the example to Create a Cluster with Cloud Backup Enabled but No The cluster already exists with `cloud_backup` enabled ```terraform -resource "mongodbatlas_cluster" "my_cluster" { - project_id = "" - name = "clusterTest" - - //Provider Settings "block" - provider_name = "AWS" - provider_region_name = "EU_CENTRAL_1" - provider_instance_size_name = "M10" - cloud_backup = true // must be enabled in order to use cloud_backup_schedule resource +resource "mongodbatlas_advanced_cluster" "my_cluster" { + project_id = "" + name = "clusterTest" + cluster_type = "REPLICASET" + backup_enabled = true # must be enabled in order to use cloud_backup_schedule resource + + replication_specs { + region_configs { + priority = 7 + provider_name = "AWS" + region_name = "EU_CENTRAL_1" + electable_specs { + instance_size = "M10" + node_count = 3 + } + } + } } resource 
"mongodbatlas_cloud_backup_schedule" "test" { - project_id = mongodbatlas_cluster.my_cluster.project_id - cluster_name = mongodbatlas_cluster.my_cluster.name + project_id = mongodbatlas_advanced_cluster.my_cluster.project_id + cluster_name = mongodbatlas_advanced_cluster.my_cluster.name reference_hour_of_day = 3 reference_minute_of_hour = 45 @@ -146,20 +162,28 @@ resource "mongodbatlas_cloud_backup_schedule" "test" { You can enable `cloud_backup` in the Cluster resource and then use the `cloud_backup_schedule` resource with a basic policy for Cloud Backup. ```terraform -resource "mongodbatlas_cluster" "my_cluster" { - project_id = "" - name = "clusterTest" - - //Provider Settings "block" - provider_name = "AWS" - provider_region_name = "US_EAST_2" - provider_instance_size_name = "M10" - cloud_backup = true // must be enabled in order to use cloud_backup_schedule resource +resource "mongodbatlas_advanced_cluster" "my_cluster" { + project_id = "" + name = "clusterTest" + cluster_type = "REPLICASET" + backup_enabled = true # must be enabled in order to use cloud_backup_schedule resource + + replication_specs { + region_configs { + priority = 7 + provider_name = "AWS" + region_name = "EU_CENTRAL_1" + electable_specs { + instance_size = "M10" + node_count = 3 + } + } + } } resource "mongodbatlas_cloud_backup_schedule" "test" { - project_id = mongodbatlas_cluster.my_cluster.project_id - cluster_name = mongodbatlas_cluster.my_cluster.name + project_id = mongodbatlas_advanced_cluster.my_cluster.project_id + cluster_name = mongodbatlas_advanced_cluster.my_cluster.name reference_hour_of_day = 3 reference_minute_of_hour = 45 @@ -180,7 +204,7 @@ resource "mongodbatlas_cloud_backup_schedule" "test" { "YEARLY", "ON_DEMAND"] region_name = "US_EAST_1" - replication_spec_id = mongodbatlas_cluster.my_cluster.replication_specs.*.id[0] + replication_spec_id = mongodbatlas_advanced_cluster.my_cluster.replication_specs.*.id[0] should_copy_oplogs = false } diff --git 
a/website/docs/r/cloud_backup_snapshot.html.markdown b/docs/resources/cloud_backup_snapshot.md similarity index 77% rename from website/docs/r/cloud_backup_snapshot.html.markdown rename to docs/resources/cloud_backup_snapshot.md index 7b49f04a3f..ef67fe7ea9 100644 --- a/website/docs/r/cloud_backup_snapshot.html.markdown +++ b/docs/resources/cloud_backup_snapshot.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: cloud_backup_snapshot" -sidebar_current: "docs-mongodbatlas-resource-cloud_backup_snapshot" -description: |- - Provides a Cloud Backup Snapshot resource. ---- - # Resource: mongodbatlas_cloud_backup_snapshot `mongodbatlas_cloud_backup_snapshot` provides a resource to take a cloud backup snapshot on demand. @@ -18,32 +10,40 @@ On-demand snapshots happen immediately, unlike scheduled snapshots which occur a ## Example Usage ```terraform - resource "mongodbatlas_cluster" "my_cluster" { - project_id = "5cf5a45a9ccf6400e60981b6" - name = "MyCluster" - - //Provider Settings "block" - provider_name = "AWS" - provider_region_name = "EU_WEST_2" - provider_instance_size_name = "M10" - cloud_backup = true // enable cloud backup snapshots - } - - resource "mongodbatlas_cloud_backup_snapshot" "test" { - project_id = mongodbatlas_cluster.my_cluster.project_id - cluster_name = mongodbatlas_cluster.my_cluster.name - description = "myDescription" - retention_in_days = 1 - } - - resource "mongodbatlas_cloud_backup_snapshot_restore_job" "test" { - project_id = mongodbatlas_cloud_backup_snapshot.test.project_id - cluster_name = mongodbatlas_cloud_backup_snapshot.test.cluster_name - snapshot_id = mongodbatlas_cloud_backup_snapshot.test.snapshot_id - delivery_type_config { - download = true +resource "mongodbatlas_advanced_cluster" "my_cluster" { + project_id = "" + name = "MyCluster" + cluster_type = "REPLICASET" + backup_enabled = true # enable cloud backup snapshots + + replication_specs { + region_configs { + priority = 7 + provider_name = "AWS" + 
region_name = "EU_WEST_2" + electable_specs { + instance_size = "M10" + node_count = 3 + } } } +} + +resource "mongodbatlas_cloud_backup_snapshot" "test" { + project_id = mongodbatlas_advanced_cluster.my_cluster.project_id + cluster_name = mongodbatlas_advanced_cluster.my_cluster.name + description = "myDescription" + retention_in_days = 1 +} + +resource "mongodbatlas_cloud_backup_snapshot_restore_job" "test" { + project_id = mongodbatlas_cloud_backup_snapshot.test.project_id + cluster_name = mongodbatlas_cloud_backup_snapshot.test.cluster_name + snapshot_id = mongodbatlas_cloud_backup_snapshot.test.snapshot_id + delivery_type_config { + download = true + } +} ``` ## Argument Reference diff --git a/website/docs/r/cloud_backup_snapshot_export_bucket.html.markdown b/docs/resources/cloud_backup_snapshot_export_bucket.md similarity index 80% rename from website/docs/r/cloud_backup_snapshot_export_bucket.html.markdown rename to docs/resources/cloud_backup_snapshot_export_bucket.md index e3f46f56fe..2ffef835aa 100644 --- a/website/docs/r/cloud_backup_snapshot_export_bucket.html.markdown +++ b/docs/resources/cloud_backup_snapshot_export_bucket.md @@ -1,13 +1,6 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: cloud_backup_snapshot_export_bucket" -sidebar_current: "docs-mongodbatlas-resource-cloud_backup_snapshot_export_bucket" -description: |- - Provides a Cloud Backup Snapshot Export Bucket resource. ---- - # Resource: mongodbatlas_cloud_backup_snapshot_export_bucket -`mongodbatlas_cloud_backup_snapshot_export_bucket` resource allows you to create an export snapshot bucket for the specified project. + +`mongodbatlas_cloud_backup_snapshot_export_bucket` allows you to create an export snapshot bucket for the specified project. -> **NOTE:** Groups and projects are synonymous terms. You may find `groupId` in the official documentation. 
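A sketch of the export bucket resource for AWS (values are placeholders; the IAM role ID comes from a cloud provider access authorization set up separately):

```terraform
resource "mongodbatlas_cloud_backup_snapshot_export_bucket" "example" {
  project_id     = "<YOUR-PROJECT-ID>"
  iam_role_id    = "<YOUR-CLOUD-PROVIDER-ACCESS-ROLE-ID>"
  bucket_name    = "example-export-bucket"
  cloud_provider = "AWS"
}
```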
diff --git a/website/docs/r/cloud_backup_snapshot_export_job.html.markdown b/docs/resources/cloud_backup_snapshot_export_job.md similarity index 94% rename from website/docs/r/cloud_backup_snapshot_export_job.html.markdown rename to docs/resources/cloud_backup_snapshot_export_job.md index cae182da9a..2fdc724104 100644 --- a/website/docs/r/cloud_backup_snapshot_export_job.html.markdown +++ b/docs/resources/cloud_backup_snapshot_export_job.md @@ -1,13 +1,6 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: cloud_backup_snapshot_export_job" -sidebar_current: "docs-mongodbatlas-resource-cloud_backup_snapshot_export_job" -description: |- - Provides a Cloud Backup Snapshot Export Job resource. ---- - # Resource: mongodbatlas_cloud_backup_snapshot_export_job -`mongodbatlas_cloud_backup_snapshot_export_job` resource allows you to create a cloud backup snapshot export job for the specified project. + +`mongodbatlas_cloud_backup_snapshot_export_job` allows you to create a cloud backup snapshot export job for the specified project. -> **NOTE:** Groups and projects are synonymous terms. You may find `groupId` in the official documentation. diff --git a/website/docs/r/cloud_backup_snapshot_restore_job.html.markdown b/docs/resources/cloud_backup_snapshot_restore_job.md similarity index 73% rename from website/docs/r/cloud_backup_snapshot_restore_job.html.markdown rename to docs/resources/cloud_backup_snapshot_restore_job.md index 0a805146ed..3107922d51 100644 --- a/website/docs/r/cloud_backup_snapshot_restore_job.html.markdown +++ b/docs/resources/cloud_backup_snapshot_restore_job.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: cloud_backup_snapshot_restore_job" -sidebar_current: "docs-mongodbatlas-resource-cloud_backup_snapshot_restore_job" -description: |- - Provides a Cloud Backup Snapshot Restore Job resource. 
---- - # Resource: mongodbatlas_cloud_backup_snapshot_restore_job `mongodbatlas_cloud_backup_snapshot_restore_job` provides a resource to create a new restore job from a cloud backup snapshot of a specified cluster. The restore job must define one of three delivery types: @@ -27,85 +19,107 @@ description: |- ### Example automated delivery type ```terraform - resource "mongodbatlas_cluster" "my_cluster" { - project_id = "5cf5a45a9ccf6400e60981b6" - name = "MyCluster" - - //Provider Settings "block" - provider_name = "AWS" - provider_region_name = "EU_WEST_2" - provider_instance_size_name = "M10" - cloud_backup = true // enable cloud backup snapshots +resource "mongodbatlas_advanced_cluster" "my_cluster" { + project_id = "" + name = "MyCluster" + cluster_type = "REPLICASET" + backup_enabled = true # enable cloud backup snapshots + + replication_specs { + region_configs { + priority = 7 + provider_name = "AWS" + region_name = "EU_WEST_2" + electable_specs { + instance_size = "M10" + node_count = 3 + } + } } +} - resource "mongodbatlas_cloud_provider_snapshot" "test" { - project_id = mongodbatlas_cluster.my_cluster.project_id - cluster_name = mongodbatlas_cluster.my_cluster.name - description = "myDescription" - retention_in_days = 1 - } +resource "mongodbatlas_cloud_provider_snapshot" "test" { + project_id = mongodbatlas_advanced_cluster.my_cluster.project_id + cluster_name = mongodbatlas_advanced_cluster.my_cluster.name + description = "myDescription" + retention_in_days = 1 +} - resource "mongodbatlas_cloud_backup_snapshot_restore_job" "test" { - project_id = mongodbatlas_cloud_provider_snapshot.test.project_id - cluster_name = mongodbatlas_cloud_provider_snapshot.test.cluster_name - snapshot_id = mongodbatlas_cloud_provider_snapshot.test.snapshot_id - delivery_type_config { - automated = true - target_cluster_name = "MyCluster" - target_project_id = "5cf5a45a9ccf6400e60981b6" - } +resource "mongodbatlas_cloud_backup_snapshot_restore_job" "test" { + project_id = 
mongodbatlas_cloud_provider_snapshot.test.project_id + cluster_name = mongodbatlas_cloud_provider_snapshot.test.cluster_name + snapshot_id = mongodbatlas_cloud_provider_snapshot.test.snapshot_id + delivery_type_config { + automated = true + target_cluster_name = "MyCluster" + target_project_id = "5cf5a45a9ccf6400e60981b6" } +} ``` ### Example download delivery type ```terraform - resource "mongodbatlas_cluster" "my_cluster" { - project_id = "5cf5a45a9ccf6400e60981b6" - name = "MyCluster" - - //Provider Settings "block" - provider_name = "AWS" - provider_region_name = "EU_WEST_2" - provider_instance_size_name = "M10" - cloud_backup = true // enable cloud backup snapshots +resource "mongodbatlas_advanced_cluster" "my_cluster" { + project_id = "" + name = "MyCluster" + cluster_type = "REPLICASET" + backup_enabled = true # enable cloud backup snapshots + + replication_specs { + region_configs { + priority = 7 + provider_name = "AWS" + region_name = "EU_WEST_2" + electable_specs { + instance_size = "M10" + node_count = 3 + } + } } +} - resource "mongodbatlas_cloud_provider_snapshot" "test" { - project_id = mongodbatlas_cluster.my_cluster.project_id - cluster_name = mongodbatlas_cluster.my_cluster.name - description = "myDescription" - retention_in_days = 1 - } - - resource "mongodbatlas_cloud_backup_snapshot_restore_job" "test" { - project_id = mongodbatlas_cloud_provider_snapshot.test.project_id - cluster_name = mongodbatlas_cloud_provider_snapshot.test.cluster_name - snapshot_id = mongodbatlas_cloud_provider_snapshot.test.snapshot_id - delivery_type_config { - download = true - } +resource "mongodbatlas_cloud_provider_snapshot" "test" { + project_id = mongodbatlas_advanced_cluster.my_cluster.project_id + cluster_name = mongodbatlas_advanced_cluster.my_cluster.name + description = "myDescription" + retention_in_days = 1 +} + +resource "mongodbatlas_cloud_backup_snapshot_restore_job" "test" { + project_id = mongodbatlas_cloud_provider_snapshot.test.project_id + 
cluster_name = mongodbatlas_cloud_provider_snapshot.test.cluster_name + snapshot_id = mongodbatlas_cloud_provider_snapshot.test.snapshot_id + delivery_type_config { + download = true } +} ``` ### Example of a point in time restore ``` -resource "mongodbatlas_cluster" "cluster_test" { - project_id = mongodbatlas_project.project_test.id - name = var.cluster_name - - # Provider Settings "block" - provider_name = "AWS" - provider_region_name = "US_EAST_1" - provider_instance_size_name = "M10" - cloud_backup = true # enable cloud provider snapshots - pit_enabled = true +resource "mongodbatlas_advanced_cluster" "my_cluster" { + project_id = "" + name = "MyCluster" + cluster_type = "REPLICASET" + backup_enabled = true # enable cloud backup snapshots + + replication_specs { + region_configs { + priority = 7 + provider_name = "AWS" + region_name = "EU_WEST_2" + electable_specs { + instance_size = "M10" + node_count = 3 + } + } + } } - resource "mongodbatlas_cloud_backup_snapshot" "test" { - project_id = mongodbatlas_cluster.cluster_test.project_id - cluster_name = mongodbatlas_cluster.cluster_test.name + project_id = mongodbatlas_advanced_cluster.my_cluster.project_id + cluster_name = mongodbatlas_advanced_cluster.my_cluster.name description = "My description" retention_in_days = "1" } @@ -118,8 +132,8 @@ resource "mongodbatlas_cloud_backup_snapshot_restore_job" "test" { delivery_type_config { point_in_time = true - target_cluster_name = mongodbatlas_cluster.cluster_test.name - target_project_id = mongodbatlas_cluster.cluster_test.project_id + target_cluster_name = mongodbatlas_advanced_cluster.my_cluster.name + target_project_id = mongodbatlas_advanced_cluster.my_cluster.project_id point_in_time_utc_seconds = var.point_in_time_utc_seconds } } diff --git a/website/docs/r/cloud_provider_access.markdown b/docs/resources/cloud_provider_access.md similarity index 96% rename from website/docs/r/cloud_provider_access.markdown rename to
docs/resources/cloud_provider_access.md index 80eb8f700f..331f250bb8 100644 --- a/website/docs/r/cloud_provider_access.markdown +++ b/docs/resources/cloud_provider_access.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: mongodbatlas_cloud_provider_access" -sidebar_current: "docs-mongodbatlas-resource-cloud-provider-access" -description: |- - Provides a Cloud Provider Access settings resource for registration, authorization, and deauthorization ---- - # Resource: Cloud Provider Access Configuration Paths The Terraform MongoDB Atlas Provider offers the following path to perform an authorization for a cloud provider role - diff --git a/website/docs/r/cloud_provider_snapshot.html.markdown b/docs/resources/cloud_provider_snapshot.md similarity index 71% rename from website/docs/r/cloud_provider_snapshot.html.markdown rename to docs/resources/cloud_provider_snapshot.md index adecd6d2ac..df20876be0 100644 --- a/website/docs/r/cloud_provider_snapshot.html.markdown +++ b/docs/resources/cloud_provider_snapshot.md @@ -1,9 +1,5 @@ --- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: cloud_provider_snapshot" -sidebar_current: "docs-mongodbatlas-resource-cloud_provider_snapshot" -description: |- - Provides an Cloud Backup Snapshot resource. 
+subcategory: "Deprecated" --- **WARNING:** This resource is deprecated, use `mongodbatlas_cloud_backup_snapshot` @@ -19,33 +15,41 @@ On-demand snapshots happen immediately, unlike scheduled snapshots which occur a ## Example Usage ```terraform - resource "mongodbatlas_cluster" "my_cluster" { - project_id = "5cf5a45a9ccf6400e60981b6" - name = "MyCluster" - - //Provider Settings "block" - provider_name = "AWS" - provider_region_name = "EU_WEST_2" - provider_instance_size_name = "M10" - cloud_backup = true // enable cloud backup snapshots - } - - resource "mongodbatlas_cloud_provider_snapshot" "test" { - project_id = mongodbatlas_cluster.my_cluster.project_id - cluster_name = mongodbatlas_cluster.my_cluster.name - description = "myDescription" - retention_in_days = 1 - timeout = "10m" - } - - resource "mongodbatlas_cloud_provider_snapshot_restore_job" "test" { - project_id = mongodbatlas_cloud_provider_snapshot.test.project_id - cluster_name = mongodbatlas_cloud_provider_snapshot.test.cluster_name - snapshot_id = mongodbatlas_cloud_provider_snapshot.test.snapshot_id - delivery_type_config { - download = true +resource "mongodbatlas_advanced_cluster" "my_cluster" { + project_id = "" + name = "MyCluster" + cluster_type = "REPLICASET" + backup_enabled = true # enable cloud backup snapshots + + replication_specs { + region_configs { + priority = 7 + provider_name = "AWS" + region_name = "EU_WEST_2" + electable_specs { + instance_size = "M10" + node_count = 3 + } } } +} + +resource "mongodbatlas_cloud_provider_snapshot" "test" { + project_id = mongodbatlas_advanced_cluster.my_cluster.project_id + cluster_name = mongodbatlas_advanced_cluster.my_cluster.name + description = "myDescription" + retention_in_days = 1 + timeout = "10m" +} + +resource "mongodbatlas_cloud_provider_snapshot_restore_job" "test" { + project_id = mongodbatlas_cloud_provider_snapshot.test.project_id + cluster_name = mongodbatlas_cloud_provider_snapshot.test.cluster_name + snapshot_id = 
mongodbatlas_cloud_provider_snapshot.test.snapshot_id + delivery_type_config { + download = true + } +} ``` ## Argument Reference diff --git a/website/docs/r/cloud_provider_snapshot_backup_policy.html.markdown b/docs/resources/cloud_provider_snapshot_backup_policy.md similarity index 62% rename from website/docs/r/cloud_provider_snapshot_backup_policy.html.markdown rename to docs/resources/cloud_provider_snapshot_backup_policy.md index 70c6e07cd1..f28a17a553 100644 --- a/website/docs/r/cloud_provider_snapshot_backup_policy.html.markdown +++ b/docs/resources/cloud_provider_snapshot_backup_policy.md @@ -1,9 +1,5 @@ --- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: cloud_provider_snapshot_backup_policy" -sidebar_current: "docs-mongodbatlas-resource-cloud-provider-snapshot-backup-policy" -description: |- - Provides a Cloud Backup Snapshot Policy resource. +subcategory: "Deprecated" --- **WARNING:** This resource is deprecated, use `mongodbatlas_cloud_backup_schedule` @@ -21,20 +17,28 @@ When Cloud Backup is enabled for a cluster MongoDB Atlas automatically creates a ## Example Usage - Create a Cluster and Modify the 4 Default Policies Simultaneously ```terraform -resource "mongodbatlas_cluster" "my_cluster" { - project_id = "" - name = "clusterTest" - - //Provider Settings "block" - provider_name = "AWS" - provider_region_name = "EU_CENTRAL_1" - provider_instance_size_name = "M10" - cloud_backup = true // must be enabled in order to use cloud_provider_snapshot_backup_policy resource +resource "mongodbatlas_advanced_cluster" "my_cluster" { + project_id = "" + name = "MyCluster" + cluster_type = "REPLICASET" + backup_enabled = true # must be enabled in order to use cloud_provider_snapshot_backup_policy resource + + replication_specs { + region_configs { + priority = 7 + provider_name = "AWS" + region_name = "EU_CENTRAL_1" + electable_specs { + instance_size = "M10" + node_count = 3 + } + } + } } resource "mongodbatlas_cloud_provider_snapshot_backup_policy" "test" 
{ - project_id = mongodbatlas_cluster.my_cluster.project_id - cluster_name = mongodbatlas_cluster.my_cluster.name + project_id = mongodbatlas_advanced_cluster.my_cluster.project_id + cluster_name = mongodbatlas_advanced_cluster.my_cluster.name reference_hour_of_day = 3 reference_minute_of_hour = 45 @@ -43,10 +47,10 @@ resource "mongodbatlas_cloud_provider_snapshot_backup_policy" "test" { //Keep all 4 default policies but modify the units and values //Could also just reflect the policy defaults here for later management policies { - id = mongodbatlas_cluster.my_cluster.snapshot_backup_policy.0.policies.0.id + id = mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy.0.policies.0.id policy_item { - id = mongodbatlas_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.0.id + id = mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.0.id frequency_interval = 1 frequency_type = "hourly" retention_unit = "days" @@ -54,7 +58,7 @@ resource "mongodbatlas_cloud_provider_snapshot_backup_policy" "test" { } policy_item { - id = mongodbatlas_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.1.id + id = mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.1.id frequency_interval = 1 frequency_type = "daily" retention_unit = "days" @@ -62,7 +66,7 @@ resource "mongodbatlas_cloud_provider_snapshot_backup_policy" "test" { } policy_item { - id = mongodbatlas_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.2.id + id = mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.2.id frequency_interval = 4 frequency_type = "weekly" retention_unit = "weeks" @@ -70,7 +74,7 @@ resource "mongodbatlas_cloud_provider_snapshot_backup_policy" "test" { } policy_item { - id = mongodbatlas_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.3.id + id = 
mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.3.id frequency_interval = 5 frequency_type = "monthly" retention_unit = "months" @@ -85,20 +89,28 @@ resource "mongodbatlas_cloud_provider_snapshot_backup_policy" "test" { ## Example Usage - Create a Cluster and Modify 3 Default Policies and Remove 1 Default Policy Simultaneously ```terraform -resource "mongodbatlas_cluster" "my_cluster" { - project_id = "" - name = "clusterTest" - - //Provider Settings "block" - provider_name = "AWS" - provider_region_name = "EU_CENTRAL_1" - provider_instance_size_name = "M10" - cloud_backup = true // must be enabled in order to use cloud_provider_snapshot_backup_policy resource +resource "mongodbatlas_advanced_cluster" "my_cluster" { + project_id = "" + name = "MyCluster" + cluster_type = "REPLICASET" + backup_enabled = true # must be enabled in order to use cloud_provider_snapshot_backup_policy resource + + replication_specs { + region_configs { + priority = 7 + provider_name = "AWS" + region_name = "EU_CENTRAL_1" + electable_specs { + instance_size = "M10" + node_count = 3 + } + } + } } resource "mongodbatlas_cloud_provider_snapshot_backup_policy" "test" { - project_id = mongodbatlas_cluster.my_cluster.project_id - cluster_name = mongodbatlas_cluster.my_cluster.name + project_id = mongodbatlas_advanced_cluster.my_cluster.project_id + cluster_name = mongodbatlas_advanced_cluster.my_cluster.name reference_hour_of_day = 3 reference_minute_of_hour = 45 @@ -106,10 +118,10 @@ resource "mongodbatlas_cloud_provider_snapshot_backup_policy" "test" { policies { - id = mongodbatlas_cluster.my_cluster.snapshot_backup_policy.0.policies.0.id + id = mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy.0.policies.0.id policy_item { - id = mongodbatlas_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.0.id + id = mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.0.id frequency_interval = 1 
frequency_type = "hourly" retention_unit = "days" @@ -117,7 +129,7 @@ resource "mongodbatlas_cloud_provider_snapshot_backup_policy" "test" { } policy_item { - id = mongodbatlas_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.1.id + id = mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.1.id frequency_interval = 1 frequency_type = "daily" retention_unit = "days" @@ -126,7 +138,7 @@ resource "mongodbatlas_cloud_provider_snapshot_backup_policy" "test" { # Item removed # policy_item { - # id = mongodbatlas_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.2.id + # id = mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.2.id # frequency_interval = 4 # frequency_type = "weekly" # retention_unit = "weeks" @@ -134,7 +146,7 @@ resource "mongodbatlas_cloud_provider_snapshot_backup_policy" "test" { # } policy_item { - id = mongodbatlas_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.3.id + id = mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.3.id frequency_interval = 5 frequency_type = "monthly" retention_unit = "months" @@ -151,20 +163,28 @@ resource "mongodbatlas_cloud_provider_snapshot_backup_policy" "test" { ## Example Usage - Remove 3 Default Policies Items After the Cluster Has Already Been Created and Modify One Policy ```terraform -resource "mongodbatlas_cluster" "my_cluster" { - project_id = "" - name = "clusterTest" - - //Provider Settings "block" - provider_name = "AWS" - provider_region_name = "EU_CENTRAL_1" - provider_instance_size_name = "M10" - cloud_backup = true // must be enabled in order to use cloud_provider_snapshot_backup_policy resource +resource "mongodbatlas_advanced_cluster" "my_cluster" { + project_id = "" + name = "MyCluster" + cluster_type = "REPLICASET" + backup_enabled = true # must be enabled in order to use cloud_provider_snapshot_backup_policy resource + + 
replication_specs { + region_configs { + priority = 7 + provider_name = "AWS" + region_name = "EU_CENTRAL_1" + electable_specs { + instance_size = "M10" + node_count = 3 + } + } + } } resource "mongodbatlas_cloud_provider_snapshot_backup_policy" "test" { - project_id = mongodbatlas_cluster.my_cluster.project_id - cluster_name = mongodbatlas_cluster.my_cluster.name + project_id = mongodbatlas_advanced_cluster.my_cluster.project_id + cluster_name = mongodbatlas_advanced_cluster.my_cluster.name reference_hour_of_day = 3 reference_minute_of_hour = 45 @@ -172,11 +192,11 @@ resource "mongodbatlas_cloud_provider_snapshot_backup_policy" "test" { policies { - id = mongodbatlas_cluster.my_cluster.snapshot_backup_policy.0.policies.0.id + id = mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy.0.policies.0.id # Item removed # policy_item { - # id = mongodbatlas_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.0.id + # id = mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.0.id # frequency_interval = 1 # frequency_type = "hourly" # retention_unit = "days" @@ -185,7 +205,7 @@ resource "mongodbatlas_cloud_provider_snapshot_backup_policy" "test" { # Item removed # policy_item { - # id = mongodbatlas_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.1.id + # id = mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.1.id # frequency_interval = 1 # frequency_type = "daily" # retention_unit = "days" @@ -194,7 +214,7 @@ resource "mongodbatlas_cloud_provider_snapshot_backup_policy" "test" { # Item removed # policy_item { - # id = mongodbatlas_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.2.id + # id = mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.2.id # frequency_interval = 4 # frequency_type = "weekly" # retention_unit = "weeks" @@ -212,7 +232,7 @@ resource 
"mongodbatlas_cloud_provider_snapshot_backup_policy" "test" { } ``` --> **NOTE:** In this example we decided to remove the first 3 items so we can't use `mongodbatlas_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.3.id` to retrieve the monthly id value of the cluster state due to once the cluster being modified or makes a `terraform refresh` will cause that the three items will remove from the state, so we will get an error due to the index 3 doesn't exists any more and our monthly policy item is moved to the first place of the array. So we use `5f0747cad187d8609a72f546`, which is an example of an id MongoDB Atlas returns for the policy item we want to keep. Here it is hard coded because you need to either use the actual value from the Terraform state or look to map the policy item you want to keep to it's current placement in the state file array. +-> **NOTE:** In this example we removed the first 3 policy items, so we can't use `mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.3.id` to retrieve the monthly policy item's id from the cluster state: once the cluster is modified or a `terraform refresh` runs, the three removed items disappear from the state, index 3 no longer exists, and the monthly policy item moves to the first position in the array. Instead we use `5f0747cad187d8609a72f546`, an example of an id MongoDB Atlas returns for the policy item we want to keep. It is hard-coded here because you must either use the actual value from the Terraform state or map the policy item you want to keep to its current position in the state file array. ## Argument Reference @@ -225,11 +245,11 @@ resource "mongodbatlas_cloud_provider_snapshot_backup_policy" "test" { ### Policies * `policies` - (Required) Contains a document for each backup policy item in the desired updated backup policy.
-* `policies.#.id` - (Required) Unique identifier of the backup policy that you want to update. policies.#.id is a value obtained via the mongodbatlas_cluster resource. `cloud_backup` of the mongodbatlas_cluster resource must be set to true. See the example above for how to refer to the mongodbatlas_cluster resource for policies.#.id +* `policies.#.id` - (Required) Unique identifier of the backup policy that you want to update. policies.#.id is a value obtained via the mongodbatlas_advanced_cluster resource. `backup_enabled` of the mongodbatlas_advanced_cluster resource must be set to true. See the example above for how to refer to the mongodbatlas_advanced_cluster resource for policies.#.id #### Policy Item * `policies.#.policy_item` - (Required) Array of backup policy items. -* `policies.#.policy_item.#.id` - (Required) Unique identifier of the backup policy item. `policies.#.policy_item.#.id` is a value obtained via the mongodbatlas_cluster resource. `cloud_backup` of the mongodbatlas_cluster resource must be set to true. See the example above for how to refer to the mongodbatlas_cluster resource for policies.#.policy_item.#.id +* `policies.#.policy_item.#.id` - (Required) Unique identifier of the backup policy item. `policies.#.policy_item.#.id` is a value obtained via the mongodbatlas_advanced_cluster resource. `backup_enabled` of the mongodbatlas_advanced_cluster resource must be set to true. See the example above for how to refer to the mongodbatlas_advanced_cluster resource for policies.#.policy_item.#.id * `policies.#.policy_item.#.frequency_interval` - (Required) Desired frequency of the new backup policy item specified by frequencyType. * `policies.#.policy_item.#.frequency_type` - (Required) Frequency associated with the backup policy item. One of the following values: hourly, daily, weekly or monthly. * `policies.#.policy_item.#.retention_unit` - (Required) Scope of the backup policy item: days, weeks, or months.
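Because these ids shift position when items are removed (see the note above), it can help to expose them before editing the policy. A minimal sketch, assuming the resource names from the examples above and the attribute path shown there:

```terraform
# Hypothetical helper output: lists every policy item id currently in state
# so the one you plan to keep can be copied before its index changes.
output "backup_policy_item_ids" {
  value = mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy[0].policies[0].policy_item[*].id
}
```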
diff --git a/website/docs/r/cloud_provider_snapshot_restore_job.html.markdown b/docs/resources/cloud_provider_snapshot_restore_job.md similarity index 69% rename from website/docs/r/cloud_provider_snapshot_restore_job.html.markdown rename to docs/resources/cloud_provider_snapshot_restore_job.md index 218dae9e61..00d0f36875 100644 --- a/website/docs/r/cloud_provider_snapshot_restore_job.html.markdown +++ b/docs/resources/cloud_provider_snapshot_restore_job.md @@ -1,9 +1,5 @@ --- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: cloud_provider_snapshot_restore_job" -sidebar_current: "docs-mongodbatlas-resource-cloud_provider_snapshot_restore_job" -description: |- - Provides a Cloud Backup Snapshot Restore Job resource. +subcategory: "Deprecated" --- **WARNING:** This resource is deprecated, use `mongodbatlas_cloud_backup_snapshot_restore_job` @@ -27,66 +23,82 @@ description: |- ### Example automated delivery type. ```terraform - resource "mongodbatlas_cluster" "my_cluster" { - project_id = "5cf5a45a9ccf6400e60981b6" - name = "MyCluster" - - //Provider Settings "block" - provider_name = "AWS" - provider_region_name = "EU_WEST_2" - provider_instance_size_name = "M10" - cloud_backup = true // enable cloud backup snapshots - } - - resource "mongodbatlas_cloud_provider_snapshot" "test" { - project_id = mongodbatlas_cluster.my_cluster.project_id - cluster_name = mongodbatlas_cluster.my_cluster.name - description = "myDescription" - retention_in_days = 1 - } - - resource "mongodbatlas_cloud_provider_snapshot_restore_job" "test" { - project_id = mongodbatlas_cloud_provider_snapshot.test.project_id - cluster_name = mongodbatlas_cloud_provider_snapshot.test.cluster_name - snapshot_id = mongodbatlas_cloud_provider_snapshot.test.snapshot_id - delivery_type_config { - automated = true - target_cluster_name = "MyCluster" - target_project_id = "5cf5a45a9ccf6400e60981b6" +resource "mongodbatlas_advanced_cluster" "my_cluster" { + project_id = "" + name = "MyCluster" + cluster_type 
= "REPLICASET" + backup_enabled = true # enable cloud backup snapshots + + replication_specs { + region_configs { + priority = 7 + provider_name = "AWS" + region_name = "EU_WEST_2" + electable_specs { + instance_size = "M10" + node_count = 3 + } } - depends_on = [mongodbatlas_cloud_provider_snapshot.test] } +} + +resource "mongodbatlas_cloud_provider_snapshot" "test" { + project_id = mongodbatlas_advanced_cluster.my_cluster.project_id + cluster_name = mongodbatlas_advanced_cluster.my_cluster.name + description = "myDescription" + retention_in_days = 1 +} + +resource "mongodbatlas_cloud_provider_snapshot_restore_job" "test" { + project_id = mongodbatlas_cloud_provider_snapshot.test.project_id + cluster_name = mongodbatlas_cloud_provider_snapshot.test.cluster_name + snapshot_id = mongodbatlas_cloud_provider_snapshot.test.snapshot_id + delivery_type_config { + automated = true + target_cluster_name = "MyCluster" + target_project_id = "5cf5a45a9ccf6400e60981b6" + } + depends_on = [mongodbatlas_cloud_provider_snapshot.test] +} ``` ### Example download delivery type. 
```terraform - resource "mongodbatlas_cluster" "my_cluster" { - project_id = "5cf5a45a9ccf6400e60981b6" - name = "MyCluster" - - //Provider Settings "block" - provider_name = "AWS" - provider_region_name = "EU_WEST_2" - provider_instance_size_name = "M10" - cloud_backup = true // enable cloud backup snapshots - } - - resource "mongodbatlas_cloud_provider_snapshot" "test" { - project_id = mongodbatlas_cluster.my_cluster.project_id - cluster_name = mongodbatlas_cluster.my_cluster.name - description = "myDescription" - retention_in_days = 1 - } - - resource "mongodbatlas_cloud_provider_snapshot_restore_job" "test" { - project_id = mongodbatlas_cloud_provider_snapshot.test.project_id - cluster_name = mongodbatlas_cloud_provider_snapshot.test.cluster_name - snapshot_id = mongodbatlas_cloud_provider_snapshot.test.snapshot_id - delivery_type_config { - download = true +resource "mongodbatlas_advanced_cluster" "my_cluster" { + project_id = "" + name = "MyCluster" + cluster_type = "REPLICASET" + backup_enabled = true # enable cloud backup snapshots + + replication_specs { + region_configs { + priority = 7 + provider_name = "AWS" + region_name = "EU_WEST_2" + electable_specs { + instance_size = "M10" + node_count = 3 + } } } +} + +resource "mongodbatlas_cloud_provider_snapshot" "test" { + project_id = mongodbatlas_advanced_cluster.my_cluster.project_id + cluster_name = mongodbatlas_advanced_cluster.my_cluster.name + description = "myDescription" + retention_in_days = 1 +} + +resource "mongodbatlas_cloud_provider_snapshot_restore_job" "test" { + project_id = mongodbatlas_cloud_provider_snapshot.test.project_id + cluster_name = mongodbatlas_cloud_provider_snapshot.test.cluster_name + snapshot_id = mongodbatlas_cloud_provider_snapshot.test.snapshot_id + delivery_type_config { + download = true + } +} ``` ## Argument Reference diff --git a/website/docs/r/cluster.html.markdown b/docs/resources/cluster.md similarity index 99% rename from website/docs/r/cluster.html.markdown rename 
to docs/resources/cluster.md index 58778f380f..9faad9b92f 100644 --- a/website/docs/r/cluster.html.markdown +++ b/docs/resources/cluster.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: cluster" -sidebar_current: "docs-mongodbatlas-resource-cluster" -description: |- - Provides a Cluster resource. ---- - # Resource: mongodbatlas_cluster `mongodbatlas_cluster` provides a Cluster resource. The resource lets you create, edit and delete clusters. The resource requires your Project ID. diff --git a/website/docs/r/cluster_outage_simulation.html.markdown b/docs/resources/cluster_outage_simulation.md similarity index 94% rename from website/docs/r/cluster_outage_simulation.html.markdown rename to docs/resources/cluster_outage_simulation.md index 0410e4b1e3..ee2a5bc3d3 100644 --- a/website/docs/r/cluster_outage_simulation.html.markdown +++ b/docs/resources/cluster_outage_simulation.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: cluster_outage_simulation" -sidebar_current: "docs-mongodbatlas-resource-federated-database-instance" -description: |- - Provides a Cluster Outage Simulation resource. ---- - # Resource: mongodbatlas_cluster_outage_simulation `mongodbatlas_cluster_outage_simulation` provides a Cluster Outage Simulation resource. For more details see https://www.mongodb.com/docs/atlas/tutorial/test-resilience/simulate-regional-outage/ diff --git a/website/docs/r/custom_db_role.html.markdown b/docs/resources/custom_db_role.md similarity index 96% rename from website/docs/r/custom_db_role.html.markdown rename to docs/resources/custom_db_role.md index abe09dea92..9125e1fb80 100644 --- a/website/docs/r/custom_db_role.html.markdown +++ b/docs/resources/custom_db_role.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: custom_db_role" -sidebar_current: "docs-mongodbatlas-resource-custom-db-role" -description: |- - Provides a Custom DB Role resource. 
---- - # Resource: mongodbatlas_custom_db_role `mongodbatlas_custom_db_role` provides a Custom DB Role resource. The customDBRoles resource lets you retrieve, create and modify the custom MongoDB roles in your cluster. Use custom MongoDB roles to specify custom sets of actions which cannot be described by the built-in Atlas database user privileges. diff --git a/website/docs/r/custom_dns_configuration_cluster_aws.markdown b/docs/resources/custom_dns_configuration_cluster_aws.md similarity index 84% rename from website/docs/r/custom_dns_configuration_cluster_aws.markdown rename to docs/resources/custom_dns_configuration_cluster_aws.md index 083932cd28..b9337d9060 100644 --- a/website/docs/r/custom_dns_configuration_cluster_aws.markdown +++ b/docs/resources/custom_dns_configuration_cluster_aws.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: custom_dns_configuration_cluster_aws" -sidebar_current: "docs-mongodbatlas-resource-custom_dns_configuration_cluster_aws" -description: |- - Provides a Custom DNS Configuration for Atlas Clusters on AWS resource. ---- - # Resource: mongodbatlas_custom_dns_configuration_cluster_aws `mongodbatlas_custom_dns_configuration_cluster_aws` provides a Custom DNS Configuration for Atlas Clusters on AWS resource. This represents a Custom DNS Configuration for Atlas Clusters on AWS that can be updated in an Atlas project. diff --git a/website/docs/r/data_lake_pipeline.html.markdown b/docs/resources/data_lake_pipeline.md similarity index 90% rename from website/docs/r/data_lake_pipeline.html.markdown rename to docs/resources/data_lake_pipeline.md index cd531ce932..7b94a14291 100644 --- a/website/docs/r/data_lake_pipeline.html.markdown +++ b/docs/resources/data_lake_pipeline.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: data_lake" -sidebar_current: "docs-mongodbatlas-resource-data-lake" -description: |- - Provides a Data Lake Pipeline resource. 
---- - # Resource: mongodbatlas_data_lake_pipeline `mongodbatlas_data_lake_pipeline` provides a Data Lake Pipeline resource. @@ -22,16 +14,23 @@ resource "mongodbatlas_project" "projectTest" { } resource "mongodbatlas_advanced_cluster" "automated_backup_test" { - project_id = "63f4d4a47baeac59406dc131" - name = "automated-backup-test" - - provider_name = "GCP" - provider_region_name = "US_EAST_4" - provider_instance_size_name = "M10" - cloud_backup = true // enable cloud backup snapshots - mongo_db_major_version = "7.0" + project_id = var.project_id + name = "automated-backup-test" + cluster_type = "REPLICASET" + backup_enabled = true # enable cloud backup snapshots + + replication_specs { + region_configs { + priority = 7 + provider_name = "GCP" + region_name = "US_EAST_4" + electable_specs { + instance_size = "M10" + node_count = 3 + } + } } - +} resource "mongodbatlas_data_lake_pipeline" "pipeline" { project_id = mongodbatlas_project.projectTest.project_id @@ -46,7 +45,7 @@ resource "mongodbatlas_data_lake_pipeline" "pipeline" { source { type = "ON_DEMAND_CPS" - cluster_name = mongodbatlas_cluster.automated_backup_test.name + cluster_name = mongodbatlas_advanced_cluster.automated_backup_test.name database_name = "sample_airbnb" collection_name = "listingsAndReviews" } diff --git a/website/docs/r/database_user.html.markdown b/docs/resources/database_user.md similarity index 98% rename from website/docs/r/database_user.html.markdown rename to docs/resources/database_user.md index 88f39adbbb..8b812f1b2d 100644 --- a/website/docs/r/database_user.html.markdown +++ b/docs/resources/database_user.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: database_user" -sidebar_current: "docs-mongodbatlas-resource-database-user" -description: |- - Provides a Database User resource. ---- - # Resource: mongodbatlas_database_user `mongodbatlas_database_user` provides a Database User resource. 
This represents a database user which will be applied to all clusters within the project. diff --git a/website/docs/r/encryption_at_rest.html.markdown b/docs/resources/encryption_at_rest.md similarity index 92% rename from website/docs/r/encryption_at_rest.html.markdown rename to docs/resources/encryption_at_rest.md index 9d7acde8c3..ea85a74fa2 100644 --- a/website/docs/r/encryption_at_rest.html.markdown +++ b/docs/resources/encryption_at_rest.md @@ -1,14 +1,6 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: encryption_at_rest" -sidebar_current: "docs-mongodbatlas-resource-encryption_at_rest" -description: |- - Provides an Encryption At Rest resource. ---- - # Resource: mongodbatlas_encryption_at_rest -`mongodbatlas_encryption_at_rest` Allows management of encryption at rest for an Atlas project with one of the following providers: +`mongodbatlas_encryption_at_rest` allows management of encryption at rest for an Atlas project with one of the following providers: [Amazon Web Services Key Management Service](https://docs.atlas.mongodb.com/security-aws-kms/#security-aws-kms) [Azure Key Vault](https://docs.atlas.mongodb.com/security-azure-kms/#security-azure-kms) @@ -92,25 +84,26 @@ resource "mongodbatlas_encryption_at_rest" "example" { } } -resource "mongodbatlas_cluster" "example_cluster" { - project_id = mongodbatlas_encryption_at_rest.example.project_id - name = "CLUSTER NAME" - cluster_type = "REPLICASET" +resource "mongodbatlas_advanced_cluster" "example_cluster" { + project_id = mongodbatlas_encryption_at_rest.example.project_id + name = "CLUSTER NAME" + cluster_type = "REPLICASET" + backup_enabled = true + encryption_at_rest_provider = "AZURE" + replication_specs { - num_shards = 1 - regions_config { - region_name = "REGION" - electable_nodes = 3 - priority = 7 - read_only_nodes = 0 + region_configs { + priority = 7 + provider_name = "AZURE" + region_name = "REGION" + electable_specs { + instance_size = "M10" + node_count = 3 + } } } - - 
provider_name = "AZURE" - provider_instance_size_name = "M10" - mongo_db_major_version = "7.0" - encryption_at_rest_provider = "AZURE" } + ``` ## Argument Reference diff --git a/website/docs/r/event_trigger.html.markdown b/docs/resources/event_trigger.md similarity index 97% rename from website/docs/r/event_trigger.html.markdown rename to docs/resources/event_trigger.md index bb7f1146ad..3a310b6bd2 100644 --- a/website/docs/r/event_trigger.html.markdown +++ b/docs/resources/event_trigger.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: event_trigger" -sidebar_current: "docs-mongodbatlas-resource-event-trigger" -description: |- - Provides a Event Trigger resource. ---- - # Resource: mongodbatlas_event_trigger `mongodbatlas_event_trigger` provides a Event Trigger resource. diff --git a/website/docs/r/federated_database_instance.html.markdown b/docs/resources/federated_database_instance.md similarity index 98% rename from website/docs/r/federated_database_instance.html.markdown rename to docs/resources/federated_database_instance.md index bff5078e0a..b69cad0a9d 100644 --- a/website/docs/r/federated_database_instance.html.markdown +++ b/docs/resources/federated_database_instance.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: federated_database_instance" -sidebar_current: "docs-mongodbatlas-resource-federated-database-instance" -description: |- - Provides a Federated Database Instance resource. ---- - # Resource: mongodbatlas_federated_database_instance `mongodbatlas_federated_database_instance` provides a Federated Database Instance resource. 
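A hedged minimal sketch of the federated database instance resource, backed by an S3 bucket (the role id and bucket name are placeholders for an existing cloud provider access setup):

```terraform
# Minimal sketch: federated database instance reading from S3.
# role_id and test_s3_bucket are placeholders, not real values.
resource "mongodbatlas_federated_database_instance" "example" {
  project_id = ""
  name       = "example-instance"

  cloud_provider_config {
    aws {
      role_id        = ""
      test_s3_bucket = "example-bucket"
    }
  }
}
```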
diff --git a/website/docs/r/federated_query_limit.html.markdown b/docs/resources/federated_query_limit.md similarity index 92% rename from website/docs/r/federated_query_limit.html.markdown rename to docs/resources/federated_query_limit.md index 333ef7c1e1..de011327c2 100644 --- a/website/docs/r/federated_query_limit.html.markdown +++ b/docs/resources/federated_query_limit.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: federated_database_query_limit" -sidebar_current: "docs-mongodbatlas-resource-federated-query-limit" -description: |- - Provides a Federated Database Instance Query Limit. ---- - # Resource: mongodbatlas_federated_query_limit `mongodbatlas_federated_query_limit` provides a Federated Database Instance Query Limits resource. To learn more about Atlas Data Federation see https://www.mongodb.com/docs/atlas/data-federation/overview/. diff --git a/website/docs/r/federated_settings_identity_provider.html.markdown b/docs/resources/federated_settings_identity_provider.md similarity index 94% rename from website/docs/r/federated_settings_identity_provider.html.markdown rename to docs/resources/federated_settings_identity_provider.md index 9be3d2d0a3..cef768d251 100644 --- a/website/docs/r/federated_settings_identity_provider.html.markdown +++ b/docs/resources/federated_settings_identity_provider.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: mongodbatlas_federated_settings_identity_provider" -sidebar_current: "docs-mongodbatlas-federated-settings-identity-provider" -description: |- - Provides a federated settings Identity Provider resource. ---- - # Resource: mongodbatlas_federated_settings_identity_provider `mongodbatlas_federated_settings_identity_provider` provides an Atlas federated settings identity provider resource provides a subset of settings to be maintained post import of the existing resource. 
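As a rough sketch of the subset of settings maintained after import (all ids, URLs, and domain values below are placeholders; attribute names follow the provider's published schema):

```terraform
# Hedged sketch: SAML identity provider settings managed post-import.
# federation_settings_id and the Okta URLs are placeholders.
resource "mongodbatlas_federated_settings_identity_provider" "example" {
  federation_settings_id       = ""
  name                         = "SAML-IdP"
  associated_domains           = ["example.com"]
  sso_debug_enabled            = true
  status                       = "ACTIVE"
  sso_url                      = "https://example.oktapreview.com/app/sso/saml"
  issuer_uri                   = "https://www.okta.com/exk0000000000000000"
  request_binding              = "HTTP-POST"
  response_signature_algorithm = "SHA-256"
}
```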
diff --git a/website/docs/r/federated_settings_org_config.html.markdown b/docs/resources/federated_settings_org_config.md similarity index 93% rename from website/docs/r/federated_settings_org_config.html.markdown rename to docs/resources/federated_settings_org_config.md index e025b5747d..924c5a3252 100644 --- a/website/docs/r/federated_settings_org_config.html.markdown +++ b/docs/resources/federated_settings_org_config.md @@ -1,16 +1,7 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: mongodbatlas_federated_settings_org_config" -sidebar_current: "docs-mongodbatlas-resource-federated-settings-org-config" -description: |- - Provides a federated settings Organization Configuration. ---- - # Resource: mongodbatlas_federated_settings_org_config `mongodbatlas_federated_settings_org_config` provides an Federated Settings Identity Providers datasource. Atlas Cloud Federated Settings Identity Providers provides federated settings outputs for the configured Identity Providers. - ## Example Usage ~> **IMPORTANT** You **MUST** import this resource before you can manage it with this provider. diff --git a/website/docs/r/federated_settings_org_role_mapping.html.markdown b/docs/resources/federated_settings_org_role_mapping.md similarity index 90% rename from website/docs/r/federated_settings_org_role_mapping.html.markdown rename to docs/resources/federated_settings_org_role_mapping.md index 9087a55471..e54277adb1 100644 --- a/website/docs/r/federated_settings_org_role_mapping.html.markdown +++ b/docs/resources/federated_settings_org_role_mapping.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: mongodbatlas_federated_settings_org_role_mapping" -sidebar_current: "docs-mongodbatlas-resource-federated-settings-org-role-mapping" -description: |- - Provides a federated settings Role Mapping resource. 
---- - # Resource: mongodbatlas_federated_settings_org_role_mapping `mongodbatlas_federated_settings_org_role_mapping` provides an Role Mapping resource. This allows organization role mapping to be created. diff --git a/website/docs/r/global_cluster_config.html.markdown b/docs/resources/global_cluster_config.md similarity index 95% rename from website/docs/r/global_cluster_config.html.markdown rename to docs/resources/global_cluster_config.md index 18fe29f942..313e5943e4 100644 --- a/website/docs/r/global_cluster_config.html.markdown +++ b/docs/resources/global_cluster_config.md @@ -1,16 +1,7 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: global_cluster_config" -sidebar_current: "docs-mongodbatlas-resource-global-cluster-config" -description: |- - Provides a Global Cluster Configuration resource. ---- - # Resource: mongodbatlas_global_cluster_config `mongodbatlas_global_cluster_config` provides a Global Cluster Configuration resource. - -> **NOTE:** Groups and projects are synonymous terms. You may find group_id in the official documentation. -> **NOTE:** This resource can only be used with Atlas-managed clusters. See doc for `global_cluster_self_managed_sharding` attribute in [`mongodbatlas_advanced_cluster` resource](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/advanced_cluster) for more info. diff --git a/website/docs/r/ldap_configuration.html.markdown b/docs/resources/ldap_configuration.md similarity index 89% rename from website/docs/r/ldap_configuration.html.markdown rename to docs/resources/ldap_configuration.md index 553ea8574d..a6d2642801 100644 --- a/website/docs/r/ldap_configuration.html.markdown +++ b/docs/resources/ldap_configuration.md @@ -1,14 +1,6 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: ldap-configuration" -sidebar_current: "docs-mongodbatlas-resource-ldap-configuration" -description: |- - Provides a LDAP Configuration resource. 
---- - # Resource: mongodbatlas_ldap_configuration -`mongodbatlas_ldap_configuration` provides an LDAP Configuration resource. This allows an LDAP configuration for an Atlas project to be crated and managed. This endpoint doesn’t verify connectivity using the provided LDAP over TLS configuration details. To verify a configuration before saving it, use the resource to [verify](https://github.com/mongodb/terraform-provider-mongodbatlas/blob/INTMDB-114/website/docs/r/ldap_verify.html.markdown) the LDAP configuration. +`mongodbatlas_ldap_configuration` provides an LDAP Configuration resource. This allows an LDAP configuration for an Atlas project to be created and managed. This endpoint doesn’t verify connectivity using the provided LDAP over TLS configuration details. To verify a configuration before saving it, use the resource to [verify](https://github.com/mongodb/terraform-provider-mongodbatlas/blob/master/docs/resources/ldap_verify.md) the LDAP configuration. ## Example Usage diff --git a/website/docs/r/ldap_verify.html.markdown b/docs/resources/ldap_verify.md similarity index 83% rename from website/docs/r/ldap_verify.html.markdown rename to docs/resources/ldap_verify.md index 0109936e31..681a2d0223 100644 --- a/website/docs/r/ldap_verify.html.markdown +++ b/docs/resources/ldap_verify.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: ldap-verify" -sidebar_current: "docs-mongodbatlas-resource-ldap-verify" -description: |- - Provides a LDAP Verify resource. ---- - # Resource: mongodbatlas_ldap_verify `mongodbatlas_ldap_verify` provides an LDAP Verify resource. This allows a a verification of an LDAP configuration over TLS for an Atlas project. Atlas retains only the most recent request for each project. 
@@ -18,15 +10,23 @@ resource "mongodbatlas_project" "test" { org_id = "ORG ID" } -resource "mongodbatlas_cluster" "test" { - project_id = mongodbatlas_project.test.id - name = "NAME OF THE CLUSTER" - - // Provider Settings "block" - provider_name = "AWS" - provider_region_name = "US_EAST_2" - provider_instance_size_name = "M10" - cloud_backup = true //enable cloud provider snapshots +resource "mongodbatlas_advanced_cluster" "test" { + project_id = mongodbatlas_project.test.id + name = "NAME OF THE CLUSTER" + cluster_type = "REPLICASET" + backup_enabled = true # enable cloud backup snapshots + + replication_specs { + region_configs { + priority = 7 + provider_name = "AWS" + region_name = "US_EAST_1" + electable_specs { + instance_size = "M10" + node_count = 3 + } + } + } } resource "mongodbatlas_ldap_verify" "test" { @@ -35,7 +35,7 @@ resource "mongodbatlas_ldap_verify" "test" { port = 636 bind_username = "USERNAME" bind_password = "PASSWORD" - depends_on = [mongodbatlas_cluster.test] + depends_on = [ mongodbatlas_advanced_cluster.test ] } ``` diff --git a/website/docs/r/maintenance_window.html.markdown b/docs/resources/maintenance_window.md similarity index 89% rename from website/docs/r/maintenance_window.html.markdown rename to docs/resources/maintenance_window.md index af6c7b3519..463ea4c97e 100644 --- a/website/docs/r/maintenance_window.html.markdown +++ b/docs/resources/maintenance_window.md @@ -1,14 +1,8 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: maintenance_window" -sidebar_current: "docs-mongodbatlas-resource-maintenance_window" -description: |- - Provides an Maintenance Window resource. ---- - # Resource: mongodbatlas_maintenance_window -`mongodbatlas_maintenance_window` provides a resource to schedule a maintenance window for your MongoDB Atlas Project and/or set to defer a scheduled maintenance up to two times. 
+`mongodbatlas_maintenance_window` provides a resource to schedule the maintenance window for your MongoDB Atlas Project and/or defer a scheduled maintenance up to two times. Please refer to [Maintenance Windows](https://www.mongodb.com/docs/atlas/tutorial/cluster-maintenance-window/#configure-maintenance-window) documentation for more details. + +-> **NOTE:** Only a single maintenance window resource can be defined per project. -> **NOTE:** Groups and projects are synonymous terms. You may find `groupId` in the official documentation. diff --git a/website/docs/r/network_container.html.markdown b/docs/resources/network_container.md similarity index 96% rename from website/docs/r/network_container.html.markdown rename to docs/resources/network_container.md index c0d0da72cf..f43e35c89b 100644 --- a/website/docs/r/network_container.html.markdown +++ b/docs/resources/network_container.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: network_container" -sidebar_current: "docs-mongodbatlas-resource-network-container" -description: |- - Provides a Network Peering resource. ---- - # Resource: mongodbatlas_network_container `mongodbatlas_network_container` provides a Network Peering Container resource. The resource lets you create, edit and delete network peering containers. You must delete network peering containers before creating clusters in your project. You can't delete a network peering container if your project contains clusters. The resource requires your Project ID. Each cloud provider requires slightly different attributes so read the argument reference carefully.
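The container attributes differ per provider; a hedged AWS sketch (the project id, CIDR block, and region are placeholders):

```terraform
# Minimal sketch of an AWS network peering container.
# The CIDR block must not overlap with your VPC or other containers.
resource "mongodbatlas_network_container" "example" {
  project_id       = ""
  atlas_cidr_block = "10.8.0.0/21"
  provider_name    = "AWS"
  region_name      = "US_EAST_1"
}
```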
diff --git a/website/docs/r/network_peering.html.markdown b/docs/resources/network_peering.md similarity index 81% rename from website/docs/r/network_peering.html.markdown rename to docs/resources/network_peering.md index a7f211f03e..57b73ca768 100644 --- a/website/docs/r/network_peering.html.markdown +++ b/docs/resources/network_peering.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: network_peering" -sidebar_current: "docs-mongodbatlas-resource-network-peering" -description: |- - Provides a Network Peering resource. ---- - # Resource: mongodbatlas_network_peering `mongodbatlas_network_peering` provides a Network Peering Connection resource. The resource lets you create, edit and delete network peering connections. The resource requires your Project ID. @@ -110,30 +102,25 @@ resource "google_compute_network_peering" "peering" { } # Create the cluster once the peering connection is completed -resource "mongodbatlas_cluster" "test" { - project_id = local.project_id - name = "terraform-manually-test" - num_shards = 1 - - cluster_type = "REPLICASET" +resource "mongodbatlas_advanced_cluster" "test" { + project_id = local.project_id + name = "terraform-manually-test" + cluster_type = "REPLICASET" + backup_enabled = true + replication_specs { - num_shards = 1 - regions_config { - region_name = "US_EAST_4" - electable_nodes = 3 - priority = 7 - read_only_nodes = 0 + region_configs { + priority = 7 + provider_name = "GCP" + region_name = "US_EAST_4" + electable_specs { + instance_size = "M10" + node_count = 3 + } } } - - auto_scaling_disk_gb_enabled = true - mongo_db_major_version = "7.0" - - # Provider Settings "block" - provider_name = "GCP" - provider_instance_size_name = "M10" - depends_on = ["google_compute_network_peering.peering"] + depends_on = [ google_compute_network_peering.peering ] } # Private connection strings are not available w/ GCP until the reciprocal @@ -174,32 +161,26 @@ resource "mongodbatlas_network_peering" "test" { } # 
Create the cluster once the peering connection is completed -resource "mongodbatlas_cluster" "test" { - project_id = local.project_id - name = "terraform-manually-test" +resource "mongodbatlas_advanced_cluster" "test" { + project_id = local.project_id + name = "terraform-manually-test" + cluster_type = "REPLICASET" + backup_enabled = true - cluster_type = "REPLICASET" replication_specs { - num_shards = 1 - regions_config { - region_name = "US_EAST_2" - electable_nodes = 3 - priority = 7 - read_only_nodes = 0 + region_configs { + priority = 7 + provider_name = "AZURE" + region_name = "US_EAST_2" + electable_specs { + instance_size = "M10" + node_count = 3 + } } } - auto_scaling_disk_gb_enabled = true - mongo_db_major_version = "7.0" - - # Provider Settings "block" - provider_name = "AZURE" - provider_disk_type_name = "P4" - provider_instance_size_name = "M10" - - depends_on = ["mongodbatlas_network_peering.test"] + depends_on = [ mongodbatlas_network_peering.test ] } - ``` ## Example Usage - Peering Connection Only, Container Exists @@ -209,27 +190,23 @@ You can create a peering connection if an appropriate container for your cloud p ```terraform # Create an Atlas cluster, this creates a container if one # does not yet exist for this AWS region -resource "mongodbatlas_cluster" "test" { - project_id = local.project_id - name = "terraform-test" - - cluster_type = "REPLICASET" +resource "mongodbatlas_advanced_cluster" "test" { + project_id = local.project_id + name = "terraform-manually-test" + cluster_type = "REPLICASET" + backup_enabled = true + replication_specs { - num_shards = 1 - regions_config { - region_name = "US_EAST_2" - electable_nodes = 3 - priority = 7 - read_only_nodes = 0 + region_configs { + priority = 7 + provider_name = "AWS" + region_name = "US_EAST_1" + electable_specs { + instance_size = "M10" + node_count = 3 + } } } - - auto_scaling_disk_gb_enabled = false - mongo_db_major_version = "7.0" - - //Provider Settings "block" - provider_name = "AWS" - 
provider_instance_size_name = "M10" } # the following assumes an AWS provider is configured @@ -243,7 +220,7 @@ resource "aws_default_vpc" "default" { resource "mongodbatlas_network_peering" "mongo_peer" { accepter_region_name = "us-east-2" project_id = local.project_id - container_id = mongodbatlas_cluster.test.container_id + container_id = one(values(mongodbatlas_advanced_cluster.test.container_id)) provider_name = "AWS" route_table_cidr_block = "172.31.0.0/16" vpc_id = aws_default_vpc.default.id @@ -265,27 +242,23 @@ resource "aws_vpc_peering_connection_accepter" "aws_peer" { ```terraform # Create an Atlas cluster, this creates a container if one # does not yet exist for this GCP -resource "mongodbatlas_cluster" "test" { - project_id = local.project_id - name = "terraform-manually-test" +resource "mongodbatlas_advanced_cluster" "test" { + project_id = local.project_id + name = "terraform-manually-test" + cluster_type = "REPLICASET" + backup_enabled = true - cluster_type = "REPLICASET" replication_specs { - num_shards = 1 - regions_config { - region_name = "US_EAST_2" - electable_nodes = 3 - priority = 7 - read_only_nodes = 0 + region_configs { + priority = 7 + provider_name = "GCP" + region_name = "US_EAST_2" + electable_specs { + instance_size = "M10" + node_count = 3 + } } } - - auto_scaling_disk_gb_enabled = true - mongo_db_major_version = "7.0" - - //Provider Settings "block" - provider_name = "GCP" - provider_instance_size_name = "M10" } # Create the peering connection request @@ -293,7 +266,7 @@ resource "mongodbatlas_network_peering" "test" { project_id = local.project_id atlas_cidr_block = "192.168.0.0/18" - container_id = mongodbatlas_cluster.test.container_id + container_id = one(values(mongodbatlas_advanced_cluster.test.replication_specs[0].container_id)) provider_name = "GCP" gcp_project_id = local.GCP_PROJECT_ID network_name = "default" @@ -321,33 +294,29 @@ resource "google_compute_network_peering" "peering" { # Create an Atlas cluster, this 
creates a container if one # does not yet exist for this AZURE region -resource "mongodbatlas_cluster" "test" { - project_id = local.project_id - name = "cluster-azure" +resource "mongodbatlas_advanced_cluster" "test" { + project_id = local.project_id + name = "cluster-azure" + cluster_type = "REPLICASET" + backup_enabled = true - cluster_type = "REPLICASET" replication_specs { - num_shards = 1 - regions_config { - region_name = "US_EAST_2" - electable_nodes = 3 - priority = 7 - read_only_nodes = 0 + region_configs { + priority = 7 + provider_name = "AZURE" + region_name = "US_EAST_2" + electable_specs { + instance_size = "M10" + node_count = 3 + } } } - - auto_scaling_disk_gb_enabled = false - mongo_db_major_version = "7.0" - - //Provider Settings "block" - provider_name = "AZURE" - provider_instance_size_name = "M10" } # Create the peering connection request resource "mongodbatlas_network_peering" "test" { project_id = local.project_id - container_id = mongodbatlas_cluster.test.container_id + container_id = one(values(mongodbatlas_advanced_cluster.test.replication_specs[0].container_id)) provider_name = "AZURE" azure_directory_id = local.AZURE_DIRECTORY_ID azure_subscription_id = local.AZURE_SUBSCRIPTION_ID diff --git a/website/docs/r/online_archive.html.markdown b/docs/resources/online_archive.md similarity index 97% rename from website/docs/r/online_archive.html.markdown rename to docs/resources/online_archive.md index 96e3dd5875..0a8f3a86d0 100644 --- a/website/docs/r/online_archive.html.markdown +++ b/docs/resources/online_archive.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: mongodbatlas_online_archive" -sidebar_current: "docs-mongodbatlas-resource-online-archive" -description: |- - Provides a Online Archive resource for creation, update, and delete ---- - # Resource: mongodbatlas_online_archive `mongodbatlas_online_archive` resource provides access to create, edit, pause and resume an online archive for a collection. 
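The online archive intro above can be sketched as follows; the database, collection, date field, and retention values are illustrative assumptions, not values from this patch:

```terraform
# Hedged sketch: archive documents whose date field is older than 30 days.
resource "mongodbatlas_online_archive" "example" {
  project_id   = var.project_id
  cluster_name = "MyCluster"   # assumes an existing cluster
  db_name      = "sample_db"   # illustrative database name
  coll_name    = "events"      # illustrative collection name

  criteria {
    type              = "DATE"
    date_field        = "created_at" # illustrative field name
    expire_after_days = 30
  }
}
```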
diff --git a/website/docs/r/org_invitation.html.markdown b/docs/resources/org_invitation.md similarity index 93% rename from website/docs/r/org_invitation.html.markdown rename to docs/resources/org_invitation.md index 073a19ee19..82608ee0de 100644 --- a/website/docs/r/org_invitation.html.markdown +++ b/docs/resources/org_invitation.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: org_invitation" -sidebar_current: "docs-mongodbatlas-resource-organization-invitation" -description: |- - Provides an Atlas Organization Invitation resource. ---- - # Resource: mongodbatlas_org_invitation `mongodbatlas_org_invitation` invites a user to join an Atlas organization. diff --git a/website/docs/r/organization.html.markdown b/docs/resources/organization.md similarity index 95% rename from website/docs/r/organization.html.markdown rename to docs/resources/organization.md index ca4c602031..8e99c2aad6 100644 --- a/website/docs/r/organization.html.markdown +++ b/docs/resources/organization.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: organization" -sidebar_current: "docs-mongodbatlas-resource-organization" -description: |- - Provides a Organization resource. ---- - # Resource: mongodbatlas_organization `mongodbatlas_organization` provides programmatic management (including creation) of a MongoDB Atlas Organization resource. 
diff --git a/website/docs/r/private_endpoint_regional_mode.html.markdown b/docs/resources/private_endpoint_regional_mode.md similarity index 96% rename from website/docs/r/private_endpoint_regional_mode.html.markdown rename to docs/resources/private_endpoint_regional_mode.md index 54c9109130..b4d2ae9ea1 100644 --- a/website/docs/r/private_endpoint_regional_mode.html.markdown +++ b/docs/resources/private_endpoint_regional_mode.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: private_endpoint_regional_mode" -sidebar_current: "docs-mongodbatlas-resource-private_endpoint_regional_mode" -description: |- - Provides a Private Endpoint Regional Mode resource ---- - # Resource: private_endpoint_regional_mode `mongodbatlas_private_endpoint_regional_mode` provides a Private Endpoint Regional Mode resource. This represents a regionalized private endpoint setting for a Project. Enable it to allow region specific private endpoints. diff --git a/website/docs/r/privatelink_endpoint.html.markdown b/docs/resources/privatelink_endpoint.md similarity index 96% rename from website/docs/r/privatelink_endpoint.html.markdown rename to docs/resources/privatelink_endpoint.md index 212fda2c8d..0b4eb81665 100644 --- a/website/docs/r/privatelink_endpoint.html.markdown +++ b/docs/resources/privatelink_endpoint.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: private_endpoint" -sidebar_current: "docs-mongodbatlas-resource-private_endpoint" -description: |- - Provides a Private Endpoint resource. ---- - # Resource: mongodbatlas_privatelink_endpoint `mongodbatlas_privatelink_endpoint` provides a Private Endpoint resource. This represents a [Private Endpoint Service](https://www.mongodb.com/docs/atlas/security-private-endpoint/#private-endpoint-concepts) that can be created in an Atlas project. 
diff --git a/website/docs/r/privatelink_endpoint_serverless.html.markdown b/docs/resources/privatelink_endpoint_serverless.md similarity index 92% rename from website/docs/r/privatelink_endpoint_serverless.html.markdown rename to docs/resources/privatelink_endpoint_serverless.md index d9b661810c..d5edd9dc3e 100644 --- a/website/docs/r/privatelink_endpoint_serverless.html.markdown +++ b/docs/resources/privatelink_endpoint_serverless.md @@ -1,12 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: privatelink_endpoint_serverless" -sidebar_current: "docs-mongodbatlas-datasource-privatelink-endpoint-serverless" -description: |- -Describes a Serverless PrivateLink Endpoint ---- - - # Resource: privatelink_endpoint_serverless `privatelink_endpoint_serverless` Provides a Serverless PrivateLink Endpoint resource. diff --git a/website/docs/r/privatelink_endpoint_service.html.markdown b/docs/resources/privatelink_endpoint_service.md similarity index 97% rename from website/docs/r/privatelink_endpoint_service.html.markdown rename to docs/resources/privatelink_endpoint_service.md index 7e5109c38d..c0bf2b960c 100644 --- a/website/docs/r/privatelink_endpoint_service.html.markdown +++ b/docs/resources/privatelink_endpoint_service.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: private_endpoint_link" -sidebar_current: "docs-mongodbatlas-resource-private_endpoint_interface_link" -description: |- - Provides a Private Endpoint Link resource. ---- - # Resource: mongodbatlas_privatelink_endpoint_service `mongodbatlas_privatelink_endpoint_service` provides a Private Endpoint Interface Link resource. This represents a Private Endpoint Interface Link, which adds one [Interface Endpoint](https://www.mongodb.com/docs/atlas/security-private-endpoint/#private-endpoint-concepts) to a private endpoint connection in an Atlas project. 
diff --git a/website/docs/r/privatelink_endpoint_service_data_federation_online_archive.html.markdown b/docs/resources/privatelink_endpoint_service_data_federation_online_archive.md similarity index 91% rename from website/docs/r/privatelink_endpoint_service_data_federation_online_archive.html.markdown rename to docs/resources/privatelink_endpoint_service_data_federation_online_archive.md index 8de1a15694..1e6dfe1022 100644 --- a/website/docs/r/privatelink_endpoint_service_data_federation_online_archive.html.markdown +++ b/docs/resources/privatelink_endpoint_service_data_federation_online_archive.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: mongodbatlas_privatelink_endpoint_service_data_federation_online_archive" -sidebar_current: "docs-mongodbatlas-resource-privatelink-endpoint-service-data-federation-online-archive" -description: |- - Provides a Privatelink Endpoint Service Data Federation Online Archive resource. ---- - # Resource: mongodbatlas_privatelink_endpoint_service_data_federation_online_archive `mongodbatlas_privatelink_endpoint_service_data_federation_online_archive` provides a Private Endpoint Service resource for Data Federation and Online Archive. The resource allows you to create and manage a private endpoint for Federated Database Instances and Online Archives to the specified project. 
diff --git a/website/docs/r/privatelink_endpoint_service_serverless.html.markdown b/docs/resources/privatelink_endpoint_service_serverless.md similarity index 96% rename from website/docs/r/privatelink_endpoint_service_serverless.html.markdown rename to docs/resources/privatelink_endpoint_service_serverless.md index 483e453c78..c541969aa0 100644 --- a/website/docs/r/privatelink_endpoint_service_serverless.html.markdown +++ b/docs/resources/privatelink_endpoint_service_serverless.md @@ -1,12 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: privatelink_endpoint_service_serverless" -sidebar_current: "docs-mongodbatlas-datasource-privatelink-endpoint-service-serverless" -description: |- -Describes a Serverless PrivateLink Endpoint Service ---- - - # Resource: privatelink_endpoint_service_serverless `privatelink_endpoint_service_serverless` Provides a Serverless PrivateLink Endpoint Service resource. diff --git a/website/docs/r/project.html.markdown b/docs/resources/project.md similarity index 98% rename from website/docs/r/project.html.markdown rename to docs/resources/project.md index 3ea692dc19..66c6eea61d 100644 --- a/website/docs/r/project.html.markdown +++ b/docs/resources/project.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: project" -sidebar_current: "docs-mongodbatlas-resource-project" -description: |- - Provides a Project resource. ---- - # Resource: mongodbatlas_project `mongodbatlas_project` provides a Project resource. This allows project to be created. 
diff --git a/website/docs/r/project_api_key.html.markdown b/docs/resources/project_api_key.md similarity index 88% rename from website/docs/r/project_api_key.html.markdown rename to docs/resources/project_api_key.md index c9b4069193..6404d3d0b9 100644 --- a/website/docs/r/project_api_key.html.markdown +++ b/docs/resources/project_api_key.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: project_api_key" -sidebar_current: "docs-mongodbatlas-resource-project-api-key" -description: |- - Creates and assigns the specified Atlas Organization API Key to the specified Project. Users with the Project Owner role in the project associated with the API key can use the organization API key to access the resources. ---- - # Resource: mongodbatlas_project_api_key `mongodbatlas_project_api_key` provides a Project API Key resource. This allows project API Key to be created. diff --git a/website/docs/r/project_invitation.html.markdown b/docs/resources/project_invitation.md similarity index 93% rename from website/docs/r/project_invitation.html.markdown rename to docs/resources/project_invitation.md index 309b3cee0a..06ae619a4c 100644 --- a/website/docs/r/project_invitation.html.markdown +++ b/docs/resources/project_invitation.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: project_invitation" -sidebar_current: "docs-mongodbatlas-resource-project-invitation" -description: |- - Provides an Atlas Project Invitation resource. ---- - # Resource: mongodbatlas_project_invitation `mongodbatlas_project_invitation` invites a user to join an Atlas project. 
diff --git a/website/docs/r/project_ip_access_list.html.markdown b/docs/resources/project_ip_access_list.md similarity index 94% rename from website/docs/r/project_ip_access_list.html.markdown rename to docs/resources/project_ip_access_list.md index 05f7a4aa74..5566f23b43 100644 --- a/website/docs/r/project_ip_access_list.html.markdown +++ b/docs/resources/project_ip_access_list.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: project_ip_access_list" -sidebar_current: "docs-mongodbatlas-resource-project-ip-access-list" -description: |- - Provides an IP Access List resource. ---- - # Resource: mongodbatlas_project_ip_access_list `mongodbatlas_project_ip_access_list` provides an IP Access List entry resource. The access list grants access from IPs, CIDRs or AWS Security Groups (if VPC Peering is enabled) to clusters within the Project. diff --git a/website/docs/r/push_based_log_export.html.markdown b/docs/resources/push_based_log_export.md similarity index 94% rename from website/docs/r/push_based_log_export.html.markdown rename to docs/resources/push_based_log_export.md index 9db32f3c20..5c2f5cb41a 100644 --- a/website/docs/r/push_based_log_export.html.markdown +++ b/docs/resources/push_based_log_export.md @@ -1,14 +1,5 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: mongodbatlas_push_based_log_export" -sidebar_current: "docs-mongodbatlas-resource-push-based-log-export" -description: |- - "Provides resource for push-based log export feature." ---- - # Resource: mongodbatlas_push_based_log_export - `mongodbatlas_push_based_log_export` provides a resource for push-based log export feature. The resource lets you configure, enable & disable the project level settings for the push-based log export feature. Using this resource you can continually push logs from mongod, mongos, and audit logs to an Amazon S3 bucket. Atlas exports logs every 5 minutes. 
@@ -43,6 +34,14 @@ resource "mongodbatlas_push_based_log_export" "test" { iam_role_id = mongodbatlas_cloud_provider_access_authorization.auth_role.role_id prefix_path = "push-based-log-test" } + +data "mongodbatlas_push_based_log_export" "test" { + project_id = mongodbatlas_push_based_log_export.test.project_id +} + +output "test" { + value = data.mongodbatlas_push_based_log_export.test.prefix_path +} ``` @@ -52,12 +51,13 @@ resource "mongodbatlas_push_based_log_export" "test" { - `bucket_name` (String) The name of the bucket to which the agent sends the logs to. - `iam_role_id` (String) ID of the AWS IAM role that is used to write to the S3 bucket. -- `prefix_path` (String) S3 directory in which vector writes in order to store the logs. An empty string denotes the root directory. - `project_id` (String) Unique 24-hexadecimal digit string that identifies your project. Use the [/groups](#tag/Projects/operation/listProjects) endpoint to retrieve all projects to which the authenticated user has access. **NOTE**: Groups and projects are synonymous terms. Your group id is the same as your project id. For existing groups, your group/project id remains the same. The resource and corresponding endpoints use the term groups. ### Optional + +- `prefix_path` (String) S3 directory in which vector writes in order to store the logs. An empty string denotes the root directory. 
- `timeouts` (Attributes) (see [below for nested schema](#nestedatt--timeouts)) ### Read-Only diff --git a/website/docs/r/search_deployment.html.markdown b/docs/resources/search_deployment.md similarity index 95% rename from website/docs/r/search_deployment.html.markdown rename to docs/resources/search_deployment.md index ea30b6d3c5..7fb1a1e02e 100644 --- a/website/docs/r/search_deployment.html.markdown +++ b/docs/resources/search_deployment.md @@ -1,14 +1,5 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: mongodbatlas_search_deployment" -sidebar_current: "docs-mongodbatlas-resource-search-deployment" -description: |- - "Provides a Search Deployment resource." ---- - # Resource: mongodbatlas_search_deployment - `mongodbatlas_search_deployment` provides a Search Deployment resource. The resource lets you create, edit and delete dedicated search nodes in a cluster. -> **NOTE:** For details on supported cloud providers and existing limitations you can visit the [Search Node Documentation](https://www.mongodb.com/docs/atlas/cluster-config/multi-cloud-distribution/#search-nodes-for-workload-isolation). diff --git a/website/docs/r/search_index.html.markdown b/docs/resources/search_index.md similarity index 95% rename from website/docs/r/search_index.html.markdown rename to docs/resources/search_index.md index 2630213f75..87dc3e9f19 100644 --- a/website/docs/r/search_index.html.markdown +++ b/docs/resources/search_index.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: search index" -sidebar_current: "docs-mongodbatlas-resource-search-index" -description: |- - Provides a Search Index resource. ---- - # Resource: mongodbatlas_search_index `mongodbatlas_search_index` provides a Search Index resource. This allows indexes to be created. 
@@ -162,35 +154,36 @@ EOF ```terraform mappings_fields = <<-EOF { - "address": { - "type": "document", - "fields": { - "city": { - "type": "string", - "analyzer": "lucene.simple", - "ignoreAbove": 255 - }, - "state": { - "type": "string", - "analyzer": "lucene.english" + "address": { + "type": "document", + "fields": { + "city": { + "type": "string", + "analyzer": "lucene.simple", + "ignoreAbove": 255 + }, + "state": { + "type": "string", + "analyzer": "lucene.english" + } } - } - }, - "company": { - "type": "string", - "analyzer": "lucene.whitespace", - "multi": { - "mySecondaryAnalyzer": { - "type": "string", - "analyzer": "lucene.french" + }, + "company": { + "type": "string", + "analyzer": "lucene.whitespace", + "multi": { + "mySecondaryAnalyzer": { + "type": "string", + "analyzer": "lucene.french" + } } - } - }, - "employees": { - "type": "string", - "analyzer": "lucene.standard" + }, + "employees": { + "type": "string", + "analyzer": "lucene.standard" } } + EOF ``` * `search_analyzer` - [Analyzer](https://docs.atlas.mongodb.com/reference/atlas-search/analyzers/#std-label-analyzers-ref) to use when searching the index. Defaults to [lucene.standard](https://docs.atlas.mongodb.com/reference/atlas-search/analyzers/standard/#std-label-ref-standard-analyzer) @@ -198,10 +191,20 @@ EOF * `fields` - Array of [Fields](https://www.mongodb.com/docs/atlas/atlas-search/field-types/knn-vector/#std-label-fts-data-types-knn-vector) to configure this `vectorSearch` index. It is mandatory for vector searches and it must contain at least one `vector` type field. This field needs to be a JSON string in order to be decoded correctly. +* `stored_source` - String that can be "true" (store all fields), "false" (default, don't store any field), or a JSON string that contains the list of fields to store (include) or not store (exclude) on Atlas Search. To learn more, see [Stored Source Fields](https://www.mongodb.com/docs/atlas/atlas-search/stored-source-definition/). 
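Before the mapping examples below, a minimal index definition can be sketched as follows; the cluster, database, and collection names are illustrative assumptions:

```terraform
# Hedged sketch: a dynamically mapped search index on an illustrative collection.
resource "mongodbatlas_search_index" "example" {
  project_id       = var.project_id
  cluster_name     = "MyCluster"
  name             = "default"
  database         = "sample_mflix"
  collection_name  = "movies"
  mappings_dynamic = true
}
```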
+ ```terraform + stored_source = <<-EOF + { + "include": ["field1", "field2"] + } + EOF + ``` + ## Attributes Reference In addition to all arguments above, the following attributes are exported: +* `index_id` - The unique identifier of the Atlas Search index. * `status` - Current status of the index. ### Analyzers (search index) diff --git a/website/docs/r/serverless_instance.html.markdown b/docs/resources/serverless_instance.md similarity index 96% rename from website/docs/r/serverless_instance.html.markdown rename to docs/resources/serverless_instance.md index c4bff5f3a9..03b283632f 100644 --- a/website/docs/r/serverless_instance.html.markdown +++ b/docs/resources/serverless_instance.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: serverless instance" -sidebar_current: "docs-mongodbatlas-resource-serverless-instance" -description: |- -Provides a Serverless Instance resource. ---- - # Resource: mongodbatlas_serverless_instance `mongodbatlas_serverless_instance` provides a Serverless Instance resource. This allows serverless instances to be created. diff --git a/website/docs/r/stream_connection.html.markdown b/docs/resources/stream_connection.md similarity index 95% rename from website/docs/r/stream_connection.html.markdown rename to docs/resources/stream_connection.md index b9d684b2dd..962ca1831f 100644 --- a/website/docs/r/stream_connection.html.markdown +++ b/docs/resources/stream_connection.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: stream connection" -sidebar_current: "docs-mongodbatlas-resource-stream-connection" -description: |- - Provides a Stream Connection resource. ---- - # Resource: mongodbatlas_stream_connection `mongodbatlas_stream_connection` provides a Stream Connection resource. The resource lets you create, edit, and delete stream instance connections. 
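The stream connection intro above can be sketched as follows; the instance, connection, and cluster names are illustrative assumptions:

```terraform
# Hedged sketch: connects a stream instance to an existing Atlas cluster.
resource "mongodbatlas_stream_connection" "example" {
  project_id      = var.project_id
  instance_name   = "MyStreamInstance" # assumes an existing stream instance
  connection_name = "ClusterConnection"
  type            = "Cluster"
  cluster_name    = "MyCluster"
}
```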
diff --git a/website/docs/r/stream_instance.html.markdown b/docs/resources/stream_instance.md similarity index 93% rename from website/docs/r/stream_instance.html.markdown rename to docs/resources/stream_instance.md index 35b51e7640..149de90b8e 100644 --- a/website/docs/r/stream_instance.html.markdown +++ b/docs/resources/stream_instance.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: stream instance" -sidebar_current: "docs-mongodbatlas-resource-stream-instance" -description: |- - Provides a Stream Instance resource. ---- - # Resource: mongodbatlas_stream_instance `mongodbatlas_stream_instance` provides a Stream Instance resource. The resource lets you create, edit, and delete stream instances in a project. diff --git a/website/docs/r/team.html.markdown b/docs/resources/team.md similarity index 91% rename from website/docs/r/team.html.markdown rename to docs/resources/team.md index dd58ab1f14..5b7a0e7368 100644 --- a/website/docs/r/team.html.markdown +++ b/docs/resources/team.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: team" -sidebar_current: "docs-mongodbatlas-resource-team" -description: |- - Provides a Team resource. ---- - # Resource: mongodbatlas_team `mongodbatlas_team` provides a Team resource. The resource lets you create, edit and delete Teams. Also, Teams can be assigned to multiple projects, and team members’ access to the project is determined by the team’s project role. diff --git a/website/docs/r/teams.html.markdown b/docs/resources/teams.md similarity index 61% rename from website/docs/r/teams.html.markdown rename to docs/resources/teams.md index 25f7498a5b..5db231b8c1 100644 --- a/website/docs/r/teams.html.markdown +++ b/docs/resources/teams.md @@ -1,11 +1,9 @@ --- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: teams" -sidebar_current: "docs-mongodbatlas-resource-teams" -description: |- - Provides a Team resource. 
+subcategory: "Deprecated" --- +**WARNING:** This resource is deprecated; use `mongodbatlas_team` instead. + # Resource: mongodbatlas_teams This resource is deprecated. Please transition to `mongodbatlas_team`, which has the same underlying implementation; the new name aligns the resource with that implementation, which manages a single team. diff --git a/website/docs/r/third_party_integration.markdown b/docs/resources/third_party_integration.md similarity index 92% rename from website/docs/r/third_party_integration.markdown rename to docs/resources/third_party_integration.md index 6b9d003326..6f0cff660e 100644 --- a/website/docs/r/third_party_integration.markdown +++ b/docs/resources/third_party_integration.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: third_party_integration" -sidebar_current: "docs-mongodbatlas-datasource-third-party-integration" -description: |- - Provides a Third-Party Integration Settings resource. ---- - # Resource: mongodbatlas_third_party_integration `mongodbatlas_third_party_integration` provides Third-Party Integration Settings for the given type. diff --git a/website/docs/r/x509_authentication_database_user.html.markdown b/docs/resources/x509_authentication_database_user.md similarity index 95% rename from website/docs/r/x509_authentication_database_user.html.markdown rename to docs/resources/x509_authentication_database_user.md index d3024e9724..b7ff380cb4 100644 --- a/website/docs/r/x509_authentication_database_user.html.markdown +++ b/docs/resources/x509_authentication_database_user.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: x509_authentication_database_user" -sidebar_current: "docs-mongodbatlas-resource-x509-authentication-database-user" -description: |- - Provides a X509 Authentication Database User resource.
---- - # Resource: mongodbatlas_x509_authentication_database_user `mongodbatlas_x509_authentication_database_user` provides an X509 Authentication Database User resource. The resource lets you manage MongoDB users who authenticate using X.509 certificates. You can manage these X.509 certificates yourself or let Atlas do it for you. diff --git a/website/docs/troubleshooting.html.markdown b/docs/troubleshooting.md similarity index 84% rename from website/docs/troubleshooting.html.markdown rename to docs/troubleshooting.md index 2dc5884aba..565f40b754 100644 --- a/website/docs/troubleshooting.html.markdown +++ b/docs/troubleshooting.md @@ -1,11 +1,3 @@ ---- -layout: "mongodbatlas" -page_title: "Provider: MongoDB Atlas" -sidebar_current: "docs-mongodbatlas-troubleshooting" -description: |- - The MongoDB Atlas provider is used to interact with the resources supported by MongoDB Atlas. The provider needs to be configured with the proper credentials before it can be used. ---- - # Troubleshooting The following are some of the common issues/errors encountered when using the Terraform Provider for MongoDB Atlas: diff --git a/examples/mongodbatlas_advanced_cluster/global-cluster/README.md b/examples/mongodbatlas_advanced_cluster/global-cluster/README.md index 71b82915f2..428821a1fe 100644 --- a/examples/mongodbatlas_advanced_cluster/global-cluster/README.md +++ b/examples/mongodbatlas_advanced_cluster/global-cluster/README.md @@ -30,7 +30,7 @@ private_key = "" atlas_org_id = "" ``` -... or use [AWS Secrets Manager](https://github.com/mongodb/terraform-provider-mongodbatlas/blob/master/website/docs/index.html.markdown#aws-secrets-manager) +... or use [AWS Secrets Manager](https://github.com/mongodb/terraform-provider-mongodbatlas/blob/master/docs/index.md#aws-secrets-manager) **2\.
Review the Terraform plan.** diff --git a/examples/mongodbatlas_advanced_cluster/multi-cloud/README.md b/examples/mongodbatlas_advanced_cluster/multi-cloud/README.md index 4b97956416..4e356cb8c9 100644 --- a/examples/mongodbatlas_advanced_cluster/multi-cloud/README.md +++ b/examples/mongodbatlas_advanced_cluster/multi-cloud/README.md @@ -30,7 +30,7 @@ private_key = "" atlas_org_id = "" ``` -... or use [AWS Secrets Manager](https://github.com/mongodb/terraform-provider-mongodbatlas/blob/master/website/docs/index.html.markdown#aws-secrets-manager) +... or use [AWS Secrets Manager](https://github.com/mongodb/terraform-provider-mongodbatlas/blob/master/docs/index.md#aws-secrets-manager) **2\. Review the Terraform plan.** diff --git a/examples/mongodbatlas_api_key/create-and-assign-pak-together/versions.tf b/examples/mongodbatlas_api_key/create-and-assign-pak-together/versions.tf index 5a81a39da8..1888453805 100644 --- a/examples/mongodbatlas_api_key/create-and-assign-pak-together/versions.tf +++ b/examples/mongodbatlas_api_key/create-and-assign-pak-together/versions.tf @@ -2,7 +2,7 @@ terraform { required_providers { mongodbatlas = { source = "mongodb/mongodbatlas" - version = "~> 1.13.2" + version = "~> 1.0" } } required_version = ">= 1.0" diff --git a/examples/mongodbatlas_api_key/create-and-assign-pak/versions.tf b/examples/mongodbatlas_api_key/create-and-assign-pak/versions.tf index 5a81a39da8..1888453805 100644 --- a/examples/mongodbatlas_api_key/create-and-assign-pak/versions.tf +++ b/examples/mongodbatlas_api_key/create-and-assign-pak/versions.tf @@ -2,7 +2,7 @@ terraform { required_providers { mongodbatlas = { source = "mongodb/mongodbatlas" - version = "~> 1.13.2" + version = "~> 1.0" } } required_version = ">= 1.0" diff --git a/examples/mongodbatlas_api_key/create-api-key-assign-to-multiple-projects/versions.tf b/examples/mongodbatlas_api_key/create-api-key-assign-to-multiple-projects/versions.tf index 5a81a39da8..1888453805 100644 --- 
a/examples/mongodbatlas_api_key/create-api-key-assign-to-multiple-projects/versions.tf +++ b/examples/mongodbatlas_api_key/create-api-key-assign-to-multiple-projects/versions.tf @@ -2,7 +2,7 @@ terraform { required_providers { mongodbatlas = { source = "mongodb/mongodbatlas" - version = "~> 1.13.2" + version = "~> 1.0" } } required_version = ">= 1.0" diff --git a/examples/mongodbatlas_cloud_backup_snapshot_export_job/main.tf b/examples/mongodbatlas_cloud_backup_snapshot_export_job/main.tf index ec16baddb9..952daf02b9 100644 --- a/examples/mongodbatlas_cloud_backup_snapshot_export_job/main.tf +++ b/examples/mongodbatlas_cloud_backup_snapshot_export_job/main.tf @@ -22,20 +22,28 @@ resource "aws_s3_bucket" "test_bucket" { } } -resource "mongodbatlas_cluster" "my_cluster" { - project_id = var.project_id - name = "MyCluster" - disk_size_gb = 1 - - provider_name = "AWS" - provider_region_name = "US_EAST_1" - provider_instance_size_name = "M10" - cloud_backup = true +resource "mongodbatlas_advanced_cluster" "my_cluster" { + project_id = var.project_id + name = "MyCluster" + cluster_type = "REPLICASET" + backup_enabled = true + + replication_specs { + region_configs { + priority = 7 + provider_name = "AWS" + region_name = "US_EAST_1" + electable_specs { + instance_size = "M10" + node_count = 3 + } + } + } } resource "mongodbatlas_cloud_backup_snapshot" "test" { project_id = var.project_id - cluster_name = mongodbatlas_cluster.my_cluster.name + cluster_name = mongodbatlas_advanced_cluster.my_cluster.name description = "myDescription" retention_in_days = 1 } @@ -50,7 +58,7 @@ resource "mongodbatlas_cloud_backup_snapshot_export_bucket" "test" { resource "mongodbatlas_cloud_backup_snapshot_export_job" "test" { project_id = var.project_id - cluster_name = mongodbatlas_cluster.my_cluster.name + cluster_name = mongodbatlas_advanced_cluster.my_cluster.name snapshot_id = mongodbatlas_cloud_backup_snapshot.test.snapshot_id export_bucket_id = 
mongodbatlas_cloud_backup_snapshot_export_bucket.test.export_bucket_id diff --git a/examples/mongodbatlas_cloud_provider_snapshot_restore_job/point-in-time/main.tf b/examples/mongodbatlas_cloud_provider_snapshot_restore_job/point-in-time/main.tf index 7c518367ed..e146add173 100644 --- a/examples/mongodbatlas_cloud_provider_snapshot_restore_job/point-in-time/main.tf +++ b/examples/mongodbatlas_cloud_provider_snapshot_restore_job/point-in-time/main.tf @@ -5,23 +5,31 @@ resource "mongodbatlas_project" "project_test" { org_id = var.org_id } -resource "mongodbatlas_cluster" "cluster_test" { - project_id = mongodbatlas_project.project_test.id - name = var.cluster_name +resource "mongodbatlas_advanced_cluster" "cluster_test" { + project_id = mongodbatlas_project.project_test.id + name = var.cluster_name + cluster_type = "REPLICASET" - # Provider Settings "block" - provider_name = "AWS" - provider_region_name = "US_EAST_1" - provider_instance_size_name = "M10" - cloud_backup = true # enable cloud provider snapshots - pit_enabled = true - retain_backups_enabled = true # keep the backup snapshopts once the cluster is deleted -} + backup_enabled = true # enable cloud provider snapshots + pit_enabled = true + retain_backups_enabled = true # keep the backup snapshots once the cluster is deleted + replication_specs { + region_configs { + priority = 7 + provider_name = "AWS" + region_name = "US_EAST_1" + electable_specs { + instance_size = "M10" + node_count = 3 + } + } + } +} resource "mongodbatlas_cloud_backup_snapshot" "test" { - project_id = mongodbatlas_cluster.cluster_test.project_id - cluster_name = mongodbatlas_cluster.cluster_test.name + project_id = mongodbatlas_advanced_cluster.cluster_test.project_id + cluster_name = mongodbatlas_advanced_cluster.cluster_test.name description = "My description" retention_in_days = "1" } @@ -34,8 +42,8 @@ resource "mongodbatlas_cloud_backup_snapshot_restore_job" "test" { delivery_type_config { point_in_time = true -
target_cluster_name = mongodbatlas_cluster.cluster_test.name - target_project_id = mongodbatlas_cluster.cluster_test.project_id + target_cluster_name = mongodbatlas_advanced_cluster.cluster_test.name + target_project_id = mongodbatlas_advanced_cluster.cluster_test.project_id point_in_time_utc_seconds = var.point_in_time_utc_seconds } } diff --git a/examples/mongodbatlas_database_user/atlas_cluster.tf b/examples/mongodbatlas_database_user/atlas_cluster.tf index 75ad5ba2a0..985cc4462c 100644 --- a/examples/mongodbatlas_database_user/atlas_cluster.tf +++ b/examples/mongodbatlas_database_user/atlas_cluster.tf @@ -1,24 +1,22 @@ -resource "mongodbatlas_cluster" "cluster" { - project_id = mongodbatlas_project.project1.id - name = "MongoDB_Atlas" - mongo_db_major_version = "7.0" - cluster_type = "REPLICASET" +resource "mongodbatlas_advanced_cluster" "cluster" { + project_id = mongodbatlas_project.project1.id + name = "MongoDB_Atlas" + cluster_type = "REPLICASET" + backup_enabled = true + replication_specs { - num_shards = 1 - regions_config { - region_name = var.region - electable_nodes = 3 - priority = 7 - read_only_nodes = 0 + region_configs { + priority = 7 + provider_name = "AWS" + region_name = var.region + electable_specs { + instance_size = "M10" + node_count = 3 + } } } - # Provider Settings "block" - cloud_backup = true - auto_scaling_disk_gb_enabled = true - provider_name = "AWS" - disk_size_gb = 10 - provider_instance_size_name = "M10" } + output "atlasclusterstring" { - value = mongodbatlas_cluster.cluster.connection_strings + value = mongodbatlas_advanced_cluster.cluster.connection_strings } diff --git a/examples/mongodbatlas_database_user/main.tf b/examples/mongodbatlas_database_user/main.tf index c685f0c7c1..5ee8b13f6a 100644 --- a/examples/mongodbatlas_database_user/main.tf +++ b/examples/mongodbatlas_database_user/main.tf @@ -15,7 +15,7 @@ resource "mongodbatlas_database_user" "user1" { } scopes { - name = mongodbatlas_cluster.cluster.name + name = 
mongodbatlas_advanced_cluster.cluster.name type = "CLUSTER" } } diff --git a/examples/mongodbatlas_encryption_at_rest/aws/atlas-cluster/README.md b/examples/mongodbatlas_encryption_at_rest/aws/atlas-cluster/README.md index b80cd6e205..f3c104f821 100644 --- a/examples/mongodbatlas_encryption_at_rest/aws/atlas-cluster/README.md +++ b/examples/mongodbatlas_encryption_at_rest/aws/atlas-cluster/README.md @@ -32,7 +32,7 @@ private_key = "22b722a9-34f4-3b1b-aada-298329a5c128" atlas_org_id = "63f4d4a47baeac59406dc131" ``` -... or use [AWS Secrets Manager](https://github.com/mongodb/terraform-provider-mongodbatlas/blob/master/website/docs/index.html.markdown#aws-secrets-manager) +... or use [AWS Secrets Manager](https://github.com/mongodb/terraform-provider-mongodbatlas/blob/master/docs/index.md#aws-secrets-manager) **2\. Set your AWS access key & secret via environment variables: @@ -70,15 +70,15 @@ terraform destroy 1. Import the cluster using the Project ID and cluster name (e.g. `5beae24579358e0ae95492af-MyCluster`): - $ terraform import mongodbatlas_cluster.my_cluster ProjectId-ClusterName + $ terraform import mongodbatlas_advanced_cluster.cluster ProjectId-ClusterName -2. Add any non-default values to the cluster resource *mongodbatlas_cluster.my_cluster* in *main.tf*. And add the following attribute: `encryption_at_rest_provider = "AWS"` +2. Add any non-default values to the cluster resource *mongodbatlas_advanced_cluster.cluster* in *main.tf*. And add the following attribute: `encryption_at_rest_provider = "AWS"` 3. Run terraform apply to enable encryption at rest for the cluster: `terraform apply` 4. (Optional) To remove the cluster from TF state, in case you want to disable project-level encryption and delete the role and key without deleting the imported cluster: - 1. First disable encryption on the cluster by changing the attribute `encryption_at_rest_provider = "NONE"` for the cluster resource *mongodbatlas_cluster.my_cluster* in *main.tf*. 
If you skip this and the next step, you won't be able to disable encryption on the project-level + 1. First disable encryption on the cluster by changing the attribute `encryption_at_rest_provider = "NONE"` for the cluster resource *mongodbatlas_advanced_cluster.cluster* in *main.tf*. If you skip this and the next step, you won't be able to disable encryption at the project level 2. Run terraform apply to disable encryption for the cluster: `terraform apply` 3. Finally, remove the cluster from TF state: - terraform state rm mongodbatlas_cluster.my_cluster + terraform state rm mongodbatlas_advanced_cluster.cluster 4. You should now be able to run terraform destroy without deleting the cluster: `terraform destroy` diff --git a/examples/mongodbatlas_encryption_at_rest/aws/atlas-cluster/main.tf b/examples/mongodbatlas_encryption_at_rest/aws/atlas-cluster/main.tf index c4fb0b4e7b..fb4b6d9826 100644 --- a/examples/mongodbatlas_encryption_at_rest/aws/atlas-cluster/main.tf +++ b/examples/mongodbatlas_encryption_at_rest/aws/atlas-cluster/main.tf @@ -23,13 +23,22 @@ resource "mongodbatlas_encryption_at_rest" "test" { } } -resource "mongodbatlas_cluster" "cluster" { +resource "mongodbatlas_advanced_cluster" "cluster" { project_id = var.atlas_project_id name = "MyCluster" cluster_type = "REPLICASET" - provider_name = "AWS" + backup_enabled = true encryption_at_rest_provider = "AWS" - backing_provider_name = "AWS" - provider_region_name = "US_EAST_1" - provider_instance_size_name = "M10" + + replication_specs { + region_configs { + priority = 7 + provider_name = "AWS" + region_name = "US_EAST_1" + electable_specs { + instance_size = "M10" + node_count = 3 + } + } + } } diff --git a/examples/mongodbatlas_encryption_at_rest/aws/multi-region-cluster/README.MD b/examples/mongodbatlas_encryption_at_rest/aws/multi-region-cluster/README.MD index 399938e967..88771e727a 100644 --- 
b/examples/mongodbatlas_encryption_at_rest/aws/multi-region-cluster/README.MD @@ -32,7 +32,7 @@ private_key = "22b722a9-34f4-3b1b-aada-298329a5c128" atlas_org_id = "63f4d4a47baeac59406dc131" ``` -... or use [AWS Secrets Manager](https://github.com/mongodb/terraform-provider-mongodbatlas/blob/master/website/docs/index.html.markdown#aws-secrets-manager) +... or use [AWS Secrets Manager](https://github.com/mongodb/terraform-provider-mongodbatlas/blob/master/docs/index.md#aws-secrets-manager) **2\. Set your AWS access key & secret via environment variables: diff --git a/examples/mongodbatlas_federated_settings_identity_provider/azure/atlas.tf b/examples/mongodbatlas_federated_settings_identity_provider/azure/atlas.tf index 575ba671cd..42a890c75e 100644 --- a/examples/mongodbatlas_federated_settings_identity_provider/azure/atlas.tf +++ b/examples/mongodbatlas_federated_settings_identity_provider/azure/atlas.tf @@ -1,5 +1,5 @@ locals { - mongodb_uri = mongodbatlas_cluster.this.connection_strings[0].standard + mongodb_uri = mongodbatlas_advanced_cluster.this.connection_strings[0].standard } data "mongodbatlas_federated_settings" "this" { @@ -16,25 +16,22 @@ resource "mongodbatlas_project_ip_access_list" "mongo-access" { cidr_block = "0.0.0.0/0" } -resource "mongodbatlas_cluster" "this" { - project_id = mongodbatlas_project.this.id - name = var.project_name - mongo_db_major_version = "7.0" - cluster_type = "REPLICASET" +resource "mongodbatlas_advanced_cluster" "this" { + project_id = mongodbatlas_project.this.id + name = var.project_name + cluster_type = "REPLICASET" + replication_specs { - num_shards = 1 - regions_config { - region_name = var.region - electable_nodes = 3 - priority = 7 - read_only_nodes = 0 + region_configs { + priority = 7 + provider_name = "AWS" + region_name = var.region + electable_specs { + instance_size = "M10" + node_count = 3 + } } } - cloud_backup = false - auto_scaling_disk_gb_enabled = false - provider_name = "AWS" - disk_size_gb = 10 - 
provider_instance_size_name = "M10" } resource "mongodbatlas_federated_settings_identity_provider" "oidc" { diff --git a/examples/mongodbatlas_federated_settings_identity_provider/azure/outputs.tf b/examples/mongodbatlas_federated_settings_identity_provider/azure/outputs.tf index 57cdf85701..04d4a84209 100644 --- a/examples/mongodbatlas_federated_settings_identity_provider/azure/outputs.tf +++ b/examples/mongodbatlas_federated_settings_identity_provider/azure/outputs.tf @@ -9,7 +9,7 @@ output "ssh_connection_string" { } output "user_test_conn_string" { - value = "mongodb+srv://${local.test_user_username}:${local.test_user_password}@${replace(mongodbatlas_cluster.this.srv_address, "mongodb+srv://", "")}/?retryWrites=true" + value = "mongodb+srv://${local.test_user_username}:${local.test_user_password}@${replace(mongodbatlas_advanced_cluster.this.connection_strings[0].standard_srv, "mongodb+srv://", "")}/?retryWrites=true" sensitive = true description = "Useful for connecting to the database from Compass or other tool to validate data" } diff --git a/examples/mongodbatlas_network_peering/aws/main.tf b/examples/mongodbatlas_network_peering/aws/main.tf index 7c1b945b80..28da1d5cda 100644 --- a/examples/mongodbatlas_network_peering/aws/main.tf +++ b/examples/mongodbatlas_network_peering/aws/main.tf @@ -8,27 +8,23 @@ resource "mongodbatlas_project" "aws_atlas" { org_id = var.atlas_org_id } -resource "mongodbatlas_cluster" "cluster-atlas" { - project_id = mongodbatlas_project.aws_atlas.id - name = "cluster-atlas" - cluster_type = "REPLICASET" +resource "mongodbatlas_advanced_cluster" "cluster-atlas" { + project_id = mongodbatlas_project.aws_atlas.id + name = "cluster-atlas" + cluster_type = "REPLICASET" + backup_enabled = true + replication_specs { - num_shards = 1 - regions_config { - region_name = var.atlas_region - electable_nodes = 3 - priority = 7 - read_only_nodes = 0 + region_configs { + priority = 7 + provider_name = "AWS" + region_name = var.atlas_region + 
electable_specs { + instance_size = "M10" + node_count = 3 + } } } - cloud_backup = true - auto_scaling_disk_gb_enabled = true - mongo_db_major_version = "7.0" - - # Provider Settings "block" - provider_name = "AWS" - disk_size_gb = 10 - provider_instance_size_name = "M10" } resource "mongodbatlas_database_user" "db-user" { @@ -46,7 +42,7 @@ resource "mongodbatlas_database_user" "db-user" { resource "mongodbatlas_network_peering" "aws-atlas" { accepter_region_name = var.aws_region project_id = mongodbatlas_project.aws_atlas.id - container_id = mongodbatlas_cluster.cluster-atlas.container_id + container_id = one(values(mongodbatlas_advanced_cluster.cluster-atlas.replication_specs[0].container_id)) provider_name = "AWS" route_table_cidr_block = aws_vpc.primary.cidr_block vpc_id = aws_vpc.primary.id diff --git a/examples/mongodbatlas_network_peering/azure/atlas.tf b/examples/mongodbatlas_network_peering/azure/atlas.tf index 4cfe740422..5485899bcf 100644 --- a/examples/mongodbatlas_network_peering/azure/atlas.tf +++ b/examples/mongodbatlas_network_peering/azure/atlas.tf @@ -3,34 +3,31 @@ provider "mongodbatlas" { public_key = var.public_key private_key = var.private_key } + # Create the mongodb atlas Azure cluster -resource "mongodbatlas_cluster" "azure-cluster" { - project_id = var.project_id - name = var.name - cluster_type = "REPLICASET" +resource "mongodbatlas_advanced_cluster" "azure-cluster" { + project_id = var.project_id + name = var.name + cluster_type = "REPLICASET" + backup_enabled = true + replication_specs { - num_shards = 1 - regions_config { - region_name = var.provider_region_name - electable_nodes = 3 - priority = 7 - read_only_nodes = 0 + region_configs { + priority = 7 + provider_name = "AZURE" + region_name = var.provider_region_name + electable_specs { + instance_size = var.provider_instance_size_name + node_count = 3 + } } } - backup_enabled = false - auto_scaling_disk_gb_enabled = true - mongo_db_major_version = "7.0" - - # Provider settings 
block in this case it is Azure - provider_name = "AZURE" - provider_disk_type_name = var.provider_disk_type_name - provider_instance_size_name = var.provider_instance_size_name } # Create the peering connection request resource "mongodbatlas_network_peering" "test" { project_id = var.project_id - container_id = mongodbatlas_cluster.azure-cluster.container_id + container_id = one(values(mongodbatlas_advanced_cluster.azure-cluster.replication_specs[0].container_id)) provider_name = "AZURE" azure_directory_id = data.azurerm_client_config.current.tenant_id azure_subscription_id = data.azurerm_client_config.current.subscription_id diff --git a/examples/mongodbatlas_network_peering/azure/variables.tf b/examples/mongodbatlas_network_peering/azure/variables.tf index 998e4db4c2..cffaacb7e2 100644 --- a/examples/mongodbatlas_network_peering/azure/variables.tf +++ b/examples/mongodbatlas_network_peering/azure/variables.tf @@ -10,9 +10,6 @@ variable "project_id" { variable "provider_instance_size_name" { type = string } -variable "provider_disk_type_name" { - type = string -} variable "resource_group_name" { type = string } diff --git a/examples/mongodbatlas_network_peering/gcp/cluster.tf b/examples/mongodbatlas_network_peering/gcp/cluster.tf index a3683eebca..b8e1d9ebe1 100644 --- a/examples/mongodbatlas_network_peering/gcp/cluster.tf +++ b/examples/mongodbatlas_network_peering/gcp/cluster.tf @@ -1,45 +1,45 @@ # This cluster is in GCP cloud-provider with VPC peering enabled -resource "mongodbatlas_cluster" "cluster" { - project_id = var.project_id - name = "cluster-test" - cluster_type = "REPLICASET" +resource "mongodbatlas_advanced_cluster" "cluster" { + project_id = var.project_id + name = "cluster-test" + cluster_type = "REPLICASET" + backup_enabled = true # enable cloud provider snapshots + replication_specs { - num_shards = 1 - regions_config { - region_name = var.atlas_region - electable_nodes = 3 - priority = 7 - read_only_nodes = 0 + region_configs { + priority = 7 + 
provider_name = "GCP" + region_name = var.atlas_region + electable_specs { + instance_size = "M10" + node_count = 3 + } + auto_scaling { + compute_enabled = true + compute_scale_down_enabled = true + compute_min_instance_size = "M10" + compute_max_instance_size = "M20" + disk_gb_enabled = true + } } } - labels { + tags { key = "environment" value = "prod" } - cloud_backup = true - auto_scaling_disk_gb_enabled = true - mongo_db_major_version = "7.0" - auto_scaling_compute_enabled = true - auto_scaling_compute_scale_down_enabled = true - - - # Provider Settings "block" - provider_name = "GCP" - provider_instance_size_name = "M10" - provider_auto_scaling_compute_max_instance_size = "M20" - provider_auto_scaling_compute_min_instance_size = "M10" - disk_size_gb = 40 advanced_configuration { minimum_enabled_tls_protocol = "TLS1_2" } + lifecycle { ignore_changes = [ - provider_instance_size_name + replication_specs[0].region_configs[0].electable_specs[0].instance_size, ] } } + # The connection strings available for the GCP MongoDB Atlas cluster output "connection_string" { - value = mongodbatlas_cluster.cluster.connection_strings + value = mongodbatlas_advanced_cluster.cluster.connection_strings } diff --git a/examples/mongodbatlas_online_archive/main.tf b/examples/mongodbatlas_online_archive/main.tf index bb3d21bad2..ebb9eb8cdc 100644 --- a/examples/mongodbatlas_online_archive/main.tf +++ b/examples/mongodbatlas_online_archive/main.tf @@ -31,15 +31,22 @@ resource "mongodbatlas_online_archive" "users_archive" { } } -# tflint-ignore: terraform_unused_declarations data "mongodbatlas_online_archive" "read_archive" { project_id = mongodbatlas_online_archive.users_archive.project_id cluster_name = mongodbatlas_online_archive.users_archive.cluster_name archive_id = mongodbatlas_online_archive.users_archive.archive_id } -# tflint-ignore: terraform_unused_declarations data "mongodbatlas_online_archives" "all" { project_id = mongodbatlas_online_archive.users_archive.project_id 
cluster_name = mongodbatlas_online_archive.users_archive.cluster_name } + +output "online_archive_state" { + value = data.mongodbatlas_online_archive.read_archive.state +} + +output "online_archives_results" { + value = data.mongodbatlas_online_archives.all.results +} + diff --git a/examples/mongodbatlas_privatelink_endpoint/aws/cluster/README.md b/examples/mongodbatlas_privatelink_endpoint/aws/cluster/README.md index 4f2402e7c0..eb703bcc3e 100644 --- a/examples/mongodbatlas_privatelink_endpoint/aws/cluster/README.md +++ b/examples/mongodbatlas_privatelink_endpoint/aws/cluster/README.md @@ -83,7 +83,7 @@ $ terraform destroy 2. `mongodbatlas_privatelink_endpoint` is dependent on the `mongodbatlas_project` 3. `aws_vpc_endpoint` is dependent on the `mongodbatlas_privatelink_endpoint`, and its dependencies. 4. `mongodbatlas_privatelink_endpoint_service` is dependent on `aws_vpc_endpoint` and its dependencies. -5. `mongodbatlas_cluster` is dependent only on the `mongodbatlas_project`, howerver; its `connection_strings` are sourced from the `mongodbatlas_privatelink_endpoint_service`. `mongodbatlas_privatelink_endpoint_service` has explicitly been added to the `mongodbatlas_cluster` `depends_on` to ensure the private connection strings are correct following `terraform apply`. +5. `mongodbatlas_advanced_cluster` is dependent only on the `mongodbatlas_project`; however, its `connection_strings` are sourced from the `mongodbatlas_privatelink_endpoint_service`. `mongodbatlas_privatelink_endpoint_service` has explicitly been added to the `mongodbatlas_advanced_cluster` `depends_on` to ensure the private connection strings are correct following `terraform apply`. **Important Point** @@ -123,7 +123,7 @@ Cluster `connection_strings` is a list of maps matching the signature below. 
`aw In order to output the `private_endpoint.#.srv_connection_string` for the `aws_vpc_endpoint`, utilize locals such as the [following](output.tf): ``` locals { - private_endpoints = flatten([for cs in mongodbatlas_cluster.aws_private_connection.connection_strings : cs.private_endpoint]) + private_endpoints = flatten([for cs in mongodbatlas_advanced_cluster.aws_private_connection.connection_strings : cs.private_endpoint]) connection_strings = [ for pe in local.private_endpoints : pe.srv_connection_string diff --git a/examples/mongodbatlas_privatelink_endpoint/aws/cluster/atlas-cluster.tf b/examples/mongodbatlas_privatelink_endpoint/aws/cluster/atlas-cluster.tf index 4bf9bd3384..38e08232b1 100644 --- a/examples/mongodbatlas_privatelink_endpoint/aws/cluster/atlas-cluster.tf +++ b/examples/mongodbatlas_privatelink_endpoint/aws/cluster/atlas-cluster.tf @@ -1,23 +1,19 @@ -resource "mongodbatlas_cluster" "aws_private_connection" { - project_id = var.project_id - name = var.cluster_name - cloud_backup = true - auto_scaling_disk_gb_enabled = true - mongo_db_major_version = "7.0" - cluster_type = "REPLICASET" +resource "mongodbatlas_advanced_cluster" "aws_private_connection" { + project_id = var.project_id + name = var.cluster_name + cluster_type = "REPLICASET" + backup_enabled = true + replication_specs { - num_shards = 1 - regions_config { - region_name = "US_EAST_1" - electable_nodes = 3 - priority = 7 - read_only_nodes = 0 + region_configs { + priority = 7 + provider_name = "AWS" + region_name = "US_EAST_1" + electable_specs { + instance_size = "M10" + node_count = 3 + } } } - # Provider settings - provider_name = "AWS" - disk_size_gb = 10 - provider_instance_size_name = "M10" - depends_on = [mongodbatlas_privatelink_endpoint_service.pe_east_service] } diff --git a/examples/mongodbatlas_privatelink_endpoint/aws/cluster/output.tf b/examples/mongodbatlas_privatelink_endpoint/aws/cluster/output.tf index 45294b427f..28ce135c8d 100644 --- 
a/examples/mongodbatlas_privatelink_endpoint/aws/cluster/output.tf +++ b/examples/mongodbatlas_privatelink_endpoint/aws/cluster/output.tf @@ -1,5 +1,5 @@ locals { - private_endpoints = flatten([for cs in mongodbatlas_cluster.aws_private_connection.connection_strings : cs.private_endpoint]) + private_endpoints = flatten([for cs in mongodbatlas_advanced_cluster.aws_private_connection.connection_strings : cs.private_endpoint]) connection_strings = [ for pe in local.private_endpoints : pe.srv_connection_string diff --git a/examples/mongodbatlas_privatelink_endpoint/azure/main.tf b/examples/mongodbatlas_privatelink_endpoint/azure/main.tf index 6ccf44be28..f94d8046cd 100644 --- a/examples/mongodbatlas_privatelink_endpoint/azure/main.tf +++ b/examples/mongodbatlas_privatelink_endpoint/azure/main.tf @@ -19,12 +19,12 @@ resource "azurerm_virtual_network" "test" { } resource "azurerm_subnet" "test" { - name = "testsubnet" - resource_group_name = var.resource_group_name - virtual_network_name = azurerm_virtual_network.test.name - address_prefixes = ["10.0.1.0/24"] - enforce_private_link_service_network_policies = true - enforce_private_link_endpoint_network_policies = true + name = "testsubnet" + resource_group_name = var.resource_group_name + virtual_network_name = azurerm_virtual_network.test.name + address_prefixes = ["10.0.1.0/24"] + private_link_service_network_policies_enabled = true + private_endpoint_network_policies_enabled = true } resource "mongodbatlas_privatelink_endpoint" "test" { diff --git a/examples/mongodbatlas_privatelink_endpoint_service_serverless/azure/main.tf b/examples/mongodbatlas_privatelink_endpoint_service_serverless/azure/main.tf index 2c683de23d..d40e580bc5 100644 --- a/examples/mongodbatlas_privatelink_endpoint_service_serverless/azure/main.tf +++ b/examples/mongodbatlas_privatelink_endpoint_service_serverless/azure/main.tf @@ -19,12 +19,12 @@ resource "azurerm_virtual_network" "test" { } resource "azurerm_subnet" "test" { - name = "testsubnet" 
- resource_group_name = var.resource_group_name - virtual_network_name = azurerm_virtual_network.test.name - address_prefixes = ["10.0.1.0/24"] - enforce_private_link_service_network_policies = true - enforce_private_link_endpoint_network_policies = true + name = "testsubnet" + resource_group_name = var.resource_group_name + virtual_network_name = azurerm_virtual_network.test.name + address_prefixes = ["10.0.1.0/24"] + private_link_service_network_policies_enabled = true + private_endpoint_network_policies_enabled = true } resource "mongodbatlas_privatelink_endpoint_serverless" "test" { diff --git a/examples/mongodbatlas_search_deployment/versions.tf b/examples/mongodbatlas_search_deployment/versions.tf index 7cac4906f0..1888453805 100644 --- a/examples/mongodbatlas_search_deployment/versions.tf +++ b/examples/mongodbatlas_search_deployment/versions.tf @@ -2,8 +2,8 @@ terraform { required_providers { mongodbatlas = { source = "mongodb/mongodbatlas" - version = "~> 1.13" + version = "~> 1.0" } } required_version = ">= 1.0" -} \ No newline at end of file +} diff --git a/examples/mongodbatlas_third_party_integration/prometheus-and-teams/third-party-integration.tf b/examples/mongodbatlas_third_party_integration/prometheus-and-teams/third-party-integration.tf index 2c624722eb..8236cd0f5e 100644 --- a/examples/mongodbatlas_third_party_integration/prometheus-and-teams/third-party-integration.tf +++ b/examples/mongodbatlas_third_party_integration/prometheus-and-teams/third-party-integration.tf @@ -10,7 +10,6 @@ resource "mongodbatlas_third_party_integration" "test_prometheus" { user_name = var.user_name password = var.password service_discovery = "file" - scheme = "https" enabled = true } diff --git a/examples/starter/Readme.md b/examples/starter/Readme.md index b145aec076..e855faa776 100644 --- a/examples/starter/Readme.md +++ b/examples/starter/Readme.md @@ -75,7 +75,7 @@ Or to fetch the connection string using terraform follow the below steps: ```hcl output 
"atlasclusterstring" {
-    value = mongodbatlas_cluster.cluster.connection_strings
+    value = mongodbatlas_advanced_cluster.cluster.connection_strings
 }
 ```
 **Outputs:**
@@ -100,7 +100,7 @@ To fetch a particular connection string, use the **lookup()** function of terraf
 ```
 output "plstring" {
-    value = lookup(mongodbatlas_cluster.cluster.connection_strings[0].aws_private_link_srv, aws_vpc_endpoint.ptfe_service.id)
+    value = lookup(mongodbatlas_advanced_cluster.cluster.connection_strings[0].aws_private_link_srv, aws_vpc_endpoint.ptfe_service.id)
 }
 ```
 **Output:**
diff --git a/examples/starter/atlas_cluster.tf b/examples/starter/atlas_cluster.tf
index f07552a47c..18fff374e6 100644
--- a/examples/starter/atlas_cluster.tf
+++ b/examples/starter/atlas_cluster.tf
@@ -1,24 +1,23 @@
-resource "mongodbatlas_cluster" "cluster" {
-  project_id   = mongodbatlas_project.project.id
-  name         = var.cluster_name
-  mongo_db_major_version = var.mongodbversion
-  cluster_type = "REPLICASET"
+resource "mongodbatlas_advanced_cluster" "cluster" {
+  project_id     = mongodbatlas_project.project.id
+  name           = var.cluster_name
+  cluster_type   = "REPLICASET"
+  backup_enabled = true
+
   replication_specs {
-    num_shards = 1
-    regions_config {
-      region_name     = var.region
-      electable_nodes = 3
-      priority        = 7
-      read_only_nodes = 0
+    region_configs {
+      priority      = 7
+      provider_name = var.cloud_provider
+      region_name   = var.region
+      electable_specs {
+        instance_size = "M10"
+        node_count    = 3
+      }
     }
   }
-  # Provider Settings "block"
-  cloud_backup                 = true
-  auto_scaling_disk_gb_enabled = true
-  provider_name                = var.cloud_provider
-  provider_instance_size_name  = "M10"
 }
+
 output "connection_strings" {
-  value = mongodbatlas_cluster.cluster.connection_strings[0].standard_srv
+  value = mongodbatlas_advanced_cluster.cluster.connection_strings[0].standard_srv
 }
diff --git a/examples/starter/variables.tf b/examples/starter/variables.tf
index 4307ced129..2072e854be 100644
--- a/examples/starter/variables.tf
+++ b/examples/starter/variables.tf
@@ -26,10 +26,6 @@ variable "region" {
   type        = string
   description = "MongoDB Atlas Cluster Region, must be a region for the provider given"
 }
-variable "mongodbversion" {
-  type        = string
-  description = "The Major MongoDB Version"
-}
 variable "dbuser" {
   type        = string
   description = "MongoDB Atlas Database User Name"
diff --git a/go.mod b/go.mod
index 523605c069..7693eb1604 100644
--- a/go.mod
+++ b/go.mod
@@ -4,25 +4,24 @@ go 1.22
 
 require (
 	github.com/andygrunwald/go-jira/v2 v2.0.0-20240116150243-50d59fe116d6
-	github.com/aws/aws-sdk-go v1.54.13
-	github.com/go-test/deep v1.1.1
+	github.com/aws/aws-sdk-go v1.54.19
 	github.com/hashicorp/go-changelog v0.0.0-20240318095659-4d68c58a6e7f
 	github.com/hashicorp/go-cty v1.4.1-0.20200414143053-d3edf31b6320
 	github.com/hashicorp/go-version v1.7.0
 	github.com/hashicorp/hcl/v2 v2.21.0
-	github.com/hashicorp/terraform-plugin-framework v1.9.0
+	github.com/hashicorp/terraform-plugin-framework v1.10.0
 	github.com/hashicorp/terraform-plugin-framework-timeouts v0.4.1
-	github.com/hashicorp/terraform-plugin-framework-validators v0.12.0
+	github.com/hashicorp/terraform-plugin-framework-validators v0.13.0
 	github.com/hashicorp/terraform-plugin-go v0.23.0
 	github.com/hashicorp/terraform-plugin-log v0.9.0
 	github.com/hashicorp/terraform-plugin-mux v0.16.0
 	github.com/hashicorp/terraform-plugin-sdk v1.17.2
 	github.com/hashicorp/terraform-plugin-sdk/v2 v2.34.0
-	github.com/hashicorp/terraform-plugin-testing v1.8.0
+	github.com/hashicorp/terraform-plugin-testing v1.9.0
 	github.com/mongodb-forks/digest v1.1.0
 	github.com/spf13/cast v1.6.0
 	github.com/stretchr/testify v1.9.0
-	github.com/zclconf/go-cty v1.14.4
+	github.com/zclconf/go-cty v1.15.0
 	go.mongodb.org/atlas v0.36.0
 	go.mongodb.org/atlas-sdk/v20231115014 v20231115014.0.0
 	go.mongodb.org/atlas-sdk/v20240530002 v20240530002.0.1-0.20240710142852-8a1b5dd5d8f3
@@ -77,7 +76,7 @@ require (
 	github.com/hashicorp/go-plugin v1.6.0 // indirect
 	github.com/hashicorp/go-safetemp v1.0.0 // indirect
 	github.com/hashicorp/go-uuid v1.0.3 // indirect
-	github.com/hashicorp/hc-install v0.6.4 // indirect
+	github.com/hashicorp/hc-install v0.7.0 // indirect
 	github.com/hashicorp/hcl v1.0.0 // indirect
 	github.com/hashicorp/logutils v1.0.0 // indirect
 	github.com/hashicorp/terraform-exec v0.21.0 // indirect
@@ -122,15 +121,15 @@ require (
 	go.opentelemetry.io/otel v1.22.0 // indirect
 	go.opentelemetry.io/otel/metric v1.22.0 // indirect
 	go.opentelemetry.io/otel/trace v1.22.0 // indirect
-	golang.org/x/crypto v0.23.0 // indirect
-	golang.org/x/mod v0.16.0 // indirect
-	golang.org/x/net v0.23.0 // indirect
+	golang.org/x/crypto v0.25.0 // indirect
+	golang.org/x/mod v0.17.0 // indirect
+	golang.org/x/net v0.25.0 // indirect
 	golang.org/x/oauth2 v0.17.0 // indirect
-	golang.org/x/sync v0.6.0 // indirect
-	golang.org/x/sys v0.20.0 // indirect
-	golang.org/x/text v0.15.0 // indirect
+	golang.org/x/sync v0.7.0 // indirect
+	golang.org/x/sys v0.22.0 // indirect
+	golang.org/x/text v0.16.0 // indirect
 	golang.org/x/time v0.5.0 // indirect
-	golang.org/x/tools v0.13.0 // indirect
+	golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d // indirect
 	google.golang.org/api v0.162.0 // indirect
 	google.golang.org/appengine v1.6.8 // indirect
 	google.golang.org/genproto v0.0.0-20240227224415-6ceb2ff114de // indirect
diff --git a/go.sum b/go.sum
index 54dc6c9576..4c207b062c 100644
--- a/go.sum
+++ b/go.sum
@@ -243,8 +243,8 @@ github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkY
 github.com/aws/aws-sdk-go v1.15.78/go.mod h1:E3/ieXAlvM0XWO57iftYVDLLvQ824smPP3ATZkfNZeM=
 github.com/aws/aws-sdk-go v1.37.0/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
 github.com/aws/aws-sdk-go v1.44.122/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo=
-github.com/aws/aws-sdk-go v1.54.13 h1:zpCuiG+/mFdDY/klKJvmSioAZWk45F4rLGq0JWVAAzk=
-github.com/aws/aws-sdk-go v1.54.13/go.mod h1:eRwEWoyTWFMVYVQzKMNHWP5/RV4xIUGMQfXQHfHkpNU=
+github.com/aws/aws-sdk-go v1.54.19 h1:tyWV+07jagrNiCcGRzRhdtVjQs7Vy41NwsuOcl0IbVI=
+github.com/aws/aws-sdk-go v1.54.19/go.mod h1:eRwEWoyTWFMVYVQzKMNHWP5/RV4xIUGMQfXQHfHkpNU=
 github.com/bgentry/go-netrc v0.0.0-20140422174119-9fd32a8b3d3d h1:xDfNPAt8lFiC1UJrqV3uuy861HCTo708pDMbjHHdCas=
 github.com/bgentry/go-netrc v0.0.0-20140422174119-9fd32a8b3d3d/go.mod h1:6QX/PXZ00z/TKoufEY6K/a0k6AhaJrQKdFe6OfVXsa4=
 github.com/bgentry/speakeasy v0.1.0 h1:ByYyxL9InA1OWqxJqqp2A5pYHUrCiAL6K3J+LKSsQkY=
@@ -499,8 +499,8 @@ github.com/hashicorp/go-version v1.7.0 h1:5tqGy27NaOTB8yJKUZELlFAS/LTKJkrmONwQKe
 github.com/hashicorp/go-version v1.7.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
 github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
 github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
-github.com/hashicorp/hc-install v0.6.4 h1:QLqlM56/+SIIGvGcfFiwMY3z5WGXT066suo/v9Km8e0=
-github.com/hashicorp/hc-install v0.6.4/go.mod h1:05LWLy8TD842OtgcfBbOT0WMoInBMUSHjmDx10zuBIA=
+github.com/hashicorp/hc-install v0.7.0 h1:Uu9edVqjKQxxuD28mR5TikkKDd/p55S8vzPC1659aBk=
+github.com/hashicorp/hc-install v0.7.0/go.mod h1:ELmmzZlGnEcqoUMKUuykHaPCIR1sYLYX+KSggWSKZuA=
 github.com/hashicorp/hcl v0.0.0-20170504190234-a4b07c25de5f/go.mod h1:oZtUIOe8dh44I2q6ScRibXws4Ajl+d+nod3AaR9vL5w=
 github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
 github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
@@ -518,12 +518,12 @@ github.com/hashicorp/terraform-exec v0.21.0/go.mod h1:1PPeMYou+KDUSSeRE9szMZ/oHf
 github.com/hashicorp/terraform-json v0.10.0/go.mod h1:3defM4kkMfttwiE7VakJDwCd4R+umhSQnvJwORXbprE=
 github.com/hashicorp/terraform-json v0.22.1 h1:xft84GZR0QzjPVWs4lRUwvTcPnegqlyS7orfb5Ltvec=
 github.com/hashicorp/terraform-json v0.22.1/go.mod h1:JbWSQCLFSXFFhg42T7l9iJwdGXBYV8fmmD6o/ML4p3A=
-github.com/hashicorp/terraform-plugin-framework v1.9.0 h1:caLcDoxiRucNi2hk8+j3kJwkKfvHznubyFsJMWfZqKU=
-github.com/hashicorp/terraform-plugin-framework v1.9.0/go.mod h1:qBXLDn69kM97NNVi/MQ9qgd1uWWsVftGSnygYG1tImM=
+github.com/hashicorp/terraform-plugin-framework v1.10.0 h1:xXhICE2Fns1RYZxEQebwkB2+kXouLC932Li9qelozrc=
+github.com/hashicorp/terraform-plugin-framework v1.10.0/go.mod h1:qBXLDn69kM97NNVi/MQ9qgd1uWWsVftGSnygYG1tImM=
 github.com/hashicorp/terraform-plugin-framework-timeouts v0.4.1 h1:gm5b1kHgFFhaKFhm4h2TgvMUlNzFAtUqlcOWnWPm+9E=
 github.com/hashicorp/terraform-plugin-framework-timeouts v0.4.1/go.mod h1:MsjL1sQ9L7wGwzJ5RjcI6FzEMdyoBnw+XK8ZnOvQOLY=
-github.com/hashicorp/terraform-plugin-framework-validators v0.12.0 h1:HOjBuMbOEzl7snOdOoUfE2Jgeto6JOjLVQ39Ls2nksc=
-github.com/hashicorp/terraform-plugin-framework-validators v0.12.0/go.mod h1:jfHGE/gzjxYz6XoUwi/aYiiKrJDeutQNUtGQXkaHklg=
+github.com/hashicorp/terraform-plugin-framework-validators v0.13.0 h1:bxZfGo9DIUoLLtHMElsu+zwqI4IsMZQBRRy4iLzZJ8E=
+github.com/hashicorp/terraform-plugin-framework-validators v0.13.0/go.mod h1:wGeI02gEhj9nPANU62F2jCaHjXulejm/X+af4PdZaNo=
 github.com/hashicorp/terraform-plugin-go v0.23.0 h1:AALVuU1gD1kPb48aPQUjug9Ir/125t+AAurhqphJ2Co=
 github.com/hashicorp/terraform-plugin-go v0.23.0/go.mod h1:1E3Cr9h2vMlahWMbsSEcNrOCxovCZhOOIXjFHbjc/lQ=
 github.com/hashicorp/terraform-plugin-log v0.9.0 h1:i7hOA+vdAItN1/7UrfBqBwvYPQ9TFvymaRGZED3FCV0=
@@ -535,8 +535,8 @@ github.com/hashicorp/terraform-plugin-sdk v1.17.2/go.mod h1:wkvldbraEMkz23NxkkAs
 github.com/hashicorp/terraform-plugin-sdk/v2 v2.34.0 h1:kJiWGx2kiQVo97Y5IOGR4EMcZ8DtMswHhUuFibsCQQE=
 github.com/hashicorp/terraform-plugin-sdk/v2 v2.34.0/go.mod h1:sl/UoabMc37HA6ICVMmGO+/0wofkVIRxf+BMb/dnoIg=
 github.com/hashicorp/terraform-plugin-test/v2 v2.2.1/go.mod h1:eZ9JL3O69Cb71Skn6OhHyj17sLmHRb+H6VrDcJjKrYU=
-github.com/hashicorp/terraform-plugin-testing v1.8.0 h1:wdYIgwDk4iO933gC4S8KbKdnMQShu6BXuZQPScmHvpk=
-github.com/hashicorp/terraform-plugin-testing v1.8.0/go.mod h1:o2kOgf18ADUaZGhtOl0YCkfIxg01MAiMATT2EtIHlZk=
+github.com/hashicorp/terraform-plugin-testing v1.9.0 h1:xOsQRqqlHKXpFq6etTxih3ubdK3HVDtfE1IY7Rpd37o=
+github.com/hashicorp/terraform-plugin-testing v1.9.0/go.mod h1:fhhVx/8+XNJZTD5o3b4stfZ6+q7z9+lIWigIYdT6/44=
 github.com/hashicorp/terraform-registry-address v0.2.3 h1:2TAiKJ1A3MAkZlH1YI/aTVcLZRu7JseiXNRHbOAyoTI=
 github.com/hashicorp/terraform-registry-address v0.2.3/go.mod h1:lFHA76T8jfQteVfT7caREqguFrW3c4MFSPhZB7HHgUM=
 github.com/hashicorp/terraform-svchost v0.0.0-20200729002733-f050f53b9734/go.mod h1:kNDNcF7sN4DocDLBkQYz73HGKwN1ANB1blq4lIYLYvg=
@@ -770,8 +770,8 @@ github.com/zclconf/go-cty v1.1.0/go.mod h1:xnAOWiHeOqg2nWS62VtQ7pbOu17FtxJNW8RLE
 github.com/zclconf/go-cty v1.2.0/go.mod h1:hOPWgoHbaTUnI5k4D2ld+GRpFJSCe6bCM7m1q/N4PQ8=
 github.com/zclconf/go-cty v1.2.1/go.mod h1:hOPWgoHbaTUnI5k4D2ld+GRpFJSCe6bCM7m1q/N4PQ8=
 github.com/zclconf/go-cty v1.8.2/go.mod h1:vVKLxnk3puL4qRAv72AO+W99LUD4da90g3uUAzyuvAk=
-github.com/zclconf/go-cty v1.14.4 h1:uXXczd9QDGsgu0i/QFR/hzI5NYCHLf6NQw/atrbnhq8=
-github.com/zclconf/go-cty v1.14.4/go.mod h1:VvMs5i0vgZdhYawQNq5kePSpLAoz8u1xvZgrPIxfnZE=
+github.com/zclconf/go-cty v1.15.0 h1:tTCRWxsexYUmtt/wVxgDClUe+uQusuI443uL6e+5sXQ=
+github.com/zclconf/go-cty v1.15.0/go.mod h1:VvMs5i0vgZdhYawQNq5kePSpLAoz8u1xvZgrPIxfnZE=
 github.com/zclconf/go-cty-debug v0.0.0-20191215020915-b22d67c1ba0b/go.mod h1:ZRKQfBXbGkpdV6QMzT3rU1kSTAnfu1dO8dPKjYprgj8=
 github.com/zclconf/go-cty-debug v0.0.0-20240509010212-0d6042c53940 h1:4r45xpDWB6ZMSMNJFMOjqrGHynW3DIBuR2H9j0ug+Mo=
 github.com/zclconf/go-cty-debug v0.0.0-20240509010212-0d6042c53940/go.mod h1:CmBdvvj3nqzfzJ6nTCIwDTPZ56aVGvDrmztiO5g3qrM=
@@ -831,8 +831,8 @@ golang.org/x/crypto v0.7.0/go.mod h1:pYwdfH91IfpZVANVyUOhSIPZaFoJGxTFbZhFTx+dXZU
 golang.org/x/crypto v0.11.0/go.mod h1:xgJhtzW8F9jGdVFWZESrid1U1bjeNy4zgy5cRr/CIio=
 golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliYc=
 golang.org/x/crypto v0.16.0/go.mod h1:gCAAfMLgwOJRpTjQ2zCCt2OcSfYMTeZVSRtQlPC7Nq4=
-golang.org/x/crypto v0.23.0 h1:dIJU/v2J8Mdglj/8rJ6UUOM3Zc9zLZxVZwwxMooUSAI=
-golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=
+golang.org/x/crypto v0.25.0 h1:ypSNr+bnYL2YhwoMt2zPxHFmbAN1KZs/njMG3hxUp30=
+golang.org/x/crypto v0.25.0/go.mod h1:T+wALwcMOSE0kXgUAnPAHqTLW+XHgcELELW8VaDgm/M=
 golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
 golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
 golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@@ -876,8 +876,8 @@ golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
 golang.org/x/mod v0.9.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
 golang.org/x/mod v0.10.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
 golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
-golang.org/x/mod v0.16.0 h1:QX4fJ0Rr5cPQCF7O9lh9Se4pmwfwskqZfq5moyldzic=
-golang.org/x/mod v0.16.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
+golang.org/x/mod v0.17.0 h1:zY54UmvipHiNd+pm+m0x9KhZ9hl1/7QNMyxXbc6ICqA=
+golang.org/x/mod v0.17.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
 golang.org/x/net v0.0.0-20180530234432-1e491301e022/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
 golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
 golang.org/x/net v0.0.0-20180811021610-c39426892332/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -946,8 +946,8 @@ golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
 golang.org/x/net v0.12.0/go.mod h1:zEVYFnQC7m/vmpQFELhcD1EWkZlX69l4oqgmer6hfKA=
 golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk=
 golang.org/x/net v0.19.0/go.mod h1:CfAk/cbD4CthTvqiEl8NpboMuiuOYsAr/7NOjZJtv1U=
-golang.org/x/net v0.23.0 h1:7EYJ93RZ9vYSZAIb2x3lnuvqO5zneoD6IvWjuhfxjTs=
-golang.org/x/net v0.23.0/go.mod h1:JKghWKKOSdJwpW2GEx0Ja7fmaKnMsbu+MWVZTokSYmg=
+golang.org/x/net v0.25.0 h1:d/OCCoBEUq33pjydKrGQhw7IlUPI2Oylr+8qLx49kac=
+golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=
 golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
 golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
 golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -992,8 +992,8 @@ golang.org/x/sync v0.0.0-20220929204114-8fcdb60fdcc0/go.mod h1:RxMgew5VJxzue5/jJ
 golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.2.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=
-golang.org/x/sync v0.6.0 h1:5BMeUDZ7vkXGfEr1x9B4bRcTH4lpkTkpdh0T/J+qjbQ=
-golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
+golang.org/x/sync v0.7.0 h1:YsImfSBoP9QPYL0xyKJPq0gcaJdG3rInoqxTWbfQu9M=
+golang.org/x/sync v0.7.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
 golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -1091,8 +1091,8 @@ golang.org/x/sys v0.10.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.15.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
-golang.org/x/sys v0.20.0 h1:Od9JTbYCk261bKm4M/mw7AklTlFYIa0bIp9BgSm1S8Y=
-golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
+golang.org/x/sys v0.22.0 h1:RI27ohtqKCnwULzJLqkv897zojh5/DwS/ENaMzUOaWI=
+golang.org/x/sys v0.22.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
 golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
 golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
 golang.org/x/term v0.0.0-20220722155259-a9ba230a4035/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
@@ -1106,8 +1106,8 @@ golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
 golang.org/x/term v0.10.0/go.mod h1:lpqdcUyK/oCiQxvxVrppt5ggO2KCZ5QblwqPnfZ6d5o=
 golang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU=
 golang.org/x/term v0.15.0/go.mod h1:BDl952bC7+uMoWR75FIrCDx79TPU9oHkTZ9yRbYOrX0=
-golang.org/x/term v0.20.0 h1:VnkxpohqXaOBYJtBmEppKUG6mXpi+4O6purfc2+sMhw=
-golang.org/x/term v0.20.0/go.mod h1:8UkIAJTvZgivsXaD6/pH6U9ecQzZ45awqEOzuCvwpFY=
+golang.org/x/term v0.22.0 h1:BbsgPEJULsl2fV/AT3v15Mjva5yXKQDyKf+TbDz7QJk=
+golang.org/x/term v0.22.0/go.mod h1:F3qCibpT5AMpCRfhfT53vVJwhLtIVHhB9XDjfFvnMI4=
 golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
 golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
 golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -1127,8 +1127,8 @@ golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
 golang.org/x/text v0.11.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
 golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
 golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
-golang.org/x/text v0.15.0 h1:h1V/4gjBv8v9cjcR6+AR5+/cIYK5N/WAgiv4xlsEtAk=
-golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
+golang.org/x/text v0.16.0 h1:a94ExnEXNtEwYLGJSIUxnWoxoRz/ZcCsV63ROupILh4=
+golang.org/x/text v0.16.0/go.mod h1:GhwF1Be+LQoKShO3cGOHzqOgRrGaYc9AvblQOmPVHnI=
 golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
 golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
 golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
@@ -1196,8 +1196,9 @@ golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
 golang.org/x/tools v0.7.0/go.mod h1:4pg6aUX35JBAogB10C9AtvVL+qowtN4pT3CGSQex14s=
 golang.org/x/tools v0.9.1/go.mod h1:owI94Op576fPu3cIGQeHs3joujW/2Oc6MtlxbF5dfNc=
 golang.org/x/tools v0.9.3/go.mod h1:owI94Op576fPu3cIGQeHs3joujW/2Oc6MtlxbF5dfNc=
-golang.org/x/tools v0.13.0 h1:Iey4qkscZuv0VvIt8E0neZjtPVQFSc870HQ448QgEmQ=
 golang.org/x/tools v0.13.0/go.mod h1:HvlwmtVNQAhOuCjW7xxvovg8wbNq7LwfXh/k7wXUl58=
+golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d h1:vU5i/LfpvrRCpgM/VPfJLg5KjxD3E+hfT1SH+d9zLwg=
+golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d/go.mod h1:aiJjzUbINMkxbQROHiO6hDPo2LHcIPhhQsa9DLh0yGk=
 golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
 golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
 golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
diff --git a/internal/common/conversion/encode_state_test.go b/internal/common/conversion/encode_state_test.go
index 94628c994f..1b3e75c239 100644
--- a/internal/common/conversion/encode_state_test.go
+++ b/internal/common/conversion/encode_state_test.go
@@ -1,9 +1,9 @@
 package conversion_test
 
 import (
+	"reflect"
 	"testing"
 
-	"github.com/go-test/deep"
 	"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
 )
@@ -16,8 +16,8 @@ func TestEncodeDecodeID(t *testing.T) {
 
 	got := conversion.DecodeStateID(conversion.EncodeStateID(expected))
 
-	if diff := deep.Equal(expected, got); diff != nil {
-		t.Fatalf("Bad testEncodeDecodeID return \n got = %#v\nwant = %#v \ndiff = %#v", got, expected, diff)
+	if !reflect.DeepEqual(expected, got) {
+		t.Fatalf("Bad testEncodeDecodeID return \n got = %#v\nwant = %#v", got, expected)
 	}
 }
@@ -28,7 +28,7 @@ func TestDecodeID(t *testing.T) {
 	got := conversion.DecodeStateID(expected)
 	got2 := conversion.DecodeStateID(expected2)
 
-	if diff := deep.Equal(got, got2); diff != nil {
-		t.Fatalf("Bad TestDecodeID return \n got = %#v\nwant = %#v \ndiff = %#v", got, got2, diff)
+	if !reflect.DeepEqual(got, got2) {
+		t.Fatalf("Bad TestDecodeID return \n got = %#v\nwant = %#v", got, got2)
 	}
 }
diff --git a/internal/service/advancedcluster/model_advanced_cluster.go b/internal/service/advancedcluster/model_advanced_cluster.go
index c9d7f6480b..2b1cb8ec10 100644
--- a/internal/service/advancedcluster/model_advanced_cluster.go
+++ b/internal/service/advancedcluster/model_advanced_cluster.go
@@ -695,10 +695,12 @@ func flattenAdvancedReplicationSpecRegionConfigSpec(apiObject *admin.DedicatedHa
 	if len(tfMapObjects) > 0 {
 		tfMapObject := tfMapObjects[0].(map[string]any)
 
-		if providerName == "AWS" {
+		if providerName == constant.AWS || providerName == constant.AZURE {
 			if cast.ToInt64(apiObject.GetDiskIOPS()) > 0 {
 				tfMap["disk_iops"] = apiObject.GetDiskIOPS()
 			}
+		}
+		if providerName == constant.AWS {
 			if v, ok := tfMapObject["ebs_volume_type"]; ok && v.(string) != "" {
 				tfMap["ebs_volume_type"] = apiObject.GetEbsVolumeType()
 			}
@@ -945,10 +947,12 @@ func expandRegionConfig(tfMap map[string]any, rootDiskSizeGB *float64) *admin.Cl
 func expandRegionConfigSpec(tfList []any, providerName string, rootDiskSizeGB *float64) *admin.DedicatedHardwareSpec20250101 {
 	tfMap, _ := tfList[0].(map[string]any)
 	apiObject := new(admin.DedicatedHardwareSpec20250101)
-	if providerName == "AWS" {
+	if providerName == constant.AWS || providerName == constant.AZURE {
 		if v, ok := tfMap["disk_iops"]; ok && v.(int) > 0 {
 			apiObject.DiskIOPS = conversion.Pointer(v.(int))
 		}
+	}
+	if providerName == constant.AWS {
 		if v, ok := tfMap["ebs_volume_type"]; ok {
 			apiObject.EbsVolumeType = conversion.StringPtr(v.(string))
 		}
diff --git a/internal/service/advancedcluster/resource_advanced_cluster_state_upgrader_test.go b/internal/service/advancedcluster/resource_advanced_cluster_state_upgrader_test.go
index f860d49b4d..eafce884cb 100644
--- a/internal/service/advancedcluster/resource_advanced_cluster_state_upgrader_test.go
+++ b/internal/service/advancedcluster/resource_advanced_cluster_state_upgrader_test.go
@@ -109,7 +109,7 @@ func TestMigAdvancedCluster_v0StateUpgrade_ReplicationSpecs(t *testing.T) {
 	v0Config := terraform.NewResourceConfigRaw(v0State)
 	diags := advancedcluster.ResourceV0().Validate(v0Config)
-	if len(diags) > 0 {
+	if diags.HasError() {
 		fmt.Println(diags)
 		t.Error("test precondition failed - invalid mongodb cluster v0 config")
@@ -121,7 +121,7 @@
 	v1Config := terraform.NewResourceConfigRaw(v1State)
 	diags = advancedcluster.Resource().Validate(v1Config)
-	if len(diags) > 0 {
+	if diags.HasError() {
 		fmt.Println(diags)
 		t.Error("migrated advanced cluster replication_specs invalid")
diff --git a/internal/service/advancedcluster/resource_advanced_cluster_test.go b/internal/service/advancedcluster/resource_advanced_cluster_test.go
index 07f0fee3f8..b177f35182 100644
--- a/internal/service/advancedcluster/resource_advanced_cluster_test.go
+++ b/internal/service/advancedcluster/resource_advanced_cluster_test.go
@@ -950,7 +950,15 @@ func checkSingleShardedMultiCloud(name string, verifyExternalID bool) resource.T
 	additionalChecks := []resource.TestCheckFunc{}
 	if verifyExternalID {
-		additionalChecks = append(additionalChecks, resource.TestCheckResourceAttrSet(resourceName, "replication_specs.0.external_id"))
+		additionalChecks = append(
+			additionalChecks,
+			resource.TestCheckResourceAttrSet(resourceName, "replication_specs.0.external_id"),
+			resource.TestCheckResourceAttrWith(resourceName, "replication_specs.0.region_configs.0.electable_specs.0.disk_iops", acc.IntGreatThan(0)),
+			resource.TestCheckResourceAttrWith(resourceName, "replication_specs.0.region_configs.0.analytics_specs.0.disk_iops", acc.IntGreatThan(0)),
+			resource.TestCheckResourceAttrWith(resourceName, "replication_specs.0.region_configs.1.electable_specs.0.disk_iops", acc.IntGreatThan(0)),
+			resource.TestCheckResourceAttrWith(dataSourceName, "replication_specs.0.region_configs.0.electable_specs.0.disk_iops", acc.IntGreatThan(0)),
+			resource.TestCheckResourceAttrWith(dataSourceName, "replication_specs.0.region_configs.0.analytics_specs.0.disk_iops", acc.IntGreatThan(0)),
+			resource.TestCheckResourceAttrWith(dataSourceName, "replication_specs.0.region_configs.1.electable_specs.0.disk_iops", acc.IntGreatThan(0)))
 	}
 
 	return checkAggr(
diff --git a/internal/service/backupcompliancepolicy/resource_backup_compliance_policy_test.go b/internal/service/backupcompliancepolicy/resource_backup_compliance_policy_test.go
index bd0c81dc53..6c3dabfdf8 100644
--- a/internal/service/backupcompliancepolicy/resource_backup_compliance_policy_test.go
+++ b/internal/service/backupcompliancepolicy/resource_backup_compliance_policy_test.go
@@ -14,8 +14,9 @@ import (
 )
 
 const (
-	resourceName   = "mongodbatlas_backup_compliance_policy.backup_policy_res"
-	dataSourceName = "data.mongodbatlas_backup_compliance_policy.backup_policy"
+	resourceName       = "mongodbatlas_backup_compliance_policy.backup_policy_res"
+	dataSourceName     = "data.mongodbatlas_backup_compliance_policy.backup_policy"
+	projectIDTerraform = "mongodbatlas_project.test.id"
 )
 
 func TestAccBackupCompliancePolicy_basic(t *testing.T) {
@@ -57,10 +58,25 @@ func TestAccBackupCompliancePolicy_update(t *testing.T) {
 }
 
 func TestAccBackupCompliancePolicy_overwriteBackupPolicies(t *testing.T) {
+	acc.SkipTestForCI(t) // TODO: CLOUDP-262014 for ensuring replicationSpec.id is being populated for replica set and symmetric sharded clusters
 	var (
 		orgID          = os.Getenv("MONGODB_ATLAS_ORG_ID")
 		projectName    = acc.RandomProjectName() // No ProjectIDExecution to avoid conflicts with backup compliance policy
 		projectOwnerID = os.Getenv("MONGODB_ATLAS_PROJECT_OWNER_ID")
+		req            = acc.ClusterRequest{
+			AdvancedConfiguration: map[string]any{
+				acc.ClusterAdvConfigOplogMinRetentionHours: 8,
+			},
+			ProjectID:            projectIDTerraform,
+			MongoDBMajorVersion:  "6.0",
+			CloudBackup:          true,
+			DiskSizeGb:           12,
+			RetainBackupsEnabled: true,
+			ReplicationSpecs: []acc.ReplicationSpecRequest{
+				{EbsVolumeType: "STANDARD", AutoScalingDiskGbEnabled: true, NodeCount: 3},
+			},
+		}
+		clusterInfo = acc.GetClusterInfo(t, &req)
 	)
 
 	resource.ParallelTest(t, resource.TestCase{
@@ -68,10 +84,10 @@ func TestAccBackupCompliancePolicy_overwriteBackupPolicies(t *testing.T) {
 		ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
 		Steps: []resource.TestStep{
 			{
-				Config: configClusterWithBackupSchedule(projectName, orgID, projectOwnerID),
+				Config: configClusterWithBackupSchedule(projectName, orgID, projectOwnerID, &clusterInfo),
 			},
 			{
-				Config:      configOverwriteIncompatibleBackupPoliciesError(projectName, orgID, projectOwnerID),
+				Config:      configOverwriteIncompatibleBackupPoliciesError(projectName, orgID, projectOwnerID, &clusterInfo),
 				ExpectError: regexp.MustCompile(`BACKUP_POLICIES_NOT_MEETING_BACKUP_COMPLIANCE_POLICY_REQUIREMENTS`),
 			},
 		},
@@ -324,39 +340,11 @@ func configWithoutRestoreDays(projectName, orgID, projectOwnerID string) string
 	`
 }
 
-func configOverwriteIncompatibleBackupPoliciesError(projectName, orgID, projectOwnerID string) string {
-	return acc.ConfigProjectWithSettings(projectName, orgID, projectOwnerID, false) + `
-	resource "mongodbatlas_cluster" "test" {
-		project_id                   = mongodbatlas_project.test.id
-		name                         = "test1"
-		provider_name                = "AWS"
-		cluster_type                 = "REPLICASET"
-		mongo_db_major_version       = "6.0"
-		provider_instance_size_name  = "M10"
-		auto_scaling_compute_enabled = false
-		cloud_backup                 = true
-		auto_scaling_disk_gb_enabled = true
-		disk_size_gb                 = 12
-		provider_volume_type         = "STANDARD"
-		retain_backups_enabled       = true
-
-		advanced_configuration {
-			oplog_min_retention_hours = 8
-		}
-
-		replication_specs {
-			num_shards = 1
-			regions_config {
-				region_name     = "US_EAST_1"
-				electable_nodes = 3
-				priority        = 7
-				read_only_nodes = 0
-			}
-		}
-	}
-
+func configOverwriteIncompatibleBackupPoliciesError(projectName, orgID, projectOwnerID string, info *acc.ClusterInfo) string {
+	return acc.ConfigProjectWithSettings(projectName, orgID, projectOwnerID, false) + fmt.Sprintf(`
+	%[1]s
 	resource "mongodbatlas_cloud_backup_schedule" "test" {
-		cluster_name = mongodbatlas_cluster.test.name
+		cluster_name = %[2]s.name
 		project_id   = mongodbatlas_project.test.id
 
 		reference_hour_of_day = 3
@@ -367,7 +355,7 @@ func configOverwriteIncompatibleBackupPoliciesError(projectName, orgID, projectO
 			cloud_provider      = "AWS"
 			frequencies         = ["DAILY"]
 			region_name         = "US_WEST_1"
-			replication_spec_id = one(mongodbatlas_cluster.test.replication_specs).id
+			replication_spec_id = one(%[2]s.replication_specs).id
 			should_copy_oplogs  = false
 		}
 	}
@@ -393,42 +381,14 @@ func configOverwriteIncompatibleBackupPoliciesError(projectName, orgID, projectO
 			retention_value = 1
 		}
 	}
-	`
+	`, info.TerraformStr, info.ResourceName)
 }
 
-func configClusterWithBackupSchedule(projectName, orgID, projectOwnerID string) string {
-	return acc.ConfigProjectWithSettings(projectName, orgID, projectOwnerID, false) + `
-	resource "mongodbatlas_cluster" "test" {
-		project_id                   = mongodbatlas_project.test.id
-		name                         = "test1"
-		provider_name                = "AWS"
-		cluster_type                 = "REPLICASET"
-		mongo_db_major_version       = "6.0"
-		provider_instance_size_name  = "M10"
-		auto_scaling_compute_enabled = false
-		cloud_backup                 = true
-		auto_scaling_disk_gb_enabled = true
-		disk_size_gb                 = 12
-		provider_volume_type         = "STANDARD"
-		retain_backups_enabled       = true
-
-		advanced_configuration {
-			oplog_min_retention_hours = 8
-		}
-
-		replication_specs {
-			num_shards = 1
-			regions_config {
-				region_name     = "US_EAST_1"
-				electable_nodes = 3
-				priority        = 7
-				read_only_nodes = 0
-			}
-		}
-	}
-
+func configClusterWithBackupSchedule(projectName, orgID, projectOwnerID string, info *acc.ClusterInfo) string {
+	return acc.ConfigProjectWithSettings(projectName, orgID, projectOwnerID, false) + fmt.Sprintf(`
+	%[1]s
 	resource "mongodbatlas_cloud_backup_schedule" "test" {
-		cluster_name = mongodbatlas_cluster.test.name
+		cluster_name = %[2]s.name
 		project_id   = mongodbatlas_project.test.id
 
 		reference_hour_of_day = 3
@@ -439,11 +399,11 @@ func configClusterWithBackupSchedule(projectName, orgID, projectOwnerID string)
 			cloud_provider      = "AWS"
 			frequencies         = ["DAILY"]
 			region_name         = "US_WEST_1"
-			replication_spec_id = one(mongodbatlas_cluster.test.replication_specs).id
+			replication_spec_id = one(%[2]s.replication_specs).id
 			should_copy_oplogs  = false
 		}
 	}
-	`
+	`, info.TerraformStr, info.ResourceName)
 }
 
 func basicChecks() []resource.TestCheckFunc {
diff --git a/internal/service/cloudbackupschedule/resource_cloud_backup_schedule_migration_test.go b/internal/service/cloudbackupschedule/resource_cloud_backup_schedule_migration_test.go
index bc9235dbe6..830bb5ab67 100644
--- a/internal/service/cloudbackupschedule/resource_cloud_backup_schedule_migration_test.go
+++ b/internal/service/cloudbackupschedule/resource_cloud_backup_schedule_migration_test.go
@@ -30,8 +30,8 @@ func TestMigBackupRSCloudBackupSchedule_basic(t *testing.T) {
 				Config: config,
 				Check: resource.ComposeAggregateTestCheckFunc(
 					checkExists(resourceName),
-					resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.ClusterName),
-					resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.ClusterName),
+					resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.Name),
+					resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.Name),
 					resource.TestCheckResourceAttr(resourceName, "reference_hour_of_day", "0"),
 					resource.TestCheckResourceAttr(resourceName, "reference_minute_of_hour", "0"),
 					resource.TestCheckResourceAttr(resourceName, "restore_window_days", "7"),
diff --git a/internal/service/cloudbackupschedule/resource_cloud_backup_schedule_test.go b/internal/service/cloudbackupschedule/resource_cloud_backup_schedule_test.go
index 761e18c4e1..3be96b2add 100644
--- a/internal/service/cloudbackupschedule/resource_cloud_backup_schedule_test.go
+++ b/internal/service/cloudbackupschedule/resource_cloud_backup_schedule_test.go
@@ -36,7 +36,7 @@ func TestAccBackupRSCloudBackupSchedule_basic(t *testing.T) {
 				}),
 				Check: resource.ComposeAggregateTestCheckFunc(
 					checkExists(resourceName),
-					resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.ClusterName),
+					resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.Name),
 					resource.TestCheckResourceAttr(resourceName, "reference_hour_of_day", "3"),
 					resource.TestCheckResourceAttr(resourceName, "reference_minute_of_hour", "45"),
 					resource.TestCheckResourceAttr(resourceName, "restore_window_days", "4"),
@@ -45,7 +45,7 @@ func TestAccBackupRSCloudBackupSchedule_basic(t *testing.T) {
 					resource.TestCheckResourceAttr(resourceName, "policy_item_weekly.#", "0"),
 					resource.TestCheckResourceAttr(resourceName, "policy_item_monthly.#", "0"),
 					resource.TestCheckResourceAttr(resourceName, "policy_item_yearly.#", "0"),
-					resource.TestCheckResourceAttr(dataSourceName, "cluster_name", clusterInfo.ClusterName),
+					resource.TestCheckResourceAttr(dataSourceName, "cluster_name", clusterInfo.Name),
 					resource.TestCheckResourceAttrSet(dataSourceName, "reference_hour_of_day"),
 					resource.TestCheckResourceAttrSet(dataSourceName, "reference_minute_of_hour"),
 					resource.TestCheckResourceAttrSet(dataSourceName, "restore_window_days"),
@@ -64,7 +64,7 @@ func TestAccBackupRSCloudBackupSchedule_basic(t *testing.T) {
 				}, true),
 				Check: resource.ComposeAggregateTestCheckFunc(
 					checkExists(resourceName),
-					resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.ClusterName),
+					resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.Name),
 					resource.TestCheckResourceAttr(resourceName, "reference_hour_of_day", "0"),
 					resource.TestCheckResourceAttr(resourceName, "reference_minute_of_hour", "0"),
 					resource.TestCheckResourceAttr(resourceName, "restore_window_days", "7"),
@@ -93,7 +93,7 @@ func TestAccBackupRSCloudBackupSchedule_basic(t *testing.T) {
 					resource.TestCheckResourceAttr(resourceName, "policy_item_yearly.0.frequency_interval", "1"),
 					resource.TestCheckResourceAttr(resourceName, "policy_item_yearly.0.retention_unit", "years"),
 					resource.TestCheckResourceAttr(resourceName, "policy_item_yearly.0.retention_value", "1"),
-					resource.TestCheckResourceAttr(dataSourceName, "cluster_name", clusterInfo.ClusterName),
+					resource.TestCheckResourceAttr(dataSourceName, "cluster_name", clusterInfo.Name),
 					resource.TestCheckResourceAttrSet(dataSourceName, "reference_hour_of_day"),
 					resource.TestCheckResourceAttrSet(dataSourceName, "reference_minute_of_hour"),
 					resource.TestCheckResourceAttrSet(dataSourceName, "restore_window_days"),
@@ -107,7 +107,7 @@ func TestAccBackupRSCloudBackupSchedule_basic(t *testing.T) {
 				}),
 				Check: resource.ComposeAggregateTestCheckFunc(
 					checkExists(resourceName),
-					resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.ClusterName),
+					resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.Name),
 					resource.TestCheckResourceAttr(resourceName, "auto_export_enabled", "false"),
 					resource.TestCheckResourceAttr(resourceName, "reference_hour_of_day", "0"),
 					resource.TestCheckResourceAttr(resourceName, "reference_minute_of_hour", "0"),
@@ -167,7 +167,7 @@ func TestAccBackupRSCloudBackupSchedule_export(t *testing.T) {
 				Config: configExportPolicies(&clusterInfo, policyName, roleName, bucketName),
 				Check: resource.ComposeAggregateTestCheckFunc(
 					checkExists(resourceName),
-					resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.ClusterName),
+					resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.Name),
 					resource.TestCheckResourceAttr(resourceName, "auto_export_enabled", "true"),
 					resource.TestCheckResourceAttr(resourceName, "reference_hour_of_day", "20"),
 					resource.TestCheckResourceAttr(resourceName, "reference_minute_of_hour", "5"),
@@ -199,7 +199,7 @@ func TestAccBackupRSCloudBackupSchedule_onePolicy(t *testing.T) {
 				}),
 				Check: resource.ComposeAggregateTestCheckFunc(
 					checkExists(resourceName),
-					resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.ClusterName),
+					resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.Name),
 					resource.TestCheckResourceAttr(resourceName, "reference_hour_of_day", "3"),
 					resource.TestCheckResourceAttr(resourceName, "reference_minute_of_hour", "45"),
 					resource.TestCheckResourceAttr(resourceName, "restore_window_days", "4"),
@@ -233,7 +233,7 @@ func TestAccBackupRSCloudBackupSchedule_onePolicy(t *testing.T) {
 				}),
 				Check: resource.ComposeAggregateTestCheckFunc(
 					checkExists(resourceName),
-					resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.ClusterName),
+					resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.Name),
 					resource.TestCheckResourceAttr(resourceName, "reference_hour_of_day", "0"),
 					resource.TestCheckResourceAttr(resourceName, "reference_minute_of_hour", "0"),
 					resource.TestCheckResourceAttr(resourceName, "restore_window_days", "7"),
@@ -251,10 +251,20 @@ func TestAccBackupRSCloudBackupSchedule_onePolicy(t *testing.T) {
 }
 
 func TestAccBackupRSCloudBackupSchedule_copySettings(t *testing.T) {
+	acc.SkipTestForCI(t) // TODO: CLOUDP-262014 for ensuring replicationSpec.id is being populated for replica set and symmetric sharded clusters
 	var (
-		projectID   = acc.ProjectIDExecution(t)
-		clusterName = acc.RandomClusterName()
-		checkMap    = map[string]string{
+		clusterInfo = acc.GetClusterInfo(t, &acc.ClusterRequest{
+			CloudBackup: true,
+			ReplicationSpecs: []acc.ReplicationSpecRequest{
+				{Region: "US_EAST_2"},
+			},
+			PitEnabled: true, // you cannot copy oplogs when pit is not enabled
+		})
+		clusterName         = clusterInfo.Name
+		terraformStr        = clusterInfo.TerraformStr
+		clusterResourceName = clusterInfo.ResourceName
+		projectID           = clusterInfo.ProjectID
+		checkMap            = map[string]string{
 			"cluster_name":             clusterName,
 			"reference_hour_of_day":    "3",
 			"reference_minute_of_hour": "45",
@@ -300,7 +310,7 @@ func TestAccBackupRSCloudBackupSchedule_copySettings(t *testing.T) {
 		CheckDestroy:             checkDestroy,
 		Steps: []resource.TestStep{
 			{
-				Config: configCopySettings(projectID, clusterName, false, &admin20231115.DiskBackupSnapshotSchedule{
+				Config: configCopySettings(terraformStr, projectID, clusterResourceName, false, &admin20231115.DiskBackupSnapshotSchedule{
 					ReferenceHourOfDay:    conversion.Pointer(3),
 					ReferenceMinuteOfHour: conversion.Pointer(45),
 					RestoreWindowDays:     conversion.Pointer(1),
@@ -308,7 +318,7 @@ func TestAccBackupRSCloudBackupSchedule_copySettings(t *testing.T) {
 				Check: resource.ComposeAggregateTestCheckFunc(checksCreate...),
 			},
 			{
-				Config: configCopySettings(projectID, clusterName, true, &admin20231115.DiskBackupSnapshotSchedule{
+				Config: configCopySettings(terraformStr, projectID, clusterResourceName, true, &admin20231115.DiskBackupSnapshotSchedule{
 					ReferenceHourOfDay:    conversion.Pointer(3),
 					ReferenceMinuteOfHour: conversion.Pointer(45),
 					RestoreWindowDays:     conversion.Pointer(1),
@@ -336,7 +346,7 @@ func TestAccBackupRSCloudBackupScheduleImport_basic(t *testing.T) {
 				}),
 				Check: resource.ComposeAggregateTestCheckFunc(
 					checkExists(resourceName),
-					resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.ClusterName),
+					resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.Name),
 					resource.TestCheckResourceAttr(resourceName, "reference_hour_of_day", "3"),
 					resource.TestCheckResourceAttr(resourceName, "reference_minute_of_hour", "45"),
 					resource.TestCheckResourceAttr(resourceName, "restore_window_days", "4"),
@@ -374,7 +384,8 @@ func TestAccBackupRSCloudBackupScheduleImport_basic(t *testing.T) {
 
 func TestAccBackupRSCloudBackupSchedule_azure(t *testing.T) {
 	var (
-		clusterInfo = acc.GetClusterInfo(t, &acc.ClusterRequest{CloudBackup: true, ProviderName: constant.AZURE})
+		spec        = acc.ReplicationSpecRequest{ProviderName: constant.AZURE}
+		clusterInfo = acc.GetClusterInfo(t, &acc.ClusterRequest{CloudBackup: true, ReplicationSpecs: []acc.ReplicationSpecRequest{spec}})
 	)
 
 	resource.ParallelTest(t, resource.TestCase{
@@ -390,7 +401,7 @@ func TestAccBackupRSCloudBackupSchedule_azure(t *testing.T) {
 				}),
 				Check: resource.ComposeAggregateTestCheckFunc(
 					checkExists(resourceName),
-					resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.ClusterName),
+					resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.Name),
 					resource.TestCheckResourceAttr(resourceName, "policy_item_hourly.0.frequency_interval", "1"),
 					resource.TestCheckResourceAttr(resourceName, "policy_item_hourly.0.retention_unit", "days"),
 					resource.TestCheckResourceAttr(resourceName, "policy_item_hourly.0.retention_value", "1")),
@@ -403,7 +414,7 @@ func TestAccBackupRSCloudBackupSchedule_azure(t *testing.T) {
 				}),
 				Check: resource.ComposeAggregateTestCheckFunc(
 					checkExists(resourceName),
-					resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.ClusterName),
+					resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.Name),
 					resource.TestCheckResourceAttr(resourceName, "policy_item_hourly.0.frequency_interval", "2"),
 					resource.TestCheckResourceAttr(resourceName, "policy_item_hourly.0.retention_unit", "days"),
 					resource.TestCheckResourceAttr(resourceName,
"policy_item_hourly.0.retention_value", "3"), @@ -463,10 +474,10 @@ func checkDestroy(s *terraform.State) error { } func configNoPolicies(info *acc.ClusterInfo, p *admin20231115.DiskBackupSnapshotSchedule) string { - return info.ClusterTerraformStr + fmt.Sprintf(` + return info.TerraformStr + fmt.Sprintf(` resource "mongodbatlas_cloud_backup_schedule" "schedule_test" { cluster_name = %[1]s - project_id = %[2]s + project_id = %[2]q reference_hour_of_day = %[3]d reference_minute_of_hour = %[4]d @@ -475,16 +486,16 @@ func configNoPolicies(info *acc.ClusterInfo, p *admin20231115.DiskBackupSnapshot data "mongodbatlas_cloud_backup_schedule" "schedule_test" { cluster_name = %[1]s - project_id = %[2]s + project_id = %[2]q } - `, info.ClusterNameStr, info.ProjectIDStr, p.GetReferenceHourOfDay(), p.GetReferenceMinuteOfHour(), p.GetRestoreWindowDays()) + `, info.TerraformNameRef, info.ProjectID, p.GetReferenceHourOfDay(), p.GetReferenceMinuteOfHour(), p.GetRestoreWindowDays()) } func configDefault(info *acc.ClusterInfo, p *admin20231115.DiskBackupSnapshotSchedule) string { - return info.ClusterTerraformStr + fmt.Sprintf(` + return info.TerraformStr + fmt.Sprintf(` resource "mongodbatlas_cloud_backup_schedule" "schedule_test" { cluster_name = %[1]s - project_id = %[2]s + project_id = %[2]q reference_hour_of_day = %[3]d reference_minute_of_hour = %[4]d @@ -519,15 +530,15 @@ func configDefault(info *acc.ClusterInfo, p *admin20231115.DiskBackupSnapshotSch data "mongodbatlas_cloud_backup_schedule" "schedule_test" { cluster_name = %[1]s - project_id = %[2]s + project_id = %[2]q } - `, info.ClusterNameStr, info.ProjectIDStr, p.GetReferenceHourOfDay(), p.GetReferenceMinuteOfHour(), p.GetRestoreWindowDays()) + `, info.TerraformNameRef, info.ProjectID, p.GetReferenceHourOfDay(), p.GetReferenceMinuteOfHour(), p.GetRestoreWindowDays()) } -func configCopySettings(projectID, clusterName string, emptyCopySettings bool, p *admin20231115.DiskBackupSnapshotSchedule) string { +func 
configCopySettings(terraformStr, projectID, clusterResourceName string, emptyCopySettings bool, p *admin20231115.DiskBackupSnapshotSchedule) string { var copySettings string if !emptyCopySettings { - copySettings = ` + copySettings = fmt.Sprintf(` copy_settings { cloud_provider = "AWS" frequencies = ["HOURLY", @@ -537,40 +548,19 @@ func configCopySettings(projectID, clusterName string, emptyCopySettings bool, p "YEARLY", "ON_DEMAND"] region_name = "US_EAST_1" - replication_spec_id = mongodbatlas_cluster.my_cluster.replication_specs.*.id[0] + replication_spec_id = %[1]s.replication_specs.*.id[0] should_copy_oplogs = true - }` + }`, clusterResourceName) } return fmt.Sprintf(` - resource "mongodbatlas_cluster" "my_cluster" { - project_id = %[1]q - name = %[2]q - - cluster_type = "REPLICASET" - replication_specs { - num_shards = 1 - regions_config { - region_name = "US_EAST_2" - electable_nodes = 3 - priority = 7 - read_only_nodes = 0 - } - } - // Provider Settings "block" - provider_name = "AWS" - provider_region_name = "US_EAST_2" - provider_instance_size_name = "M10" - cloud_backup = true //enable cloud provider snapshots - pit_enabled = true // enable point in time restore. you cannot copy oplogs when pit is not enabled. 
- } - + %[1]s resource "mongodbatlas_cloud_backup_schedule" "schedule_test" { - project_id = %[1]q - cluster_name = %[2]q + project_id = %[2]q + cluster_name = %[3]s.name - reference_hour_of_day = %[3]d - reference_minute_of_hour = %[4]d - restore_window_days = %[5]d + reference_hour_of_day = %[4]d + reference_minute_of_hour = %[5]d + restore_window_days = %[6]d policy_item_hourly { frequency_interval = 1 @@ -597,16 +587,16 @@ func configCopySettings(projectID, clusterName string, emptyCopySettings bool, p retention_unit = "years" retention_value = 1 } - %s + %[7]s } - `, projectID, clusterName, p.GetReferenceHourOfDay(), p.GetReferenceMinuteOfHour(), p.GetRestoreWindowDays(), copySettings) + `, terraformStr, projectID, clusterResourceName, p.GetReferenceHourOfDay(), p.GetReferenceMinuteOfHour(), p.GetRestoreWindowDays(), copySettings) } func configOnePolicy(info *acc.ClusterInfo, p *admin20231115.DiskBackupSnapshotSchedule) string { - return info.ClusterTerraformStr + fmt.Sprintf(` + return info.TerraformStr + fmt.Sprintf(` resource "mongodbatlas_cloud_backup_schedule" "schedule_test" { cluster_name = %[1]s - project_id = %[2]s + project_id = %[2]q reference_hour_of_day = %[3]d reference_minute_of_hour = %[4]d @@ -618,7 +608,7 @@ func configOnePolicy(info *acc.ClusterInfo, p *admin20231115.DiskBackupSnapshotS retention_value = 1 } } - `, info.ClusterNameStr, info.ProjectIDStr, p.GetReferenceHourOfDay(), p.GetReferenceMinuteOfHour(), p.GetRestoreWindowDays()) + `, info.TerraformNameRef, info.ProjectID, p.GetReferenceHourOfDay(), p.GetReferenceMinuteOfHour(), p.GetRestoreWindowDays()) } func configNewPolicies(info *acc.ClusterInfo, p *admin20231115.DiskBackupSnapshotSchedule, useYearly bool) string { @@ -633,10 +623,10 @@ func configNewPolicies(info *acc.ClusterInfo, p *admin20231115.DiskBackupSnapsho ` } - return info.ClusterTerraformStr + fmt.Sprintf(` + return info.TerraformStr + fmt.Sprintf(` resource "mongodbatlas_cloud_backup_schedule" "schedule_test" { 
cluster_name = %[1]s - project_id = %[2]s + project_id = %[2]q reference_hour_of_day = %[3]d reference_minute_of_hour = %[4]d @@ -667,16 +657,16 @@ func configNewPolicies(info *acc.ClusterInfo, p *admin20231115.DiskBackupSnapsho data "mongodbatlas_cloud_backup_schedule" "schedule_test" { cluster_name = %[1]s - project_id = %[2]s + project_id = %[2]q } - `, info.ClusterNameStr, info.ProjectIDStr, p.GetReferenceHourOfDay(), p.GetReferenceMinuteOfHour(), p.GetRestoreWindowDays(), strYearly) + `, info.TerraformNameRef, info.ProjectID, p.GetReferenceHourOfDay(), p.GetReferenceMinuteOfHour(), p.GetRestoreWindowDays(), strYearly) } func configAzure(info *acc.ClusterInfo, policy *admin20231115.DiskBackupApiPolicyItem) string { - return info.ClusterTerraformStr + fmt.Sprintf(` + return info.TerraformStr + fmt.Sprintf(` resource "mongodbatlas_cloud_backup_schedule" "schedule_test" { cluster_name = %[1]s - project_id = %[2]s + project_id = %[2]q policy_item_hourly { frequency_interval = %[3]d @@ -687,16 +677,16 @@ func configAzure(info *acc.ClusterInfo, policy *admin20231115.DiskBackupApiPolic data "mongodbatlas_cloud_backup_schedule" "schedule_test" { cluster_name = %[1]s - project_id = %[2]s + project_id = %[2]q } - `, info.ClusterNameStr, info.ProjectIDStr, policy.GetFrequencyInterval(), policy.GetRetentionUnit(), policy.GetRetentionValue()) + `, info.TerraformNameRef, info.ProjectID, policy.GetFrequencyInterval(), policy.GetRetentionUnit(), policy.GetRetentionValue()) } func configAdvancedPolicies(info *acc.ClusterInfo, p *admin20231115.DiskBackupSnapshotSchedule) string { - return info.ClusterTerraformStr + fmt.Sprintf(` + return info.TerraformStr + fmt.Sprintf(` resource "mongodbatlas_cloud_backup_schedule" "schedule_test" { cluster_name = %[1]s - project_id = %[2]s + project_id = %[2]q auto_export_enabled = false reference_hour_of_day = %[3]d @@ -739,14 +729,14 @@ func configAdvancedPolicies(info *acc.ClusterInfo, p *admin20231115.DiskBackupSn retention_value = 1 } } - 
`, info.ClusterNameStr, info.ProjectIDStr, p.GetReferenceHourOfDay(), p.GetReferenceMinuteOfHour(), p.GetRestoreWindowDays()) + `, info.TerraformNameRef, info.ProjectID, p.GetReferenceHourOfDay(), p.GetReferenceMinuteOfHour(), p.GetRestoreWindowDays()) } func configExportPolicies(info *acc.ClusterInfo, policyName, roleName, bucketName string) string { - return info.ClusterTerraformStr + fmt.Sprintf(` + return info.TerraformStr + fmt.Sprintf(` resource "mongodbatlas_cloud_backup_schedule" "schedule_test" { cluster_name = %[1]s - project_id = %[2]s + project_id = %[2]q auto_export_enabled = true reference_hour_of_day = 20 reference_minute_of_hour = "05" @@ -786,12 +776,12 @@ func configExportPolicies(info *acc.ClusterInfo, policyName, roleName, bucketNam } resource "mongodbatlas_cloud_provider_access_setup" "setup_only" { - project_id = %[2]s + project_id = %[2]q provider_name = "AWS" } resource "mongodbatlas_cloud_provider_access_authorization" "auth_role" { - project_id = %[2]s + project_id = %[2]q role_id = mongodbatlas_cloud_provider_access_setup.setup_only.role_id aws { iam_assumed_role_arn = aws_iam_role.test_role.arn @@ -799,7 +789,7 @@ func configExportPolicies(info *acc.ClusterInfo, policyName, roleName, bucketNam } resource "mongodbatlas_cloud_backup_snapshot_export_bucket" "test" { - project_id = %[2]s + project_id = %[2]q iam_role_id = mongodbatlas_cloud_provider_access_authorization.auth_role.role_id bucket_name = aws_s3_bucket.backup.bucket cloud_provider = "AWS" @@ -848,7 +838,7 @@ func configExportPolicies(info *acc.ClusterInfo, policyName, roleName, bucketNam } EOF } - `, info.ClusterNameStr, info.ProjectIDStr, policyName, roleName, bucketName) + `, info.TerraformNameRef, info.ProjectID, policyName, roleName, bucketName) } func importStateIDFunc(resourceName string) resource.ImportStateIdFunc { diff --git a/internal/service/cloudbackupsnapshot/model_cloud_backup_snapshot_test.go 
b/internal/service/cloudbackupsnapshot/model_cloud_backup_snapshot_test.go index 2279919f71..269e98010e 100644 --- a/internal/service/cloudbackupsnapshot/model_cloud_backup_snapshot_test.go +++ b/internal/service/cloudbackupsnapshot/model_cloud_backup_snapshot_test.go @@ -1,9 +1,9 @@ package cloudbackupsnapshot_test import ( + "reflect" "testing" - "github.com/go-test/deep" "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/cloudbackupsnapshot" "go.mongodb.org/atlas-sdk/v20240530002/admin" ) @@ -20,8 +20,8 @@ func TestSplitSnapshotImportID(t *testing.T) { SnapshotId: "5cf5a45a9ccf6400e60981b7", } - if diff := deep.Equal(expected, got); diff != nil { - t.Errorf("Bad splitSnapshotImportID return \n got = %#v\nwant = %#v \ndiff = %#v", expected, *got, diff) + if !reflect.DeepEqual(expected, got) { + t.Errorf("Bad splitSnapshotImportID return \n got = %#v\nwant = %#v", expected, *got) } if _, err := cloudbackupsnapshot.SplitSnapshotImportID("5cf5a45a9ccf6400e60981b6projectname-environment-mongo-global-cluster5cf5a45a9ccf6400e60981b7"); err == nil { diff --git a/internal/service/cloudbackupsnapshot/resource_cloud_backup_snapshot_migration_test.go b/internal/service/cloudbackupsnapshot/resource_cloud_backup_snapshot_migration_test.go index 6db8a17ea7..3ab11cd76a 100644 --- a/internal/service/cloudbackupsnapshot/resource_cloud_backup_snapshot_migration_test.go +++ b/internal/service/cloudbackupsnapshot/resource_cloud_backup_snapshot_migration_test.go @@ -29,8 +29,8 @@ func TestMigBackupRSCloudBackupSnapshot_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "type", "replicaSet"), resource.TestCheckResourceAttr(resourceName, "members.#", "0"), resource.TestCheckResourceAttr(resourceName, "snapshot_ids.#", "0"), - resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.ClusterName), - resource.TestCheckResourceAttr(resourceName, "replica_set_name", clusterInfo.ClusterName), + resource.TestCheckResourceAttr(resourceName, 
"cluster_name", clusterInfo.Name), + resource.TestCheckResourceAttr(resourceName, "replica_set_name", clusterInfo.Name), resource.TestCheckResourceAttr(resourceName, "cloud_provider", "AWS"), resource.TestCheckResourceAttr(resourceName, "description", description), resource.TestCheckResourceAttr(resourceName, "retention_in_days", retentionInDays), diff --git a/internal/service/cloudbackupsnapshot/resource_cloud_backup_snapshot_test.go b/internal/service/cloudbackupsnapshot/resource_cloud_backup_snapshot_test.go index 0bb2eb1fbd..9edb525df6 100644 --- a/internal/service/cloudbackupsnapshot/resource_cloud_backup_snapshot_test.go +++ b/internal/service/cloudbackupsnapshot/resource_cloud_backup_snapshot_test.go @@ -38,8 +38,8 @@ func TestAccBackupRSCloudBackupSnapshot_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "type", "replicaSet"), resource.TestCheckResourceAttr(resourceName, "members.#", "0"), resource.TestCheckResourceAttr(resourceName, "snapshot_ids.#", "0"), - resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.ClusterName), - resource.TestCheckResourceAttr(resourceName, "replica_set_name", clusterInfo.ClusterName), + resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.Name), + resource.TestCheckResourceAttr(resourceName, "replica_set_name", clusterInfo.Name), resource.TestCheckResourceAttr(resourceName, "cloud_provider", "AWS"), resource.TestCheckResourceAttr(resourceName, "description", description), resource.TestCheckResourceAttr(resourceName, "retention_in_days", retentionInDays), @@ -47,8 +47,8 @@ func TestAccBackupRSCloudBackupSnapshot_basic(t *testing.T) { resource.TestCheckResourceAttr(dataSourceName, "type", "replicaSet"), resource.TestCheckResourceAttr(dataSourceName, "members.#", "0"), resource.TestCheckResourceAttr(dataSourceName, "snapshot_ids.#", "0"), - resource.TestCheckResourceAttr(dataSourceName, "cluster_name", clusterInfo.ClusterName), - 
resource.TestCheckResourceAttr(dataSourceName, "replica_set_name", clusterInfo.ClusterName), + resource.TestCheckResourceAttr(dataSourceName, "cluster_name", clusterInfo.Name), + resource.TestCheckResourceAttr(dataSourceName, "replica_set_name", clusterInfo.Name), resource.TestCheckResourceAttr(dataSourceName, "cloud_provider", "AWS"), resource.TestCheckResourceAttr(dataSourceName, "description", description), resource.TestCheckResourceAttrSet(dataSourcePluralSimpleName, "results.#"), @@ -148,10 +148,10 @@ func importStateIDFunc(resourceName string) resource.ImportStateIdFunc { } func configBasic(info *acc.ClusterInfo, description, retentionInDays string) string { - return info.ClusterTerraformStr + fmt.Sprintf(` + return info.TerraformStr + fmt.Sprintf(` resource "mongodbatlas_cloud_backup_snapshot" "test" { cluster_name = %[1]s - project_id = %[2]s + project_id = %[2]q description = %[3]q retention_in_days = %[4]q } @@ -159,21 +159,21 @@ func configBasic(info *acc.ClusterInfo, description, retentionInDays string) str data "mongodbatlas_cloud_backup_snapshot" "test" { snapshot_id = mongodbatlas_cloud_backup_snapshot.test.snapshot_id cluster_name = %[1]s - project_id = %[2]s + project_id = %[2]q } data "mongodbatlas_cloud_backup_snapshots" "test" { cluster_name = %[1]s - project_id = %[2]s + project_id = %[2]q } data "mongodbatlas_cloud_backup_snapshots" "pagination" { cluster_name = %[1]s - project_id = %[2]s + project_id = %[2]q page_num = 1 items_per_page = 5 } - `, info.ClusterNameStr, info.ProjectIDStr, description, retentionInDays) + `, info.TerraformNameRef, info.ProjectID, description, retentionInDays) } func configSharded(projectID, clusterName, description, retentionInDays string) string { diff --git a/internal/service/cloudbackupsnapshotexportjob/resource_cloud_backup_snapshot_export_job_test.go b/internal/service/cloudbackupsnapshotexportjob/resource_cloud_backup_snapshot_export_job_test.go index 4b451363e5..7ebf7f5694 100644 --- 
a/internal/service/cloudbackupsnapshotexportjob/resource_cloud_backup_snapshot_export_job_test.go +++ b/internal/service/cloudbackupsnapshotexportjob/resource_cloud_backup_snapshot_export_job_test.go @@ -58,7 +58,7 @@ func basicTestCase(tb testing.TB) *resource.TestCase { ProtoV6ProviderFactories: acc.TestAccProviderV6Factories, Steps: []resource.TestStep{ { - Config: configBasic(projectID, bucketName, roleName, policyName, clusterInfo.ClusterNameStr, clusterInfo.ClusterTerraformStr), + Config: configBasic(projectID, bucketName, roleName, policyName, clusterInfo.TerraformNameRef, clusterInfo.TerraformStr), Check: resource.ComposeAggregateTestCheckFunc(checks...), }, { diff --git a/internal/service/cloudbackupsnapshotrestorejob/resource_cloud_backup_snapshot_restore_job_test.go b/internal/service/cloudbackupsnapshotrestorejob/resource_cloud_backup_snapshot_restore_job_test.go index 188b026b33..3f27e3a900 100644 --- a/internal/service/cloudbackupsnapshotrestorejob/resource_cloud_backup_snapshot_restore_job_test.go +++ b/internal/service/cloudbackupsnapshotrestorejob/resource_cloud_backup_snapshot_restore_job_test.go @@ -18,17 +18,28 @@ const ( dataSourceName = "data.mongodbatlas_cloud_backup_snapshot_restore_job.test" ) +func clusterRequest() *acc.ClusterRequest { + return &acc.ClusterRequest{ + CloudBackup: true, + ReplicationSpecs: []acc.ReplicationSpecRequest{ + {Region: "US_WEST_2"}, + }, + } +} + func TestAccCloudBackupSnapshotRestoreJob_basic(t *testing.T) { resource.ParallelTest(t, *basicTestCase(t)) } func TestAccCloudBackupSnapshotRestoreJob_basicDownload(t *testing.T) { var ( - projectID = acc.ProjectIDExecution(t) - clusterName = acc.RandomClusterName() - description = fmt.Sprintf("My description in %s", clusterName) - retentionInDays = "1" - useSnapshotID = true + clusterInfo = acc.GetClusterInfo(t, clusterRequest()) + clusterName = clusterInfo.Name + description = fmt.Sprintf("My description in %s", clusterName) + retentionInDays = "1" + useSnapshotID = 
true + clusterTerraformStr = clusterInfo.TerraformStr + clusterResourceName = clusterInfo.ResourceName ) resource.ParallelTest(t, resource.TestCase{ @@ -37,14 +48,14 @@ func TestAccCloudBackupSnapshotRestoreJob_basicDownload(t *testing.T) { CheckDestroy: checkDestroy, Steps: []resource.TestStep{ { - Config: configDownload(projectID, clusterName, description, retentionInDays, useSnapshotID), + Config: configDownload(clusterTerraformStr, clusterResourceName, description, retentionInDays, useSnapshotID), Check: resource.ComposeAggregateTestCheckFunc( checkExists(resourceName), resource.TestCheckResourceAttr(resourceName, "delivery_type_config.0.download", "true"), ), }, { - Config: configDownload(projectID, clusterName, description, retentionInDays, !useSnapshotID), + Config: configDownload(clusterTerraformStr, clusterResourceName, description, retentionInDays, !useSnapshotID), ExpectError: regexp.MustCompile("SNAPSHOT_NOT_FOUND"), }, }, @@ -57,8 +68,8 @@ func basicTestCase(tb testing.TB) *resource.TestCase { var ( snapshotsDataSourceName = "data.mongodbatlas_cloud_backup_snapshot_restore_jobs.test" snapshotsDataSourcePaginationName = "data.mongodbatlas_cloud_backup_snapshot_restore_jobs.pagination" - projectID = acc.ProjectIDExecution(tb) - clusterName = acc.RandomClusterName() + clusterInfo = acc.GetClusterInfo(tb, clusterRequest()) + clusterName = clusterInfo.Name description = fmt.Sprintf("My description in %s", clusterName) retentionInDays = "1" ) @@ -69,7 +80,7 @@ func basicTestCase(tb testing.TB) *resource.TestCase { CheckDestroy: checkDestroy, Steps: []resource.TestStep{ { - Config: configBasic(projectID, clusterName, description, retentionInDays), + Config: configBasic(clusterInfo.TerraformStr, clusterInfo.ResourceName, description, retentionInDays), Check: resource.ComposeAggregateTestCheckFunc( checkExists(resourceName), resource.TestCheckResourceAttr(resourceName, "delivery_type_config.0.automated", "true"), @@ -139,25 +150,15 @@ func 
importStateIDFunc(resourceName string) resource.ImportStateIdFunc { } } -func configBasic(projectID, clusterName, description, retentionInDays string) string { +func configBasic(terraformStr, clusterResourceName, description, retentionInDays string) string { return fmt.Sprintf(` - resource "mongodbatlas_cluster" "my_cluster" { - project_id = %[1]q - name = %[2]q - - // Provider Settings "block" - provider_name = "AWS" - provider_region_name = "US_WEST_2" - provider_instance_size_name = "M10" - cloud_backup = true - } - + %[1]s resource "mongodbatlas_cloud_backup_snapshot" "test" { - project_id = mongodbatlas_cluster.my_cluster.project_id - cluster_name = mongodbatlas_cluster.my_cluster.name + project_id = %[2]s.project_id + cluster_name = %[2]s.name description = %[3]q retention_in_days = %[4]q - depends_on = [mongodbatlas_cluster.my_cluster] + depends_on = [%[2]s] } resource "mongodbatlas_cloud_backup_snapshot_restore_job" "test" { @@ -191,29 +192,20 @@ func configBasic(projectID, clusterName, description, retentionInDays string) st page_num = 1 items_per_page = 5 } - `, projectID, clusterName, description, retentionInDays) + `, terraformStr, clusterResourceName, description, retentionInDays) } -func configDownload(projectID, clusterName, description, retentionInDays string, useSnapshotID bool) string { +func configDownload(terraformStr, clusterResourceName, description, retentionInDays string, useSnapshotID bool) string { var snapshotIDField string if useSnapshotID { snapshotIDField = `snapshot_id = mongodbatlas_cloud_backup_snapshot.test.id` } return fmt.Sprintf(` - resource "mongodbatlas_cluster" "my_cluster" { - project_id = %[1]q - name = %[2]q - - provider_name = "AWS" - provider_region_name = "US_WEST_2" - provider_instance_size_name = "M10" - cloud_backup = true // enable cloud provider snapshots - } - + %[1]s resource "mongodbatlas_cloud_backup_snapshot" "test" { - project_id = mongodbatlas_cluster.my_cluster.project_id - cluster_name = 
mongodbatlas_cluster.my_cluster.name + project_id = %[2]s.project_id + cluster_name = %[2]s.name description = %[3]q retention_in_days = %[4]q } @@ -227,5 +219,5 @@ func configDownload(projectID, clusterName, description, retentionInDays string, download = true } } - `, projectID, clusterName, description, retentionInDays, snapshotIDField) + `, terraformStr, clusterResourceName, description, retentionInDays, snapshotIDField) } diff --git a/internal/service/cluster/resource_cluster_test.go b/internal/service/cluster/resource_cluster_test.go index 50dedc053c..4e891aced7 100644 --- a/internal/service/cluster/resource_cluster_test.go +++ b/internal/service/cluster/resource_cluster_test.go @@ -603,7 +603,7 @@ func TestAccCluster_Global(t *testing.T) { CheckDestroy: acc.CheckDestroyCluster, Steps: []resource.TestStep{ { - Config: acc.ConfigClusterGlobal(orgID, projectName, clusterName), + Config: configClusterGlobal(orgID, projectName, clusterName), Check: resource.ComposeAggregateTestCheckFunc( checkExists(resourceName), resource.TestCheckResourceAttrSet(resourceName, "mongo_uri"), @@ -2290,6 +2290,51 @@ resource "mongodbatlas_cluster" "test" { `, projectID, name, backupEnabled, paused) } +func configClusterGlobal(orgID, projectName, clusterName string) string { + return fmt.Sprintf(` + + resource "mongodbatlas_project" "test" { + org_id = %[1]q + name = %[2]q + } + + resource "mongodbatlas_cluster" test { + project_id = mongodbatlas_project.test.id + name = %[3]q + disk_size_gb = 80 + num_shards = 1 + cloud_backup = false + cluster_type = "GEOSHARDED" + + // Provider Settings "block" + provider_name = "AWS" + provider_instance_size_name = "M30" + + replication_specs { + zone_name = "Zone 1" + num_shards = 2 + regions_config { + region_name = "US_EAST_1" + electable_nodes = 3 + priority = 7 + read_only_nodes = 0 + } + } + + replication_specs { + zone_name = "Zone 2" + num_shards = 2 + regions_config { + region_name = "US_WEST_2" + electable_nodes = 3 + priority = 7 + 
read_only_nodes = 0 + } + } + } + `, orgID, projectName, clusterName) +} + func TestIsMultiRegionCluster(t *testing.T) { tests := []struct { name string diff --git a/internal/service/clusteroutagesimulation/resource_cluster_outage_simulation_migration_test.go b/internal/service/clusteroutagesimulation/resource_cluster_outage_simulation_migration_test.go index 3a9cc5f190..3c75ffa36a 100644 --- a/internal/service/clusteroutagesimulation/resource_cluster_outage_simulation_migration_test.go +++ b/internal/service/clusteroutagesimulation/resource_cluster_outage_simulation_migration_test.go @@ -3,63 +3,13 @@ package clusteroutagesimulation_test import ( "testing" - "github.com/hashicorp/terraform-plugin-testing/helper/resource" - "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc" "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/mig" ) func TestMigOutageSimulationCluster_SingleRegion_basic(t *testing.T) { - var ( - projectID = acc.ProjectIDExecution(t) - clusterName = acc.RandomClusterName() - config = configSingleRegion(projectID, clusterName) - ) - - resource.ParallelTest(t, resource.TestCase{ - PreCheck: func() { mig.PreCheckBasic(t) }, - CheckDestroy: checkDestroy, - Steps: []resource.TestStep{ - { - ExternalProviders: mig.ExternalProviders(), - Config: config, - Check: resource.ComposeAggregateTestCheckFunc( - resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterName), - resource.TestCheckResourceAttrSet(resourceName, "project_id"), - resource.TestCheckResourceAttrSet(resourceName, "outage_filters.#"), - resource.TestCheckResourceAttrSet(resourceName, "start_request_date"), - resource.TestCheckResourceAttrSet(resourceName, "simulation_id"), - resource.TestCheckResourceAttrSet(resourceName, "state"), - ), - }, - mig.TestStepCheckEmptyPlan(config), - }, - }) + mig.CreateAndRunTest(t, singleRegionTestCase(t)) } func TestMigOutageSimulationCluster_MultiRegion_basic(t *testing.T) { - var ( - projectID = 
acc.ProjectIDExecution(t)
-        clusterName = acc.RandomClusterName()
-        config      = configMultiRegion(projectID, clusterName)
-    )
-
-    resource.ParallelTest(t, resource.TestCase{
-        PreCheck:     func() { mig.PreCheckBasic(t) },
-        CheckDestroy: checkDestroy,
-        Steps: []resource.TestStep{
-            {
-                ExternalProviders: mig.ExternalProviders(),
-                Config:            config,
-                Check: resource.ComposeAggregateTestCheckFunc(
-                    resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterName),
-                    resource.TestCheckResourceAttrSet(resourceName, "project_id"),
-                    resource.TestCheckResourceAttrSet(resourceName, "outage_filters.#"),
-                    resource.TestCheckResourceAttrSet(resourceName, "start_request_date"),
-                    resource.TestCheckResourceAttrSet(resourceName, "simulation_id"),
-                    resource.TestCheckResourceAttrSet(resourceName, "state"),
-                ),
-            },
-            mig.TestStepCheckEmptyPlan(config),
-        },
-    })
+    mig.CreateAndRunTest(t, multiRegionTestCase(t))
 }
diff --git a/internal/service/clusteroutagesimulation/resource_cluster_outage_simulation_test.go b/internal/service/clusteroutagesimulation/resource_cluster_outage_simulation_test.go
index a1224b620e..cd0eb7dae5 100644
--- a/internal/service/clusteroutagesimulation/resource_cluster_outage_simulation_test.go
+++ b/internal/service/clusteroutagesimulation/resource_cluster_outage_simulation_test.go
@@ -17,18 +17,27 @@ const (
 )
 
 func TestAccOutageSimulationCluster_SingleRegion_basic(t *testing.T) {
+    resource.ParallelTest(t, *singleRegionTestCase(t))
+}
+
+func singleRegionTestCase(t *testing.T) *resource.TestCase {
+    t.Helper()
     var (
-        projectID   = acc.ProjectIDExecution(t)
-        clusterName = acc.RandomClusterName()
+        singleRegionRequest = acc.ClusterRequest{
+            ReplicationSpecs: []acc.ReplicationSpecRequest{
+                {Region: "US_WEST_2", InstanceSize: "M10"},
+            },
+        }
+        clusterInfo = acc.GetClusterInfo(t, &singleRegionRequest)
+        clusterName = clusterInfo.Name
     )
-
-    resource.ParallelTest(t, resource.TestCase{
+    return &resource.TestCase{
         PreCheck:                 func() { acc.PreCheckBasic(t) },
         ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
         CheckDestroy:             checkDestroy,
         Steps: []resource.TestStep{
             {
-                Config: configSingleRegion(projectID, clusterName),
+                Config: configSingleRegion(&clusterInfo),
                 Check: resource.ComposeAggregateTestCheckFunc(
                     resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterName),
                     resource.TestCheckResourceAttrSet(resourceName, "project_id"),
@@ -46,22 +55,37 @@ func TestAccOutageSimulationCluster_SingleRegion_basic(t *testing.T) {
                 ),
             },
         },
-    })
+    }
 }
 
 func TestAccOutageSimulationCluster_MultiRegion_basic(t *testing.T) {
+    resource.ParallelTest(t, *multiRegionTestCase(t))
+}
+
+func multiRegionTestCase(t *testing.T) *resource.TestCase {
+    t.Helper()
     var (
-        projectID   = acc.ProjectIDExecution(t)
-        clusterName = acc.RandomClusterName()
+        multiRegionRequest = acc.ClusterRequest{ReplicationSpecs: []acc.ReplicationSpecRequest{
+            {
+                Region:    "US_EAST_1",
+                NodeCount: 3,
+                ExtraRegionConfigs: []acc.ReplicationSpecRequest{
+                    {Region: "US_EAST_2", NodeCount: 2, Priority: 6},
+                    {Region: "US_WEST_2", NodeCount: 2, Priority: 5, NodeCountReadOnly: 2},
+                },
+            },
+        }}
+        clusterInfo = acc.GetClusterInfo(t, &multiRegionRequest)
+        clusterName = clusterInfo.Name
     )
-    resource.ParallelTest(t, resource.TestCase{
+    return &resource.TestCase{
         PreCheck:                 func() { acc.PreCheckBasic(t) },
         ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
         CheckDestroy:             checkDestroy,
         Steps: []resource.TestStep{
             {
-                Config: configMultiRegion(projectID, clusterName),
+                Config: configMultiRegion(&clusterInfo),
                 Check: resource.ComposeAggregateTestCheckFunc(
                     resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterName),
                     resource.TestCheckResourceAttrSet(resourceName, "project_id"),
@@ -79,73 +103,36 @@ func TestAccOutageSimulationCluster_MultiRegion_basic(t *testing.T) {
                 ),
             },
         },
-    })
+    }
 }
 
-func configSingleRegion(projectID, clusterName string) string {
+func configSingleRegion(info *acc.ClusterInfo) string {
     return fmt.Sprintf(`
-    resource "mongodbatlas_cluster" "test" {
-        project_id                  = %[1]q
-        name                        = %[2]q
-        provider_name               = "AWS"
-        provider_region_name        = "US_WEST_2"
-        provider_instance_size_name = "M10"
-    }
-
+    %[1]s
     resource "mongodbatlas_cluster_outage_simulation" "test_outage" {
-        project_id   = %[1]q
-        cluster_name = %[2]q
+        project_id   = %[2]q
+        cluster_name = %[3]q
 
         outage_filters {
             cloud_provider = "AWS"
             region_name    = "US_WEST_2"
         }
-        depends_on = ["mongodbatlas_cluster.test"]
+        depends_on = [%[4]s]
     }
 
     data "mongodbatlas_cluster_outage_simulation" "test" {
-        project_id   = %[1]q
-        cluster_name = %[2]q
+        project_id   = %[2]q
+        cluster_name = %[3]q
         depends_on   = [mongodbatlas_cluster_outage_simulation.test_outage]
     }
-    `, projectID, clusterName)
+    `, info.TerraformStr, info.ProjectID, info.Name, info.ResourceName)
 }
 
-func configMultiRegion(projectID, clusterName string) string {
+func configMultiRegion(info *acc.ClusterInfo) string {
     return fmt.Sprintf(`
-    resource "mongodbatlas_cluster" "test" {
-        project_id   = %[1]q
-        name         = %[2]q
-        cluster_type = "REPLICASET"
-
-        provider_name               = "AWS"
-        provider_instance_size_name = "M10"
-
-        replication_specs {
-            num_shards = 1
-            regions_config {
-                region_name     = "US_EAST_1"
-                electable_nodes = 3
-                priority        = 7
-                read_only_nodes = 0
-            }
-            regions_config {
-                region_name     = "US_EAST_2"
-                electable_nodes = 2
-                priority        = 6
-                read_only_nodes = 0
-            }
-            regions_config {
-                region_name     = "US_WEST_2"
-                electable_nodes = 2
-                priority        = 5
-                read_only_nodes = 2
-            }
-        }
-    }
-
+    %[1]s
     resource "mongodbatlas_cluster_outage_simulation" "test_outage" {
-        project_id   = %[1]q
-        cluster_name = %[2]q
+        project_id   = %[2]q
+        cluster_name = %[3]q
 
         outage_filters {
             cloud_provider = "AWS"
@@ -155,15 +142,15 @@ func configMultiRegion(projectID, clusterName string) string {
             cloud_provider = "AWS"
             region_name    = "US_EAST_2"
         }
-        depends_on = ["mongodbatlas_cluster.test"]
+        depends_on = [%[4]s]
     }
 
     data "mongodbatlas_cluster_outage_simulation" "test" {
-        project_id   = %[1]q
-        cluster_name = %[2]q
+        project_id   = %[2]q
+        cluster_name = %[3]q
         depends_on   = [mongodbatlas_cluster_outage_simulation.test_outage]
     }
-    `, projectID, clusterName)
+    `, info.TerraformStr, info.ProjectID, info.Name, info.ResourceName)
 }
 
 func checkDestroy(s *terraform.State) error {
diff --git a/internal/service/eventtrigger/resource_event_trigger.go b/internal/service/eventtrigger/resource_event_trigger.go
index 2a91685775..0aa40f2b27 100644
--- a/internal/service/eventtrigger/resource_event_trigger.go
+++ b/internal/service/eventtrigger/resource_event_trigger.go
@@ -7,9 +7,9 @@ import (
     "fmt"
     "log"
     "net/http"
+    "reflect"
     "strings"
 
-    "github.com/go-test/deep"
     "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
     "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
     "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
@@ -118,8 +118,7 @@ func Resource() *schema.Resource {
                     log.Printf("[ERROR] json.Unmarshal %v", err)
                     return false
                 }
-                if diff := deep.Equal(&j, &j2); diff != nil {
-                    log.Printf("[DEBUG] deep equal not passed: %v", diff)
+                if !reflect.DeepEqual(&j, &j2) {
                     return false
                 }
 
@@ -140,8 +139,7 @@
                     log.Printf("[ERROR] json.Unmarshal %v", err)
                     return false
                 }
-                if diff := deep.Equal(&j, &j2); diff != nil {
-                    log.Printf("[DEBUG] deep equal not passed: %v", diff)
+                if !reflect.DeepEqual(&j, &j2) {
                     return false
                 }
 
diff --git a/internal/service/federateddatabaseinstance/resource_federated_database_instance_test.go b/internal/service/federateddatabaseinstance/resource_federated_database_instance_test.go
index 7c95aa741b..7bba2984eb 100644
--- a/internal/service/federateddatabaseinstance/resource_federated_database_instance_test.go
+++ b/internal/service/federateddatabaseinstance/resource_federated_database_instance_test.go
@@ -113,12 +113,23 @@ func TestAccFederatedDatabaseInstance_s3bucket(t *testing.T) {
 
 func TestAccFederatedDatabaseInstance_atlasCluster(t *testing.T) {
     var (
-        resourceName = "mongodbatlas_federated_database_instance.test"
-        orgID        = os.Getenv("MONGODB_ATLAS_ORG_ID")
-        projectName  = acc.RandomProjectName()
-        clusterName1 = acc.RandomClusterName()
-        clusterName2 = acc.RandomClusterName()
-        name         = acc.RandomName()
+        specs = []acc.ReplicationSpecRequest{
+            {Region: "EU_WEST_2"},
+        }
+        clusterRequest = acc.ClusterRequest{
+            ReplicationSpecs: specs,
+        }
+        resourceName = "mongodbatlas_federated_database_instance.test"
+        name         = acc.RandomName()
+        clusterInfo  = acc.GetClusterInfo(t, &clusterRequest)
+        projectID    = clusterInfo.ProjectID
+        clusterRequest2 = acc.ClusterRequest{
+            ProjectID:        projectID,
+            ReplicationSpecs: specs,
+            ResourceSuffix:   "cluster2",
+        }
+        cluster2Info        = acc.GetClusterInfo(t, &clusterRequest2)
+        dependencyTerraform = fmt.Sprintf("%s\n%s", clusterInfo.TerraformStr, cluster2Info.TerraformStr)
     )
 
     resource.ParallelTest(t, resource.TestCase{
@@ -127,7 +138,7 @@
         Steps: []resource.TestStep{
             {
                 ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
-                Config:                   configWithCluster(orgID, projectName, clusterName1, clusterName2, name),
+                Config:                   configWithCluster(dependencyTerraform, projectID, clusterInfo.ResourceName, cluster2Info.ResourceName, name),
                 Check: resource.ComposeAggregateTestCheckFunc(
                     resource.TestCheckResourceAttrSet(resourceName, "project_id"),
                     resource.TestCheckResourceAttr(resourceName, "name", name),
@@ -140,34 +151,12 @@
     })
 }
 
-func configWithCluster(orgID, projectName, clusterName1, clusterName2, name string) string {
+func configWithCluster(terraformStr, projectID, cluster1ResourceName, cluster2ResourceName, name string) string {
     return fmt.Sprintf(`
-    resource "mongodbatlas_project" "project-tf" {
-        org_id = %[1]q
-        name   = %[2]q
-    }
-
-    resource "mongodbatlas_cluster" "cluster-1" {
-        project_id                  = mongodbatlas_project.project-tf.id
-        provider_name               = "AWS"
-        name                        = %[3]q
-        backing_provider_name       = "AWS"
-        provider_region_name        = "EU_WEST_2"
-        provider_instance_size_name = "M10"
-    }
-
-
-    resource "mongodbatlas_cluster" "cluster-2" {
-        project_id                  = mongodbatlas_project.project-tf.id
-        provider_name               = "AWS"
-        name                        = %[4]q
-        backing_provider_name       = "AWS"
-        provider_region_name        = "EU_WEST_2"
-        provider_instance_size_name = "M10"
-    }
+    %[1]s
 
     resource "mongodbatlas_federated_database_instance" "test" {
-        project_id = mongodbatlas_project.project-tf.id
+        project_id = %[2]q
         name       = %[5]q
         storage_databases {
             name = "VirtualDatabase0"
@@ -176,21 +165,21 @@ func configWithCluster(orgID, projectName, clusterName1, clusterName2, name stri
             data_sources {
                 collection = "listingsAndReviews"
                 database   = "sample_airbnb"
-                store_name = mongodbatlas_cluster.cluster-1.name
+                store_name = %[3]s.name
             }
             data_sources {
                 collection = "listingsAndReviews"
                 database   = "sample_airbnb"
-                store_name = mongodbatlas_cluster.cluster-2.name
+                store_name = %[4]s.name
             }
         }
     }
     storage_stores {
-        name         = mongodbatlas_cluster.cluster-1.name
-        cluster_name = mongodbatlas_cluster.cluster-1.name
-        project_id   = mongodbatlas_project.project-tf.id
+        name         = %[3]s.name
+        cluster_name = %[3]s.name
+        project_id   = %[2]q
         provider     = "atlas"
         read_preference {
             mode = "secondary"
@@ -218,9 +207,9 @@
     }
     storage_stores {
-        name         = mongodbatlas_cluster.cluster-2.name
-        cluster_name = mongodbatlas_cluster.cluster-2.name
-        project_id   = mongodbatlas_project.project-tf.id
+        name         = %[4]s.name
+        cluster_name = %[4]s.name
+        project_id   = %[2]q
         provider     = "atlas"
         read_preference {
             mode = "secondary"
@@ -247,7 +236,7 @@
         }
     }
     }
-    `, orgID, projectName, clusterName1, clusterName2, name)
+    `, terraformStr, projectID, cluster1ResourceName, cluster2ResourceName, name)
 }
 
 func importStateIDFuncS3Bucket(resourceName, s3Bucket string) resource.ImportStateIdFunc {
diff --git a/internal/service/globalclusterconfig/resource_global_cluster_config_migration_test.go b/internal/service/globalclusterconfig/resource_global_cluster_config_migration_test.go
index c70697344c..7353bc22cd 100644
--- a/internal/service/globalclusterconfig/resource_global_cluster_config_migration_test.go
+++ b/internal/service/globalclusterconfig/resource_global_cluster_config_migration_test.go
@@ -27,7 +27,7 @@ func TestMigClusterRSGlobalCluster_basic(t *testing.T) {
                     resource.TestCheckResourceAttrSet(resourceName, "custom_zone_mapping.%"),
                     resource.TestCheckResourceAttrSet(resourceName, "custom_zone_mapping.CA"),
                     resource.TestCheckResourceAttrSet(resourceName, "project_id"),
-                    resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.ClusterName),
+                    resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.Name),
                     resource.TestCheckResourceAttr(resourceName, "managed_namespaces.#", "1"),
                     resource.TestCheckResourceAttr(resourceName, "managed_namespaces.0.is_custom_shard_key_hashed", "false"),
                     resource.TestCheckResourceAttr(resourceName, "managed_namespaces.0.is_shard_key_unique", "false"),
diff --git a/internal/service/globalclusterconfig/resource_global_cluster_config_test.go b/internal/service/globalclusterconfig/resource_global_cluster_config_test.go
index 342a354f21..522305f543 100644
--- a/internal/service/globalclusterconfig/resource_global_cluster_config_test.go
+++ b/internal/service/globalclusterconfig/resource_global_cluster_config_test.go
@@ -31,7 +31,7 @@ func TestAccClusterRSGlobalCluster_basic(t *testing.T) {
                     resource.TestCheckResourceAttrSet(resourceName, "custom_zone_mapping.%"),
                     resource.TestCheckResourceAttrSet(resourceName, "custom_zone_mapping.CA"),
                     resource.TestCheckResourceAttrSet(resourceName, "project_id"),
-                    resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.ClusterName),
+                    resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.Name),
                     resource.TestCheckResourceAttr(resourceName, "managed_namespaces.#", "1"),
                     resource.TestCheckResourceAttr(resourceName, "managed_namespaces.0.is_custom_shard_key_hashed", "false"),
                     resource.TestCheckResourceAttr(resourceName, "managed_namespaces.0.is_shard_key_unique", "false"),
@@ -64,7 +64,7 @@ func TestAccClusterRSGlobalCluster_withAWSAndBackup(t *testing.T) {
                     resource.TestCheckResourceAttrSet(resourceName, "custom_zone_mapping.%"),
                     resource.TestCheckResourceAttrSet(resourceName, "custom_zone_mapping.CA"),
                     resource.TestCheckResourceAttrSet(resourceName, "project_id"),
-                    resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.ClusterName),
+                    resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.Name),
                 ),
             },
             {
@@ -80,7 +80,11 @@ func TestAccClusterRSGlobalCluster_withAWSAndBackup(t *testing.T) {
 
 func TestAccClusterRSGlobalCluster_database(t *testing.T) {
     var (
-        clusterInfo = acc.GetClusterInfo(t, &acc.ClusterRequest{Geosharded: true, ExtraConfig: zonesStr})
+        specUS      = acc.ReplicationSpecRequest{ZoneName: "US", Region: "US_EAST_1"}
+        specEU      = acc.ReplicationSpecRequest{ZoneName: "EU", Region: "EU_WEST_1"}
+        specDE      = acc.ReplicationSpecRequest{ZoneName: "DE", Region: "EU_NORTH_1"}
+        specJP      = acc.ReplicationSpecRequest{ZoneName: "JP", Region: "AP_NORTHEAST_1"}
+        clusterInfo = acc.GetClusterInfo(t, &acc.ClusterRequest{Geosharded: true, ReplicationSpecs: []acc.ReplicationSpecRequest{specUS, specEU, specDE, specJP}})
     )
 
     resource.Test(t, resource.TestCase{
@@ -99,7 +103,7 @@ func TestAccClusterRSGlobalCluster_database(t *testing.T) {
                     resource.TestCheckResourceAttrSet(resourceName, "custom_zone_mapping.IE"),
                     resource.TestCheckResourceAttrSet(resourceName, "custom_zone_mapping.DE"),
                     resource.TestCheckResourceAttrSet(resourceName, "project_id"),
-                    resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.ClusterName),
+                    resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.Name),
                 ),
             },
             {
@@ -170,10 +174,10 @@ func checkDestroy(s *terraform.State) error {
 }
 
 func configBasic(info *acc.ClusterInfo, isCustomShard, isShardKeyUnique bool) string {
-    return info.ClusterTerraformStr + fmt.Sprintf(`
+    return info.TerraformStr + fmt.Sprintf(`
     resource "mongodbatlas_global_cluster_config" "config" {
         cluster_name = %[1]s
-        project_id   = %[2]s
+        project_id   = %[2]q
 
         managed_namespaces {
             db = "mydata"
@@ -191,16 +195,16 @@ func configBasic(info *acc.ClusterInfo, isCustomShard, isShardKeyUnique bool) st
 
     data "mongodbatlas_global_cluster_config" "config" {
         cluster_name = %[1]s
-        project_id   = %[2]s
+        project_id   = %[2]q
     }
-    `, info.ClusterNameStr, info.ProjectIDStr, isCustomShard, isShardKeyUnique)
+    `, info.TerraformNameRef, info.ProjectID, isCustomShard, isShardKeyUnique)
 }
 
 func configWithDBConfig(info *acc.ClusterInfo, zones string) string {
-    return info.ClusterTerraformStr + fmt.Sprintf(`
+    return info.TerraformStr + fmt.Sprintf(`
     resource "mongodbatlas_global_cluster_config" "config" {
         cluster_name = %[1]s
-        project_id   = %[2]s
+        project_id   = %[2]q
 
         managed_namespaces {
             db = "horizonv2-sg"
@@ -229,7 +233,7 @@ func configWithDBConfig(info *acc.ClusterInfo, zones string) string {
         }
         %[3]s
     }
-    `, info.ClusterNameStr, info.ProjectIDStr, zones)
+    `, info.TerraformNameRef, info.ProjectID, zones)
 }
 
 const (
@@ -268,47 +272,4 @@ const (
         zone = "JP"
     }
 `
-
-    zonesStr = `
-    replication_specs {
-        zone_name  = "US"
-        num_shards = 1
-        regions_config {
-            region_name     = "US_EAST_1"
-            electable_nodes = 3
-            priority        = 7
-            read_only_nodes = 0
-        }
-    }
-    replication_specs {
-        zone_name  = "EU"
-        num_shards = 1
-        regions_config {
-            region_name     = "EU_WEST_1"
-            electable_nodes = 3
-            priority        = 7
-            read_only_nodes = 0
-        }
-    }
-    replication_specs {
-        zone_name  = "DE"
-        num_shards = 1
-        regions_config {
-            region_name     = "EU_NORTH_1"
-            electable_nodes = 3
-            priority        = 7
-            read_only_nodes = 0
-        }
-    }
-    replication_specs {
-        zone_name  = "JP"
-        num_shards = 1
-        regions_config {
-            region_name     = "AP_NORTHEAST_1"
-            electable_nodes = 3
-            priority        = 7
-            read_only_nodes = 0
-        }
-    }
-    `
 )
diff --git a/internal/service/ldapconfiguration/resource_ldap_configuration_test.go b/internal/service/ldapconfiguration/resource_ldap_configuration_test.go
index 7db034a5dd..5fb300be5c 100644
--- a/internal/service/ldapconfiguration/resource_ldap_configuration_test.go
+++ b/internal/service/ldapconfiguration/resource_ldap_configuration_test.go
@@ -30,8 +30,14 @@ func TestAccLDAPConfiguration_withVerify_CACertificateComplete(t *testing.T) {
         password      = os.Getenv("MONGODB_ATLAS_LDAP_PASSWORD")
         port          = os.Getenv("MONGODB_ATLAS_LDAP_PORT")
         caCertificate = os.Getenv("MONGODB_ATLAS_LDAP_CA_CERTIFICATE")
-        projectID     = acc.ProjectIDExecution(t)
-        clusterName   = acc.RandomClusterName()
+        clusterInfo   = acc.GetClusterInfo(t, &acc.ClusterRequest{
+            CloudBackup: true,
+            ReplicationSpecs: []acc.ReplicationSpecRequest{
+                {Region: "US_EAST_2"},
+            },
+        })
+        projectID           = clusterInfo.ProjectID
+        clusterTerraformStr = clusterInfo.TerraformStr
     )
 
     resource.Test(t, resource.TestCase{
@@ -39,7 +45,7 @@
         ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
         Steps: []resource.TestStep{
             {
-                Config: configWithVerify(projectID, clusterName, hostname, username, password, caCertificate, cast.ToInt(port), true),
+                Config: configWithVerify(clusterTerraformStr, clusterInfo.ResourceName, projectID, hostname, username, password, caCertificate, cast.ToInt(port), true),
                 Check: resource.ComposeAggregateTestCheckFunc(
                     checkExists(resourceName),
                     resource.TestCheckResourceAttrSet(resourceName, "project_id"),
@@ -155,40 +161,33 @@ func configBasic(projectID, hostname, username, password string, authEnabled boo
     `, projectID, hostname, username, password, authEnabled, port)
 }
 
-func configWithVerify(projectID, clusterName, hostname, username, password, caCertificate string, port int, authEnabled bool) string {
+func configWithVerify(clusterTerraformStr, clusterResourceName, projectID, hostname, username, password, caCertificate string, port int, authEnabled bool) string {
     return fmt.Sprintf(`
-    resource "mongodbatlas_cluster" "test" {
-        project_id                  = %[1]q
-        name                        = %[2]q
-        provider_name               = "AWS"
-        provider_region_name        = "US_EAST_2"
-        provider_instance_size_name = "M10"
-        cloud_backup                = true //enable cloud provider snapshots
-    }
+%[8]s
 
     resource "mongodbatlas_ldap_verify" "test" {
-        project_id    = %[1]q
-        hostname      = %[3]q
-        bind_username = %[4]q
-        bind_password = %[5]q
-        port          = %[6]d
+        project_id     = %[1]q
+        hostname       = %[2]q
+        bind_username  = %[3]q
+        bind_password  = %[4]q
+        port           = %[5]d
         ca_certificate = <<-EOF
-%[8]s
+%[7]s
         EOF
         authz_query_template = "{USER}?memberOf?base"
-        depends_on = [mongodbatlas_cluster.test]
+        depends_on = [%[9]s]
     }
 
     resource "mongodbatlas_ldap_configuration" "test" {
-        project_id             = %[1]q
-        authorization_enabled  = false
-        hostname               = %[3]q
-        bind_username          = %[4]q
-        bind_password          = %[5]q
-        port                   = %[6]d
-        authentication_enabled = %[7]t
+        project_id             = %[1]q
+        authorization_enabled  = false
+        hostname               = %[2]q
+        bind_username          = %[3]q
+        bind_password          = %[4]q
+        port                   = %[5]d
+        authentication_enabled = %[6]t
         ca_certificate = <<-EOF
-%[8]s
+%[7]s
         EOF
         authz_query_template = "{USER}?memberOf?base"
         user_to_dn_mapping{
@@ -196,5 +195,5 @@ func configWithVerify(projectID, clusterName, hostname, username, password, caCe
             ldap_query = "DC=example,DC=com??sub?(userPrincipalName={0})"
         }
         depends_on = [mongodbatlas_ldap_verify.test]
-    }`, projectID, clusterName, hostname, username, password, port, authEnabled, caCertificate)
+    }`, projectID, hostname, username, password, port, authEnabled, caCertificate, clusterTerraformStr, clusterResourceName)
 }
diff --git a/internal/service/onlinearchive/resource_online_archive_migration_test.go b/internal/service/onlinearchive/resource_online_archive_migration_test.go
index bce4755b2e..6035a59544 100644
--- a/internal/service/onlinearchive/resource_online_archive_migration_test.go
+++ b/internal/service/onlinearchive/resource_online_archive_migration_test.go
@@ -1,11 +1,8 @@
 package onlinearchive_test
 
 import (
-    "os"
     "testing"
 
-    matlas "go.mongodb.org/atlas/mongodbatlas"
-
     "github.com/hashicorp/terraform-plugin-testing/helper/resource"
 
     "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
@@ -14,18 +11,18 @@ import (
 
 func TestMigBackupRSOnlineArchiveWithNoChangeBetweenVersions(t *testing.T) {
     var (
-        cluster                   matlas.Cluster
-        resourceName              = "mongodbatlas_cluster.online_archive_test"
         onlineArchiveResourceName = "mongodbatlas_online_archive.users_archive"
-        orgID                     = os.Getenv("MONGODB_ATLAS_ORG_ID")
-        projectName               = acc.RandomProjectName()
-        clusterName               = acc.RandomClusterName()
+        clusterInfo               = acc.GetClusterInfo(t, clusterRequest())
+        clusterName               = clusterInfo.Name
+        projectID                 = clusterInfo.ProjectID
+        clusterTerraformStr       = clusterInfo.TerraformStr
+        clusterResourceName       = clusterInfo.ResourceName
        deleteExpirationDays       = 0
     )
     if mig.IsProviderVersionAtLeast("1.12.2") {
        deleteExpirationDays = 7
     }
-    config := configWithDailySchedule(orgID, projectName, clusterName, 1, deleteExpirationDays)
+    config := configWithDailySchedule(clusterTerraformStr, clusterResourceName, 1, deleteExpirationDays)
 
     resource.ParallelTest(t, resource.TestCase{
         PreCheck: func() { mig.PreCheckBasic(t) },
@@ -33,9 +30,9 @@
         Steps: []resource.TestStep{
             {
                 ExternalProviders: mig.ExternalProviders(),
-                Config:            configFirstStep(orgID, projectName, clusterName),
+                Config:            clusterTerraformStr,
                 Check: resource.ComposeAggregateTestCheckFunc(
-                    populateWithSampleData(resourceName, &cluster),
+                    populateWithSampleData(clusterResourceName, projectID, clusterName),
                 ),
             },
             {
diff --git a/internal/service/onlinearchive/resource_online_archive_test.go b/internal/service/onlinearchive/resource_online_archive_test.go
index 4386bb9fdf..5f2e95b16d 100644
--- a/internal/service/onlinearchive/resource_online_archive_test.go
+++ b/internal/service/onlinearchive/resource_online_archive_test.go
@@ -4,29 +4,36 @@ import (
     "context"
     "fmt"
     "log"
-    "os"
     "regexp"
     "testing"
     "time"
 
-    matlas "go.mongodb.org/atlas/mongodbatlas"
-
     "github.com/hashicorp/terraform-plugin-testing/helper/resource"
     "github.com/hashicorp/terraform-plugin-testing/terraform"
 
-    "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
     "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
 )
 
+func clusterRequest() *acc.ClusterRequest {
+    return &acc.ClusterRequest{
+        ReplicationSpecs: []acc.ReplicationSpecRequest{
+            // Must use US_EAST_1 in dev for online_archive to work
+            {AutoScalingDiskGbEnabled: true, Region: "US_EAST_1"},
+        },
+        Tags: map[string]string{
+            "ArchiveTest": "true", "Owner": "test",
+        },
+    }
+}
 
 func TestAccBackupRSOnlineArchive(t *testing.T) {
     var (
-        cluster                      matlas.Cluster
-        resourceName                 = "mongodbatlas_cluster.online_archive_test"
         onlineArchiveResourceName    = "mongodbatlas_online_archive.users_archive"
         onlineArchiveDataSourceName  = "data.mongodbatlas_online_archive.read_archive"
         onlineArchivesDataSourceName = "data.mongodbatlas_online_archives.all"
-        orgID                        = os.Getenv("MONGODB_ATLAS_ORG_ID")
-        projectName                  = acc.RandomProjectName()
-        clusterName                  = acc.RandomClusterName()
+        clusterInfo                  = acc.GetClusterInfo(t, clusterRequest())
+        clusterName                  = clusterInfo.Name
+        projectID                    = clusterInfo.ProjectID
+        clusterTerraformStr          = clusterInfo.TerraformStr
+        clusterResourceName          = clusterInfo.ResourceName
     )
 
     resource.ParallelTest(t, resource.TestCase{
@@ -35,15 +42,13 @@ func TestAccBackupRSOnlineArchive(t *testing.T) {
         CheckDestroy: acc.CheckDestroyCluster,
         Steps: []resource.TestStep{
             {
-                // We need this step to pupulate the cluster with Sample Data
-                // The online archive won't work if the cluster does not have data
-                Config: configFirstStep(orgID, projectName, clusterName),
+                Config: clusterTerraformStr,
                 Check: resource.ComposeAggregateTestCheckFunc(
-                    populateWithSampleData(resourceName, &cluster),
+                    populateWithSampleData(clusterResourceName, projectID, clusterName),
                 ),
             },
             {
-                Config: configWithDailySchedule(orgID, projectName, clusterName, 1, 7),
+                Config: configWithDailySchedule(clusterTerraformStr, clusterResourceName, 1, 7),
                 Check: resource.ComposeAggregateTestCheckFunc(
                     resource.TestCheckResourceAttrSet(onlineArchiveResourceName, "state"),
                     resource.TestCheckResourceAttrSet(onlineArchiveResourceName, "archive_id"),
@@ -59,7 +64,7 @@ func TestAccBackupRSOnlineArchive(t *testing.T) {
                 ),
             },
             {
-                Config: configWithDailySchedule(orgID, projectName, clusterName, 2, 8),
+                Config: configWithDailySchedule(clusterTerraformStr, clusterResourceName, 2, 8),
                 Check: resource.ComposeAggregateTestCheckFunc(
                     resource.TestCheckResourceAttrSet(onlineArchiveResourceName, "state"),
                     resource.TestCheckResourceAttrSet(onlineArchiveResourceName, "archive_id"),
@@ -75,7 +80,7 @@ func TestAccBackupRSOnlineArchive(t *testing.T) {
                 ),
             },
             {
-                Config: testAccBackupRSOnlineArchiveConfigWithWeeklySchedule(orgID, projectName, clusterName, 2),
+                Config: testAccBackupRSOnlineArchiveConfigWithWeeklySchedule(clusterTerraformStr, clusterResourceName, 2),
                 Check: resource.ComposeAggregateTestCheckFunc(
                     resource.TestCheckResourceAttrSet(onlineArchiveResourceName, "state"),
                     resource.TestCheckResourceAttrSet(onlineArchiveResourceName, "archive_id"),
@@ -88,7 +93,7 @@ func TestAccBackupRSOnlineArchive(t *testing.T) {
                 ),
             },
             {
-                Config: testAccBackupRSOnlineArchiveConfigWithMonthlySchedule(orgID, projectName, clusterName, 2),
+                Config: testAccBackupRSOnlineArchiveConfigWithMonthlySchedule(clusterTerraformStr, clusterResourceName, 2),
                 Check: resource.ComposeAggregateTestCheckFunc(
                     resource.TestCheckResourceAttrSet(onlineArchiveResourceName, "state"),
                     resource.TestCheckResourceAttrSet(onlineArchiveResourceName, "archive_id"),
@@ -101,7 +106,7 @@ func TestAccBackupRSOnlineArchive(t *testing.T) {
                 ),
             },
             {
-                Config: configWithoutSchedule(orgID, projectName, clusterName),
+                Config: configWithoutSchedule(clusterTerraformStr, clusterResourceName),
                 Check: resource.ComposeAggregateTestCheckFunc(
                     resource.TestCheckResourceAttrSet(onlineArchiveResourceName, "state"),
                     resource.TestCheckResourceAttrSet(onlineArchiveResourceName, "archive_id"),
@@ -110,7 +115,7 @@ func TestAccBackupRSOnlineArchive(t *testing.T) {
                 ),
             },
             {
-                Config: configWithoutSchedule(orgID, projectName, clusterName),
+                Config: configWithoutSchedule(clusterTerraformStr, clusterResourceName),
                 Check: resource.ComposeAggregateTestCheckFunc(
                     resource.TestCheckResourceAttr(onlineArchiveResourceName, "partition_fields.0.field_name", "last_review"),
                 ),
@@ -121,12 +126,12 @@ func TestAccBackupRSOnlineArchive(t *testing.T) {
 
 func TestAccBackupRSOnlineArchiveBasic(t *testing.T) {
     var (
-        cluster                   matlas.Cluster
-        resourceName              = "mongodbatlas_cluster.online_archive_test"
+        clusterInfo               = acc.GetClusterInfo(t, clusterRequest())
+        clusterResourceName       = clusterInfo.ResourceName
+        clusterName               = clusterInfo.Name
+        projectID                 = clusterInfo.ProjectID
         onlineArchiveResourceName = "mongodbatlas_online_archive.users_archive"
-        orgID                     = os.Getenv("MONGODB_ATLAS_ORG_ID")
-        projectName               = acc.RandomProjectName()
-        clusterName               = acc.RandomClusterName()
+        clusterTerraformStr       = clusterInfo.TerraformStr
     )
 
     resource.ParallelTest(t, resource.TestCase{
@@ -135,15 +140,13 @@
         CheckDestroy: acc.CheckDestroyCluster,
         Steps: []resource.TestStep{
             {
-                // We need this step to pupulate the cluster with Sample Data
-                // The online archive won't work if the cluster does not have data
-                Config: configFirstStep(orgID, projectName, clusterName),
+                Config: clusterTerraformStr,
                 Check: resource.ComposeAggregateTestCheckFunc(
-                    populateWithSampleData(resourceName, &cluster),
+                    populateWithSampleData(clusterResourceName, projectID, clusterName),
                 ),
             },
             {
-                Config: configWithoutSchedule(orgID, projectName, clusterName),
+                Config: configWithoutSchedule(clusterTerraformStr, clusterResourceName),
                 Check: resource.ComposeAggregateTestCheckFunc(
                     resource.TestCheckResourceAttrSet(onlineArchiveResourceName, "state"),
                     resource.TestCheckResourceAttrSet(onlineArchiveResourceName, "archive_id"),
@@ -151,7 +154,7 @@
                 ),
             },
             {
-                Config: configWithDailySchedule(orgID, projectName, clusterName, 1, 1),
+                Config: configWithDailySchedule(clusterTerraformStr, clusterResourceName, 1, 1),
                 Check: resource.ComposeAggregateTestCheckFunc(
                     resource.TestCheckResourceAttrSet(onlineArchiveResourceName, "state"),
                     resource.TestCheckResourceAttrSet(onlineArchiveResourceName, "archive_id"),
@@ -169,13 +172,13 @@
 
 func TestAccBackupRSOnlineArchiveWithProcessRegion(t *testing.T) {
     var (
-        cluster                     matlas.Cluster
-        resourceName                = "mongodbatlas_cluster.online_archive_test"
         onlineArchiveResourceName   = "mongodbatlas_online_archive.users_archive"
         onlineArchiveDataSourceName = "data.mongodbatlas_online_archive.read_archive"
-        orgID                       = os.Getenv("MONGODB_ATLAS_ORG_ID")
-        projectName                 = acc.RandomProjectName()
-        clusterName                 = acc.RandomClusterName()
+        clusterInfo                 = acc.GetClusterInfo(t, clusterRequest())
+        clusterResourceName         = clusterInfo.ResourceName
+        clusterName                 = clusterInfo.Name
+        projectID                   = clusterInfo.ProjectID
+        clusterTerraformStr         = clusterInfo.TerraformStr
         cloudProvider               = "AWS"
         processRegion               = "US_EAST_1"
     )
@@ -186,15 +189,13 @@
         CheckDestroy: acc.CheckDestroyCluster,
         Steps: []resource.TestStep{
             {
-                // We need this step to pupulate the cluster with Sample Data
-                // The online archive won't work if the cluster does not have data
-                Config: configFirstStep(orgID, projectName, clusterName),
+                Config: clusterTerraformStr,
                 Check: resource.ComposeAggregateTestCheckFunc(
-                    populateWithSampleData(resourceName, &cluster),
+                    populateWithSampleData(clusterResourceName, projectID, clusterName),
                 ),
             },
             {
-                Config: configWithDataProcessRegion(orgID, projectName, clusterName, cloudProvider, processRegion),
+                Config: configWithDataProcessRegion(clusterTerraformStr, clusterResourceName, cloudProvider, processRegion),
                 Check: resource.ComposeAggregateTestCheckFunc(
                     resource.TestCheckResourceAttr(onlineArchiveResourceName, "data_process_region.0.cloud_provider", cloudProvider),
                     resource.TestCheckResourceAttr(onlineArchiveResourceName, "data_process_region.0.region", processRegion),
@@ -203,11 +204,11 @@
                 ),
             },
             {
-                Config:      configWithDataProcessRegion(orgID, projectName, clusterName, cloudProvider, "AP_SOUTH_1"),
+                Config:      configWithDataProcessRegion(clusterTerraformStr, clusterResourceName, cloudProvider, "AP_SOUTH_1"),
                 ExpectError: regexp.MustCompile("data_process_region can't be modified"),
             },
             {
-                Config: configWithoutSchedule(orgID, projectName, clusterName),
+                Config: configWithoutSchedule(clusterTerraformStr, clusterResourceName),
                 Check: resource.ComposeAggregateTestCheckFunc(
                     resource.TestCheckResourceAttr(onlineArchiveResourceName, "data_process_region.0.cloud_provider", cloudProvider),
                     resource.TestCheckResourceAttr(onlineArchiveResourceName, "data_process_region.0.region", processRegion),
@@ -219,10 +220,10 @@
 
 func TestAccBackupRSOnlineArchiveInvalidProcessRegion(t *testing.T) {
     var (
-        orgID         = os.Getenv("MONGODB_ATLAS_ORG_ID")
-        projectName   = acc.RandomProjectName()
-        clusterName   = acc.RandomClusterName()
-        cloudProvider = "AWS"
+        clusterInfo         = acc.GetClusterInfo(t, clusterRequest())
+        clusterTerraformStr = clusterInfo.TerraformStr
+        cloudProvider       = "AWS"
+        clusterResourceName = clusterInfo.ResourceName
     )
 
     resource.ParallelTest(t, resource.TestCase{
@@ -231,14 +232,15 @@
         CheckDestroy: acc.CheckDestroyCluster,
         Steps: []resource.TestStep{
             {
-                Config:      configWithDataProcessRegion(orgID, projectName, clusterName, cloudProvider, "UNKNOWN"),
+                Config:      configWithDataProcessRegion(clusterTerraformStr, clusterResourceName, cloudProvider, "UNKNOWN"),
                 ExpectError: regexp.MustCompile("INVALID_ATTRIBUTE"),
             },
         },
     })
 }
 
-func populateWithSampleData(resourceName string, cluster *matlas.Cluster) resource.TestCheckFunc {
+// populateWithSampleData adds Sample Data to the cluster otherwise online archive won't work
+func populateWithSampleData(resourceName, projectID, clusterName string) resource.TestCheckFunc {
     return func(s *terraform.State) error {
         rs, ok := s.RootModule().Resources[resourceName]
         if !ok {
@@ -247,18 +249,18 @@ func populateWithSampleData(resourceName string, cluster *matlas.Cluster) resour
         if rs.Primary.ID == "" {
             return fmt.Errorf("no ID is set")
         }
-        ids := conversion.DecodeStateID(rs.Primary.ID)
-        log.Printf("[DEBUG] projectID: %s, name %s", ids["project_id"], ids["cluster_name"])
-        clusterResp, _, err := acc.Conn().Clusters.Get(context.Background(), ids["project_id"], ids["cluster_name"])
+        conn := acc.ConnV2()
+        ctx := context.Background()
+        _, _, err := conn.ClustersApi.GetCluster(ctx, projectID, clusterName).Execute()
         if err != nil {
-            return fmt.Errorf("cluster(%s:%s) does not exist %s", rs.Primary.Attributes["project_id"], rs.Primary.ID, err)
+            return fmt.Errorf("cluster(%s:%s) does not exist %s", projectID, clusterName, err)
         }
-        *cluster = *clusterResp
-
-        job, _, err := acc.Conn().Clusters.LoadSampleDataset(context.Background(), ids["project_id"], ids["cluster_name"])
-
+        job, _, err := conn.ClustersApi.LoadSampleDataset(context.Background(), projectID, clusterName).Execute()
         if err != nil {
-            return fmt.Errorf("cluster(%s:%s) loading sample data set error %s", rs.Primary.Attributes["project_id"], rs.Primary.ID, err)
+            return fmt.Errorf("cluster(%s:%s) loading sample data set error %s", projectID, clusterName, err)
+        }
+        if job == nil {
+            return fmt.Errorf("cluster(%s:%s) loading sample data set error, no job found", projectID, clusterName)
         }
 
         ticker := time.NewTicker(30 * time.Second)
@@ -268,26 +270,28 @@ func populateWithSampleData(resourceName string, cluster *matlas.Cluster) resour
             case <-time.After(20 * time.Second):
                 log.Println("timeout elapsed ....")
             case <-ticker.C:
-                job, _, err = acc.Conn().Clusters.GetSampleDatasetStatus(context.Background(), ids["project_id"], job.ID)
+                job, _, err = conn.ClustersApi.GetSampleDatasetLoadStatus(ctx, projectID, job.GetId()).Execute()
                 fmt.Println("querying for job ")
-                if job.State != "WORKING" {
+                if err != nil {
+                    return fmt.Errorf("cluster(%s:%s) failed to query for job, %s", projectID, clusterName, err)
+                }
+                if job == nil {
+                    return fmt.Errorf("cluster(%s:%s) failed to query for job, no job found", projectID, clusterName)
+                }
+                if job.GetState() != "WORKING" {
                     break JOB
                 }
             }
         }
 
-        if err != nil {
-            return fmt.Errorf("cluster(%s:%s) loading sample data set error %s", rs.Primary.Attributes["project_id"], rs.Primary.ID, err)
-        }
-
-        if job.State != "COMPLETED" {
-            return fmt.Errorf("cluster(%s:%s) working sample data set error %s", rs.Primary.Attributes["project_id"], job.ID, job.State)
+        if job.GetState() != "COMPLETED" {
+            return fmt.Errorf("cluster(%s:%s) working sample data set error %s", projectID, job.GetId(), job.GetState())
         }
 
         return nil
     }
 }
 
-func configWithDailySchedule(orgID, projectName, clusterName string, startHour, deleteExpirationDays int) string {
+func configWithDailySchedule(clusterTerraformStr, clusterResourceName string, startHour, deleteExpirationDays int) string {
     var dataExpirationRuleBlock string
     if deleteExpirationDays > 0 {
         dataExpirationRuleBlock = fmt.Sprintf(`
@@ -300,8 +304,8 @@ func configWithDailySchedule(orgID, projectName, clusterName string, startHour,
     return fmt.Sprintf(`
     %[1]s
     resource "mongodbatlas_online_archive" "users_archive" {
-        project_id   = mongodbatlas_cluster.online_archive_test.project_id
-        cluster_name = mongodbatlas_cluster.online_archive_test.name
+        project_id   = %[4]s.project_id
+        cluster_name = %[4]s.name
         coll_name       = "listingsAndReviews"
         collection_type = "STANDARD"
         db_name         = "sample_airbnb"
@@ -351,15 +355,15 @@ func configWithDailySchedule(orgID, projectName, clusterName string, startHour,
         project_id   = mongodbatlas_online_archive.users_archive.project_id
         cluster_name = mongodbatlas_online_archive.users_archive.cluster_name
     }
-    `, configFirstStep(orgID, projectName, clusterName), startHour, dataExpirationRuleBlock)
+    `, clusterTerraformStr, startHour, dataExpirationRuleBlock, clusterResourceName)
 }
 
-func configWithoutSchedule(orgID, projectName, clusterName string) string {
+func configWithoutSchedule(clusterTerraformStr, clusterResourceName string) string {
     return fmt.Sprintf(`
-    %s
+    %[1]s
     resource "mongodbatlas_online_archive" "users_archive" {
-        project_id   = mongodbatlas_cluster.online_archive_test.project_id
-        cluster_name = mongodbatlas_cluster.online_archive_test.name
+        project_id   = %[2]s.project_id
+        cluster_name = %[2]s.name
         coll_name       = "listingsAndReviews"
         collection_type = "STANDARD"
         db_name         = "sample_airbnb"
@@ -399,15 +403,15 @@ func configWithoutSchedule(orgID, projectName, clusterName string) string {
         project_id   = mongodbatlas_online_archive.users_archive.project_id
         cluster_name = mongodbatlas_online_archive.users_archive.cluster_name
     }
-    `, configFirstStep(orgID, projectName, clusterName))
+    `, clusterTerraformStr, clusterResourceName)
 }
 
-func configWithDataProcessRegion(orgID, projectName, clusterName, cloudProvider, region string) string {
+func configWithDataProcessRegion(clusterTerraformStr, clusterResourceName, cloudProvider, region string) string {
     return fmt.Sprintf(`
-    %s
+    %[1]s
    resource "mongodbatlas_online_archive" "users_archive" {
-        project_id   = mongodbatlas_cluster.online_archive_test.project_id
-        cluster_name = mongodbatlas_cluster.online_archive_test.name
+        project_id   = %[4]s.project_id
+        cluster_name = %[4]s.name
         coll_name       = "listingsAndReviews"
         collection_type = "STANDARD"
         db_name         = "sample_airbnb"
@@ -452,58 +456,15 @@ func configWithDataProcessRegion(orgID, projectName, clusterName, cloudProvider,
         project_id   = mongodbatlas_online_archive.users_archive.project_id
         cluster_name = mongodbatlas_online_archive.users_archive.cluster_name
     }
-    `, configFirstStep(orgID, projectName, clusterName), cloudProvider, region)
+    `, clusterTerraformStr, cloudProvider, region, clusterResourceName)
 }
 
-func configFirstStep(orgID, projectName, clusterName string) string {
+func testAccBackupRSOnlineArchiveConfigWithWeeklySchedule(clusterTerraformStr, clusterResourceName string, startHour int) string {
     return fmt.Sprintf(`
-    resource "mongodbatlas_project" "cluster_project" {
-        name   = %[2]q
-        org_id = %[1]q
-    }
-    resource "mongodbatlas_cluster" "online_archive_test" {
-        project_id   = mongodbatlas_project.cluster_project.id
-        name         = %[3]q
-        disk_size_gb = 10
-
-        cluster_type = "REPLICASET"
-        replication_specs {
-            num_shards = 1
-            regions_config {
-                region_name     = "US_EAST_1"
-                electable_nodes = 3
-                priority        = 7
-                read_only_nodes = 0
-            }
-        }
-
-        cloud_backup                 = false
-        auto_scaling_disk_gb_enabled = true
-
-        // Provider Settings "block"
-        provider_name               = "AWS"
-        provider_instance_size_name = "M10"
-
-        labels {
-            key   = "ArchiveTest"
-            value = "true"
-        }
-        labels {
-            key   = "Owner"
-            value = "test"
-        }
-    }
-
-
-    `, orgID, projectName, clusterName)
-}
-
-func testAccBackupRSOnlineArchiveConfigWithWeeklySchedule(orgID, projectName, clusterName string, startHour int) string {
-    return fmt.Sprintf(`
-    %s
+    %[1]s
     resource "mongodbatlas_online_archive" "users_archive" {
-        project_id   = mongodbatlas_cluster.online_archive_test.project_id
-        cluster_name = mongodbatlas_cluster.online_archive_test.name
+        project_id   = %[3]s.project_id
+        cluster_name = %[3]s.name
         coll_name       = "listingsAndReviews"
         collection_type = "STANDARD"
         db_name         = "sample_airbnb"
@@ -520,7 +481,7 @@ func 
testAccBackupRSOnlineArchiveConfigWithWeeklySchedule(orgID, projectName, cl day_of_week = 1 end_hour = 1 end_minute = 1 - start_hour = %d + start_hour = %[2]d start_minute = 1 } @@ -552,15 +513,15 @@ func testAccBackupRSOnlineArchiveConfigWithWeeklySchedule(orgID, projectName, cl project_id = mongodbatlas_online_archive.users_archive.project_id cluster_name = mongodbatlas_online_archive.users_archive.cluster_name } - `, configFirstStep(orgID, projectName, clusterName), startHour) + `, clusterTerraformStr, startHour, clusterResourceName) } -func testAccBackupRSOnlineArchiveConfigWithMonthlySchedule(orgID, projectName, clusterName string, startHour int) string { +func testAccBackupRSOnlineArchiveConfigWithMonthlySchedule(clusterTerraformStr, clusterResourceName string, startHour int) string { return fmt.Sprintf(` - %s + %[1]s resource "mongodbatlas_online_archive" "users_archive" { - project_id = mongodbatlas_cluster.online_archive_test.project_id - cluster_name = mongodbatlas_cluster.online_archive_test.name + project_id = %[3]s.project_id + cluster_name = %[3]s.name coll_name = "listingsAndReviews" collection_type = "STANDARD" db_name = "sample_airbnb" @@ -577,7 +538,7 @@ func testAccBackupRSOnlineArchiveConfigWithMonthlySchedule(orgID, projectName, c day_of_month = 1 end_hour = 1 end_minute = 1 - start_hour = %d + start_hour = %[2]d start_minute = 1 } @@ -611,5 +572,5 @@ func testAccBackupRSOnlineArchiveConfigWithMonthlySchedule(orgID, projectName, c project_id = mongodbatlas_online_archive.users_archive.project_id cluster_name = mongodbatlas_online_archive.users_archive.cluster_name } - `, configFirstStep(orgID, projectName, clusterName), startHour) + `, clusterTerraformStr, startHour, clusterResourceName) } diff --git a/internal/service/privateendpointregionalmode/resource_private_endpoint_regional_mode_test.go b/internal/service/privateendpointregionalmode/resource_private_endpoint_regional_mode_test.go index bc89b6732f..93be48622b 100644 --- 
a/internal/service/privateendpointregionalmode/resource_private_endpoint_regional_mode_test.go +++ b/internal/service/privateendpointregionalmode/resource_private_endpoint_regional_mode_test.go @@ -4,7 +4,6 @@ import ( "context" "fmt" "os" - "strconv" "strings" "testing" @@ -18,31 +17,31 @@ func TestAccPrivateEndpointRegionalMode_basic(t *testing.T) { } func TestAccPrivateEndpointRegionalMode_conn(t *testing.T) { - acc.SkipTestForCI(t) // needs AWS configuration - var ( - endpointResourceSuffix = "atlasple" - resourceSuffix = "atlasrm" - resourceName = fmt.Sprintf("mongodbatlas_private_endpoint_regional_mode.%s", resourceSuffix) - awsAccessKey = os.Getenv("AWS_ACCESS_KEY_ID") - awsSecretKey = os.Getenv("AWS_SECRET_ACCESS_KEY") - providerName = "AWS" - region = os.Getenv("AWS_REGION") - projectID = acc.ProjectIDExecution(t) - orgID = os.Getenv("MONGODB_ATLAS_ORG_ID") - projectName = acc.RandomProjectName() - clusterName = acc.RandomClusterName() - clusterResourceName = "test" - clusterResource = acc.ConfigClusterGlobal(orgID, projectName, clusterName) - clusterDataSource = modeClusterData(clusterResourceName, resourceSuffix, endpointResourceSuffix) - endpointResources = testConfigUnmanagedAWS( + endpointResourceSuffix = "atlasple" + resourceSuffix = "atlasrm" + resourceName = fmt.Sprintf("mongodbatlas_private_endpoint_regional_mode.%s", resourceSuffix) + awsAccessKey = os.Getenv("AWS_ACCESS_KEY_ID") + awsSecretKey = os.Getenv("AWS_SECRET_ACCESS_KEY") + providerName = "AWS" + region = os.Getenv("AWS_REGION_LOWERCASE") + privatelinkEndpointServiceResourceName = fmt.Sprintf("mongodbatlas_privatelink_endpoint_service.%s", endpointResourceSuffix) + spec1 = acc.ReplicationSpecRequest{Region: os.Getenv("AWS_REGION_UPPERCASE"), ProviderName: providerName, ZoneName: "Zone 1"} + spec2 = acc.ReplicationSpecRequest{Region: "US_WEST_2", ProviderName: providerName, ZoneName: "Zone 2"} + clusterInfo = acc.GetClusterInfo(t, &acc.ClusterRequest{Geosharded: true, DiskSizeGb: 80, 
ReplicationSpecs: []acc.ReplicationSpecRequest{spec1, spec2}}) + projectID = clusterInfo.ProjectID + clusterResourceName = clusterInfo.ResourceName + clusterDataName = "data.mongodbatlas_advanced_cluster.test" + endpointResources = testConfigUnmanagedAWS( awsAccessKey, awsSecretKey, projectID, providerName, region, endpointResourceSuffix, ) - dependencies = []string{clusterResource, clusterDataSource, endpointResources} + clusterDataSource = modeClusterData(clusterResourceName, resourceName, privatelinkEndpointServiceResourceName) + dependencies = []string{clusterInfo.TerraformStr, clusterDataSource, endpointResources} ) resource.Test(t, resource.TestCase{ - PreCheck: func() { acc.PreCheck(t) }, + PreCheck: func() { acc.PreCheckAwsEnvBasic(t); acc.PreCheckAwsRegionCases(t) }, + ExternalProviders: acc.ExternalProvidersOnlyAWS(), ProtoV6ProviderFactories: acc.TestAccProviderV6Factories, CheckDestroy: checkDestroy, Steps: []resource.TestStep{ @@ -50,9 +49,8 @@ func TestAccPrivateEndpointRegionalMode_conn(t *testing.T) { Config: configWithDependencies(resourceSuffix, projectID, false, dependencies), Check: resource.ComposeAggregateTestCheckFunc( checkExists(resourceName), - checkModeClustersUpToDate(projectID, clusterName, clusterResourceName), + resource.TestCheckResourceAttr(clusterDataName, "connection_strings.0.private_endpoint.#", "0"), resource.TestCheckResourceAttrSet(resourceName, "project_id"), - resource.TestCheckResourceAttrSet(resourceName, "enabled"), resource.TestCheckResourceAttr(resourceName, "enabled", "false"), ), }, @@ -60,9 +58,8 @@ func TestAccPrivateEndpointRegionalMode_conn(t *testing.T) { Config: configWithDependencies(resourceSuffix, projectID, true, dependencies), Check: resource.ComposeAggregateTestCheckFunc( checkExists(resourceName), - checkModeClustersUpToDate(projectID, clusterName, clusterResourceName), + resource.TestCheckResourceAttr(clusterDataName, "connection_strings.0.private_endpoint.#", "1"), 
resource.TestCheckResourceAttrSet(resourceName, "project_id"), - resource.TestCheckResourceAttrSet(resourceName, "enabled"), resource.TestCheckResourceAttr(resourceName, "enabled", "true"), ), }, @@ -113,12 +110,12 @@ func basicTestCase(tb testing.TB) *resource.TestCase { func modeClusterData(clusterResourceName, regionalModeResourceName, privateLinkResourceName string) string { return fmt.Sprintf(` - data "mongodbatlas_cluster" %[1]q { - project_id = mongodbatlas_cluster.%[1]s.project_id - name = mongodbatlas_cluster.%[1]s.name + data "mongodbatlas_advanced_cluster" "test" { + project_id = %[1]s.project_id + name = %[1]s.name depends_on = [ - mongodbatlas_privatelink_endpoint_service.%[3]s, - mongodbatlas_private_endpoint_regional_mode.%[2]s + %[2]s, + %[3]s ] } `, clusterResourceName, regionalModeResourceName, privateLinkResourceName) @@ -179,32 +176,6 @@ func checkExists(resourceName string) resource.TestCheckFunc { } } -func checkModeClustersUpToDate(projectID, clusterName, clusterResourceName string) resource.TestCheckFunc { - resourceName := strings.Join([]string{"data", "mongodbatlas_cluster", clusterResourceName}, ".") - return func(s *terraform.State) error { - rs, ok := s.RootModule().Resources[resourceName] - if !ok { - return fmt.Errorf("Could not find resource state for cluster (%s) on project (%s)", clusterName, projectID) - } - var rsPrivateEndpointCount int - var err error - if rsPrivateEndpointCount, err = strconv.Atoi(rs.Primary.Attributes["connection_strings.0.private_endpoint.#"]); err != nil { - return fmt.Errorf("Connection strings private endpoint count is not a number") - } - c, _, _ := acc.Conn().Clusters.Get(context.Background(), projectID, clusterName) - if rsPrivateEndpointCount != len(c.ConnectionStrings.PrivateEndpoint) { - return fmt.Errorf("Cluster PrivateEndpoint count does not match resource") - } - if rs.Primary.Attributes["connection_strings.0.standard"] != c.ConnectionStrings.Standard { - return fmt.Errorf("Cluster standard 
connection_string does not match resource") - } - if rs.Primary.Attributes["connection_strings.0.standard_srv"] != c.ConnectionStrings.StandardSrv { - return fmt.Errorf("Cluster standard connection_string does not match resource") - } - return nil - } -} - func checkDestroy(s *terraform.State) error { for _, rs := range s.RootModule().Resources { if rs.Type != "mongodbatlas_private_endpoint_regional_mode" { @@ -221,14 +192,14 @@ func checkDestroy(s *terraform.State) error { func testConfigUnmanagedAWS(awsAccessKey, awsSecretKey, projectID, providerName, region, serviceResourceName string) string { return fmt.Sprintf(` provider "aws" { - region = "%[5]s" - access_key = "%[1]s" - secret_key = "%[2]s" + region = %[5]q + access_key = %[1]q + secret_key = %[2]q } resource "mongodbatlas_privatelink_endpoint" "test" { - project_id = "%[3]s" - provider_name = "%[4]s" - region = "%[5]s" + project_id = %[3]q + provider_name = %[4]q + region = %[5]q } resource "aws_vpc_endpoint" "ptfe_service" { vpc_id = aws_vpc.primary.id diff --git a/internal/service/searchindex/data_source_search_index.go b/internal/service/searchindex/data_source_search_index.go index 3283ff1c8c..495e7033e6 100644 --- a/internal/service/searchindex/data_source_search_index.go +++ b/internal/service/searchindex/data_source_search_index.go @@ -32,37 +32,35 @@ func returnSearchIndexDSSchema() map[string]*schema.Schema { }, "analyzer": { Type: schema.TypeString, - Optional: true, + Computed: true, }, "analyzers": { - Type: schema.TypeString, - Optional: true, - DiffSuppressFunc: validateSearchAnalyzersDiff, + Type: schema.TypeString, + Computed: true, }, "collection_name": { Type: schema.TypeString, - Optional: true, + Computed: true, }, "database": { Type: schema.TypeString, - Optional: true, + Computed: true, }, "name": { Type: schema.TypeString, - Optional: true, + Computed: true, }, "search_analyzer": { Type: schema.TypeString, - Optional: true, + Computed: true, }, "mappings_dynamic": { Type: 
schema.TypeBool, - Optional: true, + Computed: true, }, "mappings_fields": { - Type: schema.TypeString, - Optional: true, - DiffSuppressFunc: validateSearchIndexMappingDiff, + Type: schema.TypeString, + Computed: true, }, "synonyms": { Type: schema.TypeSet, @@ -90,12 +88,15 @@ func returnSearchIndexDSSchema() map[string]*schema.Schema { }, "type": { Type: schema.TypeString, - Optional: true, + Computed: true, }, "fields": { - Type: schema.TypeString, - Optional: true, - DiffSuppressFunc: validateSearchIndexMappingDiff, + Type: schema.TypeString, + Computed: true, + }, + "stored_source": { + Type: schema.TypeString, + Computed: true, }, } } @@ -185,6 +186,15 @@ func dataSourceMongoDBAtlasSearchIndexRead(ctx context.Context, d *schema.Resour } } + storedSource := searchIndex.LatestDefinition.GetStoredSource() + strStoredSource, errStoredSource := MarshalStoredSource(storedSource) + if errStoredSource != nil { + return diag.FromErr(errStoredSource) + } + if err := d.Set("stored_source", strStoredSource); err != nil { + return diag.Errorf("error setting `stored_source` for search index (%s): %s", d.Id(), err) + } + d.SetId(conversion.EncodeStateID(map[string]string{ "project_id": projectID.(string), "cluster_name": clusterName.(string), diff --git a/internal/service/searchindex/data_source_search_indexes.go b/internal/service/searchindex/data_source_search_indexes.go index 63a272af64..3cfd89f617 100644 --- a/internal/service/searchindex/data_source_search_indexes.go +++ b/internal/service/searchindex/data_source_search_indexes.go @@ -35,7 +35,7 @@ func PluralDataSource() *schema.Resource { Type: schema.TypeList, Computed: true, Elem: &schema.Resource{ - Schema: returnSearchIndexSchema(), + Schema: returnSearchIndexDSSchema(), }, }, "total_count": { @@ -131,7 +131,13 @@ func flattenSearchIndexes(searchIndexes []admin.SearchIndexResponse, projectID, } searchIndexesMap[i]["fields"] = fieldsMarshaled } - } + storedSource := 
searchIndexes[i].LatestDefinition.GetStoredSource() + strStoredSource, errStoredSource := MarshalStoredSource(storedSource) + if errStoredSource != nil { + return nil, errStoredSource + } + searchIndexesMap[i]["stored_source"] = strStoredSource + } return searchIndexesMap, nil } diff --git a/internal/service/searchindex/model_search_index.go b/internal/service/searchindex/model_search_index.go new file mode 100644 index 0000000000..6b5adfbbb4 --- /dev/null +++ b/internal/service/searchindex/model_search_index.go @@ -0,0 +1,150 @@ +package searchindex + +import ( + "bytes" + "context" + "encoding/json" + "log" + "reflect" + "strconv" + + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" + "go.mongodb.org/atlas-sdk/v20240530002/admin" +) + +func flattenSearchIndexSynonyms(synonyms []admin.SearchSynonymMappingDefinition) []map[string]any { + synonymsMap := make([]map[string]any, len(synonyms)) + for i, s := range synonyms { + synonymsMap[i] = map[string]any{ + "name": s.Name, + "analyzer": s.Analyzer, + "source_collection": s.Source.Collection, + } + } + return synonymsMap +} + +func expandSearchIndexSynonyms(d *schema.ResourceData) []admin.SearchSynonymMappingDefinition { + var synonymsList []admin.SearchSynonymMappingDefinition + if vSynonyms, ok := d.GetOk("synonyms"); ok { + for _, s := range vSynonyms.(*schema.Set).List() { + synonym := s.(map[string]any) + synonymsDoc := admin.SearchSynonymMappingDefinition{ + Name: synonym["name"].(string), + Analyzer: synonym["analyzer"].(string), + Source: admin.SynonymSource{ + Collection: synonym["source_collection"].(string), + }, + } + synonymsList = append(synonymsList, synonymsDoc) + } + } + return synonymsList +} + +func marshalSearchIndex(fields any) (string, error) { + respBytes, err := json.Marshal(fields) + 
return string(respBytes), err +} + +func unmarshalSearchIndexMappingFields(str string) (map[string]any, diag.Diagnostics) { + fields := map[string]any{} + if str == "" { + return fields, nil + } + if err := json.Unmarshal([]byte(str), &fields); err != nil { + return nil, diag.Errorf("cannot unmarshal search index attribute `mappings_fields` because it has an incorrect format") + } + return fields, nil +} + +func unmarshalSearchIndexFields(str string) ([]map[string]any, diag.Diagnostics) { + fields := []map[string]any{} + if str == "" { + return fields, nil + } + if err := json.Unmarshal([]byte(str), &fields); err != nil { + return nil, diag.Errorf("cannot unmarshal search index attribute `fields` because it has an incorrect format") + } + + return fields, nil +} + +func unmarshalSearchIndexAnalyzersFields(str string) ([]admin.AtlasSearchAnalyzer, diag.Diagnostics) { + fields := []admin.AtlasSearchAnalyzer{} + if str == "" { + return fields, nil + } + dec := json.NewDecoder(bytes.NewReader([]byte(str))) + dec.DisallowUnknownFields() + if err := dec.Decode(&fields); err != nil { + return nil, diag.Errorf("cannot unmarshal search index attribute `analyzers` because it has an incorrect format") + } + return fields, nil +} + +func MarshalStoredSource(obj any) (string, error) { + if obj == nil { + return "", nil + } + if b, ok := obj.(bool); ok { + return strconv.FormatBool(b), nil + } + respBytes, err := json.Marshal(obj) + return string(respBytes), err +} + +func UnmarshalStoredSource(str string) (any, diag.Diagnostics) { + switch str { + case "": + return any(nil), nil + case "true": + return true, nil + case "false": + return false, nil + default: + var obj any + if err := json.Unmarshal([]byte(str), &obj); err != nil { + return nil, diag.Errorf("cannot unmarshal search index attribute `stored_source` because it has an incorrect format") + } + return obj, nil + } +} + +func diffSuppressJSON(k, old, newStr string, d *schema.ResourceData) bool { + var j, j2 any + + if 
old == "" { + old = "{}" + } + + if newStr == "" { + newStr = "{}" + } + + if err := json.Unmarshal([]byte(old), &j); err != nil { + log.Printf("[ERROR] cannot unmarshal old search index analyzer json %v", err) + } + if err := json.Unmarshal([]byte(newStr), &j2); err != nil { + log.Printf("[ERROR] cannot unmarshal new search index analyzer json %v", err) + } + if !reflect.DeepEqual(&j, &j2) { + return false + } + + return true +} + +func resourceSearchIndexRefreshFunc(ctx context.Context, clusterName, projectID, indexID string, connV2 *admin.APIClient) retry.StateRefreshFunc { + return func() (any, string, error) { + searchIndex, _, err := connV2.AtlasSearchApi.GetAtlasSearchIndex(ctx, projectID, clusterName, indexID).Execute() + if err != nil { + return nil, "ERROR", err + } + status := conversion.SafeString(searchIndex.Status) + return searchIndex, status, nil + } +} diff --git a/internal/service/searchindex/resource_search_index.go b/internal/service/searchindex/resource_search_index.go index 72f0d0b783..0139101588 100644 --- a/internal/service/searchindex/resource_search_index.go +++ b/internal/service/searchindex/resource_search_index.go @@ -2,14 +2,12 @@ package searchindex import ( "context" - "encoding/json" "errors" "fmt" "log" "strings" "time" - "github.com/go-test/deep" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" @@ -63,7 +61,7 @@ func returnSearchIndexSchema() map[string]*schema.Schema { "analyzers": { Type: schema.TypeString, Optional: true, - DiffSuppressFunc: validateSearchAnalyzersDiff, + DiffSuppressFunc: diffSuppressJSON, }, "collection_name": { Type: schema.TypeString, @@ -88,7 +86,7 @@ func returnSearchIndexSchema() map[string]*schema.Schema { "mappings_fields": { Type: schema.TypeString, Optional: true, - DiffSuppressFunc: validateSearchIndexMappingDiff, + DiffSuppressFunc: diffSuppressJSON, }, "synonyms": { Type: 
schema.TypeSet, @@ -125,7 +123,12 @@ func returnSearchIndexSchema() map[string]*schema.Schema { "fields": { Type: schema.TypeString, Optional: true, - DiffSuppressFunc: validateSearchIndexMappingDiff, + DiffSuppressFunc: diffSuppressJSON, + }, + "stored_source": { + Type: schema.TypeString, + Optional: true, + DiffSuppressFunc: diffSuppressJSON, }, } } @@ -257,6 +260,14 @@ func resourceUpdate(ctx context.Context, d *schema.ResourceData, meta any) diag. searchIndex.Definition.Synonyms = &synonyms } + if d.HasChange("stored_source") { + obj, err := UnmarshalStoredSource(d.Get("stored_source").(string)) + if err != nil { + return err + } + searchIndex.Definition.StoredSource = obj + } + if _, _, err := connV2.AtlasSearchApi.UpdateAtlasSearchIndex(ctx, projectID, clusterName, indexID, searchIndex).Execute(); err != nil { return diag.Errorf("error updating search index (%s): %s", indexName, err) } @@ -371,24 +382,16 @@ func resourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Di } } - return nil -} - -func flattenSearchIndexSynonyms(synonyms []admin.SearchSynonymMappingDefinition) []map[string]any { - synonymsMap := make([]map[string]any, len(synonyms)) - for i, s := range synonyms { - synonymsMap[i] = map[string]any{ - "name": s.Name, - "analyzer": s.Analyzer, - "source_collection": s.Source.Collection, - } + storedSource := searchIndex.LatestDefinition.GetStoredSource() + strStoredSource, errStoredSource := MarshalStoredSource(storedSource) + if errStoredSource != nil { + return diag.FromErr(errStoredSource) + } + if err := d.Set("stored_source", strStoredSource); err != nil { + return diag.Errorf("error setting `stored_source` for search index (%s): %s", d.Id(), err) } - return synonymsMap -} -func marshalSearchIndex(fields any) (string, error) { - bytes, err := json.Marshal(fields) - return string(bytes), err + return nil } func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics { @@ -432,6 +435,12 @@ func 
resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag. searchIndexRequest.Definition.Synonyms = &synonyms } + objStoredSource, errStoredSource := UnmarshalStoredSource(d.Get("stored_source").(string)) + if errStoredSource != nil { + return errStoredSource + } + searchIndexRequest.Definition.StoredSource = objStoredSource + dbSearchIndexRes, _, err := connV2.AtlasSearchApi.CreateAtlasSearchIndex(ctx, projectID, clusterName, searchIndexRequest).Execute() if err != nil { return diag.Errorf("error creating index: %s", err) @@ -469,116 +478,3 @@ func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag. return resourceRead(ctx, d, meta) } - -func expandSearchIndexSynonyms(d *schema.ResourceData) []admin.SearchSynonymMappingDefinition { - var synonymsList []admin.SearchSynonymMappingDefinition - if vSynonyms, ok := d.GetOk("synonyms"); ok { - for _, s := range vSynonyms.(*schema.Set).List() { - synonym := s.(map[string]any) - synonymsDoc := admin.SearchSynonymMappingDefinition{ - Name: synonym["name"].(string), - Analyzer: synonym["analyzer"].(string), - Source: admin.SynonymSource{ - Collection: synonym["source_collection"].(string), - }, - } - synonymsList = append(synonymsList, synonymsDoc) - } - } - return synonymsList -} - -func validateSearchIndexMappingDiff(k, old, newStr string, d *schema.ResourceData) bool { - var j, j2 any - - if old == "" { - old = "{}" - } - - if newStr == "" { - newStr = "{}" - } - - if err := json.Unmarshal([]byte(old), &j); err != nil { - log.Printf("[ERROR] cannot unmarshal old search index mapping json %v", err) - } - if err := json.Unmarshal([]byte(newStr), &j2); err != nil { - log.Printf("[ERROR] cannot unmarshal new search index mapping json %v", err) - } - if diff := deep.Equal(&j, &j2); diff != nil { - log.Printf("[DEBUG] deep equal not passed: %v", diff) - return false - } - - return true -} - -func validateSearchAnalyzersDiff(k, old, newStr string, d *schema.ResourceData) bool { - var j, 
j2 any - - if old == "" { - old = "{}" - } - - if newStr == "" { - newStr = "{}" - } - - if err := json.Unmarshal([]byte(old), &j); err != nil { - log.Printf("[ERROR] cannot unmarshal old search index analyzer json %v", err) - } - if err := json.Unmarshal([]byte(newStr), &j2); err != nil { - log.Printf("[ERROR] cannot unmarshal new search index analyzer json %v", err) - } - if diff := deep.Equal(&j, &j2); diff != nil { - log.Printf("[DEBUG] deep equal not passed: %v", diff) - return false - } - - return true -} - -func unmarshalSearchIndexMappingFields(str string) (map[string]any, diag.Diagnostics) { - fields := map[string]any{} - if str == "" { - return fields, nil - } - if err := json.Unmarshal([]byte(str), &fields); err != nil { - return nil, diag.Errorf("cannot unmarshal search index attribute `mappings_fields` because it has an incorrect format") - } - return fields, nil -} - -func unmarshalSearchIndexFields(str string) ([]map[string]any, diag.Diagnostics) { - fields := []map[string]any{} - if str == "" { - return fields, nil - } - if err := json.Unmarshal([]byte(str), &fields); err != nil { - return nil, diag.Errorf("cannot unmarshal search index attribute `fields` because it has an incorrect format") - } - - return fields, nil -} - -func unmarshalSearchIndexAnalyzersFields(str string) ([]admin.AtlasSearchAnalyzer, diag.Diagnostics) { - fields := []admin.AtlasSearchAnalyzer{} - if str == "" { - return fields, nil - } - if err := json.Unmarshal([]byte(str), &fields); err != nil { - return nil, diag.Errorf("cannot unmarshal search index attribute `analyzers` because it has an incorrect format") - } - return fields, nil -} - -func resourceSearchIndexRefreshFunc(ctx context.Context, clusterName, projectID, indexID string, connV2 *admin.APIClient) retry.StateRefreshFunc { - return func() (any, string, error) { - searchIndex, _, err := connV2.AtlasSearchApi.GetAtlasSearchIndex(ctx, projectID, clusterName, indexID).Execute() - if err != nil { - return nil, "ERROR", 
err - } - status := conversion.SafeString(searchIndex.Status) - return searchIndex, status, nil - } -} diff --git a/internal/service/searchindex/resource_search_index_migration_test.go b/internal/service/searchindex/resource_search_index_migration_test.go index 0cc1138662..a131d500ff 100644 --- a/internal/service/searchindex/resource_search_index_migration_test.go +++ b/internal/service/searchindex/resource_search_index_migration_test.go @@ -7,6 +7,7 @@ import ( ) func TestMigSearchIndex_basic(t *testing.T) { + mig.SkipIfVersionBelow(t, "1.17.4") mig.CreateAndRunTest(t, basicTestCase(t)) } diff --git a/internal/service/searchindex/resource_search_index_test.go b/internal/service/searchindex/resource_search_index_test.go index d0edb9cc84..6bb1a76db2 100644 --- a/internal/service/searchindex/resource_search_index_test.go +++ b/internal/service/searchindex/resource_search_index_test.go @@ -3,6 +3,7 @@ package searchindex_test import ( "context" "fmt" + "regexp" "testing" "github.com/hashicorp/terraform-plugin-testing/helper/resource" @@ -27,8 +28,8 @@ func TestAccSearchIndex_withSearchType(t *testing.T) { CheckDestroy: acc.CheckDestroySearchIndex, Steps: []resource.TestStep{ { - Config: configBasic(projectID, clusterName, indexName, "search", databaseName), - Check: checkBasic(projectID, clusterName, indexName, "search", databaseName), + Config: configBasic(projectID, clusterName, indexName, "search", databaseName, ""), + Check: checkBasic(projectID, clusterName, indexName, "search", databaseName, ""), }, }, }) @@ -114,6 +115,10 @@ func TestAccSearchIndex_updatedToEmptyAnalyzers(t *testing.T) { Config: configAdditional(projectID, indexName, databaseName, clusterName, ""), Check: checkAdditionalAnalyzers(projectID, indexName, databaseName, clusterName, false), }, + { + Config: configAdditional(projectID, indexName, databaseName, clusterName, incorrectFormatAnalyzersTF), + ExpectError: regexp.MustCompile("cannot unmarshal search index attribute `analyzers` because it 
has an incorrect format"), + }, }, }) } @@ -158,11 +163,11 @@ func basicTestCase(tb testing.TB) *resource.TestCase { CheckDestroy: acc.CheckDestroySearchIndex, Steps: []resource.TestStep{ { - Config: configBasic(projectID, clusterName, indexName, "", databaseName), - Check: checkBasic(projectID, clusterName, indexName, "", databaseName), + Config: configBasic(projectID, clusterName, indexName, "", databaseName, ""), + Check: checkBasic(projectID, clusterName, indexName, "", databaseName, ""), }, { - Config: configBasic(projectID, clusterName, indexName, "", databaseName), + Config: configBasic(projectID, clusterName, indexName, "", databaseName, ""), ResourceName: resourceName, ImportStateIdFunc: importStateIDFunc(resourceName), ImportState: true, @@ -172,6 +177,74 @@ func basicTestCase(tb testing.TB) *resource.TestCase { } } +func TestAccSearchIndex_withStoredSourceFalse(t *testing.T) { + resource.ParallelTest(t, *storedSourceTestCase(t, "false")) +} + +func TestAccSearchIndex_withStoredSourceTrue(t *testing.T) { + resource.ParallelTest(t, *storedSourceTestCase(t, "true")) +} + +func TestAccSearchIndex_withStoredSourceInclude(t *testing.T) { + resource.ParallelTest(t, *storedSourceTestCase(t, storedSourceIncludeJSON)) +} + +func TestAccSearchIndex_withStoredSourceExclude(t *testing.T) { + resource.ParallelTest(t, *storedSourceTestCase(t, storedSourceExcludeJSON)) +} + +func TestAccSearchIndex_withStoredSourceUpdateEmptyType(t *testing.T) { + resource.ParallelTest(t, *storedSourceTestCaseUpdate(t, "")) +} + +func TestAccSearchIndex_withStoredSourceUpdateSearchType(t *testing.T) { + resource.ParallelTest(t, *storedSourceTestCaseUpdate(t, "search")) +} + +func storedSourceTestCase(tb testing.TB, storedSource string) *resource.TestCase { + tb.Helper() + var ( + projectID, clusterName = acc.ClusterNameExecution(tb) + indexName = acc.RandomName() + databaseName = acc.RandomName() + ) + return &resource.TestCase{ + PreCheck: func() { acc.PreCheckBasic(tb) }, + 
ProtoV6ProviderFactories: acc.TestAccProviderV6Factories, + CheckDestroy: acc.CheckDestroySearchIndex, + Steps: []resource.TestStep{ + { + Config: configBasic(projectID, clusterName, indexName, "search", databaseName, storedSource), + Check: checkBasic(projectID, clusterName, indexName, "search", databaseName, storedSource), + }, + }, + } +} + +func storedSourceTestCaseUpdate(tb testing.TB, searchType string) *resource.TestCase { + tb.Helper() + var ( + projectID, clusterName = acc.ClusterNameExecution(tb) + indexName = acc.RandomName() + databaseName = acc.RandomName() + ) + return &resource.TestCase{ + PreCheck: func() { acc.PreCheckBasic(tb) }, + ProtoV6ProviderFactories: acc.TestAccProviderV6Factories, + CheckDestroy: acc.CheckDestroySearchIndex, + Steps: []resource.TestStep{ + { + Config: configBasic(projectID, clusterName, indexName, searchType, databaseName, "false"), + Check: checkBasic(projectID, clusterName, indexName, searchType, databaseName, "false"), + }, + { + Config: configBasic(projectID, clusterName, indexName, searchType, databaseName, "true"), + Check: checkBasic(projectID, clusterName, indexName, searchType, databaseName, "true"), + }, + }, + } +} + func basicVectorTestCase(tb testing.TB) *resource.TestCase { tb.Helper() var ( @@ -233,11 +306,19 @@ func checkExists(resourceName string) resource.TestCheckFunc { } } -func configBasic(projectID, clusterName, indexName, indexType, databaseName string) string { - var indexTypeStr string +func configBasic(projectID, clusterName, indexName, indexType, databaseName, storedSource string) string { + var extra string if indexType != "" { - indexTypeStr = fmt.Sprintf("type=%q", indexType) + extra += fmt.Sprintf("type=%q\n", indexType) } + if storedSource != "" { + if storedSource == "true" || storedSource == "false" { + extra += fmt.Sprintf("stored_source=%q\n", storedSource) + } else { + extra += fmt.Sprintf("stored_source= <<-EOF\n%s\nEOF\n", storedSource) + } + } + return fmt.Sprintf(` resource 
"mongodbatlas_search_index" "test" { cluster_name = %[1]q @@ -255,12 +336,22 @@ func configBasic(projectID, clusterName, indexName, indexType, databaseName stri project_id = mongodbatlas_search_index.test.project_id index_id = mongodbatlas_search_index.test.index_id } - `, clusterName, projectID, indexName, databaseName, collectionName, searchAnalyzer, indexTypeStr) + `, clusterName, projectID, indexName, databaseName, collectionName, searchAnalyzer, extra) } -func checkBasic(projectID, clusterName, indexName, indexType, databaseName string) resource.TestCheckFunc { +func checkBasic(projectID, clusterName, indexName, indexType, databaseName, storedSource string) resource.TestCheckFunc { mappingsDynamic := "true" - return checkAggr(projectID, clusterName, indexName, indexType, databaseName, mappingsDynamic) + checks := []resource.TestCheckFunc{ + resource.TestCheckResourceAttr(resourceName, "stored_source", storedSource), + resource.TestCheckResourceAttr(datasourceName, "stored_source", storedSource), + } + if storedSource != "" && storedSource != "true" && storedSource != "false" { + checks = []resource.TestCheckFunc{ + resource.TestCheckResourceAttrWith(resourceName, "stored_source", acc.JSONEquals(storedSource)), + resource.TestCheckResourceAttrWith(datasourceName, "stored_source", acc.JSONEquals(storedSource)), + } + } + return checkAggr(projectID, clusterName, indexName, indexType, databaseName, mappingsDynamic, checks...) 
} func configWithMapping(projectID, indexName, databaseName, clusterName string) string { @@ -437,8 +528,9 @@ const ( with = true without = false - analyzersTF = "\nanalyzers = <<-EOF\n" + analyzersJSON + "\nEOF\n" - mappingsFieldsTF = "\nmappings_fields = <<-EOF\n" + mappingsFieldsJSON + "\nEOF\n" + analyzersTF = "\nanalyzers = <<-EOF\n" + analyzersJSON + "\nEOF\n" + incorrectFormatAnalyzersTF = "\nanalyzers = <<-EOF\n" + incorrectFormatAnalyzersJSON + "\nEOF\n" + mappingsFieldsTF = "\nmappings_fields = <<-EOF\n" + mappingsFieldsJSON + "\nEOF\n" analyzersJSON = ` [ @@ -466,7 +558,21 @@ const ( ] } ] -` + ` + + incorrectFormatAnalyzersJSON = ` + [ + { + "wrongField":[ + { + "type":"length", + "min":20, + "max":33 + } + ] + } + ] + ` mappingsFieldsJSON = ` { @@ -509,4 +615,16 @@ const ( "similarity": "euclidean" }] ` + + storedSourceIncludeJSON = ` + { + "include": ["include1","include2"] + } + ` + + storedSourceExcludeJSON = ` + { + "exclude": ["exclude1", "exclude2"] + } + ` ) diff --git a/internal/testutil/acc/advanced_cluster.go b/internal/testutil/acc/advanced_cluster.go index 31c6b27a04..45ccad7a9e 100644 --- a/internal/testutil/acc/advanced_cluster.go +++ b/internal/testutil/acc/advanced_cluster.go @@ -40,51 +40,6 @@ func CheckDestroyCluster(s *terraform.State) error { return nil } -func ConfigClusterGlobal(orgID, projectName, clusterName string) string { - return fmt.Sprintf(` - - resource "mongodbatlas_project" "test" { - org_id = %[1]q - name = %[2]q - } - - resource "mongodbatlas_cluster" test { - project_id = mongodbatlas_project.test.id - name = %[3]q - disk_size_gb = 80 - num_shards = 1 - cloud_backup = false - cluster_type = "GEOSHARDED" - - // Provider Settings "block" - provider_name = "AWS" - provider_instance_size_name = "M30" - - replication_specs { - zone_name = "Zone 1" - num_shards = 2 - regions_config { - region_name = "US_EAST_1" - electable_nodes = 3 - priority = 7 - read_only_nodes = 0 - } - } - - replication_specs { - zone_name = "Zone 2" 
- num_shards = 2 - regions_config { - region_name = "US_WEST_2" - electable_nodes = 3 - priority = 7 - read_only_nodes = 0 - } - } - } - `, orgID, projectName, clusterName) -} - func ImportStateClusterIDFunc(resourceName string) resource.ImportStateIdFunc { return func(s *terraform.State) (string, error) { rs, ok := s.RootModule().Resources[resourceName] diff --git a/internal/testutil/acc/cluster.go b/internal/testutil/acc/cluster.go index c581f88464..45542aefaa 100644 --- a/internal/testutil/acc/cluster.go +++ b/internal/testutil/acc/cluster.go @@ -6,95 +6,179 @@ import ( "testing" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" + "go.mongodb.org/atlas-sdk/v20240530002/admin" ) +// ClusterRequest contains configuration for a cluster where all fields are optional and AddDefaults is used for required fields. +// Used together with GetClusterInfo which will set ProjectID if it is unset. type ClusterRequest struct { - ProviderName string - ExtraConfig string + Tags map[string]string + ProjectID string + ResourceSuffix string + AdvancedConfiguration map[string]any ResourceDependencyName string + ClusterName string + MongoDBMajorVersion string + ReplicationSpecs []ReplicationSpecRequest + DiskSizeGb int CloudBackup bool Geosharded bool + RetainBackupsEnabled bool + PitEnabled bool +} + +// AddDefaults ensures the required fields are populated to generate a resource. 
+func (r *ClusterRequest) AddDefaults() { + if r.ResourceSuffix == "" { + r.ResourceSuffix = defaultClusterResourceSuffix + } + if len(r.ReplicationSpecs) == 0 { + r.ReplicationSpecs = []ReplicationSpecRequest{{}} + } + if r.ClusterName == "" { + r.ClusterName = RandomClusterName() + } +} + +func (r *ClusterRequest) ClusterType() string { + if r.Geosharded { + return "GEOSHARDED" + } + return "REPLICASET" } type ClusterInfo struct { - ProjectIDStr string - ProjectID string - ClusterName string - ClusterNameStr string - ClusterTerraformStr string + ProjectID string + Name string + ResourceName string + TerraformNameRef string + TerraformStr string } +const defaultClusterResourceSuffix = "cluster_info" + // GetClusterInfo is used to obtain a project and cluster configuration resource. -// When `MONGODB_ATLAS_CLUSTER_NAME` and `MONGODB_ATLAS_PROJECT_ID` are defined, creation of resources is avoided. This is useful for local execution but not intended for CI executions. -// Clusters will be created in project ProjectIDExecution. +// When `MONGODB_ATLAS_CLUSTER_NAME` and `MONGODB_ATLAS_PROJECT_ID` are defined, a data source is created instead. This is useful for local execution but not intended for CI executions. +// Clusters will be created in project ProjectIDExecution or in req.ProjectID, which can be either a direct id, e.g., `664610ec80cc36255e634074`, or a config reference `mongodbatlas_project.test.id`.
func GetClusterInfo(tb testing.TB, req *ClusterRequest) ClusterInfo { tb.Helper() if req == nil { req = new(ClusterRequest) } - if req.ProviderName == "" { - req.ProviderName = constant.AWS - } - clusterName := os.Getenv("MONGODB_ATLAS_CLUSTER_NAME") - projectID := os.Getenv("MONGODB_ATLAS_PROJECT_ID") - if clusterName != "" && projectID != "" { - return ClusterInfo{ - ProjectIDStr: fmt.Sprintf("%q", projectID), - ProjectID: projectID, - ClusterName: clusterName, - ClusterNameStr: fmt.Sprintf("%q", clusterName), - ClusterTerraformStr: "", + hclCreator := ClusterResourceHcl + if req.ProjectID == "" { + if ExistingClusterUsed() { + projectID, clusterName := existingProjectIDClusterName() + req.ProjectID = projectID + req.ClusterName = clusterName + hclCreator = ClusterDatasourceHcl + } else { + req.ProjectID = ProjectIDExecution(tb) } } - projectID = ProjectIDExecution(tb) - clusterName = RandomClusterName() - clusterTypeStr := "REPLICASET" - if req.Geosharded { - clusterTypeStr = "GEOSHARDED" - } - dependsOnClause := "" - if req.ResourceDependencyName != "" { - dependsOnClause = fmt.Sprintf(` - depends_on = [ - %[1]s - ] - `, req.ResourceDependencyName) - } - clusterTerraformStr := fmt.Sprintf(` - resource "mongodbatlas_cluster" "test_cluster" { - project_id = %[1]q - name = %[2]q - cloud_backup = %[3]t - auto_scaling_disk_gb_enabled = false - provider_name = %[4]q - provider_instance_size_name = "M10" - - cluster_type = %[5]q - replication_specs { - num_shards = 1 - zone_name = "Zone 1" - regions_config { - region_name = "US_WEST_2" - electable_nodes = 3 - priority = 7 - read_only_nodes = 0 - } - } - %[6]s - %[7]s - } - `, projectID, clusterName, req.CloudBackup, req.ProviderName, clusterTypeStr, req.ExtraConfig, dependsOnClause) + clusterTerraformStr, clusterName, clusterResourceName, err := hclCreator(req) + if err != nil { + tb.Error(err) + } return ClusterInfo{ - ProjectIDStr: fmt.Sprintf("%q", projectID), - ProjectID: projectID, - ClusterName: clusterName, - 
ClusterNameStr: "mongodbatlas_cluster.test_cluster.name", - ClusterTerraformStr: clusterTerraformStr, + ProjectID: req.ProjectID, + Name: clusterName, + TerraformNameRef: fmt.Sprintf("%s.name", clusterResourceName), + ResourceName: clusterResourceName, + TerraformStr: clusterTerraformStr, } } func ExistingClusterUsed() bool { - clusterName := os.Getenv("MONGODB_ATLAS_CLUSTER_NAME") - projectID := os.Getenv("MONGODB_ATLAS_PROJECT_ID") + projectID, clusterName := existingProjectIDClusterName() return clusterName != "" && projectID != "" } + +func existingProjectIDClusterName() (projectID, clusterName string) { + return os.Getenv("MONGODB_ATLAS_PROJECT_ID"), os.Getenv("MONGODB_ATLAS_CLUSTER_NAME") +} + +// ReplicationSpecRequest can be used to customize the ReplicationSpecs of a Cluster. +// No fields are required. +// Use `ExtraRegionConfigs` to specify multiple region configs. +type ReplicationSpecRequest struct { + ZoneName string + Region string + InstanceSize string + ProviderName string + EbsVolumeType string + ExtraRegionConfigs []ReplicationSpecRequest + NodeCount int + NodeCountReadOnly int + Priority int + AutoScalingDiskGbEnabled bool +} + +func (r *ReplicationSpecRequest) AddDefaults() { + if r.Priority == 0 { + r.Priority = 7 + } + if r.NodeCount == 0 { + r.NodeCount = 3 + } + if r.ZoneName == "" { + r.ZoneName = "Zone 1" + } + if r.Region == "" { + r.Region = "US_WEST_2" + } + if r.InstanceSize == "" { + r.InstanceSize = "M10" + } + if r.ProviderName == "" { + r.ProviderName = constant.AWS + } +} + +func (r *ReplicationSpecRequest) AllRegionConfigs() []admin.CloudRegionConfig20250101 { + config := cloudRegionConfig(*r) + configs := []admin.CloudRegionConfig20250101{config} + for i := range r.ExtraRegionConfigs { + extra := r.ExtraRegionConfigs[i] + configs = append(configs, cloudRegionConfig(extra)) + } + return configs +} + +func replicationSpec(req *ReplicationSpecRequest) admin.ReplicationSpec20250101 { + if req == nil { + req = 
new(ReplicationSpecRequest) + } + req.AddDefaults() + regionConfigs := req.AllRegionConfigs() + return admin.ReplicationSpec20250101{ + ZoneName: &req.ZoneName, + RegionConfigs: ®ionConfigs, + } +} + +func cloudRegionConfig(req ReplicationSpecRequest) admin.CloudRegionConfig20250101 { + req.AddDefaults() + var readOnly admin.DedicatedHardwareSpec20250101 + if req.NodeCountReadOnly != 0 { + readOnly = admin.DedicatedHardwareSpec20250101{ + NodeCount: &req.NodeCountReadOnly, + InstanceSize: &req.InstanceSize, + } + } + return admin.CloudRegionConfig20250101{ + RegionName: &req.Region, + Priority: &req.Priority, + ProviderName: &req.ProviderName, + ElectableSpecs: &admin.HardwareSpec20250101{ + InstanceSize: &req.InstanceSize, + NodeCount: &req.NodeCount, + EbsVolumeType: conversion.StringPtr(req.EbsVolumeType), + }, + ReadOnlySpecs: &readOnly, + AutoScaling: &admin.AdvancedAutoScalingSettings{ + DiskGB: &admin.DiskGBAutoScaling{Enabled: &req.AutoScalingDiskGbEnabled}, + }, + } +} diff --git a/internal/testutil/acc/config_cluster.go b/internal/testutil/acc/config_cluster.go new file mode 100644 index 0000000000..b72c7f3be9 --- /dev/null +++ b/internal/testutil/acc/config_cluster.go @@ -0,0 +1,160 @@ +package acc + +import ( + "errors" + "fmt" + "strings" + + "github.com/hashicorp/hcl/v2/hclwrite" + "github.com/zclconf/go-cty/cty" + "go.mongodb.org/atlas-sdk/v20240530002/admin" +) + +func ClusterDatasourceHcl(req *ClusterRequest) (configStr, clusterName, resourceName string, err error) { + if req == nil || req.ProjectID == "" || req.ClusterName == "" { + return "", "", "", errors.New("must specify a ClusterRequest with at least ProjectID and ClusterName set") + } + req.AddDefaults() + f := hclwrite.NewEmptyFile() + root := f.Body() + resourceType := "mongodbatlas_advanced_cluster" + resourceSuffix := req.ResourceSuffix + cluster := root.AppendNewBlock("data", []string{resourceType, resourceSuffix}).Body() + clusterResourceName := fmt.Sprintf("data.%s.%s", resourceType, 
resourceSuffix) + clusterName = req.ClusterName + clusterRootAttributes := map[string]any{ + "name": clusterName, + } + projectID := req.ProjectID + if strings.Contains(req.ProjectID, ".") { + err = setAttributeHcl(cluster, fmt.Sprintf("project_id = %s", projectID)) + if err != nil { + return "", "", "", fmt.Errorf("failed to set project_id = %s", projectID) + } + } else { + clusterRootAttributes["project_id"] = projectID + } + addPrimitiveAttributes(cluster, clusterRootAttributes) + return "\n" + string(f.Bytes()), clusterName, clusterResourceName, err +} + +func ClusterResourceHcl(req *ClusterRequest) (configStr, clusterName, resourceName string, err error) { + if req == nil || req.ProjectID == "" { + return "", "", "", errors.New("must specify a ClusterRequest with at least ProjectID set") + } + projectID := req.ProjectID + req.AddDefaults() + specRequests := req.ReplicationSpecs + specs := make([]admin.ReplicationSpec20250101, len(specRequests)) + for i := range specRequests { + specRequest := specRequests[i] + specs[i] = replicationSpec(&specRequest) + } + clusterName = req.ClusterName + resourceSuffix := req.ResourceSuffix + clusterType := req.ClusterType() + + f := hclwrite.NewEmptyFile() + root := f.Body() + resourceType := "mongodbatlas_advanced_cluster" + cluster := root.AppendNewBlock("resource", []string{resourceType, resourceSuffix}).Body() + clusterRootAttributes := map[string]any{ + "cluster_type": clusterType, + "name": clusterName, + "backup_enabled": req.CloudBackup, + "pit_enabled": req.PitEnabled, + "mongo_db_major_version": req.MongoDBMajorVersion, + } + if strings.Contains(req.ProjectID, ".") { + err = setAttributeHcl(cluster, fmt.Sprintf("project_id = %s", projectID)) + if err != nil { + return "", "", "", fmt.Errorf("failed to set project_id = %s", projectID) + } + } else { + clusterRootAttributes["project_id"] = projectID + } + if req.DiskSizeGb != 0 { + clusterRootAttributes["disk_size_gb"] = req.DiskSizeGb + } + if 
replicationBlock.AppendNewline() + rcBlock := replicationBlock.AppendNewBlock("region_configs", nil).Body() + err = addPrimitiveAttributesViaJSON(rcBlock, rc) + if err != nil { + return err + } + autoScalingBlock := rcBlock.AppendNewBlock("auto_scaling", nil).Body() + if rc.AutoScaling == nil { + autoScalingBlock.SetAttributeValue("disk_gb_enabled", cty.BoolVal(false)) + } else { + autoScaling := rc.GetAutoScaling() + asDisk := autoScaling.GetDiskGB() + autoScalingBlock.SetAttributeValue("disk_gb_enabled", cty.BoolVal(asDisk.GetEnabled())) + if autoScaling.Compute != nil { + return fmt.Errorf("auto_scaling.compute is not supportd yet %v", autoScaling) + } + } + nodeSpec := rc.GetElectableSpecs() + nodeSpecBlock := rcBlock.AppendNewBlock("electable_specs", nil).Body() + err = addPrimitiveAttributesViaJSON(nodeSpecBlock, nodeSpec) + + readOnlySpecs := rc.GetReadOnlySpecs() + if readOnlySpecs.GetNodeCount() != 0 { + readOnlyBlock := rcBlock.AppendNewBlock("read_only_specs", nil).Body() + err = addPrimitiveAttributesViaJSON(readOnlyBlock, readOnlySpecs) + } + } + return err +} diff --git a/internal/testutil/acc/config_cluster_test.go b/internal/testutil/acc/config_cluster_test.go new file mode 100644 index 0000000000..306c0fc15d --- /dev/null +++ b/internal/testutil/acc/config_cluster_test.go @@ -0,0 +1,387 @@ +package acc_test + +import ( + "testing" + + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc" + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" +) + +var standardClusterResource = ` +resource "mongodbatlas_advanced_cluster" "cluster_info" { + backup_enabled = false + cluster_type = "REPLICASET" + name = "my-name" + pit_enabled = false + project_id = "project" + + replication_specs { + zone_name = "Zone 1" + + region_configs { + priority = 7 + provider_name = "AWS" + region_name = "US_WEST_2" + auto_scaling { + disk_gb_enabled = 
false + } + electable_specs { + instance_size = "M10" + node_count = 3 + } + } + } + +} +` +var overrideClusterResource = ` +resource "mongodbatlas_advanced_cluster" "cluster_info" { + project_id = mongodbatlas_project.test.id + backup_enabled = true + cluster_type = "GEOSHARDED" + mongo_db_major_version = "6.0" + name = "my-name" + pit_enabled = true + retain_backups_enabled = true + + advanced_configuration { + oplog_min_retention_hours = 8 + } + + replication_specs { + zone_name = "Zone X" + + region_configs { + priority = 7 + provider_name = "AZURE" + region_name = "MY_REGION_1" + auto_scaling { + disk_gb_enabled = false + } + electable_specs { + ebs_volume_type = "STANDARD" + instance_size = "M30" + node_count = 30 + } + } + } + +} +` + +var dependsOnClusterResource = ` +resource "mongodbatlas_advanced_cluster" "cluster_info" { + backup_enabled = false + cluster_type = "REPLICASET" + name = "my-name" + pit_enabled = false + project_id = "project" + + replication_specs { + zone_name = "Zone 1" + + region_configs { + priority = 7 + provider_name = "AWS" + region_name = "US_WEST_2" + auto_scaling { + disk_gb_enabled = false + } + electable_specs { + instance_size = "M10" + node_count = 3 + } + } + } + + depends_on = [mongodbatlas_project.project_execution] +} +` +var dependsOnMultiResource = ` +resource "mongodbatlas_advanced_cluster" "cluster_info" { + backup_enabled = false + cluster_type = "REPLICASET" + name = "my-name" + pit_enabled = false + project_id = "project" + + replication_specs { + zone_name = "Zone 1" + + region_configs { + priority = 7 + provider_name = "AWS" + region_name = "US_WEST_2" + auto_scaling { + disk_gb_enabled = false + } + electable_specs { + instance_size = "M10" + node_count = 3 + } + } + } + + depends_on = [mongodbatlas_private_endpoint_regional_mode.atlasrm, mongodbatlas_privatelink_endpoint_service.atlasple] +} +` +var twoReplicationSpecs = ` +resource "mongodbatlas_advanced_cluster" "cluster_info" { + backup_enabled = false + 
cluster_type = "REPLICASET" + name = "my-name" + pit_enabled = false + project_id = "project" + + replication_specs { + zone_name = "Zone 1" + + region_configs { + priority = 7 + provider_name = "AWS" + region_name = "US_WEST_1" + auto_scaling { + disk_gb_enabled = false + } + electable_specs { + instance_size = "M10" + node_count = 3 + } + } + } + replication_specs { + zone_name = "Zone 2" + + region_configs { + priority = 7 + provider_name = "AWS" + region_name = "EU_WEST_2" + auto_scaling { + disk_gb_enabled = false + } + electable_specs { + instance_size = "M10" + node_count = 3 + } + } + } + +} +` +var twoRegionConfigs = ` +resource "mongodbatlas_advanced_cluster" "cluster_info" { + backup_enabled = false + cluster_type = "REPLICASET" + name = "my-name" + pit_enabled = false + project_id = "project" + + replication_specs { + zone_name = "Zone 1" + + region_configs { + priority = 7 + provider_name = "AWS" + region_name = "US_WEST_1" + auto_scaling { + disk_gb_enabled = false + } + electable_specs { + instance_size = "M10" + node_count = 3 + } + } + + region_configs { + priority = 7 + provider_name = "AWS" + region_name = "EU_WEST_1" + auto_scaling { + disk_gb_enabled = false + } + electable_specs { + instance_size = "M10" + node_count = 3 + } + } + } + +} +` + +var autoScalingDiskEnabled = ` +resource "mongodbatlas_advanced_cluster" "cluster_info" { + backup_enabled = false + cluster_type = "REPLICASET" + name = "my-name" + pit_enabled = false + project_id = "project" + + replication_specs { + zone_name = "Zone 1" + + region_configs { + priority = 7 + provider_name = "AWS" + region_name = "US_WEST_2" + auto_scaling { + disk_gb_enabled = true + } + electable_specs { + instance_size = "M10" + node_count = 3 + } + } + } + tags { + key = "ArchiveTest" + value = "true" + } + tags { + key = "Owner" + value = "test" + } + +} +` +var readOnlyAndPriority = ` +resource "mongodbatlas_advanced_cluster" "cluster_info" { + backup_enabled = false + cluster_type = "REPLICASET" 
+ name = "my-name" + pit_enabled = false + project_id = "project" + + replication_specs { + zone_name = "Zone 1" + + region_configs { + priority = 5 + provider_name = "AWS" + region_name = "US_EAST_1" + auto_scaling { + disk_gb_enabled = false + } + electable_specs { + instance_size = "M10" + node_count = 5 + } + read_only_specs { + instance_size = "M10" + node_count = 1 + } + } + } + +} +` + +func Test_ClusterResourceHcl(t *testing.T) { + var ( + clusterName = "my-name" + testCases = map[string]struct { + expected string + req acc.ClusterRequest + }{ + "defaults": { + standardClusterResource, + acc.ClusterRequest{ClusterName: clusterName}, + }, + "dependsOn": { + dependsOnClusterResource, + acc.ClusterRequest{ClusterName: clusterName, ResourceDependencyName: "mongodbatlas_project.project_execution"}, + }, + "dependsOnMulti": { + dependsOnMultiResource, + acc.ClusterRequest{ClusterName: clusterName, ResourceDependencyName: "mongodbatlas_private_endpoint_regional_mode.atlasrm, mongodbatlas_privatelink_endpoint_service.atlasple"}, + }, + "twoReplicationSpecs": { + twoReplicationSpecs, + acc.ClusterRequest{ClusterName: clusterName, ReplicationSpecs: []acc.ReplicationSpecRequest{ + {Region: "US_WEST_1", ZoneName: "Zone 1"}, + {Region: "EU_WEST_2", ZoneName: "Zone 2"}, + }}, + }, + "overrideClusterResource": { + overrideClusterResource, + acc.ClusterRequest{ + ProjectID: "mongodbatlas_project.test.id", + ClusterName: clusterName, + Geosharded: true, + CloudBackup: true, + MongoDBMajorVersion: "6.0", + RetainBackupsEnabled: true, + ReplicationSpecs: []acc.ReplicationSpecRequest{ + {Region: "MY_REGION_1", ZoneName: "Zone X", InstanceSize: "M30", NodeCount: 30, ProviderName: constant.AZURE, EbsVolumeType: "STANDARD"}, + }, + PitEnabled: true, + AdvancedConfiguration: map[string]any{ + acc.ClusterAdvConfigOplogMinRetentionHours: 8, + }, + }, + }, + "twoRegionConfigs": { + twoRegionConfigs, + acc.ClusterRequest{ClusterName: clusterName, ReplicationSpecs: 
[]acc.ReplicationSpecRequest{ + { + Region: "US_WEST_1", + InstanceSize: "M10", + NodeCount: 3, + ExtraRegionConfigs: []acc.ReplicationSpecRequest{{Region: "EU_WEST_1", InstanceSize: "M10", NodeCount: 3, ProviderName: constant.AWS}}, + }, + }, + }, + }, + "autoScalingDiskEnabled": { + autoScalingDiskEnabled, + acc.ClusterRequest{ClusterName: clusterName, Tags: map[string]string{ + "ArchiveTest": "true", "Owner": "test", + }, ReplicationSpecs: []acc.ReplicationSpecRequest{ + {AutoScalingDiskGbEnabled: true}, + }}, + }, + "readOnlyAndPriority": { + readOnlyAndPriority, + acc.ClusterRequest{ + ClusterName: clusterName, + ReplicationSpecs: []acc.ReplicationSpecRequest{ + {Priority: 5, NodeCount: 5, Region: "US_EAST_1", NodeCountReadOnly: 1}, + }}, + }, + } + ) + for name, tc := range testCases { + t.Run(name, func(t *testing.T) { + req := tc.req + if req.ProjectID == "" { + req.ProjectID = "project" + } + config, actualClusterName, actualResourceName, err := acc.ClusterResourceHcl(&req) + require.NoError(t, err) + assert.Equal(t, "mongodbatlas_advanced_cluster.cluster_info", actualResourceName) + assert.Equal(t, clusterName, actualClusterName) + assert.Equal(t, tc.expected, config) + }) + } +} + +var expectedDatasource = ` +data "mongodbatlas_advanced_cluster" "cluster_info" { + name = "my-datasource-cluster" + project_id = "datasource-project" +} +` + +func Test_ClusterDatasourceHcl(t *testing.T) { + expectedClusterName := "my-datasource-cluster" + config, clusterName, resourceName, err := acc.ClusterDatasourceHcl(&acc.ClusterRequest{ + ClusterName: expectedClusterName, + ProjectID: "datasource-project", + }) + require.NoError(t, err) + assert.Equal(t, "data.mongodbatlas_advanced_cluster.cluster_info", resourceName) + assert.Equal(t, expectedClusterName, clusterName) + assert.Equal(t, expectedDatasource, config) +} diff --git a/internal/testutil/acc/config_formatter.go b/internal/testutil/acc/config_formatter.go index 93b9e40ced..6ee705c87f 100644 --- 
a/internal/testutil/acc/config_formatter.go +++ b/internal/testutil/acc/config_formatter.go @@ -1,9 +1,16 @@ package acc import ( + "encoding/json" "fmt" + "regexp" "sort" "strings" + + "github.com/hashicorp/hcl/v2" + "github.com/hashicorp/hcl/v2/hclsyntax" + "github.com/hashicorp/hcl/v2/hclwrite" + "github.com/zclconf/go-cty/cty" ) func FormatToHCLMap(m map[string]string, indent, varName string) string { @@ -41,7 +48,6 @@ func FormatToHCLLifecycleIgnore(keys ...string) string { return strings.Join(lines, "\n") } -// make test deterministic func sortStringMapKeys(m map[string]string) []string { keys := make([]string, 0, len(m)) for k := range m { @@ -50,3 +56,107 @@ func sortStringMapKeys(m map[string]string) []string { sort.Strings(keys) return keys } +func sortStringMapKeysAny(m map[string]any) []string { + keys := make([]string, 0, len(m)) + for k := range m { + keys = append(keys, k) + } + sort.Strings(keys) + return keys +} + +var matchFirstCap = regexp.MustCompile("(.)([A-Z][a-z]+)") +var matchAllCap = regexp.MustCompile("([a-z0-9])([A-Z])") + +func ToSnakeCase(str string) string { + snake := matchFirstCap.ReplaceAllString(str, "${1}_${2}") + snake = matchAllCap.ReplaceAllString(snake, "${1}_${2}") + return strings.ToLower(snake) +} + +var ( + ClusterAdvConfigOplogMinRetentionHours = "oplog_min_retention_hours" + knownAdvancedConfig = map[string]bool{ + ClusterAdvConfigOplogMinRetentionHours: true, + } +) + +// addPrimitiveAttributesViaJSON adds "primitive" bool/string/int/float attributes of a struct. 
+func addPrimitiveAttributesViaJSON(b *hclwrite.Body, obj any) error { + var objMap map[string]any + inrec, err := json.Marshal(obj) + if err != nil { + return err + } + err = json.Unmarshal(inrec, &objMap) + if err != nil { + return err + } + addPrimitiveAttributes(b, objMap) + return nil +} + +func addPrimitiveAttributes(b *hclwrite.Body, values map[string]any) { + for _, keyCamel := range sortStringMapKeysAny(values) { + key := ToSnakeCase(keyCamel) + value := values[keyCamel] + switch value := value.(type) { + case bool: + b.SetAttributeValue(key, cty.BoolVal(value)) + case string: + if value != "" { + b.SetAttributeValue(key, cty.StringVal(value)) + } + case int: + b.SetAttributeValue(key, cty.NumberIntVal(int64(value))) + // int gets parsed as float64 for json + case float64: + b.SetAttributeValue(key, cty.NumberIntVal(int64(value))) + default: + continue + } + } +} + +// Sometimes it is easier to set a value using hcl/tf syntax instead of creating complex values like list hcl.Traversal. 
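`addPrimitiveAttributes` above maps the SDK's camel-case JSON field names onto snake-case HCL attribute names via `ToSnakeCase`. A standalone sketch of that two-pass regex conversion (same regexes as the patch, lowercase `toSnakeCase` name used here so the sketch compiles on its own), with sample conversions drawn from fields used in this patch:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Same two-pass conversion as acc.ToSnakeCase in the patch: first split an
// uppercase-then-lowercase run from the preceding character, then split any
// lowercase/digit-to-uppercase boundary, and finally lowercase the result.
var (
	matchFirstCap = regexp.MustCompile("(.)([A-Z][a-z]+)")
	matchAllCap   = regexp.MustCompile("([a-z0-9])([A-Z])")
)

func toSnakeCase(str string) string {
	snake := matchFirstCap.ReplaceAllString(str, "${1}_${2}")
	snake = matchAllCap.ReplaceAllString(snake, "${1}_${2}")
	return strings.ToLower(snake)
}

func main() {
	// Struct field names from the Atlas SDK are camel case; the generated
	// HCL attributes are snake case.
	fmt.Println(toSnakeCase("NodeCount"))           // node_count
	fmt.Println(toSnakeCase("DiskSizeGb"))          // disk_size_gb
	fmt.Println(toSnakeCase("MongoDBMajorVersion")) // mongo_db_major_version
}
```

The second regex is what keeps acronym runs like `DB` from collapsing: after the first pass produces `MongoDB_Major_Version`, the lower-to-upper boundary in `oD` is split before lowercasing.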
+func setAttributeHcl(body *hclwrite.Body, tfExpression string) error { + src := []byte(tfExpression) + + f, diags := hclwrite.ParseConfig(src, "", hcl.Pos{Line: 1, Column: 1}) + if diags.HasErrors() { + return fmt.Errorf("extract attribute error %s\nparsing %s", diags, tfExpression) + } + expressionAttributes := f.Body().Attributes() + if len(expressionAttributes) != 1 { + return fmt.Errorf("must be a single attribute in expression: %s", tfExpression) + } + tokens := hclwrite.Tokens{} + for _, attr := range expressionAttributes { + tokens = attr.BuildTokens(tokens) + } + if len(tokens) == 0 { + return fmt.Errorf("no tokens found for expression %s", tfExpression) + } + var attributeName string + valueTokens := []*hclwrite.Token{} + equalFound := false + for _, token := range tokens { + if attributeName == "" && token.Type == hclsyntax.TokenIdent { + attributeName = string(token.Bytes) + } + if equalFound { + valueTokens = append(valueTokens, token) + } + if token.Type == hclsyntax.TokenEqual { + equalFound = true + } + } + if attributeName == "" { + return fmt.Errorf("unable to find the attribute name set for expr=%s", tfExpression) + } + if len(valueTokens) == 0 { + return fmt.Errorf("unable to find the attribute value set for expr=%s", tfExpression) + } + body.SetAttributeRaw(attributeName, valueTokens) + return nil +} diff --git a/internal/testutil/acc/pre_check.go b/internal/testutil/acc/pre_check.go index 97f91a1d7b..339f092e05 100644 --- a/internal/testutil/acc/pre_check.go +++ b/internal/testutil/acc/pre_check.go @@ -180,6 +180,14 @@ func PreCheckAwsEnv(tb testing.TB) { } } +func PreCheckAwsRegionCases(tb testing.TB) { + tb.Helper() + if os.Getenv("AWS_REGION_UPPERCASE") == "" || + os.Getenv("AWS_REGION_LOWERCASE") == "" { + tb.Fatal("`AWS_REGION_UPPERCASE`, `AWS_REGION_LOWERCASE` must be set for acceptance testing") + } +} + func PreCheckAwsEnvPrivateLinkEndpointService(tb testing.TB) { tb.Helper() if os.Getenv("AWS_ACCESS_KEY_ID") == "" || diff --git 
a/modules/examples/atlas-basic/main.tf b/modules/examples/atlas-basic/main.tf
deleted file mode 100644
index 1fb35f72a0..0000000000
--- a/modules/examples/atlas-basic/main.tf
+++ /dev/null
@@ -1,23 +0,0 @@
-module "atlas-basic" {
-  source = "../../terraform-mongodbatlas-basic"
-
-  public_key   = ""
-  private_key  = ""
-  atlas_org_id = ""
-
-  database_name  = ["test1","test2"]
-  db_users       = ["user1","user2"]
-  db_passwords   = ["",""]
-  database_names = ["test-db1","test-db2"]
-  region         = "US_EAST_1"
-
-  aws_vpc_cidr_block     = "1.0.0.0/16"
-  aws_vpc_egress         = "0.0.0.0/0"
-  aws_vpc_ingress        = "0.0.0.0/0"
-  aws_subnet_cidr_block1 = "1.0.1.0/24"
-  aws_subnet_cidr_block2 = "1.0.2.0/24"
-
-  cidr_block = ["10.1.0.0/16","12.2.0.0/16"]
-  ip_address = ["208.169.90.207","63.167.210.250"]
-
-}
\ No newline at end of file
diff --git a/modules/examples/atlas-basic/versions.tf b/modules/examples/atlas-basic/versions.tf
deleted file mode 100644
index 1d70a22799..0000000000
--- a/modules/examples/atlas-basic/versions.tf
+++ /dev/null
@@ -1,10 +0,0 @@
-terraform {
-  required_version = ">= 1.0"
-
-  required_providers {
-    aws = {
-      source  = "hashicorp/aws"
-      version = ">= 5.0"
-    }
-  }
-}
\ No newline at end of file
diff --git a/modules/examples/sagemaker/main.tf b/modules/examples/sagemaker/main.tf
deleted file mode 100644
index f295a7684a..0000000000
--- a/modules/examples/sagemaker/main.tf
+++ /dev/null
@@ -1,26 +0,0 @@
-
-# NOTE:
-# go through the sagemaker-example/README.md file to create prerequisites and pass the inputs for the below
-
-
-module "mongodb-atlas-analytics-amazon-sagemaker-integration" {
-  source = "../../terraform-mongodbatlas-amazon-sagemaker-integration"
-
-  public_key   = ""
-  private_key  = ""
-  atlas_org_id = ""
-
-  atlas_project_id = ""
-  realm_app_id     = ""
-  database_name    = ""
-  collection_name  = ""
-  service_id       = ""
-
-  trigger_name = ""
-
-  model_ecr_image_uri       = ""
-  pull_lambda_ecr_image_uri = ""
-  model_data_s3_uri         = ""
-  push_lambda_ecr_image_uri = ""
-  mongo_endpoint            = ""
-}
diff --git a/modules/examples/sagemaker/versions.tf b/modules/examples/sagemaker/versions.tf
deleted file mode 100644
index 1d70a22799..0000000000
--- a/modules/examples/sagemaker/versions.tf
+++ /dev/null
@@ -1,10 +0,0 @@
-terraform {
-  required_version = ">= 1.0"
-
-  required_providers {
-    aws = {
-      source  = "hashicorp/aws"
-      version = ">= 5.0"
-    }
-  }
-}
\ No newline at end of file
diff --git a/modules/terraform-mongodbatlas-amazon-sagemaker-integration/README.md b/modules/terraform-mongodbatlas-amazon-sagemaker-integration/README.md
deleted file mode 100644
index 5f545d0150..0000000000
--- a/modules/terraform-mongodbatlas-amazon-sagemaker-integration/README.md
+++ /dev/null
@@ -1,21 +0,0 @@
-# quickstart-mongodb-atlas-analytics-amazon-sagemaker-integration
-
-## Overview
-
-![simple-quickstart-arch](https://user-images.githubusercontent.com/5663078/229119386-0dbc6e30-a060-465e-86dd-f89712b0fc49.png)
-
-This Partner Solutions template enables you to begin working with your machine learning models using MongoDB Atlas Cluster and Amazon SageMaker endpoints. With this template, you can utilize MongoDB as a data source and SageMaker for data analysis, streamlining the process of building and deploying machine learning models.
-
-
-## MongoDB Atlas terraform Resources used by the templates
-
-- [mongodbatlas_event_trigger](../../mongodbatlas/data_source_mongodbatlas_event_trigger.go)
-
-
-## Environment Configured by the Partner Solutions template
-The Partner Solutions template will generate and configure the following resources:
- - a [MongoDB Partner Event Bus](http://mongodb.com/docs/atlas/app-services/triggers/aws-eventbridge/#std-label-aws-eventbridge)
- - a [database trigger](https://www.mongodb.com/docs/atlas/app-services/triggers/database-triggers/) with your Atlas Cluster
- - lambda functions to run the machine learning model and send the classification results to your MongoDB Atlas Cluster. (See [iris_classifier](https://github.com/mongodb/mongodbatlas-cloudformation-resources/tree/master/examples/quickstart-mongodb-atlas-analytics-amazon-sagemaker-integration/sagemaker-example/iris_classifier) for an example of machine learning model to use with this template. See [lambda_functions](https://github.com/mongodb/mongodbatlas-cloudformation-resources/tree/master/examples/quickstart-mongodb-atlas-analytics-amazon-sagemaker-integration/sagemaker-example/lambda_functions) for an example of lambda functions to use to read and write data to your MongoDB Atlas cluster.)
-
-
diff --git a/modules/terraform-mongodbatlas-amazon-sagemaker-integration/outputs.tf b/modules/terraform-mongodbatlas-amazon-sagemaker-integration/outputs.tf
deleted file mode 100644
index d19a8d32b1..0000000000
--- a/modules/terraform-mongodbatlas-amazon-sagemaker-integration/outputs.tf
+++ /dev/null
@@ -1,10 +0,0 @@
-
-output "sage_maker_endpoint_arn" {
-  description = "SageMaker endpoint ARN"
-  value       = aws_sagemaker_endpoint.endpoint.arn
-}
-
-output "event_bus_name" {
-  description = "Event Bus Name"
-  value       = aws_cloudwatch_event_bus.event_bus_for_capturing_mdb_events.arn
-}
diff --git a/modules/terraform-mongodbatlas-amazon-sagemaker-integration/sagemaker.tf b/modules/terraform-mongodbatlas-amazon-sagemaker-integration/sagemaker.tf
deleted file mode 100644
index 6a8ac985ca..0000000000
--- a/modules/terraform-mongodbatlas-amazon-sagemaker-integration/sagemaker.tf
+++ /dev/null
@@ -1,280 +0,0 @@
-provider "mongodbatlas" {
-  public_key  = var.public_key
-  private_key = var.private_key
-}
-
-data "aws_partition" "current" {}
-
-data "aws_region" "current" {}
-
-data "aws_caller_identity" "current" {}
-
-
-resource "mongodbatlas_event_trigger" "trigger" {
-  project_id = var.atlas_project_id
-  name       = var.trigger_name
-  type       = "DATABASE"
-  app_id     = var.realm_app_id
-
-  config_database        = var.database_name
-  config_collection      = var.collection_name
-  config_operation_types = ["INSERT"]
-  config_service_id    = var.service_id
-  config_full_document = true
-
-  event_processors {
-    aws_eventbridge {
-      config_region     = data.aws_region.current.name
-      config_account_id = data.aws_caller_identity.current.account_id
-    }
-  }
-}
-
-resource "aws_iam_role" "sage_maker_execution_role" {
-  assume_role_policy = jsonencode({
-    Version = "2012-10-17"
-    Statement = [
-      {
-        Effect = "Allow"
-        Principal = {
-          Service = [
-            "sagemaker.amazonaws.com"
-          ]
-        }
-        Action = [
-          "sts:AssumeRole"
-        ]
-      }
-    ]
-  })
-  path = "/"
-  managed_policy_arns = [
-    "arn:${data.aws_partition.current.partition}:iam::aws:policy/AmazonSageMakerFullAccess",
-    "arn:${data.aws_partition.current.partition}:iam::aws:policy/AmazonSageMakerCanvasFullAccess"
-  ]
-
-  inline_policy {
-    name = "qs-sagemaker-execution-policy"
-    policy = jsonencode({
-      Version = "2012-10-17",
-      Statement = [
-        {
-          Effect   = "Allow",
-          Action   = "s3:GetObject",
-          Resource = "arn:${data.aws_partition.current.partition}:s3:::*"
-        }
-      ]
-    })
-  }
-}
-
-resource "aws_sagemaker_model" "model" {
-  primary_container {
-    image          = var.model_ecr_image_uri
-    model_data_url = var.model_data_s3_uri
-    mode           = "SingleModel"
-    environment = {
-      SAGEMAKER_PROGRAM          = "inference.py"
-      SAGEMAKER_SUBMIT_DIRECTORY = var.model_data_s3_uri
-    }
-  }
-  execution_role_arn = aws_iam_role.sage_maker_execution_role.arn
-}
-
-resource "aws_sagemaker_endpoint_configuration" "endpoint_config" {
-  production_variants {
-    initial_instance_count = 1
-    initial_variant_weight = 1.0
-    instance_type          = "ml.c5.large"
-    model_name             = aws_sagemaker_model.model.name
-    variant_name           = aws_sagemaker_model.model.name
-  }
-}
-
-resource "aws_sagemaker_endpoint" "endpoint" {
-  endpoint_config_name = aws_sagemaker_endpoint_configuration.endpoint_config.name
-}
-
-resource "aws_cloudwatch_event_bus" "event_bus_for_capturing_mdb_events" {
-  depends_on        = [mongodbatlas_event_trigger.trigger]
-  event_source_name = "aws.partner/mongodb.com/stitch.trigger/${mongodbatlas_event_trigger.trigger.trigger_id}"
-  name              = "aws.partner/mongodb.com/stitch.trigger/${mongodbatlas_event_trigger.trigger.trigger_id}"
-}
-
-resource "aws_cloudwatch_event_bus" "event_bus_for_sage_maker_results" {
-  name = "qs-mongodb-sagemaker-results"
-}
-
-resource "aws_lambda_function" "lambda_function_to_read_mdb_events" {
-  function_name = "pull-mdb-events"
-  package_type  = "Image"
-  image_uri     = var.pull_lambda_ecr_image_uri
-  role          = aws_iam_role.pull_lambda_function_role.arn
-  environment {
-    variables = {
-      model_endpoint = aws_sagemaker_endpoint.endpoint.name
-      region_name    = data.aws_region.current.name
-      eventbus_name  = aws_cloudwatch_event_bus.event_bus_for_sage_maker_results.arn
-    }
-  }
-  architectures = [
-    "x86_64"
-  ]
-  memory_size = 1024
-  timeout     = 300
-}
-
-resource "aws_cloudwatch_event_rule" "event_rule_to_match_mdb_events" {
-  description    = "Event Rule to match MongoDB change events."
-  event_bus_name = aws_cloudwatch_event_bus.event_bus_for_capturing_mdb_events.name
-  event_pattern = jsonencode({
-    account = [
-      data.aws_caller_identity.current.account_id
-    ]
-  })
-  is_enabled = true
-  name       = "pull-mdb-events"
-}
-
-resource "aws_cloudwatch_event_target" "read_mdb_event_target" {
-  event_bus_name = aws_cloudwatch_event_bus.event_bus_for_capturing_mdb_events.name
-  rule           = aws_cloudwatch_event_rule.event_rule_to_match_mdb_events.name
-  target_id      = "EventRuleToReadMatchMDBEventsID"
-  arn            = aws_lambda_function.lambda_function_to_read_mdb_events.arn
-}
-
-resource "aws_iam_role" "pull_lambda_function_role" {
-  assume_role_policy = jsonencode({
-    Version = "2012-10-17"
-    Statement = [
-      {
-        Effect = "Allow"
-        Principal = {
-          Service = [
-            "lambda.amazonaws.com"
-          ]
-        }
-        Action = [
-          "sts:AssumeRole"
-        ]
-      }
-    ]
-  })
-  path = "/"
-  managed_policy_arns = [
-    "arn:${data.aws_partition.current.partition}:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
-  ]
-  inline_policy {
-    name = "sagemaker-endpoint-invokation-policy"
-    policy = jsonencode({
-      Version = "2012-10-17",
-      Statement = [
-        {
-          Effect   = "Allow",
-          Action   = "sagemaker:InvokeEndpoint",
-          Resource = aws_sagemaker_endpoint.endpoint.arn
-        },
-        {
-          Effect   = "Allow",
-          Action   = "events:PutEvents",
-          Resource = aws_cloudwatch_event_bus.event_bus_for_sage_maker_results.arn
-        }
-      ]
-    })
-  }
-}
-
-resource "aws_lambda_function" "lambda_function_to_write_to_mdb" {
-  function_name = "push_lambda_function"
-  package_type  = "Image"
-  role          = aws_iam_role.push_lambda_function_role.arn
-  image_uri     = var.push_lambda_ecr_image_uri
-  environment {
-    variables = {
-      mongo_endpoint = var.mongo_endpoint
-      dbname         = var.database_name
-    }
-  }
-  architectures = [
-    "x86_64"
-  ]
-  memory_size = 1024
-  timeout     = 300
-}
-
-resource "aws_iam_role" "push_lambda_function_role" {
-  assume_role_policy = jsonencode({
-    Version = "2012-10-17"
-    Statement = [
-      {
-        Effect = "Allow"
-        Principal = {
-          Service = [
-            "lambda.amazonaws.com"
-          ]
-        }
-        Action = [
-          "sts:AssumeRole"
-        ]
-      }
-    ]
-  })
-  path = "/"
-  managed_policy_arns = [
-    "arn:${data.aws_partition.current.partition}:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
-  ]
-  inline_policy {
-    name = "sagemaker-endpoint-invokation-policy"
-    policy = jsonencode({
-      Version = "2012-10-17",
-      Statement = [
-        {
-          Effect   = "Allow",
-          Action   = "sagemaker:InvokeEndpoint",
-          Resource = aws_sagemaker_endpoint.endpoint.arn
-        },
-        {
-          Effect   = "Allow",
-          Action   = "events:PutEvents",
-          Resource = aws_cloudwatch_event_rule.event_rule_to_match_mdb_events.arn
-        }
-      ]
-    })
-  }
-}
-
-resource "aws_cloudwatch_event_rule" "event_rule_to_capture_events_sent_from_lambda_function" {
-  description    = "Event Rule to match result events returned by pull Lambda."
-  event_bus_name = aws_cloudwatch_event_bus.event_bus_for_sage_maker_results.name
-  event_pattern = jsonencode({
-    source = [
-      "user-event"
-    ]
-    detail-type = [
-      "user-preferences"
-    ]
-  })
-  is_enabled = true
-  name       = "push-to-mongodb"
-}
-
-resource "aws_cloudwatch_event_target" "write_event_from_lambda_to_target" {
-  event_bus_name = aws_cloudwatch_event_bus.event_bus_for_sage_maker_results.name
-  rule           = aws_cloudwatch_event_rule.event_rule_to_capture_events_sent_from_lambda_function.name
-  target_id      = "EventRuleToCaptureEventsSentFromLambdaFunctionID"
-  arn            = aws_lambda_function.lambda_function_to_write_to_mdb.arn
-}
-
-resource "aws_lambda_permission" "event_bridge_lambda_permission1" {
-  function_name = aws_lambda_function.lambda_function_to_read_mdb_events.arn
-  action        = "lambda:InvokeFunction"
-  principal     = "events.amazonaws.com"
-  source_arn    = aws_cloudwatch_event_rule.event_rule_to_match_mdb_events.arn
-}
-
-resource "aws_lambda_permission" "event_bridge_lambda_permission2" {
-  function_name = aws_lambda_function.lambda_function_to_write_to_mdb.arn
-  action        = "lambda:InvokeFunction"
-  principal     = "events.amazonaws.com"
-  source_arn    = aws_cloudwatch_event_rule.event_rule_to_capture_events_sent_from_lambda_function.arn
-}
\ No newline at end of file
diff --git a/modules/terraform-mongodbatlas-amazon-sagemaker-integration/variables.tf b/modules/terraform-mongodbatlas-amazon-sagemaker-integration/variables.tf
deleted file mode 100644
index e788272a9b..0000000000
--- a/modules/terraform-mongodbatlas-amazon-sagemaker-integration/variables.tf
+++ /dev/null
@@ -1,75 +0,0 @@
-variable "atlas_org_id" {
-  description = "Atlas organization id"
-  type        = string
-}
-variable "public_key" {
-  description = "Public API key to authenticate to Atlas"
-  type        = string
-}
-variable "private_key" {
-  description = "Private API key to authenticate to Atlas"
-  type        = string
-}
-
-
-variable profile {
-  description = "A secret with name cfn/atlas/profile/{Profile}"
-  default =
"default"
-  type        = string
-}
-
-variable atlas_project_id {
-  description = "Atlas Project ID."
-  type        = string
-}
-
-variable database_name {
-  description = "Database name for the trigger."
-  type        = string
-}
-
-variable collection_name {
-  description = "Collection name for the trigger."
-  type        = string
-}
-
-variable service_id {
-  description = "Service ID."
-  type        = string
-}
-
-variable realm_app_id {
-  description = "Realm App ID."
-  type        = string
-}
-
-variable model_data_s3_uri {
-  description = "The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix)."
-  type        = string
-}
-
-variable model_ecr_image_uri {
-  description = "AWS managed Deep Learning Container Image URI or your custom Image URI from ECR to deploy and run the model."
-  type        = string
-}
-
-variable pull_lambda_ecr_image_uri {
-  description = "ECR image URI of the Lambda function to read MongoDB events from EventBridge."
-  type        = string
-}
-
-variable push_lambda_ecr_image_uri {
-  description = "ECR image URI of the Lambda function to write results back to MongoDB."
-  type        = string
-}
-
-variable mongo_endpoint {
-  description = "Your MongoDB endpoint to push results by Lambda function."
-  type        = string
-}
-
-variable "trigger_name" {
-  description = "value of trigger name"
-  type        = string
-
-}
diff --git a/modules/terraform-mongodbatlas-amazon-sagemaker-integration/versions.tf b/modules/terraform-mongodbatlas-amazon-sagemaker-integration/versions.tf
deleted file mode 100644
index 68b0b35a68..0000000000
--- a/modules/terraform-mongodbatlas-amazon-sagemaker-integration/versions.tf
+++ /dev/null
@@ -1,13 +0,0 @@
-terraform {
-  required_providers {
-    mongodbatlas = {
-      source  = "mongodb/mongodbatlas"
-      version = "1.12.1"
-    }
-    aws = {
-      source  = "hashicorp/aws"
-      version = "~> 5.17.0"
-    }
-  }
-  required_version = ">= 0.13"
-}
\ No newline at end of file
diff --git a/modules/terraform-mongodbatlas-basic/README.md b/modules/terraform-mongodbatlas-basic/README.md
deleted file mode 100644
index ff2e72d91b..0000000000
--- a/modules/terraform-mongodbatlas-basic/README.md
+++ /dev/null
@@ -1,42 +0,0 @@
-# quickstart-mongodb-atlas
-
-
-
-## Overview
-
-![image](https://user-images.githubusercontent.com/5663078/229103723-4c6b9ab1-9492-47ba-b04d-7f29079e3817.png)
-
-The Atlas Partner Solutions templates allow you to set up all you need to start using MongoDB Atlas. We provide four different templates:
-
-- Deploy MongoDB Atlas without VPC peering. This option peers MongoDB Atlas with your existing VPC.
-- Deploy MongoDB Atlas with VPC peering into a new VPC (end-to-end deployment). This option builds a complete MongoDB Atlas environment within AWS consisting of a project, cluster, and more.
-- Deploy MongoDB Atlas with VPC peering into an existing VPC. This option peers MongoDB Atlas with a new VPC.
-- Deploy MongoDB Atlas with Private Endpoint. This option connects MongoDB Atlas AWS VPC using Private Endpoint.
-
-All the quickstart templates create an Atlas Project, Cluster, Database User and enable public access into your cluster.
-
-
-
-## MongoDB Atlas CFN Resources used by the templates
-
-- [MongoDB::Atlas::Cluster](../../mongodbatlas/resource_mongodbatlas_cluster.go)
-- [MongoDB::Atlas::ProjectIpAccessList](../../mongodbatlas/fw_resource_mongodbatlas_project_ip_access_list.go)
-- [MongoDB::Atlas::DatabaseUser](../../mongodbatlas/fw_resource_mongodbatlas_database_user.go)
-- [MongoDB::Atlas::Project](../../mongodbatlas/fw_resource_mongodbatlas_project.go)
-- [MongoDB::Atlas::NetworkPeering](../../mongodbatlas/resource_mongodbatlas_network_peering.go)
-- [MongoDB::Atlas::NetworkContainer](../../mongodbatlas/resource_mongodbatlas_network_container.go)
-- [MongoDB::Atlas::PrivateEndpoint](../../mongodbatlas/resource_mongodbatlas_privatelink_endpoint.go)
-
-
-## Environment Configured by the Partner Solution templates
-All Partner Solutions templates will generate the following resources:
-- An Atlas Project in the organization that was provided as input.
-- An Atlas Cluster with authentication and authorization enabled, which cannot be accessed through the public internet.
-- A Database user that can access the cluster.
-- The IP address range provided as input will be added to the Atlas access list, allowing the cluster to be accessed through the public internet.
-
-The specific resources that will be created depend on which Partner Solutions template is used:
-
-- A new AWS VPC (Virtual Private Cloud) will be created.
-- A VPC peering connection will be established between the MongoDB Atlas VPC (where your cluster is located) and the VPC on AWS.
-
diff --git a/modules/terraform-mongodbatlas-basic/aws-vpc.tf b/modules/terraform-mongodbatlas-basic/aws-vpc.tf
deleted file mode 100644
index 6932444053..0000000000
--- a/modules/terraform-mongodbatlas-basic/aws-vpc.tf
+++ /dev/null
@@ -1,59 +0,0 @@
-resource "aws_vpc_endpoint" "vpce_east" {
-  vpc_id             = aws_vpc.vpc_east.id
-  service_name       = mongodbatlas_privatelink_endpoint.pe_east.endpoint_service_name
-  vpc_endpoint_type  = "Interface"
-  subnet_ids         = [aws_subnet.subnet_east_a.id, aws_subnet.subnet_east_b.id]
-  security_group_ids = [aws_security_group.sg_east.id]
-}
-
-resource "aws_vpc" "vpc_east" {
-  cidr_block           = var.aws_vpc_cidr_block
-  enable_dns_hostnames = true
-  enable_dns_support   = true
-}
-
-resource "aws_internet_gateway" "ig_east" {
-  vpc_id = aws_vpc.vpc_east.id
-}
-
-resource "aws_route" "route_east" {
-  route_table_id         = aws_vpc.vpc_east.main_route_table_id
-  destination_cidr_block = var.aws_route_table_cidr_block
-  gateway_id             = aws_internet_gateway.ig_east.id
-}
-
-resource "aws_subnet" "subnet_east_a" {
-  vpc_id                  = aws_vpc.vpc_east.id
-  cidr_block              = var.aws_subnet_cidr_block1
-  map_public_ip_on_launch = true
-  availability_zone       = var.aws_subnet_availability_zone1
-}
-
-resource "aws_subnet" "subnet_east_b" {
-  vpc_id                  = aws_vpc.vpc_east.id
-  cidr_block              = var.aws_subnet_cidr_block2
-  map_public_ip_on_launch = false
-  availability_zone       = var.aws_subnet_availability_zone2
-}
-
-resource "aws_security_group" "sg_east" {
-  name_prefix = "default-"
-  description = "Default security group for all instances in vpc"
-  vpc_id      = aws_vpc.vpc_east.id
-  ingress {
-    from_port = var.aws_sg_ingress_from_port
-    to_port   = var.aws_sg_ingress_to_port
-    protocol  = var.aws_sg_ingress_protocol
-    cidr_blocks = [
-      var.aws_vpc_cidr_block,
-    ]
-  }
-  egress {
-    from_port = var.aws_sg_egress_from_port
-    to_port   = var.aws_sg_egress_to_port
-    protocol  = var.aws_sg_egress_protocol
-    cidr_blocks = [
-      var.aws_vpc_cidr_block
-    ]
-  }
-}
diff --git
a/modules/terraform-mongodbatlas-basic/main.tf b/modules/terraform-mongodbatlas-basic/main.tf
deleted file mode 100644
index a2e222513b..0000000000
--- a/modules/terraform-mongodbatlas-basic/main.tf
+++ /dev/null
@@ -1,114 +0,0 @@
-provider "mongodbatlas" {
-  public_key  = var.public_key
-  private_key = var.private_key
-}
-locals {
-  ip_address_list = [
-    for ip in var.ip_address :
-    {
-      ip_address = ip
-      comment    = "IP Address ${ip}"
-    }
-  ]
-
-  cidr_block_list = [
-    for cidr in var.cidr_block :
-    {
-      cidr_block = cidr
-      comment    = "CIDR Block ${cidr}"
-    }
-  ]
-}
-
-# Project Resource
-resource "mongodbatlas_project" "project" {
-  name   = var.project_name
-  org_id = var.atlas_org_id
-}
-
-
-# IP Access List with IP Address
-resource "mongodbatlas_project_ip_access_list" "ip" {
-  for_each = {
-    for index, ip in local.ip_address_list :
-    ip.comment => ip
-  }
-  project_id = mongodbatlas_project.project.id
-  ip_address = each.value.ip_address
-  comment    = each.value.comment
-}
-
-# IP Access List with CIDR Block
-resource "mongodbatlas_project_ip_access_list" "cidr" {
-
-  for_each = {
-    for index, cidr in local.cidr_block_list :
-    cidr.comment => cidr
-  }
-  project_id = mongodbatlas_project.project.id
-  cidr_block = each.value.cidr_block
-  comment    = each.value.comment
-}
-
-resource "mongodbatlas_cluster" "cluster" {
-  project_id             = mongodbatlas_project.project.id
-  name                   = var.cluster_name
-  mongo_db_major_version = var.mongo_version
-  cluster_type           = var.cluster_type
-  replication_specs {
-    num_shards = var.num_shards
-    regions_config {
-      region_name     = var.region
-      electable_nodes = var.electable_nodes
-      priority        = var.priority
-      read_only_nodes = var.read_only_nodes
-    }
-  }
-  # Provider Settings "block"
-  auto_scaling_disk_gb_enabled = var.auto_scaling_disk_gb_enabled
-  provider_name                = var.provider_name
-  disk_size_gb                 = var.disk_size_gb
-  provider_instance_size_name  = var.provider_instance_size_name
-}
-
-# DATABASE USER
-resource "mongodbatlas_database_user" "user" {
-  count              = length(var.db_users)
-  username           = var.db_users[count.index]
-  password           = var.db_passwords[count.index]
-  project_id         = mongodbatlas_project.project.id
-  auth_database_name = "admin"
-
-  roles {
-    role_name     = var.role_name
-    database_name = var.database_names[count.index]
-  }
-
-  labels {
-    key   = "Name"
-    value = var.database_names[count.index]
-  }
-
-  scopes {
-    name = mongodbatlas_cluster.cluster.name
-    type = "CLUSTER"
-  }
-}
-
-resource "mongodbatlas_privatelink_endpoint" "pe_east" {
-  project_id    = mongodbatlas_project.project.id
-  provider_name = var.provider_name
-  region        = var.aws_region
-}
-
-resource "mongodbatlas_privatelink_endpoint_service" "pe_east_service" {
-  project_id          = mongodbatlas_project.project.id
-  private_link_id     = mongodbatlas_privatelink_endpoint.pe_east.private_link_id
-  endpoint_service_id = aws_vpc_endpoint.vpce_east.id
-  provider_name       = var.provider_name
-}
-
-
-output "project_id" {
-  value = mongodbatlas_project.project.id
-}
\ No newline at end of file
diff --git a/modules/terraform-mongodbatlas-basic/outputs.tf b/modules/terraform-mongodbatlas-basic/outputs.tf
deleted file mode 100644
index 8b13789179..0000000000
--- a/modules/terraform-mongodbatlas-basic/outputs.tf
+++ /dev/null
@@ -1 +0,0 @@
-
diff --git a/modules/terraform-mongodbatlas-basic/variables.tf b/modules/terraform-mongodbatlas-basic/variables.tf
deleted file mode 100644
index 871ba46898..0000000000
--- a/modules/terraform-mongodbatlas-basic/variables.tf
+++ /dev/null
@@ -1,217 +0,0 @@
-variable "atlas_org_id" {
-  description = "Atlas organization id"
-  type        = string
-}
-variable "public_key" {
-  description = "Public API key to authenticate to Atlas"
-  type        = string
-}
-variable "private_key" {
-  description = "Private API key to authenticate to Atlas"
-  type        = string
-}
-
-# project
-variable "project_name" {
-  description = "Atlas project name"
-  default     = "TenantUpgradeTest"
-  type        = string
-}
-
-#cluster
-variable "cluster_name" {
-  description = "Atlas cluster name"
-  default     = "cluster"
-  type        = string
-}
-
-variable "cluster_type" {
-  description = "Atlas cluster type"
-  default     = "REPLICASET"
-  type        = string
-}
-
-variable "num_shards" {
-  description = "Atlas cluster number of shards"
-  default     = 1
-  type        = number
-}
-
-variable "priority" {
-  description = "Atlas cluster priority"
-  default     = 7
-  type        = number
-}
-
-variable "read_only_nodes" {
-  description = "Atlas cluster number of read only nodes"
-  default     = 0
-  type        = number
-}
-variable "electable_nodes" {
-  description = "Atlas cluster number of electable nodes"
-  default     = 3
-  type        = number
-}
-
-variable "auto_scaling_disk_gb_enabled" {
-  description = "Atlas cluster auto scaling disk enabled"
-  default     = false
-  type        = bool
-}
-
-variable "disk_size_gb" {
-  description = "Atlas cluster disk size in GB"
-  default     = 10
-  type        = number
-}
-variable "provider_name" {
-  description = "Atlas cluster provider name"
-  default     = "AWS"
-  type        = string
-}
-variable "backing_provider_name" {
-  description = "Atlas cluster backing provider name"
-  default     = "AWS"
-  type        = string
-}
-variable "provider_instance_size_name" {
-  description = "Atlas cluster provider instance name"
-  default     = "M10"
-  type        = string
-}
-
-variable "region" {
-  description = "Atlas cluster region"
-  default     = "US_EAST_1"
-  type        = string
-}
-variable "aws_region" {
-  description = "AWS region"
-  default     = "us-east-1"
-  type        = string
-}
-
-variable "mongo_version" {
-  description = "Atlas cluster version"
-  default     = "4.4"
-  type        = string
-}
-
-
-variable "user" {
-  description = "MongoDB Atlas User"
-  type        = list(string)
-  default     = ["dbuser1", "dbuser2"]
-}
-variable "db_passwords" {
-  description = "MongoDB Atlas User Password"
-  type        = list(string)
-}
-variable "database_names" {
-  description = "The Database in the cluster"
-  type        = list(string)
-}
-
-# database user
-variable "role_name" {
-  description = "Atlas database user role name"
-  default     = "readWrite"
-  type        = string
-}
-
-# IP Access List
-variable "cidr_block" {
-  description = "IP Access List CIDRs"
-  type        = list(string)
-}
-
-variable "ip_address" {
-  description = "IP Access List IP Addresses"
-  type        = list(string)
-}
-# aws
-
-variable "aws_vpc_cidr_block" {
-  description = "AWS VPC CIDR block"
-  default     = "10.0.0.0/16"
-  type        = string
-}
-
-# aws vpc
-variable "aws_vpc_ingress" {
-  description = "AWS VPC ingress CIDR block"
-  type        = string
-}
-
-variable "aws_vpc_egress" {
-  description = "AWS VPC egress CIDR block"
-  type        = string
-}
-
-variable "aws_route_table_cidr_block" {
-  description = "AWS route table CIDR block"
-  default     = "0.0.0.0/0"
-  type        = string
-}
-
-variable "aws_subnet_cidr_block1" {
-  description = "AWS subnet CIDR block"
-  type        = string
-}
-variable "aws_subnet_cidr_block2" {
-  description = "AWS subnet CIDR block"
-  type        = string
-}
-
-variable "aws_subnet_availability_zone1" {
-  description = "AWS subnet availability zone"
-  default     = "us-east-1a"
-  type        = string
-}
-variable "aws_subnet_availability_zone2" {
-  description = "AWS subnet availability zone"
-  default     = "us-east-1b"
-  type        = string
-}
-
-variable "aws_sg_ingress_from_port" {
-  description = "AWS security group ingress from port"
-  default     = 27017
-  type        = number
-}
-
-variable "aws_sg_ingress_to_port" {
-  description = "AWS security group ingress to port"
-  default     = 27017
-  type        = number
-}
-
-variable "aws_sg_ingress_protocol" {
-  description = "AWS security group ingress protocol"
-  default     = "tcp"
-  type        = string
-}
-
-variable "aws_sg_egress_from_port" {
-  description = "AWS security group egress from port"
-  default     = 0
-  type        = number
-}
-
-variable "aws_sg_egress_to_port" {
-  description = "AWS security group egress to port"
-  default     = 0
-  type        = number
-}
-
-variable "aws_sg_egress_protocol" {
-  description = "AWS security group egress protocol"
-  default     = "-1"
-  type        = string
-}
-
-variable "db_users" {
-  description = "Atlas database users"
-  type        = list(string)
-}
\ No newline
at end of file
diff --git a/modules/terraform-mongodbatlas-basic/versions.tf b/modules/terraform-mongodbatlas-basic/versions.tf
deleted file mode 100644
index 051942514f..0000000000
--- a/modules/terraform-mongodbatlas-basic/versions.tf
+++ /dev/null
@@ -1,14 +0,0 @@
-terraform {
-  required_providers {
-    mongodbatlas = {
-      source  = "mongodb/mongodbatlas"
-      version = "1.12.1"
-    }
-    aws = {
-      source  = "hashicorp/aws"
-      version = "~> 5.0"
-    }
-  }
-  required_version = ">= 0.13"
-}
-
diff --git a/scripts/check-upgrade-guide-exists.sh b/scripts/check-upgrade-guide-exists.sh
index 70c94c82ac..d1cdf7f136 100755
--- a/scripts/check-upgrade-guide-exists.sh
+++ b/scripts/check-upgrade-guide-exists.sh
@@ -10,7 +10,7 @@ IFS='.' read -r MAJOR MINOR PATCH <<< "$RELEASE_NUMBER"
 # Check if it's a major release (patch version is 0)
 if [ "$PATCH" -eq 0 ]; then
-  UPGRADE_GUIDE_PATH="website/docs/guides/$MAJOR.$MINOR.$PATCH-upgrade-guide.html.markdown"
+  UPGRADE_GUIDE_PATH="docs/guides/$MAJOR.$MINOR.$PATCH-upgrade-guide.md"
   echo "Checking for the presence of $UPGRADE_GUIDE_PATH"
   if [ ! -f "$UPGRADE_GUIDE_PATH" ]; then
     echo "Stopping release process, upgrade guide $UPGRADE_GUIDE_PATH does not exist. Please visit our docs for more details: https://github.com/mongodb/terraform-provider-mongodbatlas/blob/master/RELEASING.md"
diff --git a/scripts/generate-doc.sh b/scripts/generate-doc.sh
index 9adf5616dd..8e2973e6af 100755
--- a/scripts/generate-doc.sh
+++ b/scripts/generate-doc.sh
@@ -32,7 +32,7 @@ set -euo pipefail
-TF_VERSION="${TF_VERSION:-"1.7"}" # TF version to use when running tfplugindocs. Default: 1.7
+TF_VERSION="${TF_VERSION:-"1.9.2"}" # TF version to use when running tfplugindocs. Default: 1.9.2
 TEMPLATE_FOLDER_PATH="${TEMPLATE_FOLDER_PATH:-"templates"}" # PATH to the templates folder. Default: templates
@@ -67,39 +67,35 @@ if [ ! -f "${TEMPLATE_FOLDER_PATH}/data-sources/${resource_name}s.md.tmpl" ]; th
   printf "Skipping this check: We assume that the resource does not have a plural data source.\n\n"
 fi
-# tfplugindocs uses this folder to generate the documentations
-mkdir -p docs
+tfplugindocs generate --tf-version "${TF_VERSION}" --website-source-dir "${TEMPLATE_FOLDER_PATH}" --rendered-website-dir "docs-out"
-tfplugindocs generate --tf-version "${TF_VERSION}" --website-source-dir "${TEMPLATE_FOLDER_PATH}"
-
-if [ ! -f "docs/resources/${resource_name}.md" ]; then
+if [ ! -f "docs-out/resources/${resource_name}.md" ]; then
   echo "Error: We cannot find the documentation file for the resource ${resource_name}.md"
   echo "Please, make sure to include the resource template under templates/resources/${resource_name}.md.tmpl"
   printf "Skipping this step: We assume that only a data source is being generated.\n\n"
 else
-  printf "\nMoving the generated file %s.md to the website folder" "${resource_name}"
-  mv "docs/resources/${resource_name}.md" "website/docs/r/${resource_name}.html.markdown"
+  printf "Moving the generated resource file %s.md to the website folder.\n" "${resource_name}"
+  mv "docs-out/resources/${resource_name}.md" "docs/resources/${resource_name}.md"
 fi
-if [ ! -f "docs/data-sources/${resource_name}.md" ]; then
+if [ ! -f "docs-out/data-sources/${resource_name}.md" ]; then
   echo "Error: We cannot find the documentation file for the data source ${resource_name}.md"
   echo "Please, make sure to include the data source template under templates/data-sources/${resource_name}.md.tmpl"
   exit 1
 else
-  printf "\nMoving the generated file %s.md to the website folder" "${resource_name}"
-  mv "docs/data-sources/${resource_name}.md" "website/docs/d/${resource_name}.html.markdown"
+  printf "Moving the generated data-source file %s.md to the website folder.\n" "${resource_name}"
+  mv "docs-out/data-sources/${resource_name}.md" "docs/data-sources/${resource_name}.md"
 fi
-if [ ! -f "docs/data-sources/${resource_name}s.md" ]; then
-  echo "Warning: We cannot find the documentation file for the data source ${resource_name}s.md."
+if [ ! -f "docs-out/data-sources/${resource_name}s.md" ]; then
+  echo "Warning: We cannot find the documentation file for the plural data source ${resource_name}s.md."
   echo "Please, make sure to include the data source template under templates/data-sources/${resource_name}s.md.tmpl"
   printf "Skipping this step: We assume that the resource does not have a plural data source.\n\n"
 else
-  printf "\nMoving the generated file %s.md to the website folder" "${resource_name}s"
-  mv "docs/data-sources/${resource_name}s.md" "website/docs/d/${resource_name}s.html.markdown"
+  printf "\nMoving the generated plural data-source file %s.md to the website folder.\n" "${resource_name}s"
+  mv "docs-out/data-sources/${resource_name}s.md" "docs/data-sources/${resource_name}s.md"
 fi
-# Delete the docs/ folder
-rm -R docs/
+rm -R docs-out/
 printf "\nThe documentation for %s has been created.\n" "${resource_name}"
diff --git a/scripts/tf-validate.sh b/scripts/tf-validate.sh
index d97035651a..9d6c1ffa05 100755
--- a/scripts/tf-validate.sh
+++ b/scripts/tf-validate.sh
@@ -16,29 +16,30 @@ set -Eeou pipefail
-arch_name=$(uname -m)
+# Delete Terraform execution files so the script can be run multiple times
+find ./examples -type d -name ".terraform" -exec rm -rf {} +
+find ./examples -type f -name ".terraform.lock.hcl" -exec rm -f {} +
+
+export TF_CLI_CONFIG_FILE="$PWD/bin-examples/tf-validate.tfrc"
+
+# Use local provider to validate examples
+go build -o bin-examples/terraform-provider-mongodbatlas .
+
+cat << EOF > "$TF_CLI_CONFIG_FILE"
+provider_installation {
+  dev_overrides {
+    "mongodb/mongodbatlas" = "$PWD/bin-examples"
+  }
+  direct {}
+}
+EOF
 for DIR in $(find ./examples -type f -name '*.tf' -exec dirname {} \; | sort -u); do
   [ ! -d "$DIR" ] && continue
-
-
-  # Skip directories with "v08" or "v09" in their name for ARM64
-  if [[ "$arch_name" == "arm64" ]] && echo "$DIR" | grep -qE "v08|v09"; then
-    echo "Skip directories with \"v08\" or \"v09\" in their name for ARM64"
-    echo "TF provider does not have a package available for ARM64 for version < 1.0"
-    echo "Skipping directory: $DIR"
-    continue
-  fi
-
   pushd "$DIR"
-
-  echo; echo -e "\e[1;35m===> Initializing Example: $DIR <===\e[0m"; echo
-  terraform init
-
-  echo; echo -e "\e[1;35m===> Format Checking Example: $DIR <===\e[0m"; echo
+  echo; echo -e "\e[1;35m===> Example: $DIR <===\e[0m"; echo
+  terraform init > /dev/null # suppress output as it's very verbose
   terraform fmt -check -recursive
-
-  echo; echo -e "\e[1;35m===> Validating Example: $DIR <===\e[0m"; echo
   terraform validate
   popd
 done
diff --git a/scripts/tflint.sh b/scripts/tflint.sh
deleted file mode 100755
index 9f404abac0..0000000000
--- a/scripts/tflint.sh
+++ /dev/null
@@ -1,34 +0,0 @@
-#!/usr/bin/env bash
-
-# Copyright 2021 MongoDB Inc
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-set -Eeou pipefail
-
-for DIR in $(find ./examples -type f -name '*.tf' -exec dirname {} \; | sort -u); do
-  [ ! -d "$DIR" ] && continue
-
-  pushd "$DIR"
-
-  echo; echo -e "\e[1;35m===> Validating Syntax Example: $DIR <===\e[0m"; echo
-  # Terraform syntax checks
-  tflint \
-    --enable-rule=terraform_deprecated_interpolation \
-    --enable-rule=terraform_deprecated_index \
-    --enable-rule=terraform_unused_declarations \
-    --enable-rule=terraform_comment_syntax \
-    --enable-rule=terraform_required_version \
-    --minimum-failure-severity=warning
-  popd
-done
diff --git a/scripts/update-examples-reference-in-docs.sh b/scripts/update-examples-reference-in-docs.sh
index 2df63f2e12..90ff16ba54 100755
--- a/scripts/update-examples-reference-in-docs.sh
+++ b/scripts/update-examples-reference-in-docs.sh
@@ -4,7 +4,7 @@ set -euo pipefail
 : "${1?"Tag of new release must be provided"}"
-FILE_PATH="./website/docs/index.html.markdown"
+FILE_PATH="./docs/index.md"
 RELEASE_TAG=$1
 # Define the old URL pattern and new URL
diff --git a/scripts/update-tf-compatibility-matrix.sh b/scripts/update-tf-compatibility-matrix.sh
index 9d9b172a66..8e1925d6a3 100755
--- a/scripts/update-tf-compatibility-matrix.sh
+++ b/scripts/update-tf-compatibility-matrix.sh
@@ -17,7 +17,7 @@ set -euo pipefail
 input_array=$(./scripts/get-terraform-supported-versions.sh "true")
-indexFile="website/docs/index.html.markdown"
+indexFile="docs/index.md"
 transform_array() {
   local arr="$1"
diff --git a/templates/data-source.md.tmpl b/templates/data-source.md.tmpl
index 233a276c54..32b76776d1 100644
--- a/templates/data-source.md.tmpl
+++ b/templates/data-source.md.tmpl
@@ -1,15 +1,5 @@
----
-layout: "mongodbatlas"
-page_title: {{ if .Name }}"MongoDB Atlas: {{.Name}}"{{ end }}
-sidebar_current: {{ if .Type }}"docs-{{ .ProviderShortName }}-{{ $arr := split .Type " "}}{{ range $element := $arr }}{{ $element | lower}}{{ end }}{{ $name := slice (split .Name "_") 1 }}{{ range $element := $name }}-{{ $element | lower}}{{end}}"{{ end }}
-description: |-
-  {{ if ne .Name "" }}"Provides a {{ .Name }} data source."{{ end }}
----
-
 # {{ if
.Name }}{{.Type}}: {{.Name}}{{ end }} -{{ if .Description }} {{ .Description | trimspace }} {{ end }} - ## Example Usages {{ if .Name }} {{ if eq .Name "mongodbatlas_network_peering" }} diff --git a/templates/data-sources/control_plane_ip_addresses.md.tmpl b/templates/data-sources/control_plane_ip_addresses.md.tmpl index 35f2ceed24..32993054eb 100644 --- a/templates/data-sources/control_plane_ip_addresses.md.tmpl +++ b/templates/data-sources/control_plane_ip_addresses.md.tmpl @@ -1,15 +1,6 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: {{.Name}}" -sidebar_current: "docs-{{ .ProviderShortName }}-{{ $arr := split .Type " "}}{{ range $element := $arr }}{{ $element | lower}}{{ end }}{{ $name := slice (split .Name "_") 1 }}{{ range $element := $name }}-{{ $element | lower}}{{end}}" -description: |- - "Provides a data source that returns all control plane IP addresses" ---- - # {{.Type}}: {{.Name}} -{{ .Description | trimspace }} -Provides a data source that returns all control plane IP addresses. +`{{.Name}}` returns all control plane IP addresses. ## Example Usages {{ tffile (printf "examples/%s/main.tf" .Name )}} diff --git a/templates/data-sources/push_based_log_export.md.tmpl b/templates/data-sources/push_based_log_export.md.tmpl index 59e0bbffdf..0c25f4821c 100644 --- a/templates/data-sources/push_based_log_export.md.tmpl +++ b/templates/data-sources/push_based_log_export.md.tmpl @@ -1,15 +1,6 @@ ---- -layout: "mongodbatlas" -page_title: "MongoDB Atlas: {{.Name}}" -sidebar_current: "docs-{{ .ProviderShortName }}-{{ $arr := split .Type " "}}{{ range $element := $arr }}{{ $element | lower}}{{ end }}{{ $name := slice (split .Name "_") 1 }}{{ range $element := $name }}-{{ $element | lower}}{{end}}" -description: |- - "Provides a data source for push-based log export feature." 
----
-
 # {{.Type}}: {{.Name}}
-{{ .Description | trimspace }}
-`mongodbatlas_push_based_log_export` describes the configured project level settings for the push-based log export feature.
+`{{.Name}}` describes the configured project level settings for the push-based log export feature.
 
 ## Example Usages
 {{ tffile (printf "examples/%s/main.tf" .Name )}}
diff --git a/templates/data-sources/search_deployment.md.tmpl b/templates/data-sources/search_deployment.md.tmpl
index 228acf91d4..b746ea483e 100644
--- a/templates/data-sources/search_deployment.md.tmpl
+++ b/templates/data-sources/search_deployment.md.tmpl
@@ -1,15 +1,6 @@
----
-layout: "mongodbatlas"
-page_title: "MongoDB Atlas: {{.Name}}"
-sidebar_current: "docs-{{ .ProviderShortName }}-{{ $arr := split .Type " "}}{{ range $element := $arr }}{{ $element | lower}}{{ end }}{{ $name := slice (split .Name "_") 1 }}{{ range $element := $name }}-{{ $element | lower}}{{end}}"
-description: |-
-  "Provides a Search Deployment data source."
----
-
 # {{.Type}}: {{.Name}}
-{{ .Description | trimspace }}
-`mongodbatlas_search_deployment` describes a search node deployment.
+`{{.Name}}` describes a search node deployment.
 ## Example Usages
 {{ tffile (printf "examples/%s/main.tf" .Name )}}
diff --git a/templates/resources.md.tmpl b/templates/resources.md.tmpl
index 6951cac4b1..8b86768a70 100644
--- a/templates/resources.md.tmpl
+++ b/templates/resources.md.tmpl
@@ -1,14 +1,5 @@
----
-layout: "mongodbatlas"
-page_title: {{ if .Name }}"MongoDB Atlas: {{.Name}}{{ end }}"
-sidebar_current: {{ if .Type }}"docs-{{ .ProviderShortName }}-{{ $arr := split .Type " "}}{{ range $element := $arr }}{{ $element | lower}}{{ end }}{{ $name := slice (split .Name "_") 1 }}{{ range $element := $name }}-{{ $element | lower}}{{end}}"{{ end }}
-description: |-
-  {{ if .Name }}"Provides a {{ .Name }} resource."{{ end }}
----
-
 #{{ if .Name }} {{.Type}}: {{.Name}}{{ end }}
-{{ if .Name }}{{ .Description | trimspace }}{{ end }}
-
 ## Example Usages
 {{ if .Name }}
 {{ if eq .Name "mongodbatlas_network_peering" }}
diff --git a/templates/resources/push_based_log_export.md.tmpl b/templates/resources/push_based_log_export.md.tmpl
index a12f730a72..aadfb9d954 100644
--- a/templates/resources/push_based_log_export.md.tmpl
+++ b/templates/resources/push_based_log_export.md.tmpl
@@ -1,15 +1,6 @@
----
-layout: "mongodbatlas"
-page_title: "MongoDB Atlas: {{.Name}}"
-sidebar_current: "docs-{{ .ProviderShortName }}-{{ $arr := split .Type " "}}{{ range $element := $arr }}{{ $element | lower}}{{ end }}{{ $name := slice (split .Name "_") 1 }}{{ range $element := $name }}-{{ $element | lower}}{{end}}"
-description: |-
-  "Provides resource for push-based log export feature."
----
-
 # {{.Type}}: {{.Name}}
-{{ .Description | trimspace }}
-`mongodbatlas_push_based_log_export` provides a resource for push-based log export feature. The resource lets you configure, enable & disable the project level settings for the push-based log export feature. Using this resource you
+`{{.Name}}` provides a resource for the push-based log export feature. The resource lets you configure, enable & disable the project level settings for the push-based log export feature. Using this resource you
 can continually push logs from mongod, mongos, and audit logs to an Amazon S3 bucket. Atlas exports logs every 5 minutes.
diff --git a/templates/resources/search_deployment.md.tmpl b/templates/resources/search_deployment.md.tmpl
index f7aaa97efa..0b6c72b40f 100644
--- a/templates/resources/search_deployment.md.tmpl
+++ b/templates/resources/search_deployment.md.tmpl
@@ -1,15 +1,6 @@
----
-layout: "mongodbatlas"
-page_title: "MongoDB Atlas: {{.Name}}"
-sidebar_current: "docs-{{ .ProviderShortName }}-{{ $arr := split .Type " "}}{{ range $element := $arr }}{{ $element | lower}}{{ end }}{{ $name := slice (split .Name "_") 1 }}{{ range $element := $name }}-{{ $element | lower}}{{end}}"
-description: |-
-  "Provides a Search Deployment resource."
----
-
 # {{.Type}}: {{.Name}}
-{{ .Description | trimspace }}
-`mongodbatlas_search_deployment` provides a Search Deployment resource. The resource lets you create, edit and delete dedicated search nodes in a cluster.
+`{{.Name}}` provides a Search Deployment resource. The resource lets you create, edit and delete dedicated search nodes in a cluster.
 
 -> **NOTE:** For details on supported cloud providers and existing limitations you can visit the [Search Node Documentation](https://www.mongodb.com/docs/atlas/cluster-config/multi-cloud-distribution/#search-nodes-for-workload-isolation).
diff --git a/website/docs/guides/howto-guide.html.markdown b/website/docs/guides/howto-guide.html.markdown
deleted file mode 100644
index 6e835d1722..0000000000
--- a/website/docs/guides/howto-guide.html.markdown
+++ /dev/null
@@ -1,107 +0,0 @@
----
-layout: "mongodbatlas"
-page_title: "MongoDB Atlas Provider How-To Guide"
-sidebar_current: "docs-mongodbatlas-guides-how-to-guide"
-description: |-
-MongoDB Atlas Provider : How-To Guide
----
-
-# MongoDB Atlas Provider: How-To Guide
-
-The Terraform MongoDB Atlas Provider guide to perform common tasks with the provider.
-
-##How to Get A Pre-existing Container ID
-
-The following is an end to end example of how to get an existing container id.
-
-1) Start with an empty project
-
-2) Empty state file
-
-3) Apply a curl command to build cluster
-
-4) Run `terraform apply` to retrieve the container id
-
-The following illustrates step 3 and 4 above, assuming 1 & 2 were true:
-
-1) Create a cluster using a curl command to simulate non-Terraform created cluster. This will also create a container.
-
-```
-curl --user "pub:priv" --digest \
---header "Content-Type: application/json" \
---include \
---request POST "https://cloud.mongodb.com/api/atlas/v1.0/groups/grpid/clusters?pretty=true" \
---data '
-{
-  "name": "SingleRegionCluster",
-  "numShards": 1,
-  "providerSettings": {
-    "providerName": "AWS",
-    "instanceSizeName": "M40",
-    "regionName": "US_EAST_1"
-  },
-  "clusterType": "REPLICASET",
-  "replicationFactor": 3,
-  "replicationSpecs": [
-    {
-      "numShards": 1,
-      "regionsConfig": {
-        "US_EAST_1": {
-          "analyticsNodes": 0,
-          "electableNodes": 3,
-          "priority": 7,
-          "readOnlyNodes": 0
-        }
-      },
-      "zoneName": "Zone 1"
-    }
-  ],
-  "backupEnabled": false,
-  "autoScaling": {
-    "diskGBEnabled": true
-  }
-}'
-```
-
-
-2) Then apply this Terraform config to then read the information from the appropriate Data Sources and output the container id.
-
-
-```
-data "mongodbatlas_cluster" "admin" {
-  name       = "SingleRegionCluster"
-  project_id = local.mongodbatlas_project_id
-}
-
-data "mongodbatlas_network_container" "admin" {
-  project_id   = local.mongodbatlas_project_id
-  container_id = data.mongodbatlas_cluster.admin.container_id
-}
-
-output "container" {
-  value = data.mongodbatlas_network_container.admin.container_id
-}
-
-Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
-
-Outputs:
-
-container = "62ffe4ecb79e2e007c375935"
-```
-
-
-This example was tested using versions:
-- darwin_amd64
-- provider registry.terraform.io/hashicorp/aws v4.26.0
-- provider registry.terraform.io/mongodb/mongodbatlas v1.4.3
-
-
-### Helpful Links
-
-* [Report bugs](https://github.com/mongodb/terraform-provider-mongodbatlas/issues)
-
-* [Request Features](https://feedback.mongodb.com/forums/924145-atlas?category_id=370723)
-
-* [Contact Support](https://docs.atlas.mongodb.com/support/) covered by MongoDB Atlas support plans, Developer and above.
-
\ No newline at end of file