From ca002b54ee67979f387425e4c475d3f9dcf7e60b Mon Sep 17 00:00:00 2001 From: Oriol Date: Tue, 26 Nov 2024 09:33:24 +0100 Subject: [PATCH] feat: Adds `mongodbatlas_flex_cluster` resource and data sources (#2816) * use SDK dev-preview * chore: Generate Flex Cluster file structure and resource schema (#2702) * generate resource and schema * pr comments * correct Map * feat: Implements `mongodbatlas_flex_cluster` resource (#2716) * create implementation * read implementation * update and delete implementation * Implement model * schema refactor * format * wip: model tests * initial model test * test for NewTFModel * Test NewAtlasCreateReq * test NewAtlasUpdateReq * fix tests * refactor for tags * fixes * implement import * simplify update * basic acceptance test * wip: acc tests * fix model test * final changes on tests * changelog entry * add resource to fallback template * Update .changelog/2716.txt Co-authored-by: Agustin Bettati * set resource to preview --------- Co-authored-by: Melanija Cvetic Co-authored-by: Agustin Bettati * chore: Enable mongodbatlas_flex_cluster test in CI (#2720) * create implementation * read implementation * update and delete implementation * initial model test * Test NewAtlasCreateReq * fixes * changelong and fix to acceptance-tests-runner.yml * fix to changelog * removed changelog * resolve rebase issue * fix to implementation * change test to use existing project id * Adjusting predeployed project id name and added preview enable * fix the naming convension of env var --------- Co-authored-by: Oriol Arbusi * chore: Generate Flex Cluster data source file and schemas (#2725) * generate data source file and schema for singular Flex Cluster * Generate plural data source schema * Changing type of tags in schema * Fix to data source schema * chore: Adds state transition logic to mongodbatlas_flex_cluster resource (#2721) * implements state transition * use state transition in create, update and delete * PR comments * chore: Update operation improvements for `mongodbatlas_flex_cluster` resource (#2729) * schema refactors * revert rename * implement isUpdateAllowed * failed update test case * refactor equal checks * make more exhaustive the failed update test * remove UseStateForUnknown in state_name * wip: use plan modifier for non updatable fields * wip: use plan modifiers on all non updatable attributes * use plan modifiers to fail on update of certain attributes * rename planmodifier * simplify planmodifier * PR comments * remove attribute parameter * markdowndescription over description * test: Enable mongodbatlas_flex_cluster tests in QA environment (#2741) * feat: Implements and tests `mongodbatlas_flex_cluster` data source (#2738) * Implementing read funciton in data source * Acc test for data source * Added changelog * implementing review suggestions * fix to rebase * Adjusting default timeout of `flex_cluster` to 3 hours (#2757) * chore: Rebase dev_branch onto master_branch (#2764) * feat: Adds `is_slow_operation_thresholding_enabled` attribute to `mongodbatlas_project` (#2698) * feat: Initial support for is_slow_operation_thresholding_enabled * test: fix test cases * chore: add changelog entry * test: Add extra check for plural data source * refactor: only set SetSlowOperationThresholding once during update * chore: fix lint error * fix: need to set SetSlowOperationThresholding before reading project props from API * chore: address PR comment * doc: Add documentation for `is_slow_operation_thresholding_enabled` to project (#2700) * chore: Updates 
CHANGELOG.md for #2698 * chore: Use `qa` for TestSuite on Sundays (#2701) * build(deps): bump go.mongodb.org/atlas-sdk (#2705) Co-authored-by: lantoli <430982+lantoli@users.noreply.github.com> * wait in second test (#2707) * chore: Supports running Test Suite in QA on sundays (#2711) * Revert "chore: Use `qa` for TestSuite on Sundays (#2701)" This reverts commit 942e0977c2d299ccdd3d0d10179ef240e3351871. * chore: Supports running qa only on Sundays * address PR comments * refactor: move job to steps of variables * chore: Add top level description * chore: Adds SDK Preview (#2713) * add client * SDK update GHA * simulate Preview update * example of use for Admin Preview * Revert "example of use for Admin Preview" This reverts commit c5edef52847034af8cdb5daf7310743ed630f801. * rename adminPreview to adminpreview in client * example of use of SDK Preview * reverse example * go mod tidy * build(deps): bump go.mongodb.org/atlas-sdk (#2715) Co-authored-by: lantoli <430982+lantoli@users.noreply.github.com> * ci: fix env-vars for azure QA (#2709) * doc: Advanced Cluster Differences and TF Core Upgrades (#2633) * adv cluster differences doc updates * Update docs/index.md * Update index.md * Update advanced_cluster.md * Update advanced_cluster.md * Update advanced_cluster.md * Update cluster.md * Update docs/index.md Co-authored-by: maastha <122359335+maastha@users.noreply.github.com> * Update docs/resources/cluster.md Co-authored-by: maastha <122359335+maastha@users.noreply.github.com> * Update docs/resources/cluster.md Co-authored-by: Melissa Plunkett * Update advanced_cluster.md * Update cluster-to-advanced-cluster-migration-guide.md --------- Co-authored-by: maastha <122359335+maastha@users.noreply.github.com> Co-authored-by: Melissa Plunkett * revert SDK preview changes (#2719) * fix: Fixes inconsistent result when using a multi-region cluster by always using a single spec (#2685) * fix: Initial workaround for multiple specs issuing a warning of spec missmatch * fix: can only use multiple specs in data source (schema limitation) * test: Add a test to confirm multi-region cluster can be used with search deployment * chore: changelog file * chore: fix test enum value * chore: revert timeout changes * Update .changelog/2685.txt Co-authored-by: kanchana-mongodb <54281287+kanchana-mongodb@users.noreply.github.com> --------- Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com> Co-authored-by: kanchana-mongodb <54281287+kanchana-mongodb@users.noreply.github.com> * chore: Updates CHANGELOG.md for #2685 * chore: Updates examples link in index.md for v1.21.2 release * chore: Updates CHANGELOG.md header for v1.21.2 release * fix TestAccBackupSnapshotExportJob_basic (#2723) * chore: Bump github.com/hashicorp/terraform-plugin-framework-validators (#2727) Bumps [github.com/hashicorp/terraform-plugin-framework-validators](https://github.com/hashicorp/terraform-plugin-framework-validators) from 0.13.0 to 0.14.0. - [Release notes](https://github.com/hashicorp/terraform-plugin-framework-validators/releases) - [Changelog](https://github.com/hashicorp/terraform-plugin-framework-validators/blob/main/CHANGELOG.md) - [Commits](https://github.com/hashicorp/terraform-plugin-framework-validators/compare/v0.13.0...v0.14.0) --- updated-dependencies: - dependency-name: github.com/hashicorp/terraform-plugin-framework-validators dependency-type: direct:production update-type: version-update:semver-minor ... 
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * fix: Fixes `mongodbatlas_project` when user doesn't have project owner permission (#2731) * update doc * add deprecated message * changelog * deprecate params * initial TestAccProject_slowOperationNotOwner * update version to 1.24.0 * changeRoles * Revert "changeRoles" This reverts commit c8ba3c84c2e419d468be1135b5bf408961f8f92a. * Revert "initial TestAccProject_slowOperationNotOwner" This reverts commit d1473711e888887ab5e3991f3cd7314ec52a23e2. * don't update the value if it's in Create so value is unknown * pass warnings * Update internal/service/project/resource_project.go Co-authored-by: Marco Suma * apply feedback in doc * clarify docs --------- Co-authored-by: Marco Suma * chore: Updates CHANGELOG.md for #2731 * chore: Merging schema generation internal tool PoC into master branch (#2735) * feat: Adds initial schema and config models for PoC - Model generation (#2638) * update computability type (#2668) * chore: PoC - Model generation - support primitive types at root level (#2673) * chore: PoC - Schema code generation - Initial support of primitive types (#2682) * initial commit with schema generation function and test fixture * small changes wip * include specific type generators * handling element types and imports * remove unrelated file * extract template logic to separate file * small revert change * extract import to const * follow up adjustments from PR comments and sync with Aastha * chore: PoC - Schema code generation - Supporting nested attributes (#2689) * support nested attribute types * rebasing changes related to unit testing adjustment * chore: PoC - Model generation - Supporting nested schema (List, ListNested, Set & SetNested) (#2693) * chore: PoC - Model generation - Supporting nested schema (objects - Map, MapNested, SingleNested Attributes) (#2704) * chore: PoC - Schema code generation - Supporting generation of typed model (#2703) * support typed model generation inlcuding root and nested attributes * minor fix for type of types map * add clarifying comment * improve name of generated object types, refactor function naming for readability * fix list map and set nested attribute generation (#2708) * chore: PoC - Model generation - support config aliases, ignores, and description overrides (#2712) * chore: PoC - Define make command to call code generation logic (#2706) * wip * iterating over all resources * add config for search deployment * update golden file test with fix in package name * use xstring implementation for pascal case * simplify write file logic * merge fixes * chore: PoC - Support configuration of timeout in schema (#2717) * wip * rebase fixes * fix logic avoiding adding timeout in nested schemas * fix generation * fix enum processing * fix golden file in timeout test * comment out unsupported config properties * simplify interfaces for attribute generation leaving common code in a separate function * chore: PoC - handle merging computability in nested attributes (#2722) * adjusting contributing guide (#2732) --------- Co-authored-by: maastha <122359335+maastha@users.noreply.github.com> Co-authored-by: Aastha Mahendru * doc: Adds documentation for `replication_specs.*.external_id` computed attribute for `mongodbatlas_advanced_cluster` (#2734) * adds M0 (#2730) * docs: Remove legacy project_id attribute in docs as it has been removed (#2733) * chore: Updates examples link in index.md for v1.21.3 release * chore: Updates CHANGELOG.md 
header for v1.21.3 release * chore: Unify file name for plural data source schema (#2739) * change name * fix name * rename to plural_data_source.go * check cluster creation times (#2728) * fix: Adds new attribute `results` and deprecates `resource_policies` for `mongodbatlas_resource_policies` data source (#2740) * new attribute results and deprecate resource_policies * changelog entry * change test to check new attribute * fix changelog * changelog fix * fix docs * Update .changelog/2740.txt Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com> * migration test * skip mig test until next version * fix comment --------- Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com> * chore: Updates CHANGELOG.md for #2740 * build(deps): bump go.mongodb.org/atlas-sdk (#2745) Co-authored-by: wtrocki <981838+wtrocki@users.noreply.github.com> * fix: Fixes change assigments in `mongodbatlas_project_api_key` (#2737) * fix changed assigments * fix TestAccProjectAPIKey_updateRole * changelog * join configMultiple to configBasic * don't allow duplicate projectIDs * fix TestAccProjectAPIKey_dataSources * refactor Create * refactor getAPIProjectAssignments and flattenProjectAssignments * refactor Read * refactor Delete * fix TestAccProjectAPIKey_recreateWhenDeletedExternally * refactor Import * remove flattenProjectAssignments, flattenProjectAPIKeyRoles and getAPIProjectAssignments * fix plural ds * fix basicTestCase * refactor Update description * refactor Update * refactor expandProjectAssignments * update changelog * initial checkAggr * checkExists * delete redundant TestAccProjectAPIKey_dataSources * refactor configDuplicatedProject * refactor configChangingProject * model file * more detailed changelog * revert import * doc * refactor checks * chore: Updates CHANGELOG.md for #2737 * chore: Updates examples link in index.md for v1.21.4 release * chore: Updates CHANGELOG.md header for v1.21.4 release * chore: Support computability override in schema generation config (#2743) * test without parsing config options * add support for override of computability in config * refactor tests removing redudant code * extract common api spec path, handle computed only case * chore: Bump crazy-max/ghaction-import-gpg from 6.1.0 to 6.2.0 (#2754) Bumps [crazy-max/ghaction-import-gpg](https://github.com/crazy-max/ghaction-import-gpg) from 6.1.0 to 6.2.0. - [Release notes](https://github.com/crazy-max/ghaction-import-gpg/releases) - [Commits](https://github.com/crazy-max/ghaction-import-gpg/compare/01dd5d3ca463c7f10f7f4f7b4f177225ac661ee4...cb9bde2e2525e640591a934b1fd28eef1dcaf5e5) --- updated-dependencies: - dependency-name: crazy-max/ghaction-import-gpg dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore: Bump actions/setup-go from 5.0.2 to 5.1.0 (#2753) Bumps [actions/setup-go](https://github.com/actions/setup-go) from 5.0.2 to 5.1.0. - [Release notes](https://github.com/actions/setup-go/releases) - [Commits](https://github.com/actions/setup-go/compare/0a12ed9d6a96ab950c8f026ed9f722fe0da7ef32...41dfa10bad2bb2ae585af6ee5bb4d7d973ad74ed) --- updated-dependencies: - dependency-name: actions/setup-go dependency-type: direct:production update-type: version-update:semver-minor ... 
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore: Bump github.com/pb33f/libopenapi from 0.18.1 to 0.18.6 (#2750) Bumps [github.com/pb33f/libopenapi](https://github.com/pb33f/libopenapi) from 0.18.1 to 0.18.6. - [Release notes](https://github.com/pb33f/libopenapi/releases) - [Commits](https://github.com/pb33f/libopenapi/compare/v0.18.1...v0.18.6) --- updated-dependencies: - dependency-name: github.com/pb33f/libopenapi dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore: Bump tj-actions/verify-changed-files (#2751) Bumps [tj-actions/verify-changed-files](https://github.com/tj-actions/verify-changed-files) from 54483a2138ca67989bc40785aa22faee8b085894 to 530d86d0a237225c87beaa000750988f8965ee31. - [Release notes](https://github.com/tj-actions/verify-changed-files/releases) - [Changelog](https://github.com/tj-actions/verify-changed-files/blob/main/HISTORY.md) - [Commits](https://github.com/tj-actions/verify-changed-files/compare/54483a2138ca67989bc40785aa22faee8b085894...530d86d0a237225c87beaa000750988f8965ee31) --- updated-dependencies: - dependency-name: tj-actions/verify-changed-files dependency-type: direct:production ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore: Bump actions/checkout from 4.2.1 to 4.2.2 (#2752) * chore: Bump actions/checkout from 4.2.1 to 4.2.2 Bumps [actions/checkout](https://github.com/actions/checkout) from 4.2.1 to 4.2.2. - [Release notes](https://github.com/actions/checkout/releases) - [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md) - [Commits](https://github.com/actions/checkout/compare/eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871...11bd71901bbe5b1630ceea73d27597364c9af683) --- updated-dependencies: - dependency-name: actions/checkout dependency-type: direct:production update-type: version-update:semver-patch ... 
Signed-off-by: dependabot[bot] * Trigger Build --------- Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: Leo Antoli <430982+lantoli@users.noreply.github.com> * chore: Automatically updates Terraform version used in repository and test-suite (#2755) * rename workflow (#2761) * chore: Updates Atlas Go SDK (#2748) * build(deps): bump go.mongodb.org/atlas-sdk * Pin cluster POST and PATCH operations to 2024-08-05 avoiding any unintentional shard level autoscaling * unifying version clarification in comments * use sdk utility function for checking error code value * adjust conditional of error checking preserving behaviour --------- Co-authored-by: AgustinBettati <20469408+AgustinBettati@users.noreply.github.com> Co-authored-by: Agustin Bettati * chore: Generate Flex Cluster file structure and resource schema (#2702) * generate resource and schema * pr comments * correct Map * feat: Implements `mongodbatlas_flex_cluster` resource (#2716) * create implementation * read implementation * update and delete implementation * Implement model * schema refactor * format * wip: model tests * initial model test * test for NewTFModel * Test NewAtlasCreateReq * test NewAtlasUpdateReq * fix tests * refactor for tags * fixes * implement import * simplify update * basic acceptance test * wip: acc tests * fix model test * final changes on tests * changelog entry * add resource to fallback template * Update .changelog/2716.txt Co-authored-by: Agustin Bettati * set resource to preview --------- Co-authored-by: Melanija Cvetic Co-authored-by: Agustin Bettati * chore: Enable mongodbatlas_flex_cluster test in CI (#2720) * create implementation * read implementation * update and delete implementation * initial model test * Test NewAtlasCreateReq * fixes * changelong and fix to acceptance-tests-runner.yml * fix to changelog * removed changelog * resolve rebase issue * fix to implementation * change test to use existing project id * Adjusting predeployed project id name and added preview enable * fix the naming convension of env var --------- Co-authored-by: Oriol Arbusi * chore: Generate Flex Cluster data source file and schemas (#2725) * generate data source file and schema for singular Flex Cluster * Generate plural data source schema * Changing type of tags in schema * Fix to data source schema * chore: Adds state transition logic to mongodbatlas_flex_cluster resource (#2721) * implements state transition * use state transition in create, update and delete * PR comments * chore: Update operation improvements for `mongodbatlas_flex_cluster` resource (#2729) * schema refactors * revert rename * implement isUpdateAllowed * failed update test case * refactor equal checks * make more exhaustive the failed update test * remove UseStateForUnknown in state_name * wip: use plan modifier for non updatable fields * wip: use plan modifiers on all non updatable attributes * use plan modifiers to fail on update of certain attributes * rename planmodifier * simplify planmodifier * PR comments * remove attribute parameter * markdowndescription over description * test: Enable mongodbatlas_flex_cluster tests in QA environment (#2741) * feat: Implements and tests `mongodbatlas_flex_cluster` data source (#2738) * Implementing read funciton in data source * Acc test for data source * Added changelog * implementing review suggestions * fix to rebase * Adjusting default timeout of `flex_cluster` to 3 hours (#2757) * fixing imports * Adopting latest preview 
SDK version * fix flex varibale names after bump * fix atlasuser - set to continue using older SDK version * Fix to variable type names in flex unit test * fix duplicate * fix to api call in atlasUser tests --------- Signed-off-by: dependabot[bot] Co-authored-by: Espen Albert Co-authored-by: svc-apix-bot Co-authored-by: svc-apix-Bot <142542575+svc-apix-Bot@users.noreply.github.com> Co-authored-by: lantoli <430982+lantoli@users.noreply.github.com> Co-authored-by: Zuhair Ahmed Co-authored-by: maastha <122359335+maastha@users.noreply.github.com> Co-authored-by: Melissa Plunkett Co-authored-by: kanchana-mongodb <54281287+kanchana-mongodb@users.noreply.github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: Marco Suma Co-authored-by: Agustin Bettati Co-authored-by: Aastha Mahendru Co-authored-by: Oriol Co-authored-by: wtrocki <981838+wtrocki@users.noreply.github.com> Co-authored-by: AgustinBettati <20469408+AgustinBettati@users.noreply.github.com> * feat: Implements `mongodbatlas_flex_cluster` plural data source (#2767) * Fix naming convensions in plural DS schema * Implementing plural DS read function * testing and plural ds model * fixes after rebase * additional unit testing for model * changelog * lint fix * removing parellel testing * implemented review feedback * lint fix * chore: Improvements to SDK version update (#2770) * comment use of older API version in atlasUser * add connv220240805 to factory.go * doc: Adds documentation and example for mongodbatlas_flex_cluster resource and data source (#2744) * wip: docs and example (missing data source * change version to make the CI check successful * fmt * regenerate * data source examples * readme created - to adjust preview feature note on release of flex to prod * data source templates * fix typo * generate data source documentation * fix to flex_cluster example * fix to flex_cluster example * fix to generated docs * Added flex to dedicated migration guide * fix typo * Implementing feedback and improving flex-dedicated migratoin guide * adjust flex example README.md * fix to example --------- Co-authored-by: Melanija Cvetic * deprecate: Deprecates Serverless functionality (#2742) * deprecates serverless functionality * documentation * deprecate attributes in data sources * set link to migration guide with supported migration paths * doc: Adds migration guides to transition out of Serverless and Shared-tier clusters (#2746) * wip: migration guides * add details on 1.22 guide * remove guide, will be in resource docs * wip: correct details on 1.22 guide * added migration guide for serverless to flex and serverless to dedicated * added migration guide for shared-tier to flex * added migration guide for serverless to free * implementing review feedback * implementing feedback to guide * changes to migration guide * Adjustments to migration guides * put URL instead of placeholder * move post before pre autoconversion guides, clarify serverless to dedicated * pr comment * pr comments * Update docs/guides/1.22.0-upgrade-guide.md Co-authored-by: lmkerbey-mdb <105309825+lmkerbey-mdb@users.noreply.github.com> * Update docs/guides/serverless-shared-migration-guide.md Co-authored-by: lmkerbey-mdb <105309825+lmkerbey-mdb@users.noreply.github.com> * Update docs/guides/1.22.0-upgrade-guide.md Co-authored-by: lmkerbey-mdb <105309825+lmkerbey-mdb@users.noreply.github.com> * Update docs/guides/serverless-shared-migration-guide.md Co-authored-by: lmkerbey-mdb 
<105309825+lmkerbey-mdb@users.noreply.github.com> * Update docs/guides/serverless-shared-migration-guide.md Co-authored-by: lmkerbey-mdb <105309825+lmkerbey-mdb@users.noreply.github.com> * Update docs/guides/serverless-shared-migration-guide.md Co-authored-by: lmkerbey-mdb <105309825+lmkerbey-mdb@users.noreply.github.com> * Update docs/guides/serverless-shared-migration-guide.md Co-authored-by: cveticm <119604954+cveticm@users.noreply.github.com> * Update docs/guides/serverless-shared-migration-guide.md Co-authored-by: cveticm <119604954+cveticm@users.noreply.github.com> --------- Co-authored-by: Melanija Cvetic Co-authored-by: lmkerbey-mdb <105309825+lmkerbey-mdb@users.noreply.github.com> Co-authored-by: cveticm <119604954+cveticm@users.noreply.github.com> * chore: Removes mentions and examples of Serverless and Shared-tier instances (#2811) * wip remove mentions of serverless and shared tier * remove examples from template for serverless privatelink * change from M5 to M0 for tenant examples of adv cluster * add note on M0 * chore: Refactor tags attribute schema and conversion logic (#2788) * Implements tag as common function * Refactor project and flex cluster to use new common tag functions * fix to testing and lint * fix changelog entry * january or later for shared tier autoconversion * auto-generate singular data source, temporarily with checks * remove singular data source check * update Description in plural data source --------- Signed-off-by: dependabot[bot] Co-authored-by: Melanija Cvetic Co-authored-by: Agustin Bettati Co-authored-by: cveticm <119604954+cveticm@users.noreply.github.com> Co-authored-by: Espen Albert Co-authored-by: svc-apix-bot Co-authored-by: svc-apix-Bot <142542575+svc-apix-Bot@users.noreply.github.com> Co-authored-by: lantoli <430982+lantoli@users.noreply.github.com> Co-authored-by: Zuhair Ahmed Co-authored-by: maastha <122359335+maastha@users.noreply.github.com> Co-authored-by: Melissa Plunkett Co-authored-by: kanchana-mongodb <54281287+kanchana-mongodb@users.noreply.github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: Marco Suma Co-authored-by: Aastha Mahendru Co-authored-by: wtrocki <981838+wtrocki@users.noreply.github.com> Co-authored-by: AgustinBettati <20469408+AgustinBettati@users.noreply.github.com> Co-authored-by: lmkerbey-mdb <105309825+lmkerbey-mdb@users.noreply.github.com> --- .changelog/2716.txt | 3 + .changelog/2738.txt | 3 + .changelog/2742.txt | 39 ++ .changelog/2767.txt | 3 + .github/workflows/acceptance-tests-runner.yml | 30 ++ .github/workflows/acceptance-tests.yml | 1 + .github/workflows/update-tf-versions.yml | 4 +- docs/data-sources/advanced_cluster.md | 4 +- docs/data-sources/advanced_clusters.md | 4 +- docs/data-sources/flex_cluster.md | 84 ++++ docs/data-sources/flex_clusters.md | 94 +++++ ...privatelink_endpoint_service_serverless.md | 6 + ...rivatelink_endpoints_service_serverless.md | 6 + docs/data-sources/serverless_instance.md | 4 +- docs/data-sources/serverless_instances.md | 4 +- docs/guides/1.22.0-upgrade-guide.md | 30 ++ ...ter-to-dedicated-cluster-migraton-guide.md | 65 +++ .../serverless-shared-migration-guide.md | 267 ++++++++++++ docs/resources/advanced_cluster.md | 4 +- docs/resources/cluster.md | 13 - docs/resources/flex_cluster.md | 97 +++++ .../privatelink_endpoint_serverless.md | 6 + ...privatelink_endpoint_service_serverless.md | 6 + docs/resources/serverless_instance.md | 4 +- examples/mongodbatlas_flex_cluster/README.md | 8 + 
examples/mongodbatlas_flex_cluster/main.tf | 26 ++ .../mongodbatlas_flex_cluster/provider.tf | 4 + .../mongodbatlas_flex_cluster/variables.tf | 17 + .../versions.tf | 6 +- .../aws/serverless-instance/README.md | 116 ------ .../serverless-instance/atlas-privatelink.tf | 14 - .../atlas-serverless-instance.tf | 13 - .../aws/serverless-instance/aws-vpc.tf | 57 --- .../aws/serverless-instance/output.tf | 14 - .../aws/serverless-instance/provider.tf | 9 - .../aws/serverless-instance/variables.tf | 25 -- .../aws/README.md | 97 ----- .../aws/atlas-cluster.tf | 26 -- .../aws/aws-vpc.tf | 59 --- .../aws/main.tf | 23 -- .../aws/provider.tf | 9 - .../aws/variables.tf | 25 -- .../aws/versions.tf | 13 - .../azure/Readme.md | 84 ---- .../azure/main.tf | 71 ---- .../azure/variables.tf | 30 -- .../azure/versions.tf | 13 - internal/common/constant/deprecation.go | 17 +- internal/common/conversion/tags.go | 33 ++ internal/common/conversion/tags_test.go | 53 +++ .../customplanmodifier/non_updatable.go | 36 ++ internal/common/retrystrategy/retry_state.go | 1 + internal/provider/provider.go | 4 + .../atlasuser/data_source_atlas_user.go | 3 +- .../atlasuser/data_source_atlas_user_test.go | 3 +- .../atlasuser/data_source_atlas_users.go | 3 +- .../atlasuser/data_source_atlas_users_test.go | 3 +- internal/service/flexcluster/data_source.go | 51 +++ .../service/flexcluster/data_source_schema.go | 108 +++++ internal/service/flexcluster/main_test.go | 15 + internal/service/flexcluster/model.go | 133 ++++++ internal/service/flexcluster/model_test.go | 385 ++++++++++++++++++ .../service/flexcluster/plural_data_source.go | 65 +++ .../flexcluster/plural_data_source_schema.go | 31 ++ internal/service/flexcluster/resource.go | 199 +++++++++ .../service/flexcluster/resource_schema.go | 198 +++++++++ internal/service/flexcluster/resource_test.go | 195 +++++++++ .../service/flexcluster/state_transition.go | 60 +++ .../flexcluster/state_transition_test.go | 163 ++++++++ .../tfplugingen/generator_config.yml | 21 + ...esource_privatelink_endpoint_serverless.go | 2 + ...privatelink_endpoint_service_serverless.go | 4 +- ...rivatelink_endpoints_service_serverless.go | 5 +- ...privatelink_endpoint_service_serverless.go | 2 + internal/service/project/model_project.go | 29 +- .../service/project/model_project_test.go | 53 +-- internal/service/project/resource_project.go | 9 +- .../data_source_serverless_instance.go | 16 +- .../resource_serverless_instance.go | 15 +- templates/data-source.md.tmpl | 2 - templates/data-sources/flex_cluster.md.tmpl | 10 + templates/data-sources/flex_clusters.md.tmpl | 10 + templates/resources.md.tmpl | 3 - templates/resources/flex_cluster.md.tmpl | 17 + tools/codegen/config.yml | 3 - 85 files changed, 2659 insertions(+), 841 deletions(-) create mode 100644 .changelog/2716.txt create mode 100644 .changelog/2738.txt create mode 100644 .changelog/2742.txt create mode 100644 .changelog/2767.txt create mode 100644 docs/data-sources/flex_cluster.md create mode 100644 docs/data-sources/flex_clusters.md create mode 100644 docs/guides/1.22.0-upgrade-guide.md create mode 100644 docs/guides/flex-cluster-to-dedicated-cluster-migraton-guide.md create mode 100644 docs/guides/serverless-shared-migration-guide.md create mode 100644 docs/resources/flex_cluster.md create mode 100644 examples/mongodbatlas_flex_cluster/README.md create mode 100644 examples/mongodbatlas_flex_cluster/main.tf create mode 100644 examples/mongodbatlas_flex_cluster/provider.tf create mode 100644 examples/mongodbatlas_flex_cluster/variables.tf 
rename examples/{mongodbatlas_privatelink_endpoint/aws/serverless-instance => mongodbatlas_flex_cluster}/versions.tf (57%) delete mode 100644 examples/mongodbatlas_privatelink_endpoint/aws/serverless-instance/README.md delete mode 100644 examples/mongodbatlas_privatelink_endpoint/aws/serverless-instance/atlas-privatelink.tf delete mode 100644 examples/mongodbatlas_privatelink_endpoint/aws/serverless-instance/atlas-serverless-instance.tf delete mode 100644 examples/mongodbatlas_privatelink_endpoint/aws/serverless-instance/aws-vpc.tf delete mode 100644 examples/mongodbatlas_privatelink_endpoint/aws/serverless-instance/output.tf delete mode 100644 examples/mongodbatlas_privatelink_endpoint/aws/serverless-instance/provider.tf delete mode 100644 examples/mongodbatlas_privatelink_endpoint/aws/serverless-instance/variables.tf delete mode 100644 examples/mongodbatlas_privatelink_endpoint_service_serverless/aws/README.md delete mode 100644 examples/mongodbatlas_privatelink_endpoint_service_serverless/aws/atlas-cluster.tf delete mode 100644 examples/mongodbatlas_privatelink_endpoint_service_serverless/aws/aws-vpc.tf delete mode 100644 examples/mongodbatlas_privatelink_endpoint_service_serverless/aws/main.tf delete mode 100644 examples/mongodbatlas_privatelink_endpoint_service_serverless/aws/provider.tf delete mode 100644 examples/mongodbatlas_privatelink_endpoint_service_serverless/aws/variables.tf delete mode 100644 examples/mongodbatlas_privatelink_endpoint_service_serverless/aws/versions.tf delete mode 100644 examples/mongodbatlas_privatelink_endpoint_service_serverless/azure/Readme.md delete mode 100644 examples/mongodbatlas_privatelink_endpoint_service_serverless/azure/main.tf delete mode 100644 examples/mongodbatlas_privatelink_endpoint_service_serverless/azure/variables.tf delete mode 100644 examples/mongodbatlas_privatelink_endpoint_service_serverless/azure/versions.tf create mode 100644 internal/common/conversion/tags.go create mode 100644 internal/common/conversion/tags_test.go create mode 100644 internal/common/customplanmodifier/non_updatable.go create mode 100644 internal/service/flexcluster/data_source.go create mode 100644 internal/service/flexcluster/data_source_schema.go create mode 100644 internal/service/flexcluster/main_test.go create mode 100644 internal/service/flexcluster/model.go create mode 100644 internal/service/flexcluster/model_test.go create mode 100644 internal/service/flexcluster/plural_data_source.go create mode 100644 internal/service/flexcluster/plural_data_source_schema.go create mode 100644 internal/service/flexcluster/resource.go create mode 100644 internal/service/flexcluster/resource_schema.go create mode 100644 internal/service/flexcluster/resource_test.go create mode 100644 internal/service/flexcluster/state_transition.go create mode 100644 internal/service/flexcluster/state_transition_test.go create mode 100644 internal/service/flexcluster/tfplugingen/generator_config.yml create mode 100644 templates/data-sources/flex_cluster.md.tmpl create mode 100644 templates/data-sources/flex_clusters.md.tmpl create mode 100644 templates/resources/flex_cluster.md.tmpl diff --git a/.changelog/2716.txt b/.changelog/2716.txt new file mode 100644 index 0000000000..f89d6c2f27 --- /dev/null +++ b/.changelog/2716.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +mongodbatlas_flex_cluster +``` diff --git a/.changelog/2738.txt b/.changelog/2738.txt new file mode 100644 index 0000000000..d163afad6c --- /dev/null +++ b/.changelog/2738.txt @@ -0,0 +1,3 @@ 
+```release-note:new-datasource +mongodbatlas_flex_cluster +``` diff --git a/.changelog/2742.txt b/.changelog/2742.txt new file mode 100644 index 0000000000..eb7fed6d0c --- /dev/null +++ b/.changelog/2742.txt @@ -0,0 +1,39 @@ +```release-note:note +resource/mongodbatlas_serverless_instance: Deprecates `continuous_backup_enabled` attribute +``` + +```release-note:note +resource/mongodbatlas_serverless_instance: Deprecates `auto_indexing` attribute +``` + +```release-note:note +data-source/mongodbatlas_serverless_instance: Deprecates `continuous_backup_enabled` attribute +``` + +```release-note:note +data-source/mongodbatlas_serverless_instance: Deprecates `auto_indexing` attribute +``` + +```release-note:note +data-source/mongodbatlas_serverless_instances: Deprecates `continuous_backup_enabled` attribute +``` + +```release-note:note +data-source/mongodbatlas_serverless_instances: Deprecates `auto_indexing` attribute +``` + +```release-note:note +resource/mongodbatlas_privatelink_endpoint_serverless: Deprecates resource +``` + +```release-note:note +resource/mongodbatlas_privatelink_endpoint_service_serverless: Deprecates resource +``` + +```release-note:note +data-source/mongodbatlas_privatelink_endpoint_service_serverless: Deprecates data source +``` + +```release-note:note +data-source/mongodbatlas_privatelink_endpoints_service_serverless: Deprecates data source +``` diff --git a/.changelog/2767.txt b/.changelog/2767.txt new file mode 100644 index 0000000000..63ae120d36 --- /dev/null +++ b/.changelog/2767.txt @@ -0,0 +1,3 @@ +```release-note:new-datasource +mongodbatlas_flex_clusters +``` diff --git a/.github/workflows/acceptance-tests-runner.yml b/.github/workflows/acceptance-tests-runner.yml index 81418aa803..bb9fda3689 100644 --- a/.github/workflows/acceptance-tests-runner.yml +++ b/.github/workflows/acceptance-tests-runner.yml @@ -95,6 +95,9 @@ on: azure_private_endpoint_region: type: string required: true + mongodb_atlas_flex_project_id: + type: string + required: true secrets: # all secrets are passed explicitly in this workflow mongodb_atlas_public_key: required: true @@ -213,6 +216,7 @@ jobs: encryption: ${{ steps.filter.outputs.encryption == 'true' || env.mustTrigger == 'true' }} event_trigger: ${{ steps.filter.outputs.event_trigger == 'true' || env.mustTrigger == 'true' }} federated: ${{ steps.filter.outputs.federated == 'true' || env.mustTrigger == 'true' }} + flex_cluster: ${{ steps.filter.outputs.flex_cluster == 'true' || env.mustTrigger == 'true' }} generic: ${{ steps.filter.outputs.generic == 'true' || env.mustTrigger == 'true' }} global_cluster_config: ${{ steps.filter.outputs.global_cluster_config == 'true' || env.mustTrigger == 'true' }} ldap: ${{ steps.filter.outputs.ldap == 'true' || env.mustTrigger == 'true' }} @@ -278,6 +282,8 @@ jobs: - 'internal/service/federatedsettingsidentityprovider/*.go' - 'internal/service/federatedsettingsorgconfig/*.go' - 'internal/service/federatedsettingsorgrolemapping/*.go' + flex_cluster: + - 'internal/service/flexcluster/*.go' generic: - 'internal/service/auditing/*.go' - 'internal/service/backupcompliancepolicy/*.go' @@ -654,6 +660,30 @@ jobs: ./internal/service/federatedsettingsorgrolemapping run: make testacc + flex_cluster: + needs: [ change-detection, get-provider-version ] + if: ${{ needs.change-detection.outputs.flex_cluster == 'true' || inputs.test_group == 'flex_cluster' }} + runs-on: ubuntu-latest + permissions: {} + steps: + - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 + with: + ref: ${{ inputs.ref || 
github.ref }} + - uses: actions/setup-go@41dfa10bad2bb2ae585af6ee5bb4d7d973ad74ed + with: + go-version-file: 'go.mod' + - uses: hashicorp/setup-terraform@b9cd54a3c349d3f38e8881555d616ced269862dd + with: + terraform_version: ${{ inputs.terraform_version }} + terraform_wrapper: false + - name: Acceptance Tests + env: + MONGODB_ATLAS_LAST_VERSION: ${{ needs.get-provider-version.outputs.provider_version }} + MONGODB_ATLAS_FLEX_PROJECT_ID: ${{ inputs.mongodb_atlas_flex_project_id }} + MONGODB_ATLAS_ENABLE_PREVIEW: "true" + ACCTEST_PACKAGES: ./internal/service/flexcluster + run: make testacc + generic: needs: [ change-detection, get-provider-version ] if: ${{ needs.change-detection.outputs.generic == 'true' || inputs.test_group == 'generic' }} diff --git a/.github/workflows/acceptance-tests.yml b/.github/workflows/acceptance-tests.yml index cf8fb734e2..22b164959d 100644 --- a/.github/workflows/acceptance-tests.yml +++ b/.github/workflows/acceptance-tests.yml @@ -115,3 +115,4 @@ jobs: mongodb_atlas_enable_preview: ${{ vars.MONGODB_ATLAS_ENABLE_PREVIEW }} azure_private_endpoint_region: ${{ vars.AZURE_PRIVATE_ENDPOINT_REGION }} mongodb_atlas_rp_org_id: ${{ inputs.atlas_cloud_env == 'qa' && vars.MONGODB_ATLAS_RP_ORG_ID_QA || vars.MONGODB_ATLAS_RP_ORG_ID_DEV }} + mongodb_atlas_flex_project_id: ${{ inputs.atlas_cloud_env == 'qa' && vars.MONGODB_ATLAS_FLEX_PROJECT_ID_QA || vars.MONGODB_ATLAS_FLEX_PROJECT_ID }} diff --git a/.github/workflows/update-tf-versions.yml b/.github/workflows/update-tf-versions.yml index 00437410d4..45500dbb4d 100644 --- a/.github/workflows/update-tf-versions.yml +++ b/.github/workflows/update-tf-versions.yml @@ -38,13 +38,13 @@ jobs: pull-requests: write steps: - name: Checkout - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 + uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 - name: Update files env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} run: make update-tf-version-in-repository - name: Verify Changed files - uses: tj-actions/verify-changed-files@530d86d0a237225c87beaa000750988f8965ee31 + uses: tj-actions/verify-changed-files@54483a2138ca67989bc40785aa22faee8b085894 id: verify-changed-files - name: Create PR uses: peter-evans/create-pull-request@5e914681df9dc83aa4e4905692ca88beb2f9e91f diff --git a/docs/data-sources/advanced_cluster.md b/docs/data-sources/advanced_cluster.md index 0b69cd8796..be655ab1cd 100644 --- a/docs/data-sources/advanced_cluster.md +++ b/docs/data-sources/advanced_cluster.md @@ -19,7 +19,7 @@ resource "mongodbatlas_advanced_cluster" "example" { replication_specs { region_configs { electable_specs { - instance_size = "M5" + instance_size = "M0" } provider_name = "TENANT" backing_provider_name = "AWS" @@ -35,6 +35,8 @@ data "mongodbatlas_advanced_cluster" "example" { } ``` +**NOTE:** There can only be one M0 cluster per project. + ## Example using latest sharding configurations with independent shard scaling in the cluster ```terraform diff --git a/docs/data-sources/advanced_clusters.md b/docs/data-sources/advanced_clusters.md index c1d26b12e3..4a86b4df40 100644 --- a/docs/data-sources/advanced_clusters.md +++ b/docs/data-sources/advanced_clusters.md @@ -19,7 +19,7 @@ resource "mongodbatlas_advanced_cluster" "example" { replication_specs { region_configs { electable_specs { - instance_size = "M5" + instance_size = "M0" } provider_name = "TENANT" backing_provider_name = "AWS" @@ -34,6 +34,8 @@ data "mongodbatlas_advanced_clusters" "example" { } ``` +**NOTE:** There can only be one M0 cluster per project. 
+ ## Example using latest sharding configurations with independent shard scaling in the cluster ```terraform diff --git a/docs/data-sources/flex_cluster.md b/docs/data-sources/flex_cluster.md new file mode 100644 index 0000000000..89fb1595ac --- /dev/null +++ b/docs/data-sources/flex_cluster.md @@ -0,0 +1,84 @@ +# Data Source: mongodbatlas_flex_cluster + +`mongodbatlas_flex_cluster` describes a flex cluster. + +## Example Usages +```terraform +resource "mongodbatlas_flex_cluster" "example-cluster" { + project_id = var.project_id + name = var.cluster_name + provider_settings = { + backing_provider_name = "AWS" + region_name = "US_EAST_1" + } + termination_protection_enabled = true +} + +data "mongodbatlas_flex_cluster" "example-cluster" { + project_id = var.project_id + name = mongodbatlas_flex_cluster.example-cluster.name +} + +data "mongodbatlas_flex_clusters" "example-clusters" { + project_id = var.project_id +} + +output "mongodbatlas_flex_cluster" { + value = data.mongodbatlas_flex_cluster.example-cluster.name +} + +output "mongodbatlas_flex_clusters_names" { + value = [for cluster in data.mongodbatlas_flex_clusters.example-clusters.results : cluster.name] +} +``` + + +## Schema + +### Required + +- `name` (String) Human-readable label that identifies the instance. +- `project_id` (String) Unique 24-hexadecimal character string that identifies the project. + +### Read-Only + +- `backup_settings` (Attributes) Flex backup configuration (see [below for nested schema](#nestedatt--backup_settings)) +- `cluster_type` (String) Flex cluster topology. +- `connection_strings` (Attributes) Collection of Uniform Resource Locators that point to the MongoDB database. (see [below for nested schema](#nestedatt--connection_strings)) +- `create_date` (String) Date and time when MongoDB Cloud created this instance. This parameter expresses its value in ISO 8601 format in UTC. +- `id` (String) Unique 24-hexadecimal digit string that identifies the instance. +- `mongo_db_version` (String) Version of MongoDB that the instance runs. +- `provider_settings` (Attributes) Group of cloud provider settings that configure the provisioned MongoDB flex cluster. (see [below for nested schema](#nestedatt--provider_settings)) +- `state_name` (String) Human-readable label that indicates the current operating condition of this instance. +- `tags` (Map of String) Map that contains key-value pairs between 1 to 255 characters in length for tagging and categorizing the instance. +- `termination_protection_enabled` (Boolean) Flag that indicates whether termination protection is enabled on the cluster. If set to `true`, MongoDB Cloud won't delete the cluster. If set to `false`, MongoDB Cloud will delete the cluster. +- `version_release_system` (String) Method by which the cluster maintains the MongoDB versions. + + +### Nested Schema for `backup_settings` + +Read-Only: + +- `enabled` (Boolean) Flag that indicates whether backups are performed for this flex cluster. Backup uses [TODO](TODO) for flex clusters. + + + +### Nested Schema for `connection_strings` + +Read-Only: + +- `standard` (String) Public connection string that you can use to connect to this cluster. This connection string uses the mongodb:// protocol. +- `standard_srv` (String) Public connection string that you can use to connect to this flex cluster. This connection string uses the `mongodb+srv://` protocol. 
+ + + +### Nested Schema for `provider_settings` + +Read-Only: + +- `backing_provider_name` (String) Cloud service provider on which MongoDB Cloud provisioned the flex cluster. +- `disk_size_gb` (Number) Storage capacity available to the flex cluster expressed in gigabytes. +- `provider_name` (String) Human-readable label that identifies the cloud service provider. +- `region_name` (String) Human-readable label that identifies the geographic location of your MongoDB flex cluster. The region you choose can affect network latency for clients accessing your databases. For a complete list of region names, see [AWS](https://docs.atlas.mongodb.com/reference/amazon-aws/#std-label-amazon-aws), [GCP](https://docs.atlas.mongodb.com/reference/google-gcp/), and [Azure](https://docs.atlas.mongodb.com/reference/microsoft-azure/). + +For more information see: [MongoDB Atlas API - Flex Cluster](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Flex-Clusters/operation/getFlexCluster) Documentation. diff --git a/docs/data-sources/flex_clusters.md b/docs/data-sources/flex_clusters.md new file mode 100644 index 0000000000..737b532b6c --- /dev/null +++ b/docs/data-sources/flex_clusters.md @@ -0,0 +1,94 @@ +# Data Source: mongodbatlas_flex_clusters + +`mongodbatlas_flex_clusters` returns all flex clusters in a project. + +## Example Usages +```terraform +resource "mongodbatlas_flex_cluster" "example-cluster" { + project_id = var.project_id + name = var.cluster_name + provider_settings = { + backing_provider_name = "AWS" + region_name = "US_EAST_1" + } + termination_protection_enabled = true +} + +data "mongodbatlas_flex_cluster" "example-cluster" { + project_id = var.project_id + name = mongodbatlas_flex_cluster.example-cluster.name +} + +data "mongodbatlas_flex_clusters" "example-clusters" { + project_id = var.project_id +} + +output "mongodbatlas_flex_cluster" { + value = data.mongodbatlas_flex_cluster.example-cluster.name +} + +output "mongodbatlas_flex_clusters_names" { + value = [for cluster in data.mongodbatlas_flex_clusters.example-clusters.results : cluster.name] +} +``` + + +## Schema + +### Required + +- `project_id` (String) Unique 24-hexadecimal digit string that identifies your project. Use the [/groups](#tag/Projects/operation/listProjects) endpoint to retrieve all projects to which the authenticated user has access. + +**NOTE**: Groups and projects are synonymous terms. Your group id is the same as your project id. For existing groups, your group/project id remains the same. The resource and corresponding endpoints use the term groups. + +### Read-Only + +- `results` (Attributes List) List of returned documents that MongoDB Cloud provides when completing this request. (see [below for nested schema](#nestedatt--results)) + + +### Nested Schema for `results` + +Read-Only: + +- `backup_settings` (Attributes) Flex backup configuration (see [below for nested schema](#nestedatt--results--backup_settings)) +- `cluster_type` (String) Flex cluster topology. +- `connection_strings` (Attributes) Collection of Uniform Resource Locators that point to the MongoDB database. (see [below for nested schema](#nestedatt--results--connection_strings)) +- `create_date` (String) Date and time when MongoDB Cloud created this instance. This parameter expresses its value in ISO 8601 format in UTC. +- `id` (String) Unique 24-hexadecimal digit string that identifies the instance. +- `mongo_db_version` (String) Version of MongoDB that the instance runs. 
+- `name` (String) Human-readable label that identifies the instance. +- `project_id` (String) Unique 24-hexadecimal character string that identifies the project. +- `provider_settings` (Attributes) Group of cloud provider settings that configure the provisioned MongoDB flex cluster. (see [below for nested schema](#nestedatt--results--provider_settings)) +- `state_name` (String) Human-readable label that indicates the current operating condition of this instance. +- `tags` (Map of String) Map that contains key-value pairs between 1 to 255 characters in length for tagging and categorizing the instance. +- `termination_protection_enabled` (Boolean) Flag that indicates whether termination protection is enabled on the cluster. If set to `true`, MongoDB Cloud won't delete the cluster. If set to `false`, MongoDB Cloud will delete the cluster. +- `version_release_system` (String) Method by which the cluster maintains the MongoDB versions. + + ### Nested Schema for `results.backup_settings` + +Read-Only: + +- `enabled` (Boolean) Flag that indicates whether backups are performed for this flex cluster. Backup uses [TODO](TODO) for flex clusters. + + + +### Nested Schema for `results.connection_strings` + +Read-Only: + +- `standard` (String) Public connection string that you can use to connect to this cluster. This connection string uses the mongodb:// protocol. +- `standard_srv` (String) Public connection string that you can use to connect to this flex cluster. This connection string uses the `mongodb+srv://` protocol. + + + +### Nested Schema for `results.provider_settings` + +Read-Only: + +- `backing_provider_name` (String) Cloud service provider on which MongoDB Cloud provisioned the flex cluster. +- `disk_size_gb` (Number) Storage capacity available to the flex cluster expressed in gigabytes. +- `provider_name` (String) Human-readable label that identifies the cloud service provider. +- `region_name` (String) Human-readable label that identifies the geographic location of your MongoDB flex cluster. The region you choose can affect network latency for clients accessing your databases. For a complete list of region names, see [AWS](https://docs.atlas.mongodb.com/reference/amazon-aws/#std-label-amazon-aws), [GCP](https://docs.atlas.mongodb.com/reference/google-gcp/), and [Azure](https://docs.atlas.mongodb.com/reference/microsoft-azure/). + +For more information see: [MongoDB Atlas API - Flex Clusters](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/v2/#tag/Streams/operation/listFlexClusters) Documentation. diff --git a/docs/data-sources/privatelink_endpoint_service_serverless.md b/docs/data-sources/privatelink_endpoint_service_serverless.md index 224a17c1e0..bc054fa0fc 100644 --- a/docs/data-sources/privatelink_endpoint_service_serverless.md +++ b/docs/data-sources/privatelink_endpoint_service_serverless.md @@ -1,3 +1,9 @@ +--- +subcategory: "Deprecated" +--- + +**WARNING:** This data source is deprecated and will be removed in March 2025. For more details, see [Migration Guide: Transition out of Serverless Instances and Shared-tier clusters](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide) + # Data Source: privatelink_endpoint_service_serverless `privatelink_endpoint_service_serverless` provides a Serverless PrivateLink Endpoint Service resource. 
diff --git a/docs/data-sources/privatelink_endpoints_service_serverless.md b/docs/data-sources/privatelink_endpoints_service_serverless.md index 6740e49d52..997b84a29f 100644 --- a/docs/data-sources/privatelink_endpoints_service_serverless.md +++ b/docs/data-sources/privatelink_endpoints_service_serverless.md @@ -1,3 +1,9 @@ +--- +subcategory: "Deprecated" +--- + +**WARNING:** This data source is deprecated and will be removed in March 2025. For more details, see [Migration Guide: Transition out of Serverless Instances and Shared-tier clusters](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide) + # Data Source: privatelink_endpoints_service_serverless `privatelink_endpoints_service_serverless` describes the list of all Serverless PrivateLink Endpoint Service resource. diff --git a/docs/data-sources/serverless_instance.md b/docs/data-sources/serverless_instance.md index 48a0be84d7..6776c53b2f 100644 --- a/docs/data-sources/serverless_instance.md +++ b/docs/data-sources/serverless_instance.md @@ -43,9 +43,9 @@ Follow this example to [setup private connection to a serverless instance using * `provider_settings_provider_name` - Cloud service provider that applies to the provisioned the serverless instance. * `provider_settings_region_name` - Human-readable label that identifies the physical location of your MongoDB serverless instance. The region you choose can affect network latency for clients accessing your databases. * `state_name` - Stage of deployment of this serverless instance when the resource made its request. -* `continuous_backup_enabled` - Flag that indicates whether the serverless instance uses Serverless Continuous Backup. +* `continuous_backup_enabled` - (Deprecated) Flag that indicates whether the serverless instance uses Serverless Continuous Backup. * `termination_protection_enabled` - Flag that indicates whether termination protection is enabled on the cluster. If set to true, MongoDB Cloud won't delete the cluster. If set to false, MongoDB Cloud will delete the cluster. -* `auto_indexing` - Flag that indicates whether the serverless instance uses [Serverless Auto Indexing](https://www.mongodb.com/docs/atlas/performance-advisor/auto-index-serverless/). +* `auto_indexing` - (Deprecated) Flag that indicates whether the serverless instance uses [Serverless Auto Indexing](https://www.mongodb.com/docs/atlas/performance-advisor/auto-index-serverless/). * `tags` - Set that contains key-value pairs between 1 to 255 characters in length for tagging and categorizing the cluster. See [below](#tags). ### Tags diff --git a/docs/data-sources/serverless_instances.md b/docs/data-sources/serverless_instances.md index 5dfb38816f..1c7a976d53 100644 --- a/docs/data-sources/serverless_instances.md +++ b/docs/data-sources/serverless_instances.md @@ -34,9 +34,9 @@ data "mongodbatlas_serverless_instances" "data_serverless" { * `provider_settings_provider_name` - Cloud service provider that applies to the provisioned the serverless instance. * `provider_settings_region_name` - Human-readable label that identifies the physical location of your MongoDB serverless instance. The region you choose can affect network latency for clients accessing your databases. * `state_name` - Stage of deployment of this serverless instance when the resource made its request. -* `continuous_backup_enabled` - Flag that indicates whether the serverless instance uses Serverless Continuous Backup. 
+* `continuous_backup_enabled` - (Deprecated) Flag that indicates whether the serverless instance uses Serverless Continuous Backup. * `termination_protection_enabled` - Flag that indicates whether termination protection is enabled on the cluster. If set to true, MongoDB Cloud won't delete the cluster. If set to false, MongoDB Cloud will delete the cluster. -* `auto_indexing` - Flag that indicates whether the serverless instance uses [Serverless Auto Indexing](https://www.mongodb.com/docs/atlas/performance-advisor/auto-index-serverless/). +* `auto_indexing` - (Deprecated) Flag that indicates whether the serverless instance uses [Serverless Auto Indexing](https://www.mongodb.com/docs/atlas/performance-advisor/auto-index-serverless/). * `tags` - Set that contains key-value pairs between 1 to 255 characters in length for tagging and categorizing the cluster. See [below](#tags). ### Tags diff --git a/docs/guides/1.22.0-upgrade-guide.md b/docs/guides/1.22.0-upgrade-guide.md new file mode 100644 index 0000000000..7f0f96cb1d --- /dev/null +++ b/docs/guides/1.22.0-upgrade-guide.md @@ -0,0 +1,30 @@ +--- +page_title: "Upgrade Guide 1.22.0" +--- + +# MongoDB Atlas Provider 1.22.0: Upgrade and Information Guide + +The Terraform MongoDB Atlas Provider version 1.22.0 has a number of new and exciting features. + +## New Resources, Data Sources, and Features + +- You can now manage Flex Clusters with the new `mongodbatlas_flex_cluster` resource and corresponding data sources. The feature is available as a preview feature. To learn more, please review `mongodbatlas_flex_cluster` [resource documentation](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/flex_cluster). + +## Deprecations and removals + +- `continuous_backup_enabled` attribute is deprecated in `mongodbatlas_serverless_instance` resource and data sources. If your workload requires this feature, we recommend switching to Atlas Dedicated clusters. To learn more, see the [Migration Guide: Transition out of Serverless Instances and Shared-tier clusters](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide). +- `auto_indexing` attribute is deprecated in `mongodbatlas_serverless_instance` resource and data sources. If your workload requires this feature, we recommend switching to Atlas Dedicated clusters. To learn more, please see the [Migration Guide: Transition out of Serverless Instances and Shared-tier clusters](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide). +- `mongodbatlas_privatelink_endpoint_serverless` resource is deprecated. If your workload requires this feature, we recommend switching to Atlas Dedicated clusters. To learn more, please see the [Migration Guide: Transition out of Serverless Instances and Shared-tier clusters](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide). +- `mongodbatlas_privatelink_endpoint_service_serverless` resource, `mongodbatlas_privatelink_endpoint_service_serverless` and `mongodbatlas_privatelink_endpoints_service_serverless` data sources are deprecated. If your workload requires this feature, we recommend switching to Atlas Dedicated clusters. To learn more, please see the [Migration Guide: Transition out of Serverless Instances and Shared-tier clusters](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide). 
+ +## Terraform MongoDB Atlas modules + +You can now leverage our [Terraform Modules](https://registry.terraform.io/namespaces/terraform-mongodbatlas-modules) to easily get started with MongoDB Atlas and critical features like [Push-based log export](https://registry.terraform.io/modules/terraform-mongodbatlas-modules/push-based-log-export/mongodbatlas/latest), [Private Endpoints](https://registry.terraform.io/modules/terraform-mongodbatlas-modules/private-endpoint/mongodbatlas/latest), etc. + +## Helpful Links + +* [Report bugs](https://github.com/mongodb/terraform-provider-mongodbatlas/issues) + +* [Request Features](https://feedback.mongodb.com/forums/924145-atlas?category_id=370723) + +* [Contact Support](https://docs.atlas.mongodb.com/support/) covered by MongoDB Atlas support plans, Developer and above. diff --git a/docs/guides/flex-cluster-to-dedicated-cluster-migraton-guide.md b/docs/guides/flex-cluster-to-dedicated-cluster-migraton-guide.md new file mode 100644 index 0000000000..466eb03107 --- /dev/null +++ b/docs/guides/flex-cluster-to-dedicated-cluster-migraton-guide.md @@ -0,0 +1,65 @@ +--- +page_title: "Migration Guide: Flex Cluster to Dedicated Cluster" +--- + +# Migration Guide: Flex Cluster to Dedicated Cluster + +**Objective**: This guide explains how to replace the `mongodbatlas_flex_cluster` resource with the `mongodbatlas_advanced_cluster` resource. + +Currently, the only method to migrate your Flex cluster to a Dedicated cluster is via the Atlas UI. + + + +## Best Practices Before Migrating +Before doing any migration, create a backup of your [Terraform state file](https://developer.hashicorp.com/terraform/cli/commands/state). + +### Procedure + + +See [Modify a Cluster](https://www.mongodb.com/docs/atlas/scale-cluster/) for how to migrate via the Atlas UI. + +Complete the following procedure to resolve the configuration drift in Terraform. This does not affect the underlying cluster infrastructure. + +1. Find the import ID of the new Dedicated cluster that your Flex cluster migrated to: `{PROJECT_ID}-{CLUSTER_NAME}`, such as `664619d870c247237f4b86a6-clusterName` +2. Add an import block to one of your `.tf` files: + ```terraform + import { + to = mongodbatlas_advanced_cluster.this + id = "664619d870c247237f4b86a6-clusterName" # from step 1 + } + ``` + 3. Run `terraform plan -generate-config-out=adv_cluster.tf`. This should generate an `adv_cluster.tf` file. + 4. Run `terraform apply`. You should see the resource imported: `Apply complete! Resources: 1 imported, 0 added, 0 changed, 0 destroyed.` + 5. Remove the "default" fields. Many fields of this resource are optional. Look for fields with a `null` or `0` value or blocks you didn't specify before. Required fields are outlined in the example resource block below: + ```terraform + resource "mongodbatlas_advanced_cluster" "this" { + cluster_type = "REPLICASET" + name = "clusterName" + project_id = "664619d870c247237f4b86a6" + replication_specs { + zone_name = "Zone 1" + region_configs { + priority = 7 + provider_name = "AWS" + region_name = "EU_WEST_1" + analytics_specs { + instance_size = "M10" + node_count = 0 + } + electable_specs { + instance_size = "M10" + node_count = 3 + } + } + } + } + ``` + 6. Re-use existing [Terraform expressions](https://developer.hashicorp.com/terraform/language/expressions). All fields in the generated configuration have static values.
Look in your previous configuration for: + - variables, for example: `var.project_id` + - Terraform keywords, for example: `for_each`, `count`, and `depends_on` + 7. Re-run `terraform apply` to ensure you have no planned changes: `No changes. Your infrastructure matches the configuration.` + 8. Update the references from your previous cluster resource: `mongodbatlas_flex_cluster.this.X` to the new `mongodbatlas_advanced_cluster.this.X`. + 9. Update any data source blocks to refer to `mongodbatlas_advanced_cluster`. + 10. Replace your existing clusters with the ones from `adv_cluster.tf` and run `terraform state rm mongodbatlas_flex_cluster.this`. Without this step, Terraform creates a plan to delete your existing cluster. + 11. Remove the import block created in step 2. + 12. Re-run `terraform apply` to ensure you have no planned changes: `No changes. Your infrastructure matches the configuration.` diff --git a/docs/guides/serverless-shared-migration-guide.md b/docs/guides/serverless-shared-migration-guide.md new file mode 100644 index 0000000000..04c0c7a859 --- /dev/null +++ b/docs/guides/serverless-shared-migration-guide.md @@ -0,0 +1,267 @@ +--- +page_title: "Migration Guide: Transition out of Serverless Instances and Shared-tier clusters" +--- + +# Migration Guide: Transition out of Serverless Instances and Shared-tier clusters + +The goal of this guide is to help users transition from Serverless Instances and Shared-tier clusters (M2/M5) to Free, Flex or Dedicated Clusters. + +Starting in January 2025 or later, all Shared-tier clusters (in both `mongodbatlas_cluster` and `mongodbatlas_advanced_cluster`) will automatically convert to Flex clusters. Similarly, in March 2025 all Serverless instances (`mongodbatlas_serverless_instance`) will be converted into Free/Flex/Dedicated clusters, [depending on your existing configuration](https://www.mongodb.com/docs/atlas/flex-migration/). +If a Serverless instance has $0 MRR, it automatically converts into a Free cluster. Otherwise, if it does not fit the constraints of a Flex cluster, it converts into a Dedicated cluster, resulting in downtime and workload disruption; if it does fit those constraints, it converts into a Flex cluster. +Some of these conversions will result in configuration drift in Terraform. + + +You can migrate from Serverless instances and Shared-tier clusters manually before autoconversion. + +**NOTE:** We recommend waiting until March 2025 or later for Serverless instances and Shared-tier clusters to autoconvert. + +For Shared-tier clusters, we are working on improving the user experience so that Terraform Atlas Provider users need to make even fewer changes to their configurations than what is shown below. More updates to come over the next few months. For more details, reach out to zuhair.ahmed@mongodb.com. + +### Jump to: +- [Shared-tier to Flex](#from-shared-tier-clusters-to-flex) +- [Serverless to Free](#from-serverless-to-free) +- [Serverless to Flex](#from-serverless-to-flex) +- [Serverless to Dedicated](#from-serverless-to-dedicated) + +## From Shared-tier clusters to Flex + +### Post-Autoconversion Migration Procedure + +Shared-tier clusters will automatically convert in January 2025 or later to Flex clusters in Atlas, retaining all data. We recommend migrating to the `mongodbatlas_flex_cluster` resource once the autoconversion is done. + +The following steps explain how to move your existing Shared-tier cluster resource to the new `mongodbatlas_flex_cluster` resource without affecting the underlying cluster infrastructure: + +1.
Find the import IDs of the Flex clusters: `{PROJECT_ID}-{CLUSTER_NAME}`, such as `664619d870c247237f4b86a6-flexClusterName` +2. Add an import block per cluster to one of your `.tf` files: + ```terraform + import { + to = mongodbatlas_flex_cluster.this + id = "664619d870c247237f4b86a6-flexClusterName" # from step 1 + } + ``` +3. Run `terraform plan -generate-config-out=flex_cluster.tf`. This should generate a `flex_cluster.tf` file with your Flex cluster in it. +4. Run `terraform apply`. You should see the resource(s) imported: `Apply complete! Resources: 1 imported, 0 added, 0 changed, 0 destroyed.` +5. Remove the "default" fields. Many fields of this resource are optional. Look for fields with a `null` or `0` value. +6. Re-use existing [Terraform expressions](https://developer.hashicorp.com/terraform/language/expressions). All fields in the generated configuration have static values. Look in your previous configuration for: + - variables, for example: `var.project_id` + - Terraform keywords, for example: `for_each`, `count`, and `depends_on` +7. Update the references from your previous cluster resource: `mongodbatlas_advanced_cluster.this.X` or `mongodbatlas_cluster.this.X` to the new `mongodbatlas_flex_cluster.this.X`. +8. Update any shared-tier data source blocks to refer to `mongodbatlas_flex_cluster`. +9. Replace your existing clusters with the ones from `flex_cluster.tf` and run + + `terraform state rm mongodbatlas_advanced_cluster.this` + + or `terraform state rm mongodbatlas_cluster.this`. + + Without this step, Terraform creates a plan to delete your existing cluster. + +10. Remove the import block created in step 2. +11. Re-run `terraform plan` to ensure you have no planned changes: `No changes. Your infrastructure matches the configuration.` + +### Pre-Autoconversion Migration Procedure + +**NOTE:** We recommend waiting until January 2025 or later for Shared-tier clusters to autoconvert. Manual migration can cause downtime and workload disruption. + +1. Create a new Flex Cluster directly from your `.tf` file, e.g.: + + ```terraform + resource "mongodbatlas_flex_cluster" "this" { + project_id = var.project_id + name = "flexClusterName" + provider_settings = { + backing_provider_name = "AWS" + region_name = "US_EAST_1" + } + termination_protection_enabled = true + } + ``` +2. Run `terraform apply` to create the new resource. +3. Migrate data from your Shared-tier cluster to the Flex cluster using `mongodump` and `mongorestore`. + + Please see the following guide on how to retrieve data from one cluster and store it in another cluster: [Convert a Serverless Instance to a Dedicated Cluster](https://www.mongodb.com/docs/atlas/tutorial/convert-serverless-to-dedicated/) + + Verify that your data is present within the Flex cluster before proceeding. +4. Delete the Shared-tier cluster by running a destroy command against it. + + For *mongodbatlas_advanced_cluster*: + + `terraform destroy -target=mongodbatlas_advanced_cluster.this` + + For *mongodbatlas_cluster*: + + `terraform destroy -target=mongodbatlas_cluster.this` + + 5. Remove the resource block for the Shared-tier cluster from your `.tf` file. + +## From Serverless to Free + +**Please ensure your Serverless instance meets the following requirements to migrate to Free:** +- $0 MRR + +### Post-Autoconversion Migration Procedure + +If your Serverless Instance has $0 MRR, it will automatically convert in March 2025 into a Free cluster in Atlas, retaining all data.
+ +The following steps resolve the configuration drift in Terraform without affecting the underlying cluster infrastructure: + +1. Find the import IDs of the Free clusters: `{PROJECT_ID}-{CLUSTER_NAME}`, such as `664619d870c247237f4b86a6-freeClusterName` +2. Add an import block per cluster to one of your `.tf` files: + ```terraform + import { + to = mongodbatlas_advanced_cluster.this + id = "664619d870c247237f4b86a6-freeClusterName" # from step 1 + } + ``` +3. Run `terraform plan -generate-config-out=free_cluster.tf`. This should generate a `free_cluster.tf` file with your Free cluster in it. +4. Run `terraform apply`. You should see the resource(s) imported: `Apply complete! Resources: 1 imported, 0 added, 0 changed, 0 destroyed.` +5. Remove the "default" fields. Many fields of this resource are optional. Look for fields with a `null` or `0` value. +6. Re-use existing [Terraform expressions](https://developer.hashicorp.com/terraform/language/expressions). All fields in the generated configuration have static values. Look in your previous configuration for: + - variables, for example: `var.project_id` + - Terraform keywords, for example: `for_each`, `count`, and `depends_on` +7. Update the references from your previous cluster resource: `mongodbatlas_serverless_instance.this.X` to the new `mongodbatlas_advanced_cluster.this.X`. +8. Update any shared-tier data source blocks to refer to `mongodbatlas_advanced_cluster`. +9. Replace your existing clusters with the ones from `free_cluster.tf` and run `terraform state rm mongodbatlas_serverless_instance.this`. Without this step, Terraform creates a plan to delete your existing cluster. +10. Remove the import block created in step 2. +11. Re-run `terraform plan` to ensure you have no planned changes: `No changes. Your infrastructure matches the configuration.` + +### Pre-Autoconversion Migration Procedure + +**NOTE:** We recommend waiting until March 2025 or later for Serverless instances to autoconvert. Manual migration can cause downtime and workload disruption. + +1. Create a new Free Cluster directly from your `.tf` file, e.g.: + + ```terraform + resource "mongodbatlas_advanced_cluster" "this" { + project_id = var.atlas_project_id + name = "freeClusterName" + cluster_type = "REPLICASET" + + replication_specs { + region_configs { + electable_specs { + instance_size = "M0" + } + provider_name = "TENANT" + backing_provider_name = "AWS" + region_name = "US_EAST_1" + priority = 7 + } + } + } + ``` +2. Run `terraform apply` to create the new resource. +3. Migrate data from your Serverless Instance to the Free cluster using `mongodump` and `mongorestore`. + + Please see the following guide on how to retrieve data from one cluster and store it in another cluster: [Convert a Serverless Instance to a Dedicated Cluster](https://www.mongodb.com/docs/atlas/tutorial/convert-serverless-to-dedicated/) + + Verify that your data is present within the Free cluster before proceeding. +4. Delete the Serverless Instance by running a destroy command against it: + + `terraform destroy -target=mongodbatlas_serverless_instance.this` + + 5. Remove the resource block for the Serverless Instance from your `.tf` file. + +## From Serverless to Flex + +**Please ensure your Serverless instance meets the following requirements to migrate to Flex:** +- <= 5GB of data +- no privatelink or continuous backup +- < 500 ops/sec consistently.
+ +### Post-Autoconversion Migration Procedure + +If your Serverless Instance fits the constraints of a Flex cluster, it will automatically convert in March 2025 into a Flex cluster in Atlas, retaining all data. We recommend migrating to the `mongodbatlas_flex_cluster` resource once the autoconversion is done. + +The following steps explain how to move your existing Serverless instance resource to the new `mongodbatlas_flex_cluster` resource without affecting the underlying cluster infrastructure: + +1. Find the import IDs of the Flex clusters: `{PROJECT_ID}-{CLUSTER_NAME}`, such as `664619d870c247237f4b86a6-flexClusterName` +2. Add an import block per cluster to one of your `.tf` files: + ```terraform + import { + to = mongodbatlas_flex_cluster.this + id = "664619d870c247237f4b86a6-flexClusterName" # from step 1 + } + ``` +3. Run `terraform plan -generate-config-out=flex_cluster.tf`. This should generate a `flex_cluster.tf` file with your Flex cluster in it. +4. Run `terraform apply`. You should see the resource(s) imported: `Apply complete! Resources: 1 imported, 0 added, 0 changed, 0 destroyed.` +5. Remove the "default" fields. Many fields of this resource are optional. Look for fields with a `null` or `0` value. +6. Re-use existing [Terraform expressions](https://developer.hashicorp.com/terraform/language/expressions). All fields in the generated configuration have static values. Look in your previous configuration for: + - variables, for example: `var.project_id` + - Terraform keywords, for example: `for_each`, `count`, and `depends_on` +7. Update the references from your previous cluster resource: `mongodbatlas_serverless_instance.this.X` to the new `mongodbatlas_flex_cluster.this.X`. +8. Update any shared-tier data source blocks to refer to `mongodbatlas_flex_cluster`. +9. Replace your existing clusters with the ones from `flex_cluster.tf` and run `terraform state rm mongodbatlas_serverless_instance.this`. Without this step, Terraform creates a plan to delete your existing cluster. +10. Remove the import block created in step 2. +11. Re-run `terraform plan` to ensure you have no planned changes: `No changes. Your infrastructure matches the configuration.` + +### Pre-Autoconversion Migration Procedure + +**NOTE:** We recommend waiting until March 2025 or later for Serverless instances to autoconvert. Manual migration can cause downtime and workload disruption. + +1. Create a new Flex Cluster directly from your `.tf` file, e.g.: + + ```terraform + resource "mongodbatlas_flex_cluster" "this" { + project_id = var.project_id + name = "flexClusterName" + provider_settings = { + backing_provider_name = "AWS" + region_name = "US_EAST_1" + } + termination_protection_enabled = true + } + ``` +2. Run `terraform apply` to create the new resource. +3. Migrate data from your Serverless Instance to the Flex cluster using `mongodump` and `mongorestore` (a minimal example of these commands is sketched after this procedure). + + Please see the following guide on how to retrieve data from one cluster and store it in another cluster: [Convert a Serverless Instance to a Dedicated Cluster](https://www.mongodb.com/docs/atlas/tutorial/convert-serverless-to-dedicated/) + + Verify that your data is present within the Flex cluster before proceeding. +4. Delete the Serverless Instance by running a destroy command against it: + + `terraform destroy -target=mongodbatlas_serverless_instance.this` + + 5. You may now safely remove the resource block for the Serverless Instance from your `.tf` file.
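+
+The exact `mongodump`/`mongorestore` invocation used in step 3 depends on your authentication setup and data size; the following is only a minimal sketch that assumes hypothetical SRV connection strings, a database user, and a local archive file name. Replace these values with your own:
+
+```bash
+# Dump all databases from the Serverless instance into a local archive file.
+mongodump --uri="mongodb+srv://user:password@serverless-instance.example.mongodb.net" --archive=serverless-backup.archive
+# Restore the archive into the new Flex cluster; --drop replaces any existing collections with the same names.
+mongorestore --uri="mongodb+srv://user:password@flex-cluster.example.mongodb.net" --archive=serverless-backup.archive --drop
+```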
+ +## From Serverless to Dedicated + +**Please note that your Serverless instance will need to migrate to Dedicated if it meets any of the following criteria:** +- \>= 5GB of data +- needs privatelink or continuous backup +- \> 500 ops/sec consistently. + +You cannot migrate from Serverless to Dedicated using the Terraform provider. + +### Pre-Autoconversion Migration Procedure + +**NOTE:** In early 2025, we will release a UI-based tool for migrating your workloads from Serverless instances to Dedicated clusters. This tool will ensure correct migration with little downtime. You won't need to change connection strings. + +To migrate from Serverless to Dedicated prior to early 2025, please see the following guide: [Convert a Serverless Instance to a Dedicated Cluster](https://www.mongodb.com/docs/atlas/tutorial/convert-serverless-to-dedicated/). **NOTE:** Manual migration can cause downtime and workload disruption. + +### Post-Autoconversion Migration Procedure + +**NOTE:** Auto-conversion from Serverless to Dedicated will cause downtime and workload disruption. This guide is only valid after the auto-conversion is done. + +If your Serverless Instance doesn't fit the constraints of a Flex cluster, it will automatically convert in March 2025 into a Dedicated cluster in Atlas, retaining all data. + +The following steps resolve the configuration drift in Terraform without affecting the underlying cluster infrastructure: + +1. Find the import IDs of the Dedicated clusters: `{PROJECT_ID}-{CLUSTER_NAME}`, such as `664619d870c247237f4b86a6-advancedClusterName` +2. Add an import block per cluster to one of your `.tf` files: + ```terraform + import { + to = mongodbatlas_advanced_cluster.this + id = "664619d870c247237f4b86a6-advancedClusterName" # from step 1 + } + ``` +3. Run `terraform plan -generate-config-out=dedicated_cluster.tf`. This should generate a `dedicated_cluster.tf` file with your Dedicated cluster in it. +4. Run `terraform apply`. You should see the resource(s) imported: `Apply complete! Resources: 1 imported, 0 added, 0 changed, 0 destroyed.` +5. Remove the "default" fields. Many fields of this resource are optional. Look for fields with a `null` or `0` value. +6. Re-use existing [Terraform expressions](https://developer.hashicorp.com/terraform/language/expressions). All fields in the generated configuration have static values. Look in your previous configuration for: + - variables, for example: `var.project_id` + - Terraform keywords, for example: `for_each`, `count`, and `depends_on` +7. Update the references from your previous cluster resource: `mongodbatlas_serverless_instance.this.X` to the new `mongodbatlas_advanced_cluster.this.X`. +8. Update any shared-tier data source blocks to refer to `mongodbatlas_advanced_cluster`. +9. Replace your existing clusters with the ones from `dedicated_cluster.tf` and run `terraform state rm mongodbatlas_serverless_instance.this`. Without this step, Terraform creates a plan to delete your existing cluster. +10. Remove the import block created in step 2. +11. Re-run `terraform plan` to ensure you have no planned changes: `No changes.
Your infrastructure matches the configuration.` diff --git a/docs/resources/advanced_cluster.md b/docs/resources/advanced_cluster.md index a69410b922..cc94cd4cad 100644 --- a/docs/resources/advanced_cluster.md +++ b/docs/resources/advanced_cluster.md @@ -56,7 +56,7 @@ resource "mongodbatlas_advanced_cluster" "test" { replication_specs { region_configs { electable_specs { - instance_size = "M5" + instance_size = "M0" } provider_name = "TENANT" backing_provider_name = "AWS" @@ -67,6 +67,8 @@ resource "mongodbatlas_advanced_cluster" "test" { } ``` +**NOTE:** There can only be one M0 cluster per project. + **NOTE**: Upgrading the shared tier is supported. Any change from a shared tier cluster (a tenant) to a different instance size will be considered a tenant upgrade. When upgrading from the shared tier, change the `provider_name` from "TENANT" to your preferred provider (AWS, GCP or Azure) and remove the variable `backing_provider_name`. See the [Example Tenant Cluster Upgrade](#Example-Tenant-Cluster-Upgrade) below. You can upgrade a shared tier cluster only to a single provider on an M10-tier cluster or greater. When upgrading from the shared tier, *only* the upgrade changes will be applied. This helps avoid a corrupt state file in the event that the upgrade succeeds but subsequent updates fail within the same `terraform apply`. To apply additional cluster changes, run a secondary `terraform apply` after the upgrade succeeds. diff --git a/docs/resources/cluster.md b/docs/resources/cluster.md index 25b7bd76e6..87fa66f929 100644 --- a/docs/resources/cluster.md +++ b/docs/resources/cluster.md @@ -187,19 +187,6 @@ resource "mongodbatlas_cluster" "cluster-test" { } } ``` -### Example AWS Shared Tier (M2/M5) cluster -```terraform -resource "mongodbatlas_cluster" "cluster-test" { - project_id = "" - name = "cluster-test-global" - - # Provider Settings "block" - provider_name = "TENANT" - backing_provider_name = "AWS" - provider_region_name = "US_EAST_1" - provider_instance_size_name = "M2" -} -``` ### Example AWS Free Tier cluster ```terraform resource "mongodbatlas_cluster" "cluster-test" { diff --git a/docs/resources/flex_cluster.md b/docs/resources/flex_cluster.md new file mode 100644 index 0000000000..01aab188bd --- /dev/null +++ b/docs/resources/flex_cluster.md @@ -0,0 +1,97 @@ +# Resource: mongodbatlas_flex_cluster + +`mongodbatlas_flex_cluster` provides a Flex Cluster resource. The resource lets you create, update, delete and import a flex cluster. + +## Example Usages + +```terraform +resource "mongodbatlas_flex_cluster" "example-cluster" { + project_id = var.project_id + name = var.cluster_name + provider_settings = { + backing_provider_name = "AWS" + region_name = "US_EAST_1" + } + termination_protection_enabled = true +} + +data "mongodbatlas_flex_cluster" "example-cluster" { + project_id = var.project_id + name = mongodbatlas_flex_cluster.example-cluster.name +} + +data "mongodbatlas_flex_clusters" "example-clusters" { + project_id = var.project_id +} + +output "mongodbatlas_flex_cluster" { + value = data.mongodbatlas_flex_cluster.example-cluster.name +} + +output "mongodbatlas_flex_clusters_names" { + value = [for cluster in data.mongodbatlas_flex_clusters.example-clusters.results : cluster.name] +} +``` + + +## Schema + +### Required + +- `name` (String) Human-readable label that identifies the instance. +- `project_id` (String) Unique 24-hexadecimal character string that identifies the project. 
+- `provider_settings` (Attributes) Group of cloud provider settings that configure the provisioned MongoDB flex cluster. (see [below for nested schema](#nestedatt--provider_settings)) + +### Optional + +- `tags` (Map of String) Map that contains key-value pairs between 1 to 255 characters in length for tagging and categorizing the instance. +- `termination_protection_enabled` (Boolean) Flag that indicates whether termination protection is enabled on the cluster. If set to `true`, MongoDB Cloud won't delete the cluster. If set to `false`, MongoDB Cloud will delete the cluster. + +### Read-Only + +- `backup_settings` (Attributes) Flex backup configuration (see [below for nested schema](#nestedatt--backup_settings)) +- `cluster_type` (String) Flex cluster topology. +- `connection_strings` (Attributes) Collection of Uniform Resource Locators that point to the MongoDB database. (see [below for nested schema](#nestedatt--connection_strings)) +- `create_date` (String) Date and time when MongoDB Cloud created this instance. This parameter expresses its value in ISO 8601 format in UTC. +- `id` (String) Unique 24-hexadecimal digit string that identifies the instance. +- `mongo_db_version` (String) Version of MongoDB that the instance runs. +- `state_name` (String) Human-readable label that indicates the current operating condition of this instance. +- `version_release_system` (String) Method by which the cluster maintains the MongoDB versions. + + +### Nested Schema for `provider_settings` + +Required: + +- `backing_provider_name` (String) Cloud service provider on which MongoDB Cloud provisioned the flex cluster. +- `region_name` (String) Human-readable label that identifies the geographic location of your MongoDB flex cluster. The region you choose can affect network latency for clients accessing your databases. For a complete list of region names, see [AWS](https://docs.atlas.mongodb.com/reference/amazon-aws/#std-label-amazon-aws), [GCP](https://docs.atlas.mongodb.com/reference/google-gcp/), and [Azure](https://docs.atlas.mongodb.com/reference/microsoft-azure/). + +Read-Only: + +- `disk_size_gb` (Number) Storage capacity available to the flex cluster expressed in gigabytes. +- `provider_name` (String) Human-readable label that identifies the cloud service provider. + + + +### Nested Schema for `backup_settings` + +Read-Only: + +- `enabled` (Boolean) Flag that indicates whether backups are performed for this flex cluster. Backup uses [TODO](TODO) for flex clusters. + + + +### Nested Schema for `connection_strings` + +Read-Only: + +- `standard` (String) Public connection string that you can use to connect to this cluster. This connection string uses the `mongodb://` protocol. +- `standard_srv` (String) Public connection string that you can use to connect to this flex cluster. This connection string uses the `mongodb+srv://` protocol. + +## Import +You can import the Flex Cluster resource by using the Project ID and Flex Cluster name, in the format `PROJECT_ID-FLEX_CLUSTER_NAME`. For example: +``` +$ terraform import mongodbatlas_flex_cluster.test 6117ac2fe2a3d04ed27a987v-yourFlexClusterName +``` + +For more information see: [MongoDB Atlas API - Flex Cluster](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Flex-Clusters/operation/createFlexcluster) Documentation.
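+
+As an alternative to the `terraform import` CLI command shown above, Terraform 1.5 and later also support declarative import via an `import` block. A minimal sketch, reusing the same hypothetical project ID and cluster name as the example above:
+
+```terraform
+import {
+  to = mongodbatlas_flex_cluster.test
+  id = "6117ac2fe2a3d04ed27a987v-yourFlexClusterName" # PROJECT_ID-FLEX_CLUSTER_NAME
+}
+```
+
+After adding the block, run `terraform plan` (optionally with `-generate-config-out=<file>.tf` to generate the resource configuration) and then `terraform apply` to complete the import.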
diff --git a/docs/resources/privatelink_endpoint_serverless.md b/docs/resources/privatelink_endpoint_serverless.md index d5edd9dc3e..bcd41d0238 100644 --- a/docs/resources/privatelink_endpoint_serverless.md +++ b/docs/resources/privatelink_endpoint_serverless.md @@ -1,3 +1,9 @@ +--- +subcategory: "Deprecated" +--- + +**WARNING:** This resource is deprecated and will be removed in March 2025. For more details see [Migration Guide: Transition out of Serverless Instances and Shared-tier clusters](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide) + # Resource: privatelink_endpoint_serverless `privatelink_endpoint_serverless` Provides a Serverless PrivateLink Endpoint resource. diff --git a/docs/resources/privatelink_endpoint_service_serverless.md b/docs/resources/privatelink_endpoint_service_serverless.md index c541969aa0..8cf2ec9211 100644 --- a/docs/resources/privatelink_endpoint_service_serverless.md +++ b/docs/resources/privatelink_endpoint_service_serverless.md @@ -1,3 +1,9 @@ +--- +subcategory: "Deprecated" +--- + +**WARNING:** This resource is deprecated and will be removed in March 2025. For more details see [Migration Guide: Transition out of Serverless Instances and Shared-tier clusters](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide) + # Resource: privatelink_endpoint_service_serverless `privatelink_endpoint_service_serverless` Provides a Serverless PrivateLink Endpoint Service resource. diff --git a/docs/resources/serverless_instance.md b/docs/resources/serverless_instance.md index 03b283632f..df3daf7c34 100644 --- a/docs/resources/serverless_instance.md +++ b/docs/resources/serverless_instance.md @@ -36,9 +36,9 @@ Follow this example to [setup private connection to a serverless instance using * `provider_settings_provider_name` - (Required) Cloud service provider that applies to the provisioned the serverless instance. * `provider_settings_region_name` - (Required) Human-readable label that identifies the physical location of your MongoDB serverless instance. The region you choose can affect network latency for clients accessing your databases. -* `continuous_backup_enabled` - (Optional) Flag that indicates whether the serverless instance uses [Serverless Continuous Backup](https://www.mongodb.com/docs/atlas/configure-serverless-backup). If this parameter is false or not used, the serverless instance uses [Basic Backup](https://www.mongodb.com/docs/atlas/configure-serverless-backup). +* `continuous_backup_enabled` - (Deprecated, Optional) Flag that indicates whether the serverless instance uses [Serverless Continuous Backup](https://www.mongodb.com/docs/atlas/configure-serverless-backup). If this parameter is false or not used, the serverless instance uses [Basic Backup](https://www.mongodb.com/docs/atlas/configure-serverless-backup). * `termination_protection_enabled` - Flag that indicates whether termination protection is enabled on the cluster. If set to true, MongoDB Cloud won't delete the cluster. If set to false, MongoDB Cloud will delete the cluster. -* `auto_indexing` - (Optional) Flag that indicates whether the serverless instance uses [Serverless Auto Indexing](https://www.mongodb.com/docs/atlas/performance-advisor/auto-index-serverless/). This parameter defaults to true.
+* `auto_indexing` - (Deprecated, Optional) Flag that indicates whether the serverless instance uses [Serverless Auto Indexing](https://www.mongodb.com/docs/atlas/performance-advisor/auto-index-serverless/). This parameter defaults to true. * `tags` - (Optional) Set that contains key-value pairs between 1 to 255 characters in length for tagging and categorizing the cluster. See [below](#tags). ### Tags diff --git a/examples/mongodbatlas_flex_cluster/README.md b/examples/mongodbatlas_flex_cluster/README.md new file mode 100644 index 0000000000..bdf65398b8 --- /dev/null +++ b/examples/mongodbatlas_flex_cluster/README.md @@ -0,0 +1,8 @@ +# MongoDB Atlas Provider -- Atlas Flex Cluster +This example creates one flex cluster in a project. + +Variables Required to be set: +- `public_key`: Atlas public key +- `private_key`: Atlas private key +- `project_id`: Project ID where flex cluster will be created +- `cluster_name`: Name of flex cluster that will be created \ No newline at end of file diff --git a/examples/mongodbatlas_flex_cluster/main.tf b/examples/mongodbatlas_flex_cluster/main.tf new file mode 100644 index 0000000000..a08df1eaf3 --- /dev/null +++ b/examples/mongodbatlas_flex_cluster/main.tf @@ -0,0 +1,26 @@ +resource "mongodbatlas_flex_cluster" "example-cluster" { + project_id = var.project_id + name = var.cluster_name + provider_settings = { + backing_provider_name = "AWS" + region_name = "US_EAST_1" + } + termination_protection_enabled = true +} + +data "mongodbatlas_flex_cluster" "example-cluster" { + project_id = var.project_id + name = mongodbatlas_flex_cluster.example-cluster.name +} + +data "mongodbatlas_flex_clusters" "example-clusters" { + project_id = var.project_id +} + +output "mongodbatlas_flex_cluster" { + value = data.mongodbatlas_flex_cluster.example-cluster.name +} + +output "mongodbatlas_flex_clusters_names" { + value = [for cluster in data.mongodbatlas_flex_clusters.example-clusters.results : cluster.name] +} diff --git a/examples/mongodbatlas_flex_cluster/provider.tf b/examples/mongodbatlas_flex_cluster/provider.tf new file mode 100644 index 0000000000..e5aeda8033 --- /dev/null +++ b/examples/mongodbatlas_flex_cluster/provider.tf @@ -0,0 +1,4 @@ +provider "mongodbatlas" { + public_key = var.public_key + private_key = var.private_key +} \ No newline at end of file diff --git a/examples/mongodbatlas_flex_cluster/variables.tf b/examples/mongodbatlas_flex_cluster/variables.tf new file mode 100644 index 0000000000..5dbb16a6af --- /dev/null +++ b/examples/mongodbatlas_flex_cluster/variables.tf @@ -0,0 +1,17 @@ +variable "public_key" { + description = "Public API key to authenticate to Atlas" + type = string +} +variable "private_key" { + description = "Private API key to authenticate to Atlas" + type = string +} +variable "project_id" { + description = "Atlas Project ID" + type = string +} +variable "cluster_name" { + description = "Atlas cluster name" + type = string + default = "string" +} \ No newline at end of file diff --git a/examples/mongodbatlas_privatelink_endpoint/aws/serverless-instance/versions.tf b/examples/mongodbatlas_flex_cluster/versions.tf similarity index 57% rename from examples/mongodbatlas_privatelink_endpoint/aws/serverless-instance/versions.tf rename to examples/mongodbatlas_flex_cluster/versions.tf index 6b9f728948..3faf38df1a 100644 --- a/examples/mongodbatlas_privatelink_endpoint/aws/serverless-instance/versions.tf +++ b/examples/mongodbatlas_flex_cluster/versions.tf @@ -1,12 +1,8 @@ terraform { required_providers { - aws = { - source = 
"hashicorp/aws" - version = "~> 5.0" - } mongodbatlas = { source = "mongodb/mongodbatlas" - version = "~> 1.0" + version = "~> 1.21.3" } } required_version = ">= 1.0" diff --git a/examples/mongodbatlas_privatelink_endpoint/aws/serverless-instance/README.md b/examples/mongodbatlas_privatelink_endpoint/aws/serverless-instance/README.md deleted file mode 100644 index 3235900b62..0000000000 --- a/examples/mongodbatlas_privatelink_endpoint/aws/serverless-instance/README.md +++ /dev/null @@ -1,116 +0,0 @@ -# Example - AWS and Atlas PrivateLink with Terraform - -Setup private connection to a [MongoDB Atlas Serverless Instance](https://www.mongodb.com/use-cases/serverless) utilizing [Amazon Virtual Private Cloud (aws vpc)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html). - -## Dependencies - -* Terraform v0.13 -* An AWS account - provider.aws: version = "~> 4" -* A MongoDB Atlas account - provider.mongodbatlas: version = "~> 1.8" - -## Usage - -**1\. Ensure your AWS and MongoDB Atlas credentials are set up.** - -This can be done using environment variables: - -```bash -export MONGODB_ATLAS_PUBLIC_KEY="xxxx" -export MONGODB_ATLAS_PRIVATE_KEY="xxxx" -``` - -``` bash -$ export AWS_SECRET_ACCESS_KEY='your secret key' -$ export AWS_ACCESS_KEY_ID='your key id' -``` - -... or the `~/.aws/credentials` file. - -``` -$ cat ~/.aws/credentials -[default] -aws_access_key_id = your key id -aws_secret_access_key = your secret key - -``` -... or follow as in the `variables.tf` file and create **terraform.tfvars** file with all the variable values, ex: -``` -access_key = "" -secret_key = "" -public_key = "" -private_key = "" -project_id = "" -cluster_name = "aws-private-connection" -``` - -**2\. Review the Terraform plan.** - -Execute the below command and ensure you are happy with the plan. - -``` bash -$ terraform plan -``` -This project currently does the below deployments: - -- MongoDB cluster - M10 -- AWS Custom VPC, Internet Gateway, Route Tables, Subnets with Public and Private access -- PrivateLink Connection at MongoDB Atlas -- Create VPC Endpoint in AWS - -**3\. Configure the security group as required.** - -The security group in this configuration allows All Traffic access in Inbound and Outbound Rules. - -**4\. Execute the Terraform apply.** - -Now execute the plan to provision the AWS and Atlas resources. - -``` bash -$ terraform apply -``` - -**5\. Destroy the resources.** - -Once you are finished your testing, ensure you destroy the resources to avoid unnecessary charges. - -``` bash -$ terraform destroy -``` - -**What's the resource dependency chain?** -1. `mongodbatlas_project` must exist for any of the following -2. `mongodbatlas_serverless_instance` is dependent on the `mongodbatlas_project` -3. `mongodbatlas_privatelink_endpoint_serverless` is dependent on the `mongodbatlas_serverless_instance` -4. `aws_vpc_endpoint` is dependent on `mongodbatlas_privatelink_endpoint_serverless` -5. `mongodbatlas_privatelink_endpoint_service_serverless` is dependent on `aws_vpc_endpoint` -6. `mongodbatlas_serverless_instance` is dependent on `mongodbatlas_privatelink_endpoint_service_serverless` for its `connection_strings_private_endpoint_srv` - -**Important Point on dependency chain** -- `mongodbatlas_serverless_instance` must exist in-order to create a `mongodbatlas_privatelink_endpoint_service_serverless` for that instance. 
-- `mongodbatlas_privatelink_endpoint_service_serverless` must exist before `mongodbatlas_serverless_instance` can have its `connection_strings_private_endpoint_srv`. - -It is impossible to create both resources and have `connection_strings_private_endpoint_srv` populated in a single `terraform apply`.\ -To circumvent this issue, this example utilitizes the following data source - -``` -data "mongodbatlas_serverless_instance" "aws_private_connection" { - project_id = mongodbatlas_serverless_instance.aws_private_connection.project_id - name = mongodbatlas_serverless_instance.aws_private_connection.name - - depends_on = [mongodbatlas_privatelink_endpoint_service_serverless.pe_east_service] -} -``` - - -Serverless instance `connection_strings_private_endpoint_srv` is a list of strings.\ -To output the private connection strings, follow the [example output.tf](output.tf): - -``` -locals { - private_endpoints = coalesce(data.mongodbatlas_serverless_instance.aws_private_connection.connection_strings_private_endpoint_srv, []) -} - -output "connection_strings" { - value = local.private_endpoints -} -``` \ No newline at end of file diff --git a/examples/mongodbatlas_privatelink_endpoint/aws/serverless-instance/atlas-privatelink.tf b/examples/mongodbatlas_privatelink_endpoint/aws/serverless-instance/atlas-privatelink.tf deleted file mode 100644 index 36f3c90fed..0000000000 --- a/examples/mongodbatlas_privatelink_endpoint/aws/serverless-instance/atlas-privatelink.tf +++ /dev/null @@ -1,14 +0,0 @@ -resource "mongodbatlas_privatelink_endpoint_serverless" "pe_east" { - project_id = mongodbatlas_serverless_instance.aws_private_connection.project_id - instance_name = mongodbatlas_serverless_instance.aws_private_connection.name - provider_name = "AWS" -} - -resource "mongodbatlas_privatelink_endpoint_service_serverless" "pe_east_service" { - project_id = mongodbatlas_privatelink_endpoint_serverless.pe_east.project_id - instance_name = mongodbatlas_privatelink_endpoint_serverless.pe_east.instance_name - endpoint_id = mongodbatlas_privatelink_endpoint_serverless.pe_east.endpoint_id - provider_name = mongodbatlas_privatelink_endpoint_serverless.pe_east.provider_name - cloud_provider_endpoint_id = aws_vpc_endpoint.vpce_east.id - comment = "New serverless endpoint" -} \ No newline at end of file diff --git a/examples/mongodbatlas_privatelink_endpoint/aws/serverless-instance/atlas-serverless-instance.tf b/examples/mongodbatlas_privatelink_endpoint/aws/serverless-instance/atlas-serverless-instance.tf deleted file mode 100644 index 5039308d40..0000000000 --- a/examples/mongodbatlas_privatelink_endpoint/aws/serverless-instance/atlas-serverless-instance.tf +++ /dev/null @@ -1,13 +0,0 @@ -resource "mongodbatlas_serverless_instance" "aws_private_connection" { - project_id = var.project_id - name = var.instance_name - provider_settings_backing_provider_name = "AWS" - provider_settings_provider_name = "SERVERLESS" - provider_settings_region_name = "US_EAST_1" - continuous_backup_enabled = true - - tags { - key = "environment" - value = "dev" - } -} \ No newline at end of file diff --git a/examples/mongodbatlas_privatelink_endpoint/aws/serverless-instance/aws-vpc.tf b/examples/mongodbatlas_privatelink_endpoint/aws/serverless-instance/aws-vpc.tf deleted file mode 100644 index a3d3d581d2..0000000000 --- a/examples/mongodbatlas_privatelink_endpoint/aws/serverless-instance/aws-vpc.tf +++ /dev/null @@ -1,57 +0,0 @@ -resource "aws_vpc_endpoint" "vpce_east" { - vpc_id = aws_vpc.vpc_east.id - service_name = 
mongodbatlas_privatelink_endpoint_serverless.pe_east.endpoint_service_name - vpc_endpoint_type = "Interface" - subnet_ids = [aws_subnet.subnet_east_a.id, aws_subnet.subnet_east_b.id] - security_group_ids = [aws_security_group.sg_east.id] -} - -resource "aws_vpc" "vpc_east" { - cidr_block = "10.0.0.0/16" - enable_dns_hostnames = true - enable_dns_support = true -} - -resource "aws_internet_gateway" "ig_east" { - vpc_id = aws_vpc.vpc_east.id -} - -resource "aws_route" "route_east" { - route_table_id = aws_vpc.vpc_east.main_route_table_id - destination_cidr_block = "0.0.0.0/0" - gateway_id = aws_internet_gateway.ig_east.id -} - -resource "aws_subnet" "subnet_east_a" { - vpc_id = aws_vpc.vpc_east.id - cidr_block = "10.0.1.0/24" - map_public_ip_on_launch = true - availability_zone = "us-east-1a" -} - -resource "aws_subnet" "subnet_east_b" { - vpc_id = aws_vpc.vpc_east.id - cidr_block = "10.0.2.0/24" - map_public_ip_on_launch = false - availability_zone = "us-east-1b" -} - -resource "aws_security_group" "sg_east" { - name_prefix = "default-" - description = "Default security group for all instances in vpc" - vpc_id = aws_vpc.vpc_east.id - ingress { - from_port = 0 - to_port = 0 - protocol = "tcp" - cidr_blocks = [ - "0.0.0.0/0", - ] - } - egress { - from_port = 0 - to_port = 0 - protocol = "-1" - cidr_blocks = ["0.0.0.0/0"] - } -} diff --git a/examples/mongodbatlas_privatelink_endpoint/aws/serverless-instance/output.tf b/examples/mongodbatlas_privatelink_endpoint/aws/serverless-instance/output.tf deleted file mode 100644 index 6ced909ebf..0000000000 --- a/examples/mongodbatlas_privatelink_endpoint/aws/serverless-instance/output.tf +++ /dev/null @@ -1,14 +0,0 @@ -data "mongodbatlas_serverless_instance" "aws_private_connection" { - project_id = mongodbatlas_serverless_instance.aws_private_connection.project_id - name = mongodbatlas_serverless_instance.aws_private_connection.name - - depends_on = [mongodbatlas_privatelink_endpoint_service_serverless.pe_east_service] -} - -locals { - private_endpoints = coalesce(data.mongodbatlas_serverless_instance.aws_private_connection.connection_strings_private_endpoint_srv, []) -} - -output "connection_strings" { - value = local.private_endpoints -} \ No newline at end of file diff --git a/examples/mongodbatlas_privatelink_endpoint/aws/serverless-instance/provider.tf b/examples/mongodbatlas_privatelink_endpoint/aws/serverless-instance/provider.tf deleted file mode 100644 index 61ef7cb227..0000000000 --- a/examples/mongodbatlas_privatelink_endpoint/aws/serverless-instance/provider.tf +++ /dev/null @@ -1,9 +0,0 @@ -provider "aws" { - access_key = var.access_key - secret_key = var.secret_key - region = "us-east-1" -} -provider "mongodbatlas" { - public_key = var.public_key - private_key = var.private_key -} diff --git a/examples/mongodbatlas_privatelink_endpoint/aws/serverless-instance/variables.tf b/examples/mongodbatlas_privatelink_endpoint/aws/serverless-instance/variables.tf deleted file mode 100644 index 17cc8b1259..0000000000 --- a/examples/mongodbatlas_privatelink_endpoint/aws/serverless-instance/variables.tf +++ /dev/null @@ -1,25 +0,0 @@ -variable "public_key" { - description = "The public API key for MongoDB Atlas" - type = string -} -variable "private_key" { - description = "The private API key for MongoDB Atlas" - type = string -} -variable "access_key" { - description = "The access key for AWS Account" - type = string -} -variable "secret_key" { - description = "The secret key for AWS Account" - type = string -} -variable "project_id" { - description 
= "Atlas project ID" - type = string -} -variable "instance_name" { - description = "Atlas serverless instance name" - default = "aws-private-connection" - type = string -} diff --git a/examples/mongodbatlas_privatelink_endpoint_service_serverless/aws/README.md b/examples/mongodbatlas_privatelink_endpoint_service_serverless/aws/README.md deleted file mode 100644 index 88c764b227..0000000000 --- a/examples/mongodbatlas_privatelink_endpoint_service_serverless/aws/README.md +++ /dev/null @@ -1,97 +0,0 @@ -# Example - AWS and Atlas PrivateLink with Terraform - -This project aims to provide a very straight-forward example of setting up PrivateLink connection between AWS and MongoDB Atlas Serverless. - - -## Dependencies - -* Terraform v0.13 -* An AWS account - provider.aws: version = "~> 3.3" -* A MongoDB Atlas account - provider.mongodbatlas: version = "~> 0.6" - -## Usage - -**1\. Ensure your AWS and MongoDB Atlas credentials are set up.** - -This can be done using environment variables: - -``` bash -$ export AWS_SECRET_ACCESS_KEY='your secret key' -$ export AWS_ACCESS_KEY_ID='your key id' -``` - -```bash -export MONGODB_ATLAS_PUBLIC_KEY="xxxx" -export MONGODB_ATLAS_PRIVATE_KEY="xxxx" -``` - -... or the `~/.aws/credentials` file. - -``` -$ cat ~/.aws/credentials -[default] -aws_access_key_id = your key id -aws_secret_access_key = your secret key - -``` -... or follow as in the `variables.tf` file and create **terraform.tfvars** file with all the variable values and make sure **not to commit it**. - -**2\. Review the Terraform plan.** - -Execute the below command and ensure you are happy with the plan. - -``` bash -$ terraform plan -``` -This project currently does the below deployments: - -- MongoDB cluster - M10 -- AWS Custom VPC, Internet Gateway, Route Tables, Subnets with Public and Private access -- PrivateLink Connection at MongoDB Atlas -- Create VPC Endpoint in AWS - -**3\. Configure the security group as required.** - -The security group in this configuration allows All Traffic access in Inbound and Outbound Rules. - -**4\. Execute the Terraform apply.** - -Now execute the plan to provision the AWS and Atlas resources. - -``` bash -$ terraform apply -``` - -**5\. Destroy the resources.** - -Once you are finished your testing, ensure you destroy the resources to avoid unnecessary charges. 
- -``` bash -$ terraform destroy -``` - -**Important Point** - -To fetch the connection string follow the below steps: -``` -output "atlasclusterstring" { - value = data.mongodbatlas_serverless_instance.cluster_atlas.connection_strings_standard_srv -} -``` -**Outputs:** -``` -atlasclusterstring = "mongodb+srv://cluster-atlas.za3fb.mongodb.net" - -``` - -To fetch a private connection string, use the output of terraform as below after second apply: - -``` -output "plstring" { - value = mongodbatlas_serverless_instance.cluster_atlas.connection_strings_private_endpoint_srv[0] -} -``` -**Output:** -``` -plstring = mongodb+srv://cluster-atlas-pe-0.za3fb.mongodb.net -``` diff --git a/examples/mongodbatlas_privatelink_endpoint_service_serverless/aws/atlas-cluster.tf b/examples/mongodbatlas_privatelink_endpoint_service_serverless/aws/atlas-cluster.tf deleted file mode 100644 index dc4ba8a1b6..0000000000 --- a/examples/mongodbatlas_privatelink_endpoint_service_serverless/aws/atlas-cluster.tf +++ /dev/null @@ -1,26 +0,0 @@ -resource "mongodbatlas_serverless_instance" "cluster_atlas" { - project_id = var.atlasprojectid - name = "ClusterAtlas" - provider_settings_backing_provider_name = "AWS" - provider_settings_provider_name = "SERVERLESS" - provider_settings_region_name = "US_EAST_1" - continuous_backup_enabled = true -} - -data "mongodbatlas_serverless_instance" "cluster_atlas" { - project_id = var.atlasprojectid - name = mongodbatlas_serverless_instance.cluster_atlas.name - depends_on = [mongodbatlas_privatelink_endpoint_service_serverless.atlaseplink] -} - - -output "atlasclusterstring" { - value = data.mongodbatlas_serverless_instance.cluster_atlas.connection_strings_standard_srv -} - -/* Note Value not available until second apply*/ -/* -output "plstring" { - value = mongodbatlas_serverless_instance.cluster_atlas.connection_strings_private_endpoint_srv[0] -} -*/ diff --git a/examples/mongodbatlas_privatelink_endpoint_service_serverless/aws/aws-vpc.tf b/examples/mongodbatlas_privatelink_endpoint_service_serverless/aws/aws-vpc.tf deleted file mode 100644 index e6bd39e188..0000000000 --- a/examples/mongodbatlas_privatelink_endpoint_service_serverless/aws/aws-vpc.tf +++ /dev/null @@ -1,59 +0,0 @@ -# Create Primary VPC -resource "aws_vpc" "primary" { - cidr_block = "10.0.0.0/16" - enable_dns_hostnames = true - enable_dns_support = true -} - -# Create IGW -resource "aws_internet_gateway" "primary" { - vpc_id = aws_vpc.primary.id -} - -# Route Table -resource "aws_route" "primary-internet_access" { - route_table_id = aws_vpc.primary.main_route_table_id - destination_cidr_block = "0.0.0.0/0" - gateway_id = aws_internet_gateway.primary.id -} - -# Subnet-A -resource "aws_subnet" "primary-az1" { - vpc_id = aws_vpc.primary.id - cidr_block = "10.0.1.0/24" - map_public_ip_on_launch = true - availability_zone = "${var.aws_region}a" -} - -# Subnet-B -resource "aws_subnet" "primary-az2" { - vpc_id = aws_vpc.primary.id - cidr_block = "10.0.2.0/24" - map_public_ip_on_launch = false - availability_zone = "${var.aws_region}b" -} - -/*Security-Group -Ingress - Port 80 -- limited to instance - Port 22 -- Open to ssh without limitations -Egress - Open to All*/ - -resource "aws_security_group" "primary_default" { - name_prefix = "default-" - description = "Default security group for all instances in ${aws_vpc.primary.id}" - vpc_id = aws_vpc.primary.id - ingress { - from_port = 0 - to_port = 0 - protocol = "tcp" - cidr_blocks = [ - "0.0.0.0/0", - ] - } - egress { - from_port = 0 - to_port = 0 - protocol = "-1" - 
cidr_blocks = ["0.0.0.0/0"] - } -} diff --git a/examples/mongodbatlas_privatelink_endpoint_service_serverless/aws/main.tf b/examples/mongodbatlas_privatelink_endpoint_service_serverless/aws/main.tf deleted file mode 100644 index cff417bd28..0000000000 --- a/examples/mongodbatlas_privatelink_endpoint_service_serverless/aws/main.tf +++ /dev/null @@ -1,23 +0,0 @@ -resource "mongodbatlas_privatelink_endpoint_serverless" "atlaspl" { - project_id = var.atlasprojectid - provider_name = "AWS" - instance_name = mongodbatlas_serverless_instance.cluster_atlas.name -} - -resource "aws_vpc_endpoint" "ptfe_service" { - vpc_id = aws_vpc.primary.id - service_name = mongodbatlas_privatelink_endpoint_serverless.atlaspl.endpoint_service_name - vpc_endpoint_type = "Interface" - subnet_ids = [aws_subnet.primary-az1.id, aws_subnet.primary-az2.id] - security_group_ids = [aws_security_group.primary_default.id] -} - -resource "mongodbatlas_privatelink_endpoint_service_serverless" "atlaseplink" { - project_id = mongodbatlas_privatelink_endpoint_serverless.atlaspl.project_id - instance_name = mongodbatlas_serverless_instance.cluster_atlas.name - endpoint_id = mongodbatlas_privatelink_endpoint_serverless.atlaspl.endpoint_id - cloud_provider_endpoint_id = aws_vpc_endpoint.ptfe_service.id - provider_name = "AWS" - comment = "test" - -} diff --git a/examples/mongodbatlas_privatelink_endpoint_service_serverless/aws/provider.tf b/examples/mongodbatlas_privatelink_endpoint_service_serverless/aws/provider.tf deleted file mode 100644 index e075e34d7e..0000000000 --- a/examples/mongodbatlas_privatelink_endpoint_service_serverless/aws/provider.tf +++ /dev/null @@ -1,9 +0,0 @@ -provider "mongodbatlas" { - public_key = var.public_key - private_key = var.private_key -} -provider "aws" { - access_key = var.access_key - secret_key = var.secret_key - region = var.aws_region -} diff --git a/examples/mongodbatlas_privatelink_endpoint_service_serverless/aws/variables.tf b/examples/mongodbatlas_privatelink_endpoint_service_serverless/aws/variables.tf deleted file mode 100644 index 86977d1bde..0000000000 --- a/examples/mongodbatlas_privatelink_endpoint_service_serverless/aws/variables.tf +++ /dev/null @@ -1,25 +0,0 @@ -variable "public_key" { - description = "The public API key for MongoDB Atlas" - type = string -} -variable "private_key" { - description = "The private API key for MongoDB Atlas" - type = string -} -variable "atlasprojectid" { - description = "Atlas project ID" - type = string -} -variable "access_key" { - description = "The access key for AWS Account" - type = string -} -variable "secret_key" { - description = "The secret key for AWS Account" - type = string -} -variable "aws_region" { - default = "us-east-1" - description = "AWS Region" - type = string -} diff --git a/examples/mongodbatlas_privatelink_endpoint_service_serverless/aws/versions.tf b/examples/mongodbatlas_privatelink_endpoint_service_serverless/aws/versions.tf deleted file mode 100644 index 6b9f728948..0000000000 --- a/examples/mongodbatlas_privatelink_endpoint_service_serverless/aws/versions.tf +++ /dev/null @@ -1,13 +0,0 @@ -terraform { - required_providers { - aws = { - source = "hashicorp/aws" - version = "~> 5.0" - } - mongodbatlas = { - source = "mongodb/mongodbatlas" - version = "~> 1.0" - } - } - required_version = ">= 1.0" -} diff --git a/examples/mongodbatlas_privatelink_endpoint_service_serverless/azure/Readme.md b/examples/mongodbatlas_privatelink_endpoint_service_serverless/azure/Readme.md deleted file mode 100644 index 012e789f2c..0000000000 
--- a/examples/mongodbatlas_privatelink_endpoint_service_serverless/azure/Readme.md +++ /dev/null @@ -1,84 +0,0 @@ -# Example - Microsoft Azure and MongoDB Atlas Private Endpoint Serverless - -This project aims to provide an example of using Azure and MongoDB Atlas together. - - -## Dependencies - -* Terraform v0.13 -* Microsoft Azure account -* MongoDB Atlas account - -``` -Terraform v0.13.0 -+ provider registry.terraform.io/hashicorp/azuread v1.0.0 -+ provider registry.terraform.io/hashicorp/azurerm v2.31.1 -+ provider registry.terraform.io/terraform-providers/mongodbatlas v0.6.5 -``` - -## Usage - -**1\. Ensure your Azure credentials are set up.** - -1. Install the Azure CLI by following the steps from the [official Azure documentation](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli). -2. Run the command `az login` and this will take you to the default browser and perform the authentication. -3. Once authenticated, it will print the user details as below: - -``` -⇒ az login -You have logged in. Now let us find all the subscriptions to which you have access... -The following tenants don't contain accessible subscriptions. Use 'az login --allow-no-subscriptions' to have tenant level access. -XXXXX -[ - { - "cloudName": "AzureCloud", - "homeTenantId": "XXXXX", - "id": "XXXXX", - "isDefault": true, - "managedByTenants": [], - "name": "Pay-As-You-Go", - "state": "Enabled", - "tenantId": "XXXXX", - "user": { - "name": "person@domain.com", - "type": "user" - } - } -] -``` - -**2\. TFVARS** - -Now create **terraform.tfvars** file with all the variable values and make sure **not to commit it**. - -An serverless cluster in the project will be linked via the `cluster_name` variable. -If included, the azure connection string to the cluster will be output. - -**3\. Review the Terraform plan.** - -Execute the below command and ensure you are happy with the plan. - -``` bash -$ terraform plan -``` -This project currently does the below deployments: - -- MongoDB Atlas Azure Private Endpoint -- Azure Resource Group, VNET, Subnet, Private Endpoint -- Azure-MongoDB Private Link - -**4\. Execute the Terraform apply.** - -Now execute the plan to provision the Azure resources. - -``` bash -$ terraform apply -``` - -**5\. Destroy the resources.** - -Once you are finished your testing, ensure you destroy the resources to avoid unnecessary Azure and Atlas charges. 
- -``` bash -$ terraform destroy -``` diff --git a/examples/mongodbatlas_privatelink_endpoint_service_serverless/azure/main.tf b/examples/mongodbatlas_privatelink_endpoint_service_serverless/azure/main.tf deleted file mode 100644 index d40e580bc5..0000000000 --- a/examples/mongodbatlas_privatelink_endpoint_service_serverless/azure/main.tf +++ /dev/null @@ -1,71 +0,0 @@ -provider "azurerm" { - subscription_id = var.subscription_id - client_id = var.client_id - client_secret = var.client_secret - tenant_id = var.tenant_id - features { - } -} - -data "azurerm_resource_group" "test" { - name = var.resource_group_name -} - -resource "azurerm_virtual_network" "test" { - name = "acceptanceTestVirtualNetwork1" - address_space = ["10.0.0.0/16"] - location = data.azurerm_resource_group.test.location - resource_group_name = var.resource_group_name -} - -resource "azurerm_subnet" "test" { - name = "testsubnet" - resource_group_name = var.resource_group_name - virtual_network_name = azurerm_virtual_network.test.name - address_prefixes = ["10.0.1.0/24"] - private_link_service_network_policies_enabled = true - private_endpoint_network_policies_enabled = true -} - -resource "mongodbatlas_privatelink_endpoint_serverless" "test" { - project_id = var.project_id - instance_name = mongodbatlas_serverless_instance.test.name - provider_name = "AZURE" -} - -resource "azurerm_private_endpoint" "test" { - name = "endpoint-test" - location = data.azurerm_resource_group.test.location - resource_group_name = var.resource_group_name - subnet_id = azurerm_subnet.test.id - private_service_connection { - name = mongodbatlas_privatelink_endpoint_serverless.test.endpoint_service_name - private_connection_resource_id = mongodbatlas_privatelink_endpoint_serverless.test.private_link_service_resource_id - is_manual_connection = true - request_message = "Azure Private Link test" - } - -} - -resource "mongodbatlas_privatelink_endpoint_service_serverless" "test" { - project_id = mongodbatlas_privatelink_endpoint_serverless.test.project_id - instance_name = mongodbatlas_serverless_instance.test.name - endpoint_id = mongodbatlas_privatelink_endpoint_serverless.test.endpoint_id - cloud_provider_endpoint_id = azurerm_private_endpoint.test.id - private_endpoint_ip_address = azurerm_private_endpoint.test.private_service_connection[0].private_ip_address - provider_name = "AZURE" - comment = "test" -} - -resource "mongodbatlas_serverless_instance" "test" { - project_id = var.project_id - name = var.cluster_name - provider_settings_backing_provider_name = "AZURE" - provider_settings_provider_name = "SERVERLESS" - provider_settings_region_name = "US_EAST_2" - continuous_backup_enabled = true -} - -output "private_endpoints" { - value = mongodbatlas_serverless_instance.test.connection_strings_private_endpoint_srv[0] -} \ No newline at end of file diff --git a/examples/mongodbatlas_privatelink_endpoint_service_serverless/azure/variables.tf b/examples/mongodbatlas_privatelink_endpoint_service_serverless/azure/variables.tf deleted file mode 100644 index 65b1347a9f..0000000000 --- a/examples/mongodbatlas_privatelink_endpoint_service_serverless/azure/variables.tf +++ /dev/null @@ -1,30 +0,0 @@ - -variable "project_id" { - default = "PROJECT-ID" - type = string -} -variable "subscription_id" { - default = "AZURE SUBSCRIPTION ID" - type = string -} -variable "client_id" { - default = "AZURE CLIENT ID" - type = string -} -variable "client_secret" { - default = "AZURE CLIENT SECRET" - type = string -} -variable "tenant_id" { - default = "AZURE TENANT 
ID" - type = string -} -variable "resource_group_name" { - default = "AZURE RESOURCE GROUP NAME" - type = string -} -variable "cluster_name" { - description = "Cluster whose connection string to output" - default = "cluster-serverless" - type = string -} diff --git a/examples/mongodbatlas_privatelink_endpoint_service_serverless/azure/versions.tf b/examples/mongodbatlas_privatelink_endpoint_service_serverless/azure/versions.tf deleted file mode 100644 index 7d50229e5c..0000000000 --- a/examples/mongodbatlas_privatelink_endpoint_service_serverless/azure/versions.tf +++ /dev/null @@ -1,13 +0,0 @@ -terraform { - required_providers { - azurerm = { - source = "hashicorp/azurerm" - version = "~> 3.0" - } - mongodbatlas = { - source = "mongodb/mongodbatlas" - version = "~> 1.0" - } - } - required_version = ">= 1.0" -} diff --git a/internal/common/constant/deprecation.go b/internal/common/constant/deprecation.go index b0fc609f2e..458eaaab70 100644 --- a/internal/common/constant/deprecation.go +++ b/internal/common/constant/deprecation.go @@ -1,11 +1,14 @@ package constant const ( - DeprecationParam = "This parameter is deprecated." - DeprecationParamWithReplacement = "This parameter is deprecated. Please transition to %s." - DeprecationParamByVersion = "This parameter is deprecated and will be removed in version %s." - DeprecationParamByVersionWithReplacement = "This parameter is deprecated and will be removed in version %s. Please transition to %s." - DeprecationParamFutureWithReplacement = "This parameter is deprecated and will be removed in the future. Please transition to %s" - DeprecationResourceByDateWithReplacement = "This resource is deprecated and will be removed in %s. Please transition to %s." - DeprecationDataSourceByDateWithReplacement = "This data source is deprecated and will be removed in %s. Please transition to %s." + DeprecationParam = "This parameter is deprecated." + DeprecationParamWithReplacement = "This parameter is deprecated. Please transition to %s." + DeprecationParamByVersion = "This parameter is deprecated and will be removed in version %s." + DeprecationParamByVersionWithReplacement = "This parameter is deprecated and will be removed in version %s. Please transition to %s." + DeprecationParamFutureWithReplacement = "This parameter is deprecated and will be removed in the future. Please transition to %s" + DeprecationResourceByDateWithReplacement = "This resource is deprecated and will be removed in %s. Please transition to %s." + DeprecationDataSourceByDateWithReplacement = "This data source is deprecated and will be removed in %s. Please transition to %s." + DeprecationResourceByDateWithExternalLink = "This resource is deprecated and will be removed in %s. For more details see %s." + DeprecationDataSourceByDateWithExternalLink = "This data source is deprecated and will be removed in %s. For more details see %s." + DeprecatioParamByDateWithExternalLink = "This parameter is deprecated and will be removed in %s. For more details see %s." 
) diff --git a/internal/common/conversion/tags.go b/internal/common/conversion/tags.go new file mode 100644 index 0000000000..2a750409df --- /dev/null +++ b/internal/common/conversion/tags.go @@ -0,0 +1,33 @@ +package conversion + +import ( + "context" + + "github.com/hashicorp/terraform-plugin-framework/attr" + "github.com/hashicorp/terraform-plugin-framework/types" + "go.mongodb.org/atlas-sdk/v20241113001/admin" +) + +func NewResourceTags(ctx context.Context, tags types.Map) *[]admin.ResourceTag { + if tags.IsNull() || len(tags.Elements()) == 0 { + return &[]admin.ResourceTag{} + } + elements := make(map[string]types.String, len(tags.Elements())) + _ = tags.ElementsAs(ctx, &elements, false) + var tagsAdmin []admin.ResourceTag + for key, tagValue := range elements { + tagsAdmin = append(tagsAdmin, admin.ResourceTag{ + Key: key, + Value: tagValue.ValueString(), + }) + } + return &tagsAdmin +} + +func NewTFTags(tags []admin.ResourceTag) types.Map { + typesTags := make(map[string]attr.Value, len(tags)) + for _, tag := range tags { + typesTags[tag.Key] = types.StringValue(tag.Value) + } + return types.MapValueMust(types.StringType, typesTags) +} diff --git a/internal/common/conversion/tags_test.go b/internal/common/conversion/tags_test.go new file mode 100644 index 0000000000..0b2fc3c95e --- /dev/null +++ b/internal/common/conversion/tags_test.go @@ -0,0 +1,53 @@ +package conversion_test + +import ( + "context" + "testing" + + "github.com/hashicorp/terraform-plugin-framework/attr" + "github.com/hashicorp/terraform-plugin-framework/types" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" + "github.com/stretchr/testify/assert" + "go.mongodb.org/atlas-sdk/v20241113001/admin" +) + +func TestNewResourceTags(t *testing.T) { + testCases := map[string]struct { + expected *[]admin.ResourceTag + plan types.Map + }{ + "tags null": {&[]admin.ResourceTag{}, types.MapNull(types.StringType)}, + "tags unknown": {&[]admin.ResourceTag{}, types.MapUnknown(types.StringType)}, + "tags convert normally": {&[]admin.ResourceTag{ + *admin.NewResourceTag("key1", "value1"), + }, types.MapValueMust(types.StringType, map[string]attr.Value{ + "key1": types.StringValue("value1"), + })}, + } + for name, tc := range testCases { + t.Run(name, func(t *testing.T) { + assert.Equal(t, tc.expected, conversion.NewResourceTags(context.Background(), tc.plan)) + }) + } +} + +func TestNewTFTags(t *testing.T) { + var ( + tfMapEmpty = types.MapValueMust(types.StringType, map[string]attr.Value{}) + apiListEmpty = []admin.ResourceTag{} + apiSingleTag = []admin.ResourceTag{*admin.NewResourceTag("key1", "value1")} + tfMapSingleTag = types.MapValueMust(types.StringType, map[string]attr.Value{"key1": types.StringValue("value1")}) + ) + testCases := map[string]struct { + expected types.Map + adminTags []admin.ResourceTag + }{ + "api empty list tf null should give map null": {tfMapEmpty, apiListEmpty}, + "tags single value tf null should give map single": {tfMapSingleTag, apiSingleTag}, + } + for name, tc := range testCases { + t.Run(name, func(t *testing.T) { + assert.Equal(t, tc.expected, conversion.NewTFTags(tc.adminTags)) + }) + } +} diff --git a/internal/common/customplanmodifier/non_updatable.go b/internal/common/customplanmodifier/non_updatable.go new file mode 100644 index 0000000000..7f47282bb1 --- /dev/null +++ b/internal/common/customplanmodifier/non_updatable.go @@ -0,0 +1,36 @@ +package customplanmodifier + +import ( + "context" + "fmt" + + planmodifier 
"github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier" +) + +func NonUpdatableStringAttributePlanModifier() planmodifier.String { + return &nonUpdatableStringAttributePlanModifier{} +} + +type nonUpdatableStringAttributePlanModifier struct { +} + +func (d *nonUpdatableStringAttributePlanModifier) Description(ctx context.Context) string { + return d.MarkdownDescription(ctx) +} + +func (d *nonUpdatableStringAttributePlanModifier) MarkdownDescription(ctx context.Context) string { + return "Ensures that update operations fails when updating an attribute." +} + +func (d *nonUpdatableStringAttributePlanModifier) PlanModifyString(ctx context.Context, req planmodifier.StringRequest, resp *planmodifier.StringResponse) { + planAttributeValue := req.PlanValue + stateAttributeValue := req.StateValue + + if !stateAttributeValue.IsNull() && stateAttributeValue.ValueString() != planAttributeValue.ValueString() { + resp.Diagnostics.AddError( + fmt.Sprintf("%s cannot be updated", req.Path), + fmt.Sprintf("%s cannot be updated", req.Path), + ) + return + } +} diff --git a/internal/common/retrystrategy/retry_state.go b/internal/common/retrystrategy/retry_state.go index 00d5f6670e..70804e1a11 100644 --- a/internal/common/retrystrategy/retry_state.go +++ b/internal/common/retrystrategy/retry_state.go @@ -12,6 +12,7 @@ const ( RetryStrategyFailedState = "FAILED" RetryStrategyActiveState = "ACTIVE" RetryStrategyDeletedState = "DELETED" + RetryStrategyCreatingState = "CREATING" RetryStrategyPendingAcceptanceState = "PENDING_ACCEPTANCE" RetryStrategyPendingRecreationState = "PENDING_RECREATION" diff --git a/internal/provider/provider.go b/internal/provider/provider.go index e432f4737d..8a67172aa4 100644 --- a/internal/provider/provider.go +++ b/internal/provider/provider.go @@ -32,6 +32,7 @@ import ( "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/databaseuser" "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/encryptionatrest" "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/encryptionatrestprivateendpoint" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/flexcluster" "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/mongodbemployeeaccessgrant" "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/project" "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/projectipaccesslist" @@ -449,6 +450,8 @@ func (p *MongodbtlasProvider) DataSources(context.Context) []func() datasource.D previewDataSources := []func() datasource.DataSource{ resourcepolicy.DataSource, resourcepolicy.PluralDataSource, + flexcluster.DataSource, + flexcluster.PluralDataSource, } // Data sources not yet in GA if providerEnablePreview { dataSources = append(dataSources, previewDataSources...) @@ -473,6 +476,7 @@ func (p *MongodbtlasProvider) Resources(context.Context) []func() resource.Resou } previewResources := []func() resource.Resource{ resourcepolicy.Resource, + flexcluster.Resource, } // Resources not yet in GA if providerEnablePreview { resources = append(resources, previewResources...) 
diff --git a/internal/service/atlasuser/data_source_atlas_user.go b/internal/service/atlasuser/data_source_atlas_user.go index 22e9160d3b..f40b4e33d4 100644 --- a/internal/service/atlasuser/data_source_atlas_user.go +++ b/internal/service/atlasuser/data_source_atlas_user.go @@ -10,9 +10,10 @@ import ( "github.com/hashicorp/terraform-plugin-framework/path" "github.com/hashicorp/terraform-plugin-framework/schema/validator" "github.com/hashicorp/terraform-plugin-framework/types" + "go.mongodb.org/atlas-sdk/v20241113001/admin" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20241113001/admin" ) const ( diff --git a/internal/service/atlasuser/data_source_atlas_user_test.go b/internal/service/atlasuser/data_source_atlas_user_test.go index abb3ba7cac..74e7cd612c 100644 --- a/internal/service/atlasuser/data_source_atlas_user_test.go +++ b/internal/service/atlasuser/data_source_atlas_user_test.go @@ -8,9 +8,10 @@ import ( "testing" "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "go.mongodb.org/atlas-sdk/v20241113001/admin" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc" - "go.mongodb.org/atlas-sdk/v20241113001/admin" ) func TestAccConfigDSAtlasUser_ByUserID(t *testing.T) { diff --git a/internal/service/atlasuser/data_source_atlas_users.go b/internal/service/atlasuser/data_source_atlas_users.go index ff97bb1db2..3a0330ac80 100644 --- a/internal/service/atlasuser/data_source_atlas_users.go +++ b/internal/service/atlasuser/data_source_atlas_users.go @@ -4,6 +4,8 @@ import ( "context" "fmt" + "go.mongodb.org/atlas-sdk/v20241113001/admin" + "github.com/hashicorp/terraform-plugin-framework-validators/stringvalidator" "github.com/hashicorp/terraform-plugin-framework/datasource" "github.com/hashicorp/terraform-plugin-framework/datasource/schema" @@ -13,7 +15,6 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" - "go.mongodb.org/atlas-sdk/v20241113001/admin" ) const ( diff --git a/internal/service/atlasuser/data_source_atlas_users_test.go b/internal/service/atlasuser/data_source_atlas_users_test.go index a70205869c..536a23d5b7 100644 --- a/internal/service/atlasuser/data_source_atlas_users_test.go +++ b/internal/service/atlasuser/data_source_atlas_users_test.go @@ -9,9 +9,10 @@ import ( "github.com/hashicorp/terraform-plugin-testing/helper/resource" "github.com/hashicorp/terraform-plugin-testing/terraform" + "go.mongodb.org/atlas-sdk/v20241113001/admin" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/atlasuser" "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc" - "go.mongodb.org/atlas-sdk/v20241113001/admin" ) func TestAccConfigDSAtlasUsers_ByOrgID(t *testing.T) { diff --git a/internal/service/flexcluster/data_source.go b/internal/service/flexcluster/data_source.go new file mode 100644 index 0000000000..4d4ad3e363 --- /dev/null +++ b/internal/service/flexcluster/data_source.go @@ -0,0 +1,51 @@ +package flexcluster + +import ( + "context" + + "github.com/hashicorp/terraform-plugin-framework/datasource" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" + 
"github.com/mongodb/terraform-provider-mongodbatlas/internal/config" +) + +var _ datasource.DataSource = &ds{} +var _ datasource.DataSourceWithConfigure = &ds{} + +func DataSource() datasource.DataSource { + return &ds{ + DSCommon: config.DSCommon{ + DataSourceName: resourceName, + }, + } +} + +type ds struct { + config.DSCommon +} + +func (d *ds) Schema(ctx context.Context, req datasource.SchemaRequest, resp *datasource.SchemaResponse) { + requiredFields := []string{"project_id", "name"} + resp.Schema = conversion.DataSourceSchemaFromResource(ResourceSchema(ctx), requiredFields, nil) +} + +func (d *ds) Read(ctx context.Context, req datasource.ReadRequest, resp *datasource.ReadResponse) { + var tfModel TFModel + resp.Diagnostics.Append(req.Config.Get(ctx, &tfModel)...) + if resp.Diagnostics.HasError() { + return + } + + connV2 := d.Client.AtlasV2 + apiResp, _, err := connV2.FlexClustersApi.GetFlexCluster(ctx, tfModel.ProjectId.ValueString(), tfModel.Name.ValueString()).Execute() + if err != nil { + resp.Diagnostics.AddError("error reading data source", err.Error()) + return + } + + newFlexClusterModel, diags := NewTFModel(ctx, apiResp) + if diags.HasError() { + resp.Diagnostics.Append(diags...) + return + } + resp.Diagnostics.Append(resp.State.Set(ctx, newFlexClusterModel)...) +} diff --git a/internal/service/flexcluster/data_source_schema.go b/internal/service/flexcluster/data_source_schema.go new file mode 100644 index 0000000000..933fd45a31 --- /dev/null +++ b/internal/service/flexcluster/data_source_schema.go @@ -0,0 +1,108 @@ +package flexcluster + +import ( + "context" + + "github.com/hashicorp/terraform-plugin-framework/datasource/schema" + "github.com/hashicorp/terraform-plugin-framework/types" +) + +func DataSourceSchema(ctx context.Context) schema.Schema { + return schema.Schema{ + Attributes: dataSourceSchema(false), + } +} + +func dataSourceSchema(isPlural bool) map[string]schema.Attribute { + return map[string]schema.Attribute{ + "project_id": schema.StringAttribute{ + Required: !isPlural, + Computed: isPlural, + MarkdownDescription: "Unique 24-hexadecimal character string that identifies the project.", + }, + "name": schema.StringAttribute{ + Required: !isPlural, + Computed: isPlural, + MarkdownDescription: "Human-readable label that identifies the instance.", + }, + "provider_settings": schema.SingleNestedAttribute{ + Attributes: map[string]schema.Attribute{ + "backing_provider_name": schema.StringAttribute{ + Computed: true, + MarkdownDescription: "Cloud service provider on which MongoDB Cloud provisioned the flex cluster.", + }, + "disk_size_gb": schema.Float64Attribute{ + Computed: true, + MarkdownDescription: "Storage capacity available to the flex cluster expressed in gigabytes.", + }, + "provider_name": schema.StringAttribute{ + Computed: true, + MarkdownDescription: "Human-readable label that identifies the cloud service provider.", + }, + "region_name": schema.StringAttribute{ + Computed: true, + MarkdownDescription: "Human-readable label that identifies the geographic location of your MongoDB flex cluster. The region you choose can affect network latency for clients accessing your databases. 
For a complete list of region names, see [AWS](https://docs.atlas.mongodb.com/reference/amazon-aws/#std-label-amazon-aws), [GCP](https://docs.atlas.mongodb.com/reference/google-gcp/), and [Azure](https://docs.atlas.mongodb.com/reference/microsoft-azure/).", + }, + }, + Computed: true, + MarkdownDescription: "Group of cloud provider settings that configure the provisioned MongoDB flex cluster.", + }, + "tags": schema.MapAttribute{ + ElementType: types.StringType, + Computed: true, + MarkdownDescription: "Map that contains key-value pairs between 1 to 255 characters in length for tagging and categorizing the instance.", + }, + "backup_settings": schema.SingleNestedAttribute{ + Attributes: map[string]schema.Attribute{ + "enabled": schema.BoolAttribute{ + Computed: true, + MarkdownDescription: "Flag that indicates whether backups are performed for this flex cluster. Backup uses [TODO](TODO) for flex clusters.", + }, + }, + Computed: true, + MarkdownDescription: "Flex backup configuration", + }, + "cluster_type": schema.StringAttribute{ + Computed: true, + MarkdownDescription: "Flex cluster topology.", + }, + "connection_strings": schema.SingleNestedAttribute{ + Attributes: map[string]schema.Attribute{ + "standard": schema.StringAttribute{ + Computed: true, + MarkdownDescription: "Public connection string that you can use to connect to this cluster. This connection string uses the mongodb:// protocol.", + }, + "standard_srv": schema.StringAttribute{ + Computed: true, + MarkdownDescription: "Public connection string that you can use to connect to this flex cluster. This connection string uses the `mongodb+srv://` protocol.", + }, + }, + Computed: true, + MarkdownDescription: "Collection of Uniform Resource Locators that point to the MongoDB database.", + }, + "create_date": schema.StringAttribute{ + Computed: true, + MarkdownDescription: "Date and time when MongoDB Cloud created this instance. This parameter expresses its value in ISO 8601 format in UTC.", + }, + "id": schema.StringAttribute{ + Computed: true, + MarkdownDescription: "Unique 24-hexadecimal digit string that identifies the instance.", + }, + "mongo_db_version": schema.StringAttribute{ + Computed: true, + MarkdownDescription: "Version of MongoDB that the instance runs.", + }, + "state_name": schema.StringAttribute{ + Computed: true, + MarkdownDescription: "Human-readable label that indicates the current operating condition of this instance.", + }, + "termination_protection_enabled": schema.BoolAttribute{ + Computed: true, + MarkdownDescription: "Flag that indicates whether termination protection is enabled on the cluster. If set to `true`, MongoDB Cloud won't delete the cluster. 
If set to `false`, MongoDB Cloud will delete the cluster.", + }, + "version_release_system": schema.StringAttribute{ + Computed: true, + MarkdownDescription: "Method by which the cluster maintains the MongoDB versions.", + }, + } +} diff --git a/internal/service/flexcluster/main_test.go b/internal/service/flexcluster/main_test.go new file mode 100644 index 0000000000..ca4213b1d7 --- /dev/null +++ b/internal/service/flexcluster/main_test.go @@ -0,0 +1,15 @@ +package flexcluster_test + +import ( + "os" + "testing" + + "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc" +) + +func TestMain(m *testing.M) { + cleanup := acc.SetupSharedResources() + exitCode := m.Run() + cleanup() + os.Exit(exitCode) +} diff --git a/internal/service/flexcluster/model.go b/internal/service/flexcluster/model.go new file mode 100644 index 0000000000..5dd30d71f8 --- /dev/null +++ b/internal/service/flexcluster/model.go @@ -0,0 +1,133 @@ +package flexcluster + +import ( + "context" + + "github.com/hashicorp/terraform-plugin-framework/diag" + "github.com/hashicorp/terraform-plugin-framework/types" + "github.com/hashicorp/terraform-plugin-framework/types/basetypes" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" + "go.mongodb.org/atlas-sdk/v20241113001/admin" +) + +func NewTFModel(ctx context.Context, apiResp *admin.FlexClusterDescription20241113) (*TFModel, diag.Diagnostics) { + connectionStrings, diags := ConvertConnectionStringsToTF(ctx, apiResp.ConnectionStrings) + if diags.HasError() { + return nil, diags + } + backupSettings, diags := ConvertBackupSettingsToTF(ctx, apiResp.BackupSettings) + if diags.HasError() { + return nil, diags + } + providerSettings, diags := ConvertProviderSettingsToTF(ctx, apiResp.ProviderSettings) + if diags.HasError() { + return nil, diags + } + return &TFModel{ + ProviderSettings: *providerSettings, + ConnectionStrings: *connectionStrings, + Tags: conversion.NewTFTags(apiResp.GetTags()), + CreateDate: types.StringPointerValue(conversion.TimePtrToStringPtr(apiResp.CreateDate)), + ProjectId: types.StringPointerValue(apiResp.GroupId), + Id: types.StringPointerValue(apiResp.Id), + MongoDbversion: types.StringPointerValue(apiResp.MongoDBVersion), + Name: types.StringPointerValue(apiResp.Name), + ClusterType: types.StringPointerValue(apiResp.ClusterType), + StateName: types.StringPointerValue(apiResp.StateName), + VersionReleaseSystem: types.StringPointerValue(apiResp.VersionReleaseSystem), + BackupSettings: *backupSettings, + TerminationProtectionEnabled: types.BoolPointerValue(apiResp.TerminationProtectionEnabled), + }, nil +} + +func NewTFModelDSP(ctx context.Context, projectID string, input []admin.FlexClusterDescription20241113) (*TFModelDSP, diag.Diagnostics) { + diags := &diag.Diagnostics{} + tfModels := make([]TFModel, len(input)) + for i := range input { + item := &input[i] + tfModel, diagsLocal := NewTFModel(ctx, item) + diags.Append(diagsLocal...) 
+ if tfModel != nil { + tfModels[i] = *tfModel + } + } + if diags.HasError() { + return nil, *diags + } + return &TFModelDSP{ + ProjectId: types.StringValue(projectID), + Results: tfModels, + }, *diags +} + +func NewAtlasCreateReq(ctx context.Context, plan *TFModel) (*admin.FlexClusterDescriptionCreate20241113, diag.Diagnostics) { + providerSettings := &TFProviderSettings{} + if diags := plan.ProviderSettings.As(ctx, providerSettings, basetypes.ObjectAsOptions{}); diags.HasError() { + return nil, diags + } + return &admin.FlexClusterDescriptionCreate20241113{ + Name: plan.Name.ValueString(), + ProviderSettings: admin.FlexProviderSettingsCreate20241113{ + BackingProviderName: providerSettings.BackingProviderName.ValueString(), + RegionName: providerSettings.RegionName.ValueString(), + }, + TerminationProtectionEnabled: plan.TerminationProtectionEnabled.ValueBoolPointer(), + Tags: conversion.NewResourceTags(ctx, plan.Tags), + }, nil +} + +func NewAtlasUpdateReq(ctx context.Context, plan *TFModel) (*admin.FlexClusterDescriptionUpdate20241113, diag.Diagnostics) { + updateRequest := &admin.FlexClusterDescriptionUpdate20241113{ + TerminationProtectionEnabled: plan.TerminationProtectionEnabled.ValueBoolPointer(), + Tags: conversion.NewResourceTags(ctx, plan.Tags), + } + + return updateRequest, nil +} + +func ConvertBackupSettingsToTF(ctx context.Context, backupSettings *admin.FlexBackupSettings20241113) (*types.Object, diag.Diagnostics) { + if backupSettings == nil { + backupSettingsTF := types.ObjectNull(BackupSettingsType.AttributeTypes()) + return &backupSettingsTF, nil + } + + backupSettingsTF := &TFBackupSettings{ + Enabled: types.BoolPointerValue(backupSettings.Enabled), + } + backupSettingsObject, diags := types.ObjectValueFrom(ctx, BackupSettingsType.AttributeTypes(), backupSettingsTF) + if diags.HasError() { + return nil, diags + } + return &backupSettingsObject, nil +} + +func ConvertConnectionStringsToTF(ctx context.Context, connectionStrings *admin.FlexConnectionStrings20241113) (*types.Object, diag.Diagnostics) { + if connectionStrings == nil { + connectionStringsTF := types.ObjectNull(ConnectionStringsType.AttributeTypes()) + return &connectionStringsTF, nil + } + + connectionStringsTF := &TFConnectionStrings{ + Standard: types.StringPointerValue(connectionStrings.Standard), + StandardSrv: types.StringPointerValue(connectionStrings.StandardSrv), + } + connectionStringsObject, diags := types.ObjectValueFrom(ctx, ConnectionStringsType.AttributeTypes(), connectionStringsTF) + if diags.HasError() { + return nil, diags + } + return &connectionStringsObject, nil +} + +func ConvertProviderSettingsToTF(ctx context.Context, providerSettings admin.FlexProviderSettings20241113) (*types.Object, diag.Diagnostics) { + providerSettingsTF := &TFProviderSettings{ + ProviderName: types.StringPointerValue(providerSettings.ProviderName), + RegionName: types.StringPointerValue(providerSettings.RegionName), + BackingProviderName: types.StringPointerValue(providerSettings.BackingProviderName), + DiskSizeGb: types.Float64PointerValue(providerSettings.DiskSizeGB), + } + providerSettingsObject, diags := types.ObjectValueFrom(ctx, ProviderSettingsType.AttributeTypes(), providerSettingsTF) + if diags.HasError() { + return nil, diags + } + return &providerSettingsObject, nil +} diff --git a/internal/service/flexcluster/model_test.go b/internal/service/flexcluster/model_test.go new file mode 100644 index 0000000000..9554e5ef45 --- /dev/null +++ b/internal/service/flexcluster/model_test.go @@ -0,0 +1,385 @@ 
+package flexcluster_test + +import ( + "context" + "testing" + + "github.com/hashicorp/terraform-plugin-framework/attr" + "github.com/hashicorp/terraform-plugin-framework/types" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/flexcluster" + "github.com/stretchr/testify/assert" + "go.mongodb.org/atlas-sdk/v20241113001/admin" +) + +var ( + projectID = "projectId" + id = "id" + createDate = "2021-08-17T17:00:00Z" + mongoDBVersion = "8.0" + name = "myCluster" + clusterType = "REPLICASET" + stateName = "IDLE" + versionReleaseSystem = "LTS" + terminationProtectionEnabled = true + createDateTime, _ = conversion.StringToTime(createDate) + providerName = "AWS" + regionName = "us-east-1" + backingProviderName = "AWS" + diskSizeGb = 100.0 + standardConnectionString = "mongodb://localhost:27017" + standardSrvConnectionString = "mongodb+srv://localhost:27017" + key1 = "key1" + value1 = "value1" + connectionStringsObject, _ = flexcluster.ConvertConnectionStringsToTF(context.Background(), &admin.FlexConnectionStrings20241113{ + Standard: &standardConnectionString, + StandardSrv: &standardSrvConnectionString, + }) + backupSettingsObject, _ = flexcluster.ConvertBackupSettingsToTF(context.Background(), &admin.FlexBackupSettings20241113{ + Enabled: conversion.Pointer(true), + }) + providerSettingsObject, _ = flexcluster.ConvertProviderSettingsToTF(context.Background(), admin.FlexProviderSettings20241113{ + ProviderName: &providerName, + RegionName: ®ionName, + BackingProviderName: &backingProviderName, + DiskSizeGB: &diskSizeGb, + }) +) + +type NewTFModelTestCase struct { + expectedTFModel *flexcluster.TFModel + input *admin.FlexClusterDescription20241113 +} + +type NewTFModelDSPTestCase struct { + expectedTFModelDSP *flexcluster.TFModelDSP + input []admin.FlexClusterDescription20241113 +} + +type NewAtlasCreateReqTestCase struct { + input *flexcluster.TFModel + expectedSDKReq *admin.FlexClusterDescriptionCreate20241113 +} + +type NewAtlasUpdateReqTestCase struct { + input *flexcluster.TFModel + expectedSDKReq *admin.FlexClusterDescriptionUpdate20241113 +} + +func TestNewTFModel(t *testing.T) { + providerSettingsTF := &flexcluster.TFProviderSettings{ + ProviderName: types.StringNull(), + RegionName: types.StringNull(), + BackingProviderName: types.StringNull(), + DiskSizeGb: types.Float64Null(), + } + nilProviderSettingsObject, _ := types.ObjectValueFrom(context.Background(), flexcluster.ProviderSettingsType.AttributeTypes(), providerSettingsTF) + testCases := map[string]NewTFModelTestCase{ + "Complete TF state": { + expectedTFModel: &flexcluster.TFModel{ + ProjectId: types.StringValue(projectID), + Id: types.StringValue(id), + Tags: types.MapValueMust(types.StringType, map[string]attr.Value{ + key1: types.StringValue(value1), + }), + ProviderSettings: *providerSettingsObject, + ConnectionStrings: *connectionStringsObject, + CreateDate: types.StringValue(createDate), + MongoDbversion: types.StringValue(mongoDBVersion), + Name: types.StringValue(name), + ClusterType: types.StringValue(clusterType), + StateName: types.StringValue(stateName), + VersionReleaseSystem: types.StringValue(versionReleaseSystem), + BackupSettings: *backupSettingsObject, + TerminationProtectionEnabled: types.BoolValue(terminationProtectionEnabled), + }, + input: &admin.FlexClusterDescription20241113{ + GroupId: &projectID, + Id: &id, + Tags: &[]admin.ResourceTag{ + { + Key: key1, + Value: value1, + }, + }, + ProviderSettings: 
admin.FlexProviderSettings20241113{ + ProviderName: &providerName, + RegionName: ®ionName, + BackingProviderName: &backingProviderName, + DiskSizeGB: &diskSizeGb, + }, + ConnectionStrings: &admin.FlexConnectionStrings20241113{ + Standard: &standardConnectionString, + StandardSrv: &standardSrvConnectionString, + }, + CreateDate: &createDateTime, + MongoDBVersion: &mongoDBVersion, + Name: &name, + ClusterType: &clusterType, + StateName: &stateName, + VersionReleaseSystem: &versionReleaseSystem, + BackupSettings: &admin.FlexBackupSettings20241113{ + Enabled: conversion.Pointer(true), + }, + TerminationProtectionEnabled: &terminationProtectionEnabled, + }, + }, + "Nil values": { + expectedTFModel: &flexcluster.TFModel{ + ProjectId: types.StringNull(), + Id: types.StringNull(), + Tags: types.MapValueMust(types.StringType, map[string]attr.Value{}), + ProviderSettings: nilProviderSettingsObject, + ConnectionStrings: types.ObjectNull(flexcluster.ConnectionStringsType.AttrTypes), + CreateDate: types.StringNull(), + MongoDbversion: types.StringNull(), + Name: types.StringNull(), + ClusterType: types.StringNull(), + StateName: types.StringNull(), + VersionReleaseSystem: types.StringNull(), + BackupSettings: types.ObjectNull(flexcluster.BackupSettingsType.AttrTypes), + TerminationProtectionEnabled: types.BoolNull(), + }, + input: &admin.FlexClusterDescription20241113{ + GroupId: nil, + Id: nil, + Tags: &[]admin.ResourceTag{}, + ProviderSettings: admin.FlexProviderSettings20241113{}, + ConnectionStrings: nil, + CreateDate: nil, + MongoDBVersion: nil, + Name: nil, + ClusterType: nil, + StateName: nil, + VersionReleaseSystem: nil, + BackupSettings: nil, + TerminationProtectionEnabled: nil, + }, + }, + } + + for testName, tc := range testCases { + t.Run(testName, func(t *testing.T) { + tfModel, diags := flexcluster.NewTFModel(context.Background(), tc.input) + if diags.HasError() { + t.Errorf("unexpected errors found: %s", diags.Errors()[0].Summary()) + } + assert.Equal(t, tc.expectedTFModel, tfModel, "created TF model did not match expected output") + }) + } +} + +func TestNewTFModelDSP(t *testing.T) { + testCases := map[string]NewTFModelDSPTestCase{ + "Complete TF state": { + expectedTFModelDSP: &flexcluster.TFModelDSP{ + ProjectId: types.StringValue(projectID), + Results: []flexcluster.TFModel{ + { + ProjectId: types.StringValue(projectID), + Id: types.StringValue(id), + Tags: types.MapValueMust(types.StringType, map[string]attr.Value{ + key1: types.StringValue(value1), + }), + ProviderSettings: *providerSettingsObject, + ConnectionStrings: *connectionStringsObject, + CreateDate: types.StringValue(createDate), + MongoDbversion: types.StringValue(mongoDBVersion), + Name: types.StringValue(name), + ClusterType: types.StringValue(clusterType), + StateName: types.StringValue(stateName), + VersionReleaseSystem: types.StringValue(versionReleaseSystem), + BackupSettings: *backupSettingsObject, + TerminationProtectionEnabled: types.BoolValue(terminationProtectionEnabled), + }, + { + ProjectId: types.StringValue(projectID), + Id: types.StringValue("id-2"), + Tags: types.MapValueMust(types.StringType, map[string]attr.Value{ + key1: types.StringValue(value1), + }), + ProviderSettings: *providerSettingsObject, + ConnectionStrings: *connectionStringsObject, + CreateDate: types.StringValue(createDate), + MongoDbversion: types.StringValue(mongoDBVersion), + Name: types.StringValue(name), + ClusterType: types.StringValue(clusterType), + StateName: types.StringValue(stateName), + VersionReleaseSystem: 
types.StringValue(versionReleaseSystem), + BackupSettings: *backupSettingsObject, + TerminationProtectionEnabled: types.BoolValue(terminationProtectionEnabled), + }, + }, + }, + input: []admin.FlexClusterDescription20241113{ + { + GroupId: &projectID, + Id: &id, + Tags: &[]admin.ResourceTag{ + { + Key: key1, + Value: value1, + }, + }, + ProviderSettings: admin.FlexProviderSettings20241113{ + ProviderName: &providerName, + RegionName: ®ionName, + BackingProviderName: &backingProviderName, + DiskSizeGB: &diskSizeGb, + }, + ConnectionStrings: &admin.FlexConnectionStrings20241113{ + Standard: &standardConnectionString, + StandardSrv: &standardSrvConnectionString, + }, + CreateDate: &createDateTime, + MongoDBVersion: &mongoDBVersion, + Name: &name, + ClusterType: &clusterType, + StateName: &stateName, + VersionReleaseSystem: &versionReleaseSystem, + BackupSettings: &admin.FlexBackupSettings20241113{ + Enabled: conversion.Pointer(true), + }, + TerminationProtectionEnabled: &terminationProtectionEnabled, + }, + { + GroupId: &projectID, + Id: conversion.StringPtr("id-2"), + Tags: &[]admin.ResourceTag{ + { + Key: key1, + Value: value1, + }, + }, + ProviderSettings: admin.FlexProviderSettings20241113{ + ProviderName: &providerName, + RegionName: ®ionName, + BackingProviderName: &backingProviderName, + DiskSizeGB: &diskSizeGb, + }, + ConnectionStrings: &admin.FlexConnectionStrings20241113{ + Standard: &standardConnectionString, + StandardSrv: &standardSrvConnectionString, + }, + CreateDate: &createDateTime, + MongoDBVersion: &mongoDBVersion, + Name: &name, + ClusterType: &clusterType, + StateName: &stateName, + VersionReleaseSystem: &versionReleaseSystem, + BackupSettings: &admin.FlexBackupSettings20241113{ + Enabled: conversion.Pointer(true), + }, + TerminationProtectionEnabled: &terminationProtectionEnabled, + }, + }, + }, + "No Flex Clusters": { + expectedTFModelDSP: &flexcluster.TFModelDSP{ + ProjectId: types.StringValue(projectID), + Results: []flexcluster.TFModel{}, + }, + input: []admin.FlexClusterDescription20241113{}, + }, + } + for testName, tc := range testCases { + t.Run(testName, func(t *testing.T) { + tfModelDSP, diags := flexcluster.NewTFModelDSP(context.Background(), projectID, tc.input) + if diags.HasError() { + t.Errorf("unexpected errors found: %s", diags.Errors()[0].Summary()) + } + assert.Equal(t, tc.expectedTFModelDSP, tfModelDSP, "created TF model DSP did not match expected output") + }) + } +} + +func TestNewAtlasCreateReq(t *testing.T) { + testCases := map[string]NewAtlasCreateReqTestCase{ + "Complete TF state": { + input: &flexcluster.TFModel{ + ProjectId: types.StringValue(projectID), + Id: types.StringValue(id), + Tags: types.MapValueMust(types.StringType, map[string]attr.Value{ + key1: types.StringValue(value1), + }), + ProviderSettings: *providerSettingsObject, + ConnectionStrings: *connectionStringsObject, + CreateDate: types.StringValue(createDate), + MongoDbversion: types.StringValue(mongoDBVersion), + Name: types.StringValue(name), + ClusterType: types.StringValue(clusterType), + StateName: types.StringValue(stateName), + VersionReleaseSystem: types.StringValue(versionReleaseSystem), + BackupSettings: *backupSettingsObject, + TerminationProtectionEnabled: types.BoolValue(terminationProtectionEnabled), + }, + expectedSDKReq: &admin.FlexClusterDescriptionCreate20241113{ + Name: name, + Tags: &[]admin.ResourceTag{ + { + Key: key1, + Value: value1, + }, + }, + ProviderSettings: admin.FlexProviderSettingsCreate20241113{ + RegionName: regionName, + BackingProviderName: 
backingProviderName, + }, + TerminationProtectionEnabled: &terminationProtectionEnabled, + }, + }, + } + + for testName, tc := range testCases { + t.Run(testName, func(t *testing.T) { + apiReqResult, diags := flexcluster.NewAtlasCreateReq(context.Background(), tc.input) + if diags.HasError() { + t.Errorf("unexpected errors found: %s", diags.Errors()[0].Summary()) + } + assert.Equal(t, tc.expectedSDKReq, apiReqResult, "created sdk model did not match expected output") + }) + } +} + +func TestNewAtlasUpdateReq(t *testing.T) { + testCases := map[string]NewAtlasUpdateReqTestCase{ + "Complete TF state": { + input: &flexcluster.TFModel{ + ProjectId: types.StringValue(projectID), + Id: types.StringValue(id), + Tags: types.MapValueMust(types.StringType, map[string]attr.Value{ + key1: types.StringValue(value1), + }), + ProviderSettings: *providerSettingsObject, + ConnectionStrings: *connectionStringsObject, + CreateDate: types.StringValue(createDate), + MongoDbversion: types.StringValue(mongoDBVersion), + Name: types.StringValue(name), + ClusterType: types.StringValue(clusterType), + StateName: types.StringValue(stateName), + VersionReleaseSystem: types.StringValue(versionReleaseSystem), + BackupSettings: *backupSettingsObject, + TerminationProtectionEnabled: types.BoolValue(terminationProtectionEnabled), + }, + expectedSDKReq: &admin.FlexClusterDescriptionUpdate20241113{ + Tags: &[]admin.ResourceTag{ + { + Key: key1, + Value: value1, + }, + }, + TerminationProtectionEnabled: &terminationProtectionEnabled, + }, + }, + } + + for testName, tc := range testCases { + t.Run(testName, func(t *testing.T) { + apiReqResult, diags := flexcluster.NewAtlasUpdateReq(context.Background(), tc.input) + if diags.HasError() { + t.Errorf("unexpected errors found: %s", diags.Errors()[0].Summary()) + } + assert.Equal(t, tc.expectedSDKReq, apiReqResult, "created sdk model did not match expected output") + }) + } +} diff --git a/internal/service/flexcluster/plural_data_source.go b/internal/service/flexcluster/plural_data_source.go new file mode 100644 index 0000000000..4403955b78 --- /dev/null +++ b/internal/service/flexcluster/plural_data_source.go @@ -0,0 +1,65 @@ +package flexcluster + +import ( + "context" + "fmt" + "net/http" + + "github.com/hashicorp/terraform-plugin-framework/datasource" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/dsschema" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" + "go.mongodb.org/atlas-sdk/v20241113001/admin" +) + +var _ datasource.DataSource = &pluralDS{} +var _ datasource.DataSourceWithConfigure = &pluralDS{} + +func PluralDataSource() datasource.DataSource { + return &pluralDS{ + DSCommon: config.DSCommon{ + DataSourceName: fmt.Sprintf("%ss", resourceName), + }, + } +} + +type pluralDS struct { + config.DSCommon +} + +func (d *pluralDS) Schema(ctx context.Context, req datasource.SchemaRequest, resp *datasource.SchemaResponse) { + resp.Schema = PluralDataSourceSchema(ctx) + conversion.UpdateSchemaDescription(&resp.Schema) +} + +func (d *pluralDS) Read(ctx context.Context, req datasource.ReadRequest, resp *datasource.ReadResponse) { + var tfModel TFModelDSP + resp.Diagnostics.Append(req.Config.Get(ctx, &tfModel)...) 
+ if resp.Diagnostics.HasError() { + return + } + + connV2 := d.Client.AtlasV2 + + params := admin.ListFlexClustersApiParams{ + GroupId: tfModel.ProjectId.ValueString(), + } + + sdkProcessors, err := dsschema.AllPages(ctx, func(ctx context.Context, pageNum int) (dsschema.PaginateResponse[admin.FlexClusterDescription20241113], *http.Response, error) { + request := connV2.FlexClustersApi.ListFlexClustersWithParams(ctx, ¶ms) + request = request.PageNum(pageNum) + return request.Execute() + }) + + if err != nil { + resp.Diagnostics.AddError("error reading plural data source", err.Error()) + return + } + + newFlexClustersModel, diags := NewTFModelDSP(ctx, tfModel.ProjectId.ValueString(), sdkProcessors) + if diags.HasError() { + resp.Diagnostics.Append(diags...) + return + } + resp.Diagnostics.Append(resp.State.Set(ctx, newFlexClustersModel)...) +} diff --git a/internal/service/flexcluster/plural_data_source_schema.go b/internal/service/flexcluster/plural_data_source_schema.go new file mode 100644 index 0000000000..9f9852fff5 --- /dev/null +++ b/internal/service/flexcluster/plural_data_source_schema.go @@ -0,0 +1,31 @@ +package flexcluster + +import ( + "context" + + "github.com/hashicorp/terraform-plugin-framework/datasource/schema" + "github.com/hashicorp/terraform-plugin-framework/types" +) + +func PluralDataSourceSchema(ctx context.Context) schema.Schema { + return schema.Schema{ + Attributes: map[string]schema.Attribute{ + "project_id": schema.StringAttribute{ + Required: true, + MarkdownDescription: "Unique 24-hexadecimal digit string that identifies your project. Use the [/groups](#tag/Projects/operation/listProjects) endpoint to retrieve all projects to which the authenticated user has access.\n\n**NOTE**: Groups and projects are synonymous terms. Your group id is the same as your project id. For existing groups, your group/project id remains the same. 
The resource and corresponding endpoints use the term groups.", + }, + "results": schema.ListNestedAttribute{ + Computed: true, + NestedObject: schema.NestedAttributeObject{ + Attributes: dataSourceSchema(true), + }, + MarkdownDescription: "List of returned documents that MongoDB Cloud provides when completing this request.", + }, + }, + } +} + +type TFModelDSP struct { + ProjectId types.String `tfsdk:"project_id"` + Results []TFModel `tfsdk:"results"` +} diff --git a/internal/service/flexcluster/resource.go b/internal/service/flexcluster/resource.go new file mode 100644 index 0000000000..5fddefa0a1 --- /dev/null +++ b/internal/service/flexcluster/resource.go @@ -0,0 +1,199 @@ +package flexcluster + +import ( + "context" + "errors" + "net/http" + "regexp" + + "github.com/hashicorp/terraform-plugin-framework/path" + "github.com/hashicorp/terraform-plugin-framework/resource" + "go.mongodb.org/atlas-sdk/v20241113001/admin" + + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/retrystrategy" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" +) + +const resourceName = "flex_cluster" +const ErrorUpdateNotAllowed = "update not allowed" + +var _ resource.ResourceWithConfigure = &rs{} +var _ resource.ResourceWithImportState = &rs{} + +func Resource() resource.Resource { + return &rs{ + RSCommon: config.RSCommon{ + ResourceName: resourceName, + }, + } +} + +type rs struct { + config.RSCommon +} + +func (r *rs) Schema(ctx context.Context, req resource.SchemaRequest, resp *resource.SchemaResponse) { + resp.Schema = ResourceSchema(ctx) + conversion.UpdateSchemaDescription(&resp.Schema) +} + +func (r *rs) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) { + var tfModel TFModel + resp.Diagnostics.Append(req.Plan.Get(ctx, &tfModel)...) + if resp.Diagnostics.HasError() { + return + } + + flexClusterReq, diags := NewAtlasCreateReq(ctx, &tfModel) + if diags.HasError() { + resp.Diagnostics.Append(diags...) + return + } + + projectID := tfModel.ProjectId.ValueString() + clusterName := tfModel.Name.ValueString() + + connV2 := r.Client.AtlasV2 + _, _, err := connV2.FlexClustersApi.CreateFlexCluster(ctx, projectID, flexClusterReq).Execute() + if err != nil { + resp.Diagnostics.AddError("error creating resource", err.Error()) + return + } + + flexClusterParams := &admin.GetFlexClusterApiParams{ + GroupId: projectID, + Name: clusterName, + } + + flexClusterResp, err := WaitStateTransition(ctx, flexClusterParams, connV2.FlexClustersApi, []string{retrystrategy.RetryStrategyCreatingState}, []string{retrystrategy.RetryStrategyIdleState}) + if err != nil { + resp.Diagnostics.AddError("error waiting for resource to be created", err.Error()) + return + } + + newFlexClusterModel, diags := NewTFModel(ctx, flexClusterResp) + if diags.HasError() { + resp.Diagnostics.Append(diags...) + return + } + resp.Diagnostics.Append(resp.State.Set(ctx, newFlexClusterModel)...) +} + +func (r *rs) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) { + var flexClusterState TFModel + resp.Diagnostics.Append(req.State.Get(ctx, &flexClusterState)...) 
+ if resp.Diagnostics.HasError() { + return + } + + connV2 := r.Client.AtlasV2 + flexCluster, apiResp, err := connV2.FlexClustersApi.GetFlexCluster(ctx, flexClusterState.ProjectId.ValueString(), flexClusterState.Name.ValueString()).Execute() + if err != nil { + if apiResp != nil && apiResp.StatusCode == http.StatusNotFound { + resp.State.RemoveResource(ctx) + return + } + resp.Diagnostics.AddError("error fetching resource", err.Error()) + return + } + + newFlexClusterModel, diags := NewTFModel(ctx, flexCluster) + if diags.HasError() { + resp.Diagnostics.Append(diags...) + return + } + resp.Diagnostics.Append(resp.State.Set(ctx, newFlexClusterModel)...) +} + +func (r *rs) Update(ctx context.Context, req resource.UpdateRequest, resp *resource.UpdateResponse) { + var plan TFModel + resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...) + if resp.Diagnostics.HasError() { + return + } + + flexClusterReq, diags := NewAtlasUpdateReq(ctx, &plan) + if diags.HasError() { + resp.Diagnostics.Append(diags...) + return + } + + projectID := plan.ProjectId.ValueString() + clusterName := plan.Name.ValueString() + + connV2 := r.Client.AtlasV2 + _, _, err := connV2.FlexClustersApi.UpdateFlexCluster(ctx, projectID, plan.Name.ValueString(), flexClusterReq).Execute() + if err != nil { + resp.Diagnostics.AddError("error updating resource", err.Error()) + return + } + + flexClusterParams := &admin.GetFlexClusterApiParams{ + GroupId: projectID, + Name: clusterName, + } + + flexClusterResp, err := WaitStateTransition(ctx, flexClusterParams, connV2.FlexClustersApi, []string{retrystrategy.RetryStrategyUpdatingState}, []string{retrystrategy.RetryStrategyIdleState}) + if err != nil { + resp.Diagnostics.AddError("error waiting for resource to be updated", err.Error()) + return + } + + newFlexClusterModel, diags := NewTFModel(ctx, flexClusterResp) + if diags.HasError() { + resp.Diagnostics.Append(diags...) + return + } + resp.Diagnostics.Append(resp.State.Set(ctx, newFlexClusterModel)...) +} + +func (r *rs) Delete(ctx context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) { + var flexClusterState *TFModel + resp.Diagnostics.Append(req.State.Get(ctx, &flexClusterState)...) + if resp.Diagnostics.HasError() { + return + } + + connV2 := r.Client.AtlasV2 + if _, _, err := connV2.FlexClustersApi.DeleteFlexCluster(ctx, flexClusterState.ProjectId.ValueString(), flexClusterState.Name.ValueString()).Execute(); err != nil { + resp.Diagnostics.AddError("error deleting resource", err.Error()) + return + } + + flexClusterParams := &admin.GetFlexClusterApiParams{ + GroupId: flexClusterState.ProjectId.ValueString(), + Name: flexClusterState.Name.ValueString(), + } + + if err := WaitStateTransitionDelete(ctx, flexClusterParams, connV2.FlexClustersApi); err != nil { + resp.Diagnostics.AddError("error waiting for resource to be deleted", err.Error()) + return + } +} + +func (r *rs) ImportState(ctx context.Context, req resource.ImportStateRequest, resp *resource.ImportStateResponse) { + projectID, name, err := splitFlexClusterImportID(req.ID) + if err != nil { + resp.Diagnostics.AddError("error splitting import ID", err.Error()) + return + } + + resp.Diagnostics.Append(resp.State.SetAttribute(ctx, path.Root("project_id"), projectID)...) + resp.Diagnostics.Append(resp.State.SetAttribute(ctx, path.Root("name"), name)...) 
+} + +func splitFlexClusterImportID(id string) (projectID, clusterName *string, err error) { + var re = regexp.MustCompile(`(?s)^([0-9a-fA-F]{24})-(.*)$`) + parts := re.FindStringSubmatch(id) + + if len(parts) != 3 { + err = errors.New("import format error: to import a flex cluster, use the format {project_id}-{cluster_name}") + return + } + + projectID = &parts[1] + clusterName = &parts[2] + + return +} diff --git a/internal/service/flexcluster/resource_schema.go b/internal/service/flexcluster/resource_schema.go new file mode 100644 index 0000000000..570024cce9 --- /dev/null +++ b/internal/service/flexcluster/resource_schema.go @@ -0,0 +1,198 @@ +package flexcluster + +import ( + "context" + + "github.com/hashicorp/terraform-plugin-framework/attr" + "github.com/hashicorp/terraform-plugin-framework/types" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/customplanmodifier" + + "github.com/hashicorp/terraform-plugin-framework/resource/schema" + "github.com/hashicorp/terraform-plugin-framework/resource/schema/boolplanmodifier" + "github.com/hashicorp/terraform-plugin-framework/resource/schema/float64planmodifier" + "github.com/hashicorp/terraform-plugin-framework/resource/schema/objectplanmodifier" + "github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier" + "github.com/hashicorp/terraform-plugin-framework/resource/schema/stringplanmodifier" +) + +func ResourceSchema(ctx context.Context) schema.Schema { + return schema.Schema{ + Attributes: map[string]schema.Attribute{ + "project_id": schema.StringAttribute{ + Required: true, + PlanModifiers: []planmodifier.String{ + customplanmodifier.NonUpdatableStringAttributePlanModifier(), + }, + MarkdownDescription: "Unique 24-hexadecimal character string that identifies the project.", + }, + "name": schema.StringAttribute{ + Required: true, + PlanModifiers: []planmodifier.String{ + customplanmodifier.NonUpdatableStringAttributePlanModifier(), + }, + MarkdownDescription: "Human-readable label that identifies the instance.", + }, + "provider_settings": schema.SingleNestedAttribute{ + Attributes: map[string]schema.Attribute{ + "backing_provider_name": schema.StringAttribute{ + Required: true, + PlanModifiers: []planmodifier.String{ + customplanmodifier.NonUpdatableStringAttributePlanModifier(), + }, + MarkdownDescription: "Cloud service provider on which MongoDB Cloud provisioned the flex cluster.", + }, + "disk_size_gb": schema.Float64Attribute{ + Computed: true, + PlanModifiers: []planmodifier.Float64{ + float64planmodifier.UseStateForUnknown(), + }, + MarkdownDescription: "Storage capacity available to the flex cluster expressed in gigabytes.", + }, + "provider_name": schema.StringAttribute{ + Computed: true, + PlanModifiers: []planmodifier.String{ + stringplanmodifier.UseStateForUnknown(), + }, + MarkdownDescription: "Human-readable label that identifies the cloud service provider.", + }, + "region_name": schema.StringAttribute{ + Required: true, + PlanModifiers: []planmodifier.String{ + customplanmodifier.NonUpdatableStringAttributePlanModifier(), + }, + MarkdownDescription: "Human-readable label that identifies the geographic location of your MongoDB flex cluster. The region you choose can affect network latency for clients accessing your databases. 
For a complete list of region names, see [AWS](https://docs.atlas.mongodb.com/reference/amazon-aws/#std-label-amazon-aws), [GCP](https://docs.atlas.mongodb.com/reference/google-gcp/), and [Azure](https://docs.atlas.mongodb.com/reference/microsoft-azure/).", + }, + }, + Required: true, + MarkdownDescription: "Group of cloud provider settings that configure the provisioned MongoDB flex cluster.", + }, + "tags": schema.MapAttribute{ + ElementType: types.StringType, + Optional: true, + MarkdownDescription: "Map that contains key-value pairs between 1 to 255 characters in length for tagging and categorizing the instance.", + }, + "backup_settings": schema.SingleNestedAttribute{ + Attributes: map[string]schema.Attribute{ + "enabled": schema.BoolAttribute{ + Computed: true, + MarkdownDescription: "Flag that indicates whether backups are performed for this flex cluster. Backup uses [TODO](TODO) for flex clusters.", + }, + }, + Computed: true, + PlanModifiers: []planmodifier.Object{ + objectplanmodifier.UseStateForUnknown(), + }, + MarkdownDescription: "Flex backup configuration", + }, + "cluster_type": schema.StringAttribute{ + Computed: true, + PlanModifiers: []planmodifier.String{ + stringplanmodifier.UseStateForUnknown(), + }, + MarkdownDescription: "Flex cluster topology.", + }, + "connection_strings": schema.SingleNestedAttribute{ + Attributes: map[string]schema.Attribute{ + "standard": schema.StringAttribute{ + Computed: true, + MarkdownDescription: "Public connection string that you can use to connect to this cluster. This connection string uses the mongodb:// protocol.", + }, + "standard_srv": schema.StringAttribute{ + Computed: true, + MarkdownDescription: "Public connection string that you can use to connect to this flex cluster. This connection string uses the `mongodb+srv://` protocol.", + }, + }, + Computed: true, + PlanModifiers: []planmodifier.Object{ + objectplanmodifier.UseStateForUnknown(), + }, + MarkdownDescription: "Collection of Uniform Resource Locators that point to the MongoDB database.", + }, + "create_date": schema.StringAttribute{ + Computed: true, + PlanModifiers: []planmodifier.String{ + stringplanmodifier.UseStateForUnknown(), + }, + MarkdownDescription: "Date and time when MongoDB Cloud created this instance. This parameter expresses its value in ISO 8601 format in UTC.", + }, + "id": schema.StringAttribute{ + Computed: true, + PlanModifiers: []planmodifier.String{ + stringplanmodifier.UseStateForUnknown(), + }, + MarkdownDescription: "Unique 24-hexadecimal digit string that identifies the instance.", + }, + "mongo_db_version": schema.StringAttribute{ + Computed: true, + MarkdownDescription: "Version of MongoDB that the instance runs.", + }, + "state_name": schema.StringAttribute{ + Computed: true, + MarkdownDescription: "Human-readable label that indicates the current operating condition of this instance.", + }, + "termination_protection_enabled": schema.BoolAttribute{ + Optional: true, + Computed: true, + PlanModifiers: []planmodifier.Bool{ + boolplanmodifier.UseStateForUnknown(), + }, + MarkdownDescription: "Flag that indicates whether termination protection is enabled on the cluster. If set to `true`, MongoDB Cloud won't delete the cluster. 
If set to `false`, MongoDB Cloud will delete the cluster.", + }, + "version_release_system": schema.StringAttribute{ + Computed: true, + PlanModifiers: []planmodifier.String{ + stringplanmodifier.UseStateForUnknown(), + }, + MarkdownDescription: "Method by which the cluster maintains the MongoDB versions.", + }, + }, + } +} + +type TFModel struct { + ProviderSettings types.Object `tfsdk:"provider_settings"` + ConnectionStrings types.Object `tfsdk:"connection_strings"` + Tags types.Map `tfsdk:"tags"` + CreateDate types.String `tfsdk:"create_date"` + ProjectId types.String `tfsdk:"project_id"` + Id types.String `tfsdk:"id"` + MongoDbversion types.String `tfsdk:"mongo_db_version"` + Name types.String `tfsdk:"name"` + ClusterType types.String `tfsdk:"cluster_type"` + StateName types.String `tfsdk:"state_name"` + VersionReleaseSystem types.String `tfsdk:"version_release_system"` + BackupSettings types.Object `tfsdk:"backup_settings"` + TerminationProtectionEnabled types.Bool `tfsdk:"termination_protection_enabled"` +} + +type TFBackupSettings struct { + Enabled types.Bool `tfsdk:"enabled"` +} + +var BackupSettingsType = types.ObjectType{AttrTypes: map[string]attr.Type{ + "enabled": types.BoolType, +}} + +type TFConnectionStrings struct { + Standard types.String `tfsdk:"standard"` + StandardSrv types.String `tfsdk:"standard_srv"` +} + +var ConnectionStringsType = types.ObjectType{AttrTypes: map[string]attr.Type{ + "standard": types.StringType, + "standard_srv": types.StringType, +}} + +type TFProviderSettings struct { + BackingProviderName types.String `tfsdk:"backing_provider_name"` + DiskSizeGb types.Float64 `tfsdk:"disk_size_gb"` + ProviderName types.String `tfsdk:"provider_name"` + RegionName types.String `tfsdk:"region_name"` +} + +var ProviderSettingsType = types.ObjectType{AttrTypes: map[string]attr.Type{ + "backing_provider_name": types.StringType, + "disk_size_gb": types.Float64Type, + "provider_name": types.StringType, + "region_name": types.StringType, +}} diff --git a/internal/service/flexcluster/resource_test.go b/internal/service/flexcluster/resource_test.go new file mode 100644 index 0000000000..0dfa2e2d46 --- /dev/null +++ b/internal/service/flexcluster/resource_test.go @@ -0,0 +1,195 @@ +package flexcluster_test + +import ( + "context" + "fmt" + "os" + "regexp" + "testing" + + "github.com/hashicorp/terraform-plugin-testing/helper/resource" + "github.com/hashicorp/terraform-plugin-testing/terraform" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc" +) + +var ( + resourceType = "mongodbatlas_flex_cluster" + resourceName = "mongodbatlas_flex_cluster.test" + dataSourceName = "data.mongodbatlas_flex_cluster.test" + dataSourcePluralName = "data.mongodbatlas_flex_clusters.test" +) + +func TestAccFlexClusterRS_basic(t *testing.T) { + tc := basicTestCase(t) + // Tests include testing of plural data source and so cannot be run in parallel + resource.Test(t, *tc) +} + +func TestAccFlexClusterRS_failedUpdate(t *testing.T) { + tc := failedUpdateTestCase(t) + resource.Test(t, *tc) +} + +func basicTestCase(t *testing.T) *resource.TestCase { + t.Helper() + var ( + projectID = os.Getenv("MONGODB_ATLAS_FLEX_PROJECT_ID") + clusterName = acc.RandomName() + provider = "AWS" + region = "US_EAST_1" + ) + return &resource.TestCase{ + PreCheck: func() { acc.PreCheckBasic(t) }, + ProtoV6ProviderFactories: acc.TestAccProviderV6Factories, + CheckDestroy: checkDestroy, + Steps: []resource.TestStep{ + { + Config: configBasic(projectID, clusterName, provider, region, true), + Check: 
checksFlexCluster(projectID, clusterName, true), + }, + { + Config: configBasic(projectID, clusterName, provider, region, false), + Check: checksFlexCluster(projectID, clusterName, false), + }, + { + Config: configBasic(projectID, clusterName, provider, region, true), + ResourceName: resourceName, + ImportStateIdFunc: importStateIDFunc(resourceName), + ImportState: true, + ImportStateVerify: true, + }, + }, + } +} + +func failedUpdateTestCase(t *testing.T) *resource.TestCase { + t.Helper() + var ( + projectID = os.Getenv("MONGODB_ATLAS_FLEX_PROJECT_ID") + projectIDUpdated = os.Getenv("MONGODB_ATLAS_FLEX_PROJECT_ID") + "-updated" + clusterName = acc.RandomName() + clusterNameUpdated = clusterName + "-updated" + provider = "AWS" + providerUpdated = "GCP" + region = "US_EAST_1" + regionUpdated = "US_EAST_2" + ) + return &resource.TestCase{ + PreCheck: func() { acc.PreCheckBasic(t) }, + ProtoV6ProviderFactories: acc.TestAccProviderV6Factories, + CheckDestroy: checkDestroy, + Steps: []resource.TestStep{ + { + Config: configBasic(projectID, clusterName, provider, region, false), + Check: checksFlexCluster(projectID, clusterName, false), + }, + { + Config: configBasic(projectID, clusterNameUpdated, provider, region, false), + ExpectError: regexp.MustCompile("name cannot be updated"), + }, + { + Config: configBasic(projectIDUpdated, clusterName, provider, region, false), + ExpectError: regexp.MustCompile("project_id cannot be updated"), + }, + { + Config: configBasic(projectID, clusterName, providerUpdated, region, false), + ExpectError: regexp.MustCompile("provider_settings.backing_provider_name cannot be updated"), + }, + { + Config: configBasic(projectID, clusterName, provider, regionUpdated, false), + ExpectError: regexp.MustCompile("provider_settings.region_name cannot be updated"), + }, + }, + } +} + +func configBasic(projectID, clusterName, provider, region string, terminationProtectionEnabled bool) string { + return fmt.Sprintf(` + resource "mongodbatlas_flex_cluster" "test" { + project_id = %[1]q + name = %[2]q + provider_settings = { + backing_provider_name = %[3]q + region_name = %[4]q + } + termination_protection_enabled = %[5]t + tags = { + testKey = "testValue" + } + } + data "mongodbatlas_flex_cluster" "test" { + project_id = mongodbatlas_flex_cluster.test.project_id + name = mongodbatlas_flex_cluster.test.name + } + data "mongodbatlas_flex_clusters" "test" { + project_id = mongodbatlas_flex_cluster.test.project_id + }`, projectID, clusterName, provider, region, terminationProtectionEnabled) +} + +func checksFlexCluster(projectID, clusterName string, terminationProtectionEnabled bool) resource.TestCheckFunc { + checks := []resource.TestCheckFunc{checkExists()} + attrMap := map[string]string{ + "project_id": projectID, + "name": clusterName, + "termination_protection_enabled": fmt.Sprintf("%v", terminationProtectionEnabled), + "tags.testKey": "testValue", + } + pluralMap := map[string]string{ + "project_id": projectID, + "results.#": "1", + } + attrSet := []string{ + "backup_settings.enabled", + "cluster_type", + "connection_strings.standard", + "create_date", + "id", + "mongo_db_version", + "state_name", + "version_release_system", + "provider_settings.provider_name", + } + checks = acc.AddAttrChecks(dataSourcePluralName, checks, pluralMap) + return acc.CheckRSAndDS(resourceName, &dataSourceName, &dataSourcePluralName, attrSet, attrMap, checks...) 
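+	// Note: acc.CheckRSAndDS combines the existence and plural data source checks built above with the shared attrSet/attrMap assertions across the resource and the flex cluster data sources, returning a single TestCheckFunc.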
+} + +func checkExists() resource.TestCheckFunc { + return func(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type == resourceType { + projectID := rs.Primary.Attributes["project_id"] + name := rs.Primary.Attributes["name"] + _, _, err := acc.ConnV2().FlexClustersApi.GetFlexCluster(context.Background(), projectID, name).Execute() + if err != nil { + return fmt.Errorf("flex cluster (%s:%s) not found", projectID, name) + } + } + } + return nil + } +} + +func checkDestroy(state *terraform.State) error { + for _, rs := range state.RootModule().Resources { + if rs.Type == resourceType { + projectID := rs.Primary.Attributes["project_id"] + name := rs.Primary.Attributes["name"] + _, _, err := acc.ConnV2().FlexClustersApi.GetFlexCluster(context.Background(), projectID, name).Execute() + if err == nil { + return fmt.Errorf("flex cluster (%s:%s) still exists", projectID, name) + } + } + } + return nil +} + +func importStateIDFunc(resourceName string) resource.ImportStateIdFunc { + return func(s *terraform.State) (string, error) { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return "", fmt.Errorf("not found: %s", resourceName) + } + + return fmt.Sprintf("%s-%s", rs.Primary.Attributes["project_id"], rs.Primary.Attributes["name"]), nil + } +} diff --git a/internal/service/flexcluster/state_transition.go b/internal/service/flexcluster/state_transition.go new file mode 100644 index 0000000000..af1bd4f696 --- /dev/null +++ b/internal/service/flexcluster/state_transition.go @@ -0,0 +1,60 @@ +package flexcluster + +import ( + "context" + "errors" + "time" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/retrystrategy" + "go.mongodb.org/atlas-sdk/v20241113001/admin" +) + +func WaitStateTransition(ctx context.Context, requestParams *admin.GetFlexClusterApiParams, client admin.FlexClustersApi, pendingStates, desiredStates []string) (*admin.FlexClusterDescription20241113, error) { + stateConf := &retry.StateChangeConf{ + Pending: pendingStates, + Target: desiredStates, + Refresh: refreshFunc(ctx, requestParams, client), + Timeout: 3 * time.Hour, + MinTimeout: 3 * time.Second, + Delay: 0, + } + + flexClusterResp, err := stateConf.WaitForStateContext(ctx) + if err != nil { + return nil, err + } + + if flexCluster, ok := flexClusterResp.(*admin.FlexClusterDescription20241113); ok && flexCluster != nil { + return flexCluster, nil + } + + return nil, errors.New("did not obtain valid result when waiting for flex cluster state transition") +} + +func WaitStateTransitionDelete(ctx context.Context, requestParams *admin.GetFlexClusterApiParams, client admin.FlexClustersApi) error { + stateConf := &retry.StateChangeConf{ + Pending: []string{retrystrategy.RetryStrategyDeletingState}, + Target: []string{retrystrategy.RetryStrategyDeletedState}, + Refresh: refreshFunc(ctx, requestParams, client), + Timeout: 3 * time.Hour, + MinTimeout: 3 * time.Second, + Delay: 0, + } + _, err := stateConf.WaitForStateContext(ctx) + return err +} + +func refreshFunc(ctx context.Context, requestParams *admin.GetFlexClusterApiParams, client admin.FlexClustersApi) retry.StateRefreshFunc { + return func() (any, string, error) { + flexCluster, resp, err := client.GetFlexClusterWithParams(ctx, requestParams).Execute() + if err != nil { + if resp.StatusCode == 404 { + return "", retrystrategy.RetryStrategyDeletedState, nil + } + return nil, "", err + } + state := flexCluster.GetStateName() + return 
flexCluster, state, nil + } +} diff --git a/internal/service/flexcluster/state_transition_test.go b/internal/service/flexcluster/state_transition_test.go new file mode 100644 index 0000000000..bb22cfb069 --- /dev/null +++ b/internal/service/flexcluster/state_transition_test.go @@ -0,0 +1,163 @@ +package flexcluster_test + +import ( + "context" + "errors" + "net/http" + "testing" + + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/flexcluster" + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/mock" + "go.mongodb.org/atlas-sdk/v20241113001/admin" + "go.mongodb.org/atlas-sdk/v20241113001/mockadmin" +) + +var ( + IdleState = "IDLE" + CreatingState = "CREATING" + UpdatingState = "UPDATING" + DeletingState = "DELETING" + DeletedState = "DELETED" + UnknownState = "" + sc500 = conversion.IntPtr(500) + sc200 = conversion.IntPtr(200) + sc404 = conversion.IntPtr(404) + clusterName = "clusterName" + requestParams = &admin.GetFlexClusterApiParams{ + GroupId: "groupId", + Name: clusterName, + } +) + +type testCase struct { + expectedState *string + name string + mockResponses []response + desiredStates []string + pendingStates []string + expectedError bool +} + +func TestFlexClusterStateTransition(t *testing.T) { + testCases := []testCase{ + { + name: "Successful transition to IDLE", + mockResponses: []response{ + {state: &CreatingState, statusCode: sc200}, + {state: &IdleState, statusCode: sc200}, + }, + expectedState: &IdleState, + expectedError: false, + desiredStates: []string{IdleState}, + pendingStates: []string{CreatingState}, + }, + { + name: "Error when API returns 5XX", + mockResponses: []response{ + {statusCode: sc500, err: errors.New("Internal server error")}, + }, + expectedState: nil, + expectedError: true, + desiredStates: []string{IdleState}, + pendingStates: []string{CreatingState}, + }, + { + name: "Deleted state when API returns 404", + mockResponses: []response{ + {state: &DeletingState, statusCode: sc200}, + {statusCode: sc404, err: errors.New("Not found")}, + }, + expectedState: nil, + expectedError: true, + desiredStates: []string{IdleState}, + pendingStates: []string{DeletingState}, + }, + } + + for _, tc := range testCases { + t.Run(tc.name, func(t *testing.T) { + m := mockadmin.NewFlexClustersApi(t) + m.EXPECT().GetFlexClusterWithParams(mock.Anything, mock.Anything).Return(admin.GetFlexClusterApiRequest{ApiService: m}) + + for _, resp := range tc.mockResponses { + modelResp, httpResp, err := resp.get() + m.EXPECT().GetFlexClusterExecute(mock.Anything).Return(modelResp, httpResp, err).Once() + } + resp, err := flexcluster.WaitStateTransition(context.Background(), requestParams, m, tc.pendingStates, tc.desiredStates) + assert.Equal(t, tc.expectedError, err != nil) + if resp != nil { + assert.Equal(t, *tc.expectedState, *resp.StateName) + } + }) + } +} + +func TestFlexClusterStateTransitionForDelete(t *testing.T) { + testCases := []testCase{ + { + name: "Successful transition to DELETED", + mockResponses: []response{ + {state: &DeletingState, statusCode: sc200}, + {statusCode: sc404, err: errors.New("Not found")}, + }, + expectedError: false, + }, + { + name: "Error when API responds with error", + mockResponses: []response{ + {statusCode: sc500, err: errors.New("Internal server error")}, + }, + expectedError: true, + }, + { + name: "Failed delete when responding with unknown state", + mockResponses: []response{ + {state: &DeletingState}, + {state: 
&UnknownState}, + }, + expectedError: true, + }, + } + + for _, tc := range testCases { + t.Run(tc.name, func(t *testing.T) { + m := mockadmin.NewFlexClustersApi(t) + m.EXPECT().GetFlexClusterWithParams(mock.Anything, mock.Anything).Return(admin.GetFlexClusterApiRequest{ApiService: m}) + + for _, resp := range tc.mockResponses { + modelResp, httpResp, err := resp.get() + m.EXPECT().GetFlexClusterExecute(mock.Anything).Return(modelResp, httpResp, err).Once() + } + err := flexcluster.WaitStateTransitionDelete(context.Background(), requestParams, m) + assert.Equal(t, tc.expectedError, err != nil) + }) + } +} + +type response struct { + state *string + statusCode *int + err error +} + +func (r *response) get() (*admin.FlexClusterDescription20241113, *http.Response, error) { + var httpResp *http.Response + if r.statusCode != nil { + httpResp = &http.Response{ + StatusCode: *r.statusCode, + } + } + return responseWithState(r.state), httpResp, r.err +} + +func responseWithState(state *string) *admin.FlexClusterDescription20241113 { + if state == nil { + return nil + } + return &admin.FlexClusterDescription20241113{ + Name: &clusterName, + StateName: state, + } +} diff --git a/internal/service/flexcluster/tfplugingen/generator_config.yml b/internal/service/flexcluster/tfplugingen/generator_config.yml new file mode 100644 index 0000000000..d8ca4550d9 --- /dev/null +++ b/internal/service/flexcluster/tfplugingen/generator_config.yml @@ -0,0 +1,21 @@ +provider: + name: mongodbatlas + +resources: + flex_cluster: + read: + path: /api/atlas/v2/groups/{groupId}/flexClusters/{name} + method: GET + create: + path: /api/atlas/v2/groups/{groupId}/flexClusters + method: POST + +data_sources: + flex_cluster: + read: + path: /api/atlas/v2/groups/{groupId}/flexClusters/{name} + method: GET + flex_clusters: + read: + path: /api/atlas/v2/groups/{groupId}/flexClusters + method: GET diff --git a/internal/service/privatelinkendpointserverless/resource_privatelink_endpoint_serverless.go b/internal/service/privatelinkendpointserverless/resource_privatelink_endpoint_serverless.go index e0a19714e8..d663a1d918 100644 --- a/internal/service/privatelinkendpointserverless/resource_privatelink_endpoint_serverless.go +++ b/internal/service/privatelinkendpointserverless/resource_privatelink_endpoint_serverless.go @@ -15,6 +15,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/privatelinkendpoint" @@ -34,6 +35,7 @@ func Resource() *schema.Resource { Importer: &schema.ResourceImporter{ StateContext: resourceImport, }, + DeprecationMessage: fmt.Sprintf(constant.DeprecationResourceByDateWithExternalLink, "March 2025", "https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide"), Schema: map[string]*schema.Schema{ "project_id": { Type: schema.TypeString, diff --git a/internal/service/privatelinkendpointserviceserverless/data_source_privatelink_endpoint_service_serverless.go b/internal/service/privatelinkendpointserviceserverless/data_source_privatelink_endpoint_service_serverless.go index 8a7331f7b3..e69fbe840b 100644 --- 
a/internal/service/privatelinkendpointserviceserverless/data_source_privatelink_endpoint_service_serverless.go +++ b/internal/service/privatelinkendpointserviceserverless/data_source_privatelink_endpoint_service_serverless.go @@ -6,6 +6,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/privatelinkendpointservice" @@ -13,7 +14,8 @@ import ( func DataSource() *schema.Resource { return &schema.Resource{ - ReadContext: dataSourceRead, + ReadContext: dataSourceRead, + DeprecationMessage: fmt.Sprintf(constant.DeprecationDataSourceByDateWithExternalLink, "March 2025", "https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide"), Schema: map[string]*schema.Schema{ "project_id": { Type: schema.TypeString, diff --git a/internal/service/privatelinkendpointserviceserverless/data_source_privatelink_endpoints_service_serverless.go b/internal/service/privatelinkendpointserviceserverless/data_source_privatelink_endpoints_service_serverless.go index 1641b1051e..0c509088bb 100644 --- a/internal/service/privatelinkendpointserviceserverless/data_source_privatelink_endpoints_service_serverless.go +++ b/internal/service/privatelinkendpointserviceserverless/data_source_privatelink_endpoints_service_serverless.go @@ -2,17 +2,20 @@ package privatelinkendpointserviceserverless import ( "context" + "fmt" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" "go.mongodb.org/atlas-sdk/v20241113001/admin" ) func PluralDataSource() *schema.Resource { return &schema.Resource{ - ReadContext: dataSourcePluralRead, + ReadContext: dataSourcePluralRead, + DeprecationMessage: fmt.Sprintf(constant.DeprecationDataSourceByDateWithExternalLink, "March 2025", "https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide"), Schema: map[string]*schema.Schema{ "project_id": { Type: schema.TypeString, diff --git a/internal/service/privatelinkendpointserviceserverless/resource_privatelink_endpoint_service_serverless.go b/internal/service/privatelinkendpointserviceserverless/resource_privatelink_endpoint_service_serverless.go index 358c53c583..4c1ad5f2ff 100644 --- a/internal/service/privatelinkendpointserviceserverless/resource_privatelink_endpoint_service_serverless.go +++ b/internal/service/privatelinkendpointserviceserverless/resource_privatelink_endpoint_service_serverless.go @@ -15,6 +15,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" ) @@ -34,6 +35,7 @@ func Resource() *schema.Resource { Importer: &schema.ResourceImporter{ StateContext: resourceImport, }, + DeprecationMessage: 
fmt.Sprintf(constant.DeprecationResourceByDateWithExternalLink, "March 2025", "https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide"), Schema: map[string]*schema.Schema{ "project_id": { Type: schema.TypeString, diff --git a/internal/service/project/model_project.go b/internal/service/project/model_project.go index 35d058ac8f..93eaf6ea8a 100644 --- a/internal/service/project/model_project.go +++ b/internal/service/project/model_project.go @@ -5,7 +5,6 @@ import ( "go.mongodb.org/atlas-sdk/v20241113001/admin" - "github.com/hashicorp/terraform-plugin-framework/attr" "github.com/hashicorp/terraform-plugin-framework/diag" "github.com/hashicorp/terraform-plugin-framework/types" @@ -35,7 +34,7 @@ func NewTFProjectDataSourceModel(ctx context.Context, project *admin.Group, proj Teams: NewTFTeamsDataSourceModel(ctx, projectProps.Teams), Limits: NewTFLimitsDataSourceModel(ctx, projectProps.Limits), IPAddresses: ipAddressesModel, - Tags: NewTFTags(project.GetTags()), + Tags: conversion.NewTFTags(project.GetTags()), IsSlowOperationThresholdingEnabled: types.BoolValue(projectProps.IsSlowOperationThresholdingEnabled), }, nil } @@ -111,7 +110,7 @@ func NewTFProjectResourceModel(ctx context.Context, projectRes *admin.Group, pro Teams: newTFTeamsResourceModel(ctx, projectProps.Teams), Limits: newTFLimitsResourceModel(ctx, projectProps.Limits), IPAddresses: ipAddressesModel, - Tags: NewTFTags(projectRes.GetTags()), + Tags: conversion.NewTFTags(projectRes.GetTags()), IsSlowOperationThresholdingEnabled: types.BoolValue(projectProps.IsSlowOperationThresholdingEnabled), } @@ -208,27 +207,3 @@ func UpdateProjectBool(plan, state types.Bool, setting **bool) bool { } return false } - -func NewTFTags(tags []admin.ResourceTag) types.Map { - typesTags := make(map[string]attr.Value, len(tags)) - for _, tag := range tags { - typesTags[tag.Key] = types.StringValue(tag.Value) - } - return types.MapValueMust(types.StringType, typesTags) -} - -func NewResourceTags(ctx context.Context, tags types.Map) []admin.ResourceTag { - if tags.IsNull() || len(tags.Elements()) == 0 { - return []admin.ResourceTag{} - } - elements := make(map[string]types.String, len(tags.Elements())) - _ = tags.ElementsAs(ctx, &elements, false) - var tagsAdmin []admin.ResourceTag - for key, tagValue := range elements { - tagsAdmin = append(tagsAdmin, admin.ResourceTag{ - Key: key, - Value: tagValue.ValueString(), - }) - } - return tagsAdmin -} diff --git a/internal/service/project/model_project_test.go b/internal/service/project/model_project_test.go index 288272c854..f9dc07b080 100644 --- a/internal/service/project/model_project_test.go +++ b/internal/service/project/model_project_test.go @@ -236,7 +236,7 @@ func TestProjectDataSourceSDKToDataSourceTFModel(t *testing.T) { Limits: limitsTF, IPAddresses: ipAddressesTF, Created: types.StringValue("0001-01-01T00:00:00Z"), - Tags: emptyTfTags(), + Tags: types.MapValueMust(types.StringType, map[string]attr.Value{}), }, }, { @@ -271,7 +271,7 @@ func TestProjectDataSourceSDKToDataSourceTFModel(t *testing.T) { Limits: limitsTF, IPAddresses: ipAddressesTF, Created: types.StringValue("0001-01-01T00:00:00Z"), - Tags: emptyTfTags(), + Tags: types.MapValueMust(types.StringType, map[string]attr.Value{}), }, }, } @@ -323,7 +323,7 @@ func TestProjectDataSourceSDKToResourceTFModel(t *testing.T) { Limits: limitsTFSet, IPAddresses: ipAddressesTF, Created: types.StringValue("0001-01-01T00:00:00Z"), - Tags: emptyTfTags(), + Tags: types.MapValueMust(types.StringType, 
map[string]attr.Value{}), }, }, { @@ -356,7 +356,7 @@ func TestProjectDataSourceSDKToResourceTFModel(t *testing.T) { Limits: limitsTFSet, IPAddresses: ipAddressesTF, Created: types.StringValue("0001-01-01T00:00:00Z"), - Tags: emptyTfTags(), + Tags: types.MapValueMust(types.StringType, map[string]attr.Value{}), }, }, } @@ -550,48 +550,3 @@ func TestUpdateProjectBool(t *testing.T) { }) } } - -func TestNewResourceTags(t *testing.T) { - testCases := map[string]struct { - plan types.Map - expected []admin.ResourceTag - }{ - "tags null": {types.MapNull(types.StringType), []admin.ResourceTag{}}, - "tags unknown": {types.MapUnknown(types.StringType), []admin.ResourceTag{}}, - "tags convert normally": {types.MapValueMust(types.StringType, map[string]attr.Value{ - "key1": types.StringValue("value1"), - }), []admin.ResourceTag{ - *admin.NewResourceTag("key1", "value1"), - }}, - } - for name, tc := range testCases { - t.Run(name, func(t *testing.T) { - assert.Equal(t, tc.expected, project.NewResourceTags(context.Background(), tc.plan)) - }) - } -} - -func TestNewTFTags(t *testing.T) { - var ( - tfMapEmpty = emptyTfTags() - apiListEmpty = []admin.ResourceTag{} - apiSingleTag = []admin.ResourceTag{*admin.NewResourceTag("key1", "value1")} - tfMapSingleTag = types.MapValueMust(types.StringType, map[string]attr.Value{"key1": types.StringValue("value1")}) - ) - testCases := map[string]struct { - expected types.Map - adminTags []admin.ResourceTag - }{ - "api empty list tf null should give map null": {tfMapEmpty, apiListEmpty}, - "tags single value tf null should give map single": {tfMapSingleTag, apiSingleTag}, - } - for name, tc := range testCases { - t.Run(name, func(t *testing.T) { - assert.Equal(t, tc.expected, project.NewTFTags(tc.adminTags)) - }) - } -} - -func emptyTfTags() types.Map { - return types.MapValueMust(types.StringType, map[string]attr.Value{}) -} diff --git a/internal/service/project/resource_project.go b/internal/service/project/resource_project.go index 7e73abb17d..628e137244 100644 --- a/internal/service/project/resource_project.go +++ b/internal/service/project/resource_project.go @@ -66,13 +66,12 @@ func (r *projectRS) Create(ctx context.Context, req resource.CreateRequest, resp if resp.Diagnostics.HasError() { return } - tags := NewResourceTags(ctx, projectPlan.Tags) projectGroup := &admin.Group{ OrgId: projectPlan.OrgID.ValueString(), Name: projectPlan.Name.ValueString(), WithDefaultAlertsSettings: projectPlan.WithDefaultAlertsSettings.ValueBoolPointer(), RegionUsageRestrictions: conversion.StringNullIfEmpty(projectPlan.RegionUsageRestrictions.ValueString()).ValueStringPointer(), - Tags: &tags, + Tags: conversion.NewResourceTags(ctx, projectPlan.Tags), } projectAPIParams := &admin.CreateProjectApiParams{ @@ -584,15 +583,15 @@ func hasLimitsChanged(planLimits, stateLimits []TFLimitModel) bool { } func UpdateProject(ctx context.Context, projectsAPI admin.ProjectsApi, projectState, projectPlan *TFProjectRSModel) error { - tagsBefore := NewResourceTags(ctx, projectState.Tags) - tagsAfter := NewResourceTags(ctx, projectPlan.Tags) + tagsBefore := conversion.NewResourceTags(ctx, projectState.Tags) + tagsAfter := conversion.NewResourceTags(ctx, projectPlan.Tags) if projectPlan.Name.Equal(projectState.Name) && reflect.DeepEqual(tagsBefore, tagsAfter) { return nil } projectID := projectState.ID.ValueString() - if _, _, err := projectsAPI.UpdateProject(ctx, projectID, NewGroupUpdate(projectPlan, &tagsAfter)).Execute(); err != nil { + if _, _, err := projectsAPI.UpdateProject(ctx, projectID, 
NewGroupUpdate(projectPlan, tagsAfter)).Execute(); err != nil { return fmt.Errorf("error updating the project(%s): %s", projectID, err) } diff --git a/internal/service/serverlessinstance/data_source_serverless_instance.go b/internal/service/serverlessinstance/data_source_serverless_instance.go index 9588e65b65..ca799d60cf 100644 --- a/internal/service/serverlessinstance/data_source_serverless_instance.go +++ b/internal/service/serverlessinstance/data_source_serverless_instance.go @@ -2,9 +2,11 @@ package serverlessinstance import ( "context" + "fmt" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/advancedcluster" @@ -88,14 +90,16 @@ func dataSourceSchema() map[string]*schema.Schema { Computed: true, }, "continuous_backup_enabled": { - Type: schema.TypeBool, - Optional: true, - Computed: true, + Deprecated: fmt.Sprintf(constant.DeprecatioParamByDateWithExternalLink, "March 2025", "https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide"), + Type: schema.TypeBool, + Optional: true, + Computed: true, }, "auto_indexing": { - Type: schema.TypeBool, - Optional: true, - Computed: true, + Deprecated: fmt.Sprintf(constant.DeprecatioParamByDateWithExternalLink, "March 2025", "https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide"), + Type: schema.TypeBool, + Optional: true, + Computed: true, }, "tags": &advancedcluster.DSTagsSchema, } diff --git a/internal/service/serverlessinstance/resource_serverless_instance.go b/internal/service/serverlessinstance/resource_serverless_instance.go index ce8950e0a1..95c9ada716 100644 --- a/internal/service/serverlessinstance/resource_serverless_instance.go +++ b/internal/service/serverlessinstance/resource_serverless_instance.go @@ -12,6 +12,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant" "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion" "github.com/mongodb/terraform-provider-mongodbatlas/internal/config" "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/advancedcluster" @@ -110,14 +111,16 @@ func resourceSchema() map[string]*schema.Schema { Computed: true, }, "continuous_backup_enabled": { - Type: schema.TypeBool, - Optional: true, - Computed: true, + Deprecated: fmt.Sprintf(constant.DeprecatioParamByDateWithExternalLink, "March 2025", "https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide"), + Type: schema.TypeBool, + Optional: true, + Computed: true, }, "auto_indexing": { - Type: schema.TypeBool, - Optional: true, - Computed: true, + Deprecated: fmt.Sprintf(constant.DeprecatioParamByDateWithExternalLink, "March 2025", "https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide"), + Type: schema.TypeBool, + Optional: true, + Computed: true, }, "tags": &advancedcluster.RSTagsSchema, } diff --git 
a/templates/data-source.md.tmpl b/templates/data-source.md.tmpl index 45b3c38584..b649202982 100644 --- a/templates/data-source.md.tmpl +++ b/templates/data-source.md.tmpl @@ -9,9 +9,7 @@ {{ else if eq .Name "mongodbatlas_privatelink_endpoint" }} {{ tffile (printf "examples/%s/aws/cluster/main.tf" .Name )}} {{ else if eq .Name "mongodbatlas_privatelink_endpoint_service_serverless" }} - {{ tffile (printf "examples/%s/aws/main.tf" .Name )}} {{ else if eq .Name "mongodbatlas_privatelink_endpoint_serverless" }} - {{ tffile "examples/mongodbatlas_privatelink_endpoint_service_serverless/aws/main.tf" }} {{ else if eq .Name "mongodbatlas_cluster" }} {{ tffile (printf "examples/%s/tenant-upgrade/main.tf" .Name )}} {{ else if eq .Name "mongodbatlas_cluster" }} diff --git a/templates/data-sources/flex_cluster.md.tmpl b/templates/data-sources/flex_cluster.md.tmpl new file mode 100644 index 0000000000..9e1051b71b --- /dev/null +++ b/templates/data-sources/flex_cluster.md.tmpl @@ -0,0 +1,10 @@ +# {{.Type}}: {{.Name}} + +`{{.Name}}` describes a flex cluster. + +## Example Usages +{{ tffile (printf "examples/%s/main.tf" .Name )}} + +{{ .SchemaMarkdown | trimspace }} + +For more information see: [MongoDB Atlas API - Flex Cluster](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Flex-Clusters/operation/getFlexCluster) Documentation. diff --git a/templates/data-sources/flex_clusters.md.tmpl b/templates/data-sources/flex_clusters.md.tmpl new file mode 100644 index 0000000000..7d0e6f430b --- /dev/null +++ b/templates/data-sources/flex_clusters.md.tmpl @@ -0,0 +1,10 @@ +# {{.Type}}: {{.Name}} + +`{{.Name}}` returns all flex clusters in a project. + +## Example Usages +{{ tffile (printf "examples/mongodbatlas_flex_cluster/main.tf" )}} + +{{ .SchemaMarkdown | trimspace }} + +For more information see: [MongoDB Atlas API - Flex Clusters](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/v2/#tag/Streams/operation/listFlexClusters) Documentation. diff --git a/templates/resources.md.tmpl b/templates/resources.md.tmpl index b2f176cdf3..855f829e7a 100644 --- a/templates/resources.md.tmpl +++ b/templates/resources.md.tmpl @@ -9,9 +9,7 @@ {{ else if eq .Name "mongodbatlas_privatelink_endpoint" }} {{ tffile (printf "examples/%s/aws/cluster/main.tf" .Name )}} {{ else if eq .Name "mongodbatlas_privatelink_endpoint_service_serverless" }} - {{ tffile (printf "examples/%s/aws/main.tf" .Name )}} {{ else if eq .Name "mongodbatlas_privatelink_endpoint_serverless" }} - {{ tffile "examples/mongodbatlas_privatelink_endpoint_service_serverless/aws/main.tf" }} {{ else if eq .Name "mongodbatlas_cluster" }} {{ tffile (printf "examples/%s/tenant-upgrade/main.tf" .Name )}} {{ else if eq .Name "mongodbatlas_cluster" }} @@ -56,7 +54,6 @@ {{ else if eq .Name "mongodbatlas_ldap_verify" }} {{ else if eq .Name "mongodbatlas_third_party_integration" }} {{ else if eq .Name "mongodbatlas_x509_authentication_database_user" }} - {{ else if eq .Name "mongodbatlas_stream_processor" }} {{ else if eq .Name "mongodbatlas_privatelink_endpoint_service_data_federation_online_archive" }} {{ else }} {{ tffile (printf "examples/%s/main.tf" .Name )}} diff --git a/templates/resources/flex_cluster.md.tmpl b/templates/resources/flex_cluster.md.tmpl new file mode 100644 index 0000000000..0b4542e691 --- /dev/null +++ b/templates/resources/flex_cluster.md.tmpl @@ -0,0 +1,17 @@ +# {{.Type}}: {{.Name}} + +`{{.Name}}` provides a Flex Cluster resource. The resource lets you create, update, delete and import a flex cluster. 
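+
+At a glance, a minimal configuration might look like the sketch below. The project ID, cluster name, and tag values are placeholders, and the attribute names follow the schema introduced with this resource; the complete, maintained example is included under Example Usages.
+
+```terraform
+resource "mongodbatlas_flex_cluster" "example" {
+  project_id = var.project_id # placeholder: ID of an existing Atlas project
+  name       = "flex-cluster-example"
+
+  provider_settings = {
+    backing_provider_name = "AWS"
+    region_name           = "US_EAST_1"
+  }
+
+  # While true, the cluster cannot be deleted until the flag is disabled.
+  termination_protection_enabled = true
+
+  tags = {
+    environment = "example"
+  }
+}
+```
+
+The `provider_settings` object selects the backing cloud provider and region, and `termination_protection_enabled` guards against accidental deletion while it is set to `true`.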
+ +## Example Usages + +{{ tffile (printf "examples/%s/main.tf" .Name )}} + +{{ .SchemaMarkdown | trimspace }} + +# Import +You can import the Flex Cluster resource by using the Project ID and Flex Cluster name, in the format `PROJECT_ID-FLEX_CLUSTER_NAME`. For example: +``` +$ terraform import mongodbatlas_flex_cluster.test 6117ac2fe2a3d04ed27a987v-yourFlexClusterName +``` + +For more information see: [MongoDB Atlas API - Flex Cluster](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Flex-Clusters/operation/createFlexcluster) Documentation. diff --git a/tools/codegen/config.yml b/tools/codegen/config.yml index 29cc04509c..3a2e017d77 100644 --- a/tools/codegen/config.yml +++ b/tools/codegen/config.yml @@ -11,7 +11,6 @@ resources: group_id: project_id ignores: ["links"] timeouts: ["create", "update", "delete"] - # overrides: # project_id: # plan_modifiers: [{ @@ -25,12 +24,10 @@ resources: # ], # definition: "stringvalidator.ConflictsWith(path.MatchRoot(\"name\"))" # }] - # prefix_path: # computability: # optional: true # computed: true - search_deployment: read: path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}/search/deployment