From b636fdb1079983d2b9e7a2b962c643a4cda452d4 Mon Sep 17 00:00:00 2001
From: martinstibbe <33664051+martinstibbe@users.noreply.github.com>
Date: Mon, 23 Jan 2023 15:47:32 -0600
Subject: [PATCH] v1.8.0 staging (#1018)

* INTMDB-454: New Data Source to GET Org Id (#973)

  * Add support for mongodbatlas_roles_org_id
  * Add Documentation for roles_org_id
  * Update go library
  * Add mongodbatlas_roles_org_id to datasource and resource to provide org_id
  * doc clean up / link to new API docs
  * formatting fix + update links to new API docs
  * formatting + update link to new API docs
  * formatting

  Co-authored-by: Zuhair Ahmed

* INTMDB-409: Deprecation Announcement (#988)

* Chore(deps): Bump github.com/gruntwork-io/terratest (#978)

  Bumps [github.com/gruntwork-io/terratest](https://github.com/gruntwork-io/terratest) from 0.41.6 to 0.41.7.
  - [Release notes](https://github.com/gruntwork-io/terratest/releases)
  - [Commits](https://github.com/gruntwork-io/terratest/compare/v0.41.6...v0.41.7)

  ---
  updated-dependencies:
  - dependency-name: github.com/gruntwork-io/terratest
    dependency-type: direct:production
    update-type: version-update:semver-patch
  ...

  Signed-off-by: dependabot[bot]
  Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Chore(deps): Bump actions/stale from 6 to 7 (#977)

  Bumps [actions/stale](https://github.com/actions/stale) from 6 to 7.
  - [Release notes](https://github.com/actions/stale/releases)
  - [Changelog](https://github.com/actions/stale/blob/main/CHANGELOG.md)
  - [Commits](https://github.com/actions/stale/compare/v6...v7)

  ---
  updated-dependencies:
  - dependency-name: actions/stale
    dependency-type: direct:production
    update-type: version-update:semver-major
  ...
  Signed-off-by: dependabot[bot]
  Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* v1.7.0 Pre-Release (#980)

  * Delete mongodbatlas.erb (#962)
  * INTMDB-523: Rename exportJobID to exportID to match go client (#976)
    * Updated version of atlas api client used, renamed bucketID to exportJobID
    * Reverted changes to bucketID and updated exportJobID to exportID
  * INTMDB-521: AWS Secrets Manager to Auth into Terraform Atlas Provider (#975)
    * Add support for assume_role
    * Add documentation for assume_role feature
    * Add AWS parameters Env vars
    * Update index.html.markdown
    * Doc clean up
    * typo
    * Add regional behavior to endpoint sts client
    * Add sts_endpoint parameter
    * Update website/docs/index.html.markdown
    * formatting
    * formatting2
    * Removed commented code
  * Update .github_changelog_generator

  Co-authored-by: Dosty Everts
  Co-authored-by: Zuhair Ahmed

* Revert "v1.7.0 Pre-Release (#980)" (#982)

  This reverts commit 7a57d21bd7f919994f7d3a7ff2728ff049717f60.

* Add deprecation notices
* Additional deprecation details
* Add more detail
* Update resource_mongodbatlas_private_ip_mode.go
* Update resource_mongodbatlas_private_ip_mode.go
* Update resource_mongodbatlas_private_ip_mode.go

* INTMDB-482: Add deprecation relating to NEW_RELIC, FLOWDOCK (#989)
* V1.7.0 staging (#984)

  * Delete mongodbatlas.erb (#962)
  * INTMDB-523: Rename exportJobID to exportID to match go client (#976)
  * INTMDB-521: AWS Secrets Manager to Auth into Terraform Atlas Provider (#975)
  * Update .github_changelog_generator
  * Update CHANGELOG.md
  * Changelog Cleanup
  * 1.7.0 Upgrade and Information Guide

  Co-authored-by: Dosty Everts
  Co-authored-by: Zuhair Ahmed

* AWS_SM_Doc_Adds (#986)
* Adding Instructions to dev versions of provider (#987)
* Add deprecation relating to NEW_RELIC, FLOWDOCK

* INTMDB-530: Add import example for Encryption at Rest (#992)
* doc formatting edits (#990)
* doc formatting (#991)
* Add import example for Encryption at Rest

* INTMDB-468: Hide current_certificate when X.509 Authentication Database Users are Created (#985)

  * Initial attempt at moving certificate out of ID
  * Find latest ID

* federated_settings_org_config import example fix (#996)

  * example fix
  * log to file add
  * formatting
  * formatting updates
  * Example fix Resource: mongodbatlas_federated_settings_org_config

* Change logic of invitation create read delete when it has been accepted (#1012)

* INTMDB-346: Feature add: add support for programmatic API keys (#974)

  * Initial version
  * Add support for api key test cases
  * Update documents
  * Update website/docs/r/api_key.html.markdown
  * Update website/docs/d/api_keys.html.markdown
  * Update website/docs/d/api_key.html.markdown
  * initial addition of tests
  * Update project api keys tests
  * Commented code removal
  * Update Docs rename resource
  * Add warning and update reference link to new docs
  * Update reference links to new docs
  * Update reference link to new docs
  * formatting
  * link fix

  Co-authored-by: Zuhair Ahmed

* INTMDB-472: Update_snapshots doesn't save at TF state with mongodbatlas_cloud_backup_schedule resource (#1014)

  * skip update_snapshots on read as API does not return current value
  * Doc update
  * Update website/docs/r/cloud_backup_schedule.html.markdown

  Co-authored-by: Zuhair Ahmed

* INTMDB-455: bi_connector settings in mongodbatlas_advanced_cluster fix (#1010)

  * Add support for migrating bi_connector to bi_connector_config
  * Add deprecated parameter pathway for upgrade and test

* INTMDB-519: mongodbatlas_third_party_integration - api_token keeps updating on every apply (#1011)

  * Add additional overrides for obfuscated API values set to parameters
  * Add username to sensitive values
  * Add WebHook secret and URL to manage drift

* INTMDB-448: custom_db_role error (#1009)

  * example fix
  * log to file add
  * formatting
  * formatting updates
  * Add mutex to prevent concurrent API calls; datasource simplify count as it picks up other tests

  Co-authored-by: Zuhair Ahmed

* INTMDB-543: LDAP Config and LDAP Verify Resources Fix (#1004)

  * Update for breaking change in mongodb go library
  * go mod tidy
  * Update error logging and constant

* INTMDB-427: Cloud backup schedule, export fix (#968)

  * Addressing failures on cloud backup schedule
  * Updated condition for updating export request

* INTMDB-341: Fix search_index update (#964)

  * Fixed casing on search_index attribute in update. Updated formatting on test json
  * More formatting of string based tf configs

* INTMDB-488: Analytics node tier new features (#994)
  * Initial analytics scaling
  * Add Documentation updates for analytics_auto_scaling
  * Update advanced_cluster.html.markdown
  * Add testing for analytics scaling
  * Add mutex to custom db role API call to avoid overlapping API calls
  * Add back test mutex; fixed API concurrency issue
  * Rollback 448 changes pushed to incorrect branch

* INTMDB-382: Improve the importing of default alert configurations for a project (#993)

  * Added alert_configurations data_source
  * Added documentation on how to use data_source_alert_configuration outputs, as well as the new data source for all alert configurations
  * Removed duplicate function declaration
  * Resolving lint errors around looping
  * Added example for importing atlas-alert-configurations en masse
  * Got tests passing, added documentation for alert_configuration data_source
  * Fixed alert_configurations test
  * Updated readme to include instructions for creating the data source
  * Fixed spacing and added a link to the full example on the data source docs

* INTMDB-400: Feature add: Enable support for Snapshot Distribution (#979)

  * Add CopySettings support
  * Add feature for snapshot replication
  * Update go.mod
  * Update go.sum
  * Add example for resource with snapshot distribution
  * Add recommended if/else approach; bring go-client to latest master to pick up empty struct
  * Add note about deleteCopiedBackups

* INTMDB-464: Ignorable replication_specs & region_configs (#961)

  * Started work on state migration
  * Got StateUpgrader for v0 to v1 working & added test
  * Updated flattenAdvancedReplicationSpecs algorithm to ensure order is maintained between api & tf state
  * Fixed lint issues
  * Migrated replication_spec migration code to be encompassed by bi connector migration

* INTMDB-547: 1005 - Enhance docs for mongodbatlas cloud_backup_schedule (#1007)

  * Updates documentation for mongodbatlas_cloud_backup_schedule by adding more information about possible values for frequency_type, frequency_interval, retention_unit and retention_value.
  * Adds comments to the examples for more context
  * Removes unnecessary colons.
  * Fixed spelling mistake

* Fix BI Connector documentation snippet (#1017)

  This change adjusts the BI Connector documentation snippet by:
  - Replacing the reference to `bi_connector` with `bi_connector_config`
  - Wrapping the `secondary` string in double quotes
  - Fixing inconsistent spacing

* INTMDB-397: oplogMinRetentionHours + NVMe Cluster Class Warning Doc Update (#1016)

* Chore(deps): Bump github.com/gruntwork-io/terratest (#1013)

  Bumps [github.com/gruntwork-io/terratest](https://github.com/gruntwork-io/terratest) from 0.41.7 to 0.41.9.
  - [Release notes](https://github.com/gruntwork-io/terratest/releases)
  - [Commits](https://github.com/gruntwork-io/terratest/compare/v0.41.7...v0.41.9)

  ---
  updated-dependencies:
  - dependency-name: github.com/gruntwork-io/terratest
    dependency-type: direct:production
    update-type: version-update:semver-patch
  ...

  Signed-off-by: dependabot[bot]

* Add support for oplog_min_retention_hours
* Update website/docs/r/advanced_cluster.html.markdown
* Update website/docs/r/cluster.html.markdown
* Update mongodbatlas/resource_mongodbatlas_cluster.go

  Co-authored-by: Wojciech Trocki

* Restructure suggested code
* restructure
* Allow zero for oplog_min_retention_hours
* Update Changelog
* Update CHANGELOG.md

Signed-off-by: dependabot[bot]
Co-authored-by: Zuhair Ahmed
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Dosty Everts
Co-authored-by: Edward Mallia
Co-authored-by: Matt Thompson
Co-authored-by: Wojciech Trocki
---
 .github_changelog_generator | 4 +-
 CHANGELOG.md | 47 +++-
 .../atlas-alert-configurations/.gitignore | 2 +
 examples/atlas-alert-configurations/README.md | 32 +++
 .../alert-configurations-data.tf | 29 ++
 .../atlas-alert-configurations/provider.tf | 4 +
 .../atlas-alert-configurations/variables.tf | 10 +
 .../atlas-alert-configurations/versions.tf | 8 +
 go.mod | 17 +-
 go.sum | 20 +-
 ..._source_mongodbatlas_accesslist_api_key.go | 89 ++++++
 ...ce_mongodbatlas_accesslist_api_key_test.go | 63 +++++
 ...source_mongodbatlas_accesslist_api_keys.go | 93 +++++++
 ...e_mongodbatlas_accesslist_api_keys_test.go | 61 +++++
 ...ta_source_mongodbatlas_advanced_cluster.go | 34 ++-
 ...a_source_mongodbatlas_advanced_clusters.go | 32 ++-
 ...source_mongodbatlas_alert_configuration.go | 214 +++++++++++++++
 ...ource_mongodbatlas_alert_configurations.go | 150 ++++++++++
 ..._mongodbatlas_alert_configurations_test.go | 88 ++++++
 .../data_source_mongodbatlas_api_key.go | 69 +++++
 .../data_source_mongodbatlas_api_key_test.go | 55 ++++
 .../data_source_mongodbatlas_api_keys.go | 83 ++++++
 .../data_source_mongodbatlas_api_keys_test.go | 56 ++++
 ...urce_mongodbatlas_cloud_backup_schedule.go | 35 +++
 ...ce_mongodbatlas_cloud_provider_snapshot.go | 2 +-
 ...s_cloud_provider_snapshot_backup_policy.go | 2 +-
 ...las_cloud_provider_snapshot_restore_job.go | 2 +-
 ...as_cloud_provider_snapshot_restore_jobs.go | 2 +-
 ...e_mongodbatlas_cloud_provider_snapshots.go | 2 +-
 ...ource_mongodbatlas_custom_db_roles_test.go | 2 +-
 .../data_source_mongodbatlas_org_id.go | 51 ++++
 .../data_source_mongodbatlas_org_id_test.go | 43 +++
 ...ata_source_mongodbatlas_project_api_key.go | 81 ++++++
 ...ource_mongodbatlas_project_api_key_test.go | 54 ++++
 ...ta_source_mongodbatlas_project_api_keys.go | 87 ++++++
 ...urce_mongodbatlas_project_api_keys_test.go | 55 ++++
 ...ce_mongodbatlas_third_party_integration.go | 2 +-
 ...e_mongodbatlas_third_party_integrations.go | 60 +++-
 mongodbatlas/provider.go | 63 ++++-
 ...source_mongodbatlas_access_list_api_key.go | 227 ++++++++++++++++
 ...e_mongodbatlas_access_list_api_key_test.go | 201 ++++++++++++++
 .../resource_mongodbatlas_advanced_cluster.go | 178 ++++++++++--
 ...e_mongodbatlas_advanced_cluster_migrate.go | 250 +++++++++++++++++
 ...dbatlas_advanced_cluster_migration_test.go | 132 +++++++++
 ...urce_mongodbatlas_advanced_cluster_test.go | 88 ++++++
 ...source_mongodbatlas_alert_configuration.go | 14 +-
 mongodbatlas/resource_mongodbatlas_api_key.go | 217 +++++++++++++++
 .../resource_mongodbatlas_api_key_test.go | 144 ++++++++++
 ...urce_mongodbatlas_cloud_backup_schedule.go | 97 ++++++-
 ...mongodbatlas_cloud_backup_schedule_test.go | 162 +++++++++--
 ...ce_mongodbatlas_cloud_provider_snapshot.go | 2 +-
 ...s_cloud_provider_snapshot_backup_policy.go | 2 +-
 ...las_cloud_provider_snapshot_restore_job.go | 2 +-
 mongodbatlas/resource_mongodbatlas_cluster.go | 14 +
 .../resource_mongodbatlas_custom_db_role.go | 9 +
 ...source_mongodbatlas_custom_db_role_test.go | 1 -
 ...esource_mongodbatlas_ldap_configuration.go | 33 +--
 .../resource_mongodbatlas_ldap_verify.go | 13 +-
 .../resource_mongodbatlas_org_invitation.go | 122 +++++----
 .../resource_mongodbatlas_private_ip_mode.go | 1 +
 .../resource_mongodbatlas_project_api_key.go | 257 ++++++++++++++++++
 ...ource_mongodbatlas_project_api_key_test.go | 111 ++++++++
 .../resource_mongodbatlas_search_index.go | 4 +-
 ...resource_mongodbatlas_search_index_test.go | 214 +++++++--------
 ...ce_mongodbatlas_third_party_integration.go | 3 +-
 ...atlas_x509_authentication_database_user.go | 28 +-
 .../docs/d/access_list_api_key.html.markdown | 67 +++++
 .../docs/d/access_list_api_keys.html.markdown | 75 +++++
 website/docs/d/advanced_cluster.html.markdown | 15 +-
 .../docs/d/advanced_clusters.html.markdown | 16 +-
 .../docs/d/alert_configuration.html.markdown | 24 +-
 .../docs/d/alert_configurations.html.markdown | 74 +++++
 website/docs/d/api_key.html.markdown | 54 ++++
 website/docs/d/api_keys.html.markdown | 56 ++++
 .../d/cloud_backup_schedule.html.markdown | 47 ++--
 website/docs/d/cluster.html.markdown | 3 +-
website/docs/d/clusters.html.markdown | 3 +- website/docs/d/project.html.markdown | 11 +- website/docs/d/project_api_key.html.markdown | 55 ++++ website/docs/d/project_api_keys.html.markdown | 57 ++++ website/docs/d/projects.html.markdown | 11 +- website/docs/d/roles_org_id.html.markdown | 35 +++ .../docs/r/access_list_api_key.html.markdown | 58 ++++ website/docs/r/advanced_cluster.html.markdown | 36 ++- website/docs/r/api_key.html.markdown | 64 +++++ .../r/cloud_backup_schedule.html.markdown | 108 ++++++-- website/docs/r/cluster.html.markdown | 14 +- .../docs/r/encryption_at_rest.html.markdown | 9 +- ...ederated_settings_org_config.html.markdown | 2 +- website/docs/r/project.html.markdown | 7 +- website/docs/r/project_api_key.html.markdown | 59 ++++ .../docs/r/third_party_integration.markdown | 2 + 92 files changed, 4915 insertions(+), 376 deletions(-) create mode 100644 examples/atlas-alert-configurations/.gitignore create mode 100644 examples/atlas-alert-configurations/README.md create mode 100644 examples/atlas-alert-configurations/alert-configurations-data.tf create mode 100644 examples/atlas-alert-configurations/provider.tf create mode 100644 examples/atlas-alert-configurations/variables.tf create mode 100644 examples/atlas-alert-configurations/versions.tf create mode 100644 mongodbatlas/data_source_mongodbatlas_accesslist_api_key.go create mode 100644 mongodbatlas/data_source_mongodbatlas_accesslist_api_key_test.go create mode 100644 mongodbatlas/data_source_mongodbatlas_accesslist_api_keys.go create mode 100644 mongodbatlas/data_source_mongodbatlas_accesslist_api_keys_test.go create mode 100644 mongodbatlas/data_source_mongodbatlas_alert_configurations.go create mode 100644 mongodbatlas/data_source_mongodbatlas_alert_configurations_test.go create mode 100644 mongodbatlas/data_source_mongodbatlas_api_key.go create mode 100644 mongodbatlas/data_source_mongodbatlas_api_key_test.go create mode 100644 mongodbatlas/data_source_mongodbatlas_api_keys.go create mode 
100644 mongodbatlas/data_source_mongodbatlas_api_keys_test.go create mode 100644 mongodbatlas/data_source_mongodbatlas_org_id.go create mode 100644 mongodbatlas/data_source_mongodbatlas_org_id_test.go create mode 100644 mongodbatlas/data_source_mongodbatlas_project_api_key.go create mode 100644 mongodbatlas/data_source_mongodbatlas_project_api_key_test.go create mode 100644 mongodbatlas/data_source_mongodbatlas_project_api_keys.go create mode 100644 mongodbatlas/data_source_mongodbatlas_project_api_keys_test.go create mode 100644 mongodbatlas/resource_mongodbatlas_access_list_api_key.go create mode 100644 mongodbatlas/resource_mongodbatlas_access_list_api_key_test.go create mode 100644 mongodbatlas/resource_mongodbatlas_advanced_cluster_migrate.go create mode 100644 mongodbatlas/resource_mongodbatlas_advanced_cluster_migration_test.go create mode 100644 mongodbatlas/resource_mongodbatlas_api_key.go create mode 100644 mongodbatlas/resource_mongodbatlas_api_key_test.go create mode 100644 mongodbatlas/resource_mongodbatlas_project_api_key.go create mode 100644 mongodbatlas/resource_mongodbatlas_project_api_key_test.go create mode 100644 website/docs/d/access_list_api_key.html.markdown create mode 100644 website/docs/d/access_list_api_keys.html.markdown create mode 100644 website/docs/d/alert_configurations.html.markdown create mode 100644 website/docs/d/api_key.html.markdown create mode 100644 website/docs/d/api_keys.html.markdown create mode 100644 website/docs/d/project_api_key.html.markdown create mode 100644 website/docs/d/project_api_keys.html.markdown create mode 100644 website/docs/d/roles_org_id.html.markdown create mode 100644 website/docs/r/access_list_api_key.html.markdown create mode 100644 website/docs/r/api_key.html.markdown create mode 100644 website/docs/r/project_api_key.html.markdown diff --git a/.github_changelog_generator b/.github_changelog_generator index 29056cc9e7..f0aaa57513 100644 --- a/.github_changelog_generator +++ 
b/.github_changelog_generator @@ -1,4 +1,4 @@ -future-release=v1.7.0 -since-tag=v1.6.1 +future-release=v1.8.0 +since-tag=v1.7.0 date-format=%B %d, %Y base=CHANGELOG.md diff --git a/CHANGELOG.md b/CHANGELOG.md index bedb1c465c..9d6971239e 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,5 +1,50 @@ # Changelog +## [v1.8.0](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/v1.8.0) (2023-1-23) + +[Full Changelog](https://github.com/mongodb/terraform-provider-mongodbatlas/compare/v1.7.0...v1.8.0) + +**Enhancements:** + +- Snapshot Distribution Support [\#979](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/979) - INTMDB-400 +- Programmatically Create API Keys [\#974](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/974) - INTMDB-346 +- Retrieve Org Id from API Keys [\#973](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/973) - INTMDB-454 +- Analytics Node Tier New Features Support [\#994](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/994) - INTMDB-488 +- Improve Default Alerts and Example Creation [\#993](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/993) - INTMDB-382 +- oplogMinRetentionHours Paramter Support in advanced_cluster and cluster [\#1016](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/1016) - INTMDB-397 +- Expand documentation for mongodbatlas_cloud_backup_schedule to include information about valid values for frequency_interval [\#1007](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/1007) - INTMDB-547 + +**Depreciations:** + +- cloud_provider_snapshot, cloud_provider_snapshot_backup_policy, cloud_provider_snapshot_restore_job, and private_ip_mode are now deprecated and will be removed from codebase as of v1.9 release [\#988](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/988) - INTMDB-409 +- NEW_RELIC and FLOWDOCK mongodbatlas_third_party_integration resource are now deprecated and will be removed from 
codebase as of v1.9 release [\#989](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/989) - INTMDB-482 + +**Bug Fixes:** + +- Hide current_certificate when X.509 Authentication Database Users are Created [\#985](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/985) - INTMDB-468 +- Import example added for encryption_at_rest resource [\#992](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/992) - INTMDB-530 +- Resource cloud_backup_snapshot_export_job variable name change [\#976](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/976) - INTMDB-523 +- Fix invitation handling after a user accepts an invitation [\#1012](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/1012) - INTMDB-511 +- Fix update_snapshots not being saved in TF state with the mongodbatlas_cloud_backup_schedule resource [\#974](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/974) - INTMDB-472 +- Fix bi_connector settings in mongodbatlas_advanced_cluster (breaking change) [\#1010](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/1010) - INTMDB-455 +- Fix mongodbatlas_third_party_integration api_token updating on every apply [\#1011](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/1011) - INTMDB-519 +- Fix custom_db_role error [\#1009](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/1009) - INTMDB-448 +- Fix LDAP Config and LDAP Verify resources [\#1004](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/1004) - INTMDB-543 +- Fix cloud backup schedule export [\#968](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/968) - INTMDB-427 +- Fix resource_mongodbatlas_search_index_test [\#964](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/964) - INTMDB-341 +- Fix inability to ignore changes for replication_specs when autoscaling is enabled [\#961](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/961) - INTMDB-464 +- Fix BI Connector
documentation [\#1017](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/1017) +- Fix federated_settings_org_config import example [\#996](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/996) + +**Closed Issues:** +- Documentation: Expand documentation for mongodbatlas\_cloud\_backup\_schedule to include information about valid values for frequency\_interval [\#1005](https://github.com/mongodb/terraform-provider-mongodbatlas/issues/1005) +- Serverless instance returns incorrect connection string [\#934](https://github.com/mongodb/terraform-provider-mongodbatlas/issues/934) +- Terraform apply failed with Error: Provider produced inconsistent final plan This is a bug in the provider, which should be reported in the provider's own issue tracker. [\#926](https://github.com/mongodb/terraform-provider-mongodbatlas/issues/926) + +**Merged Pull Requests:** + +- Chore\(deps\): Bump github.com/gruntwork-io/terratest from 0.41.7 to 0.41.9 [\#1013](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/1013) ([dependabot[bot]](https://github.com/apps/dependabot)) + ## [v1.7.0](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/v1.7.0) (2022-12-23) [Full Changelog](https://github.com/mongodb/terraform-provider-mongodbatlas/compare/v1.6.1...v1.7.0) @@ -134,7 +179,7 @@ [Full Changelog](https://github.com/mongodb/terraform-provider-mongodbatlas/compare/v1.4.5...v1.4.6) -**Fixed** +**Enhancements and Bug Fixes:** - INTMDB-387 - Enable Azure NVME for Atlas Dedicated clusters [\#833](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/833) - INTMDB-342 - Update TestAccDataSourceMongoDBAtlasPrivateEndpointRegionalMode_basic test to use a new project to prevent conflicts [\#837](https://github.com/mongodb/terraform-provider-mongodbatlas/pull/837) - INTMDB-347 - Cloud_backup is not being correctly imported - issue [\#768](https://github.com/mongodb/terraform-provider-mongodbatlas/issues/768) diff --git
a/examples/atlas-alert-configurations/.gitignore b/examples/atlas-alert-configurations/.gitignore new file mode 100644 index 0000000000..f26e799c52 --- /dev/null +++ b/examples/atlas-alert-configurations/.gitignore @@ -0,0 +1,2 @@ +*.sh +alert-configurations.tf \ No newline at end of file diff --git a/examples/atlas-alert-configurations/README.md b/examples/atlas-alert-configurations/README.md new file mode 100644 index 0000000000..06c53ac86c --- /dev/null +++ b/examples/atlas-alert-configurations/README.md @@ -0,0 +1,32 @@ +## Using the data source +An example exists in `alert-configurations-data.tf`. To use this example exactly: +- Copy the directory to local disk +- Add a `terraform.tfvars` +- Add your `project_id` +- Run `terraform apply` + +### Create alert resources and import them into the state file +``` +terraform output -raw alert_imports > import-alerts.sh +terraform output -raw alert_resources > alert-configurations.tf +chmod +x ./import-alerts.sh +./import-alerts.sh +terraform apply +``` + +## Contingency Plans +If you are unhappy with the generated resource file or imports, here are some things you can do: + +### Remove targeted resources from the appropriate files and remove the alert_configuration from state +- Manually remove the resource (ex: `mongodbatlas_alert_configuration.CLUSTER_MONGOS_IS_MISSING_2`) from the `tf` file, and then remove it from state, ex: +``` +terraform state rm mongodbatlas_alert_configuration.CLUSTER_MONGOS_IS_MISSING_2 +``` + +### Remove all alert_configurations from state +- Delete the `tf` file that was used for import, and then: +``` +terraform state list | grep ^mongodbatlas_alert_configuration.
| awk '{print "terraform state rm " $1}' > state-rm-alerts.sh +chmod +x state-rm-alerts.sh +./state-rm-alerts.sh +``` diff --git a/examples/atlas-alert-configurations/alert-configurations-data.tf b/examples/atlas-alert-configurations/alert-configurations-data.tf new file mode 100644 index 0000000000..78780ff951 --- /dev/null +++ b/examples/atlas-alert-configurations/alert-configurations-data.tf @@ -0,0 +1,29 @@ +data "mongodbatlas_alert_configurations" "import" { + project_id = var.project_id + + output_type = ["resource_hcl", "resource_import"] +} + +locals { + alerts = data.mongodbatlas_alert_configurations.import.results + + alert_resources = compact([ + for i, alert in local.alerts : + alert.output == null ? null : + length(alert.output) < 1 ? null : alert.output[0].value + ]) + + alert_imports = compact([ + for i, alert in local.alerts : + alert.output == null ? null : + length(alert.output) < 2 ? null : alert.output[1].value + ]) +} + +output "alert_resources" { + value = join("\n", local.alert_resources) +} + +output "alert_imports" { + value = join("", local.alert_imports) +} diff --git a/examples/atlas-alert-configurations/provider.tf b/examples/atlas-alert-configurations/provider.tf new file mode 100644 index 0000000000..e5aeda8033 --- /dev/null +++ b/examples/atlas-alert-configurations/provider.tf @@ -0,0 +1,4 @@ +provider "mongodbatlas" { + public_key = var.public_key + private_key = var.private_key +} \ No newline at end of file diff --git a/examples/atlas-alert-configurations/variables.tf b/examples/atlas-alert-configurations/variables.tf new file mode 100644 index 0000000000..8f1b11a855 --- /dev/null +++ b/examples/atlas-alert-configurations/variables.tf @@ -0,0 +1,10 @@ +variable "public_key" { + description = "Public API key to authenticate to Atlas" +} +variable "private_key" { + description = "Private API key to authenticate to Atlas" +} +variable "project_id" { + description = "Atlas project ID" + default = "" +} diff --git
a/examples/atlas-alert-configurations/versions.tf b/examples/atlas-alert-configurations/versions.tf new file mode 100644 index 0000000000..92fca3b63d --- /dev/null +++ b/examples/atlas-alert-configurations/versions.tf @@ -0,0 +1,8 @@ +terraform { + required_providers { + mongodbatlas = { + source = "mongodb/mongodbatlas" + } + } + required_version = ">= 0.13" +} \ No newline at end of file diff --git a/go.mod b/go.mod index b87008c2c7..e3ec31ebd3 100644 --- a/go.mod +++ b/go.mod @@ -12,8 +12,9 @@ require ( github.com/mwielbut/pointy v1.1.0 github.com/spf13/cast v1.5.0 github.com/terraform-providers/terraform-provider-aws v1.60.1-0.20210625132053-af2d5c0ad54f - go.mongodb.org/atlas v0.19.0 + go.mongodb.org/atlas v0.21.0 go.mongodb.org/realm v0.1.0 + golang.org/x/exp v0.0.0-20221208152030-732eee02a75a ) require ( @@ -111,16 +112,16 @@ require ( github.com/vmihailenco/tagparser v0.1.1 // indirect github.com/zclconf/go-cty v1.12.1 // indirect go.opencensus.io v0.23.0 // indirect - golang.org/x/crypto v0.0.0-20220517005047-85d78b3ac167 // indirect + golang.org/x/crypto v0.1.0 // indirect golang.org/x/lint v0.0.0-20210508222113-6edffad5e616 // indirect - golang.org/x/mod v0.4.2 // indirect - golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2 // indirect + golang.org/x/mod v0.6.0 // indirect + golang.org/x/net v0.1.0 // indirect golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c // indirect - golang.org/x/sys v0.0.0-20220517195934-5e4e11fc645e // indirect - golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1 // indirect - golang.org/x/text v0.3.7 // indirect + golang.org/x/sys v0.1.0 // indirect + golang.org/x/term v0.1.0 // indirect + golang.org/x/text v0.4.0 // indirect golang.org/x/time v0.0.0-20200630173020-3af7569d3a1e // indirect - golang.org/x/tools v0.1.3 // indirect + golang.org/x/tools v0.2.0 // indirect golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect google.golang.org/api v0.48.0 // indirect google.golang.org/appengine v1.6.7 // 
indirect diff --git a/go.sum b/go.sum index 3855f7863f..9693f114e3 100644 --- a/go.sum +++ b/go.sum @@ -919,8 +919,8 @@ go.etcd.io/bbolt v1.3.3/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU= go.etcd.io/bbolt v1.3.4/go.mod h1:G5EMThwa9y8QZGBClrRx5EY+Yw9kAhnjy3bSjsnlVTQ= go.etcd.io/etcd v0.0.0-20200513171258-e048e166ab9c/go.mod h1:xCI7ZzBfRuGgBXyXO6yfWfDmlWd35khcWpUa4L0xI/k= go.mongodb.org/atlas v0.12.0/go.mod h1:wVCnHcm/7/IfTjEB6K8K35PLG70yGz8BdkRwX0oK9/M= -go.mongodb.org/atlas v0.19.0 h1:gvezG9d0KsSDaExEdTtcGqZHRvvVazzuEcBUpBXxmlg= -go.mongodb.org/atlas v0.19.0/go.mod h1:PFk1IGhiGjFXHGVspOK7i1U2nnPjK8wAjYwQf6FoVf4= +go.mongodb.org/atlas v0.21.0 h1:7Wi8Yy3hJGAyMvb8vZZjoYaQ89l58GCmIx5ppxtrrqc= +go.mongodb.org/atlas v0.21.0/go.mod h1:XTjsxWgoOSwaZrQUvhTEuwjymxnF0r12RPibZuW1Uts= go.mongodb.org/realm v0.1.0 h1:zJiXyLaZrznQ+Pz947ziSrDKUep39DO4SfA0Fzx8M4M= go.mongodb.org/realm v0.1.0/go.mod h1:4Vj6iy+Puo1TDERcoh4XZ+pjtwbOzPpzqy3Cwe8ZmDM= go.mozilla.org/mozlog v0.0.0-20170222151521-4bb13139d403/go.mod h1:jHoPAGnDrCy6kaI2tAze5Prf0Nr0w/oNkROt2lw3n3o= @@ -963,6 +963,8 @@ golang.org/x/crypto v0.0.0-20210421170649-83a5a9bb288b/go.mod h1:T9bdIzuCu7OtxOm golang.org/x/crypto v0.0.0-20210616213533-5ff15b29337e/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= golang.org/x/crypto v0.0.0-20220517005047-85d78b3ac167 h1:O8uGbHCqlTp2P6QJSLmCojM4mN6UemYv8K+dCnmHmu0= golang.org/x/crypto v0.0.0-20220517005047-85d78b3ac167/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4= +golang.org/x/crypto v0.1.0 h1:MDRAIl0xIo9Io2xV565hzXHw3zVseKrJKodhohM5CjU= +golang.org/x/crypto v0.1.0/go.mod h1:RecgLatLF4+eUMCP1PoPZQb+cVrJcOPbHkTkbkB9sbw= golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8= @@ -974,6 +976,8 @@ 
golang.org/x/exp v0.0.0-20200119233911-0405dc783f0a/go.mod h1:2RIsYlXP63K8oxa1u0 golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EHIKF9dgMWnmCNThgcyBT1FY9mM= golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU= golang.org/x/exp v0.0.0-20200331195152-e8c3332aa8e5/go.mod h1:4M0jN8W1tt0AVLNr8HDosyJCDCDuyL9N9+3m7wDWgKw= +golang.org/x/exp v0.0.0-20221208152030-732eee02a75a h1:4iLhBPcpqFmylhnkbY3W0ONLUYYkDAW9xMFLfxgsvCw= +golang.org/x/exp v0.0.0-20221208152030-732eee02a75a/go.mod h1:CxIveKay+FTh1D0yPZemJVgC/95VzuuOLq5Qi4xnoYc= golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js= golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0= golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= @@ -1001,6 +1005,8 @@ golang.org/x/mod v0.4.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.4.2 h1:Gz96sIWK3OalVv/I/qNygP42zyoKp3xptRVCWRFEBvo= golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.6.0 h1:b9gGHsz9/HhJ3HF5DHQytPpuwocVTChQJK3AvoLRD5I= +golang.org/x/mod v0.6.0/go.mod h1:4mET923SAdbXp2ki8ey+zGs1SLqsuM2Y0uvdZR/fUNI= golang.org/x/net v0.0.0-20180530234432-1e491301e022/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20180811021610-c39426892332/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= @@ -1057,6 +1063,8 @@ golang.org/x/net v0.0.0-20210503060351-7fd8e65b6420/go.mod h1:9nx3DQGgdP8bBQD5qx golang.org/x/net v0.0.0-20210614182718-04defd469f4e/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= golang.org/x/net 
v0.0.0-20211112202133-69e39bad7dc2 h1:CIJ76btIcR3eFI5EgSo6k1qKw9KJexJuRLI9G7Hp5wE= golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= +golang.org/x/net v0.1.0 h1:hZ/3BUoy5aId7sCpA/Tc5lt8DkFgdVS2onTpJsZ/fl0= +golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= @@ -1164,9 +1172,13 @@ golang.org/x/sys v0.0.0-20210927094055-39ccf1dd6fa6/go.mod h1:oPkhp1MJrh7nUepCBc golang.org/x/sys v0.0.0-20220503163025-988cb79eb6c6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220517195934-5e4e11fc645e h1:w36l2Uw3dRan1K3TyXriXvY+6T56GNmlKGcqiQUJDfM= golang.org/x/sys v0.0.0-20220517195934-5e4e11fc645e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.1.0 h1:kunALQeHf1/185U1i0GOB/fy1IPRDDpuoOOqRReG57U= +golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1 h1:v+OssWQX+hTHEmOBgwxdZxK4zHq3yOs8F9J7mk0PY8E= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= +golang.org/x/term v0.1.0 h1:g6Z6vPFA9dYBAF7DWcH6sCcOntplXsDKcliusYijMlw= +golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod 
h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= @@ -1177,6 +1189,8 @@ golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.7 h1:olpwvP2KacW1ZWvsR7uQhoyTYvKAupfQrRGBFM352Gk= golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= +golang.org/x/text v0.4.0 h1:BrVqGRd7+k1DiOgtnFvAkoQEWQvBc25ouMJM6429SFg= +golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= @@ -1283,6 +1297,8 @@ golang.org/x/tools v0.1.2-0.20210512205948-8287d5da45e4/go.mod h1:o0xws9oXOQQZyj golang.org/x/tools v0.1.2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= golang.org/x/tools v0.1.3 h1:L69ShwSZEyCsLKoAxDKeMvLDZkumEe8gXUZAjab0tX8= golang.org/x/tools v0.1.3/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= +golang.org/x/tools v0.2.0 h1:G6AHpWxTMGY1KyEYoAQ5WTtIekUUvDNjan3ugu60JvE= +golang.org/x/tools v0.2.0/go.mod h1:y4OqIKeOV/fWJetJ8bXPU1sEVniLMIyDAZWeHdV+NTA= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= diff --git a/mongodbatlas/data_source_mongodbatlas_accesslist_api_key.go b/mongodbatlas/data_source_mongodbatlas_accesslist_api_key.go new file mode 100644 index 0000000000..f1708ba0ef --- /dev/null +++ b/mongodbatlas/data_source_mongodbatlas_accesslist_api_key.go @@ -0,0 +1,89 @@ +package mongodbatlas + 
+import ( + "context" + "fmt" + + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" +) + +func dataSourceMongoDBAtlasAccessListAPIKey() *schema.Resource { + return &schema.Resource{ + ReadContext: dataSourceMongoDBAtlasAccessListAPIKeyRead, + Schema: map[string]*schema.Schema{ + "org_id": { + Type: schema.TypeString, + Required: true, + }, + "api_key_id": { + Type: schema.TypeString, + Required: true, + }, + "ip_address": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.IsIPAddress, + }, + "cidr_block": { + Type: schema.TypeString, + Computed: true, + }, + "created": { + Type: schema.TypeString, + Computed: true, + }, + "access_count": { + Type: schema.TypeInt, + Computed: true, + }, + "last_used": { + Type: schema.TypeString, + Computed: true, + }, + "last_used_address": { + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func dataSourceMongoDBAtlasAccessListAPIKeyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + // Get client connection. 
+ conn := meta.(*MongoDBClient).Atlas + + orgID := d.Get("org_id").(string) + apiKeyID := d.Get("api_key_id").(string) + ipAddress := d.Get("ip_address").(string) + accessListAPIKey, _, err := conn.AccessListAPIKeys.Get(ctx, orgID, apiKeyID, ipAddress) + if err != nil { + return diag.FromErr(fmt.Errorf("error getting access list api key information: %s", err)) + } + + if err := d.Set("cidr_block", accessListAPIKey.CidrBlock); err != nil { + return diag.FromErr(fmt.Errorf("error setting `cidr_block`: %s", err)) + } + + if err := d.Set("last_used_address", accessListAPIKey.LastUsedAddress); err != nil { + return diag.FromErr(fmt.Errorf("error setting `last_used_address`: %s", err)) + } + + if err := d.Set("last_used", accessListAPIKey.LastUsed); err != nil { + return diag.FromErr(fmt.Errorf("error setting `last_used`: %s", err)) + } + + if err := d.Set("created", accessListAPIKey.Created); err != nil { + return diag.FromErr(fmt.Errorf("error setting `created`: %s", err)) + } + + if err := d.Set("access_count", accessListAPIKey.Count); err != nil { + return diag.FromErr(fmt.Errorf("error setting `access_count`: %s", err)) + } + + d.SetId(resource.UniqueId()) + + return nil +} diff --git a/mongodbatlas/data_source_mongodbatlas_accesslist_api_key_test.go b/mongodbatlas/data_source_mongodbatlas_accesslist_api_key_test.go new file mode 100644 index 0000000000..a406efc70b --- /dev/null +++ b/mongodbatlas/data_source_mongodbatlas_accesslist_api_key_test.go @@ -0,0 +1,63 @@ +package mongodbatlas + +import ( + "fmt" + "os" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" +) + +func TestAccConfigDSAccesslistAPIKey_basic(t *testing.T) { + resourceName := "mongodbatlas_access_list_api_key.test" + dataSourceName := "data.mongodbatlas_access_list_api_key.test" + orgID := os.Getenv("MONGODB_ATLAS_ORG_ID") + description := fmt.Sprintf("test-acc-accesslist-api_key-%s", acctest.RandString(5)) 
+ ipAddress := fmt.Sprintf("179.154.226.%d", acctest.RandIntRange(0, 255)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + ProviderFactories: testAccProviderFactories, + //CheckDestroy: testAccCheckMongoDBAtlasNetworkPeeringDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDSMongoDBAtlasAccesslistAPIKeyConfig(orgID, description, ipAddress), + Check: resource.ComposeTestCheckFunc( + // Test for Resource + testAccCheckMongoDBAtlasAccessListAPIKeyExists(resourceName), + resource.TestCheckResourceAttrSet(resourceName, "org_id"), + resource.TestCheckResourceAttrSet(resourceName, "ip_address"), + resource.TestCheckResourceAttr(resourceName, "org_id", orgID), + resource.TestCheckResourceAttr(resourceName, "ip_address", ipAddress), + // Test for Data source + resource.TestCheckResourceAttrSet(dataSourceName, "org_id"), + resource.TestCheckResourceAttrSet(dataSourceName, "ip_address"), + resource.TestCheckResourceAttr(dataSourceName, "ip_address", ipAddress), + ), + }, + }, + }) +} + +func testAccDSMongoDBAtlasAccesslistAPIKeyConfig(orgID, description, ipAddress string) string { + return fmt.Sprintf(` + data "mongodbatlas_access_list_api_key" "test" { + org_id = %[1]q + api_key_id = mongodbatlas_access_list_api_key.test.api_key_id + ip_address = %[3]q + } + + resource "mongodbatlas_api_key" "test" { + org_id = %[1]q + description = %[2]q + role_names = ["ORG_MEMBER","ORG_BILLING_ADMIN"] + } + + resource "mongodbatlas_access_list_api_key" "test" { + org_id = %[1]q + ip_address = %[3]q + api_key_id = mongodbatlas_api_key.test.api_key_id + } + `, orgID, description, ipAddress) +} diff --git a/mongodbatlas/data_source_mongodbatlas_accesslist_api_keys.go b/mongodbatlas/data_source_mongodbatlas_accesslist_api_keys.go new file mode 100644 index 0000000000..970425e9cc --- /dev/null +++ b/mongodbatlas/data_source_mongodbatlas_accesslist_api_keys.go @@ -0,0 +1,93 @@ +package mongodbatlas + +import ( + "context" + "fmt" + + 
"github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + + matlas "go.mongodb.org/atlas/mongodbatlas" +) + +func dataSourceMongoDBAtlasAccessListAPIKeys() *schema.Resource { + return &schema.Resource{ + ReadContext: dataSourceMongoDBAtlasAccessListAPIKeysRead, + Schema: map[string]*schema.Schema{ + "org_id": { + Type: schema.TypeString, + Required: true, + }, + "api_key_id": { + Type: schema.TypeString, + Required: true, + }, + "page_num": { + Type: schema.TypeInt, + Optional: true, + }, + "items_per_page": { + Type: schema.TypeInt, + Optional: true, + }, + "results": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "ip_address": { + Type: schema.TypeString, + Computed: true, + }, + "cidr_block": { + Type: schema.TypeString, + Computed: true, + }, + "created": { + Type: schema.TypeString, + Computed: true, + }, + "access_count": { + Type: schema.TypeInt, + Computed: true, + }, + "last_used": { + Type: schema.TypeString, + Computed: true, + }, + "last_used_address": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + }, + } +} + +func dataSourceMongoDBAtlasAccessListAPIKeysRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + // Get client connection. 
+ conn := meta.(*MongoDBClient).Atlas + options := &matlas.ListOptions{ + PageNum: d.Get("page_num").(int), + ItemsPerPage: d.Get("items_per_page").(int), + } + + orgID := d.Get("org_id").(string) + apiKeyID := d.Get("api_key_id").(string) + + accessListAPIKeys, _, err := conn.AccessListAPIKeys.List(ctx, orgID, apiKeyID, options) + if err != nil { + return diag.FromErr(fmt.Errorf("error getting access list api keys information: %s", err)) + } + + if err := d.Set("results", flattenAccessListAPIKeys(ctx, conn, orgID, accessListAPIKeys.Results)); err != nil { + return diag.FromErr(fmt.Errorf("error setting `results`: %s", err)) + } + + d.SetId(resource.UniqueId()) + + return nil +} diff --git a/mongodbatlas/data_source_mongodbatlas_accesslist_api_keys_test.go b/mongodbatlas/data_source_mongodbatlas_accesslist_api_keys_test.go new file mode 100644 index 0000000000..26e1542c79 --- /dev/null +++ b/mongodbatlas/data_source_mongodbatlas_accesslist_api_keys_test.go @@ -0,0 +1,61 @@ +package mongodbatlas + +import ( + "fmt" + "os" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" +) + +func TestAccConfigDSAccesslistAPIKeys_basic(t *testing.T) { + resourceName := "mongodbatlas_access_list_api_key.test" + dataSourceName := "data.mongodbatlas_access_list_api_keys.test" + orgID := os.Getenv("MONGODB_ATLAS_ORG_ID") + description := fmt.Sprintf("test-acc-accesslist-api_keys-%s", acctest.RandString(5)) + ipAddress := fmt.Sprintf("179.154.226.%d", acctest.RandIntRange(0, 255)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + ProviderFactories: testAccProviderFactories, + //CheckDestroy: testAccCheckMongoDBAtlasNetworkPeeringDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDSMongoDBAtlasAccesslistAPIKeysConfig(orgID, description, ipAddress), + Check: resource.ComposeTestCheckFunc( + // Test for Resource + 
testAccCheckMongoDBAtlasAccessListAPIKeyExists(resourceName), + resource.TestCheckResourceAttrSet(resourceName, "org_id"), + resource.TestCheckResourceAttrSet(resourceName, "ip_address"), + resource.TestCheckResourceAttr(resourceName, "org_id", orgID), + resource.TestCheckResourceAttr(resourceName, "ip_address", ipAddress), + // Test for Data source + resource.TestCheckResourceAttrSet(dataSourceName, "org_id"), + resource.TestCheckResourceAttrSet(dataSourceName, "results.#"), + ), + }, + }, + }) +} + +func testAccDSMongoDBAtlasAccesslistAPIKeysConfig(orgID, description, ipAddress string) string { + return fmt.Sprintf(` + data "mongodbatlas_access_list_api_keys" "test" { + org_id = %[1]q + api_key_id = mongodbatlas_access_list_api_key.test.api_key_id + } + + resource "mongodbatlas_api_key" "test" { + org_id = %[1]q + description = %[2]q + role_names = ["ORG_MEMBER","ORG_BILLING_ADMIN"] + } + + resource "mongodbatlas_access_list_api_key" "test" { + org_id = %[1]q + ip_address = %[3]q + api_key_id = mongodbatlas_api_key.test.api_key_id + } + `, orgID, description, ipAddress) +} diff --git a/mongodbatlas/data_source_mongodbatlas_advanced_cluster.go b/mongodbatlas/data_source_mongodbatlas_advanced_cluster.go index b2becba6fb..4bc9b59865 100644 --- a/mongodbatlas/data_source_mongodbatlas_advanced_cluster.go +++ b/mongodbatlas/data_source_mongodbatlas_advanced_cluster.go @@ -22,7 +22,7 @@ func dataSourceMongoDBAtlasAdvancedCluster() *schema.Resource { Type: schema.TypeBool, Computed: true, }, - "bi_connector": { + "bi_connector_config": { Type: schema.TypeList, Computed: true, Elem: &schema.Resource{ @@ -141,6 +141,34 @@ func dataSourceMongoDBAtlasAdvancedCluster() *schema.Resource { }, }, }, + "analytics_auto_scaling": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "disk_gb_enabled": { + Type: schema.TypeBool, + Computed: true, + }, + "compute_enabled": { + Type: schema.TypeBool, + Computed: true, + }, + 
"compute_scale_down_enabled": { + Type: schema.TypeBool, + Computed: true, + }, + "compute_min_instance_size": { + Type: schema.TypeString, + Computed: true, + }, + "compute_max_instance_size": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, "backing_provider_name": { Type: schema.TypeString, Computed: true, @@ -216,8 +244,8 @@ func dataSourceMongoDBAtlasAdvancedClusterRead(ctx context.Context, d *schema.Re return diag.FromErr(fmt.Errorf(errorClusterAdvancedSetting, "backup_enabled", clusterName, err)) } - if err := d.Set("bi_connector", flattenBiConnectorConfig(cluster.BiConnector)); err != nil { - return diag.FromErr(fmt.Errorf(errorClusterAdvancedSetting, "bi_connector", clusterName, err)) + if err := d.Set("bi_connector_config", flattenBiConnectorConfig(cluster.BiConnector)); err != nil { + return diag.FromErr(fmt.Errorf(errorClusterAdvancedSetting, "bi_connector_config", clusterName, err)) } if err := d.Set("cluster_type", cluster.ClusterType); err != nil { diff --git a/mongodbatlas/data_source_mongodbatlas_advanced_clusters.go b/mongodbatlas/data_source_mongodbatlas_advanced_clusters.go index 1a5231a228..d842c84932 100644 --- a/mongodbatlas/data_source_mongodbatlas_advanced_clusters.go +++ b/mongodbatlas/data_source_mongodbatlas_advanced_clusters.go @@ -30,7 +30,7 @@ func dataSourceMongoDBAtlasAdvancedClusters() *schema.Resource { Type: schema.TypeBool, Computed: true, }, - "bi_connector": { + "bi_connector_config": { Type: schema.TypeList, Computed: true, Elem: &schema.Resource{ @@ -149,6 +149,34 @@ func dataSourceMongoDBAtlasAdvancedClusters() *schema.Resource { }, }, }, + "analytics_auto_scaling": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "disk_gb_enabled": { + Type: schema.TypeBool, + Computed: true, + }, + "compute_enabled": { + Type: schema.TypeBool, + Computed: true, + }, + "compute_scale_down_enabled": { + Type: schema.TypeBool, + Computed: true, + }, + 
"compute_min_instance_size": { + Type: schema.TypeString, + Computed: true, + }, + "compute_max_instance_size": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, "backing_provider_name": { Type: schema.TypeString, Computed: true, @@ -246,7 +274,7 @@ func flattenAdvancedClusters(ctx context.Context, conn *matlas.Client, clusters result := map[string]interface{}{ "advanced_configuration": flattenProcessArgs(processArgs), "backup_enabled": clusters[i].BackupEnabled, - "bi_connector": flattenBiConnectorConfig(clusters[i].BiConnector), + "bi_connector_config": flattenBiConnectorConfig(clusters[i].BiConnector), "cluster_type": clusters[i].ClusterType, "create_date": clusters[i].CreateDate, "connection_strings": flattenConnectionStrings(clusters[i].ConnectionStrings), diff --git a/mongodbatlas/data_source_mongodbatlas_alert_configuration.go b/mongodbatlas/data_source_mongodbatlas_alert_configuration.go index 1d1336add0..81ea8745d8 100644 --- a/mongodbatlas/data_source_mongodbatlas_alert_configuration.go +++ b/mongodbatlas/data_source_mongodbatlas_alert_configuration.go @@ -4,8 +4,12 @@ import ( "context" "fmt" + "github.com/hashicorp/hcl/v2/hclwrite" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/zclconf/go-cty/cty" + matlas "go.mongodb.org/atlas/mongodbatlas" ) func dataSourceMongoDBAtlasAlertConfiguration() *schema.Resource { @@ -245,6 +249,27 @@ func dataSourceMongoDBAtlasAlertConfiguration() *schema.Resource { }, }, }, + "output": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "type": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice([]string{"resource_hcl", "resource_import"}, false), + }, + "label": { + Type: schema.TypeString, + Optional: true, + }, + "value": { + Type: schema.TypeString, + Computed: 
true, + }, + }, + }, + }, }, } } @@ -296,6 +321,12 @@ func dataSourceMongoDBAtlasAlertConfigurationRead(ctx context.Context, d *schema return diag.FromErr(fmt.Errorf(errorAlertConfSetting, "notification", projectID, err)) } + if dOutput := d.Get("output"); dOutput != nil { + if err := d.Set("output", computeAlertConfigurationOutput(alert, dOutput.([]interface{}), alert.EventTypeName)); err != nil { + return diag.FromErr(fmt.Errorf(errorAlertConfSetting, "output", projectID, err)) + } + } + d.SetId(encodeStateID(map[string]string{ "id": alert.ID, "project_id": projectID, @@ -303,3 +334,186 @@ func dataSourceMongoDBAtlasAlertConfigurationRead(ctx context.Context, d *schema return nil } + +func computeAlertConfigurationOutput(alert *matlas.AlertConfiguration, outputConfigurations []interface{}, defaultLabel string) []map[string]interface{} { + output := make([]map[string]interface{}, 0) + + for i := 0; i < len(outputConfigurations); i++ { + config := outputConfigurations[i].(map[string]interface{}) + var o = map[string]interface{}{ + "type": config["type"], + } + + // Use the label from the configuration when one was supplied; fall back to the default label otherwise. + if label, ok := config["label"]; ok && label != "" { + o["label"] = label + } else { + o["label"] = defaultLabel + } + + if outputValue := outputAlertConfiguration(alert, o["type"].(string), o["label"].(string)); outputValue != "" { + o["value"] = outputValue + } + + output = append(output, o) + } + + return output +} + +func outputAlertConfiguration(alert *matlas.AlertConfiguration, outputType, resourceLabel string) string { + if outputType == "resource_hcl" { + return outputAlertConfigurationResourceHcl(resourceLabel, alert) + } + if outputType == "resource_import" { + return outputAlertConfigurationResourceImport(resourceLabel, alert) + } + + return "" +} + +func outputAlertConfigurationResourceHcl(label string, alert *matlas.AlertConfiguration) string { + f := hclwrite.NewEmptyFile() + root := f.Body() + resource := root.AppendNewBlock("resource", []string{"mongodbatlas_alert_configuration", label}).Body() +
resource.SetAttributeValue("project_id", cty.StringVal(alert.GroupID)) + resource.SetAttributeValue("event_type", cty.StringVal(alert.EventTypeName)) + + if alert.Enabled != nil { + resource.SetAttributeValue("enabled", cty.BoolVal(*alert.Enabled)) + } + + for _, matcher := range alert.Matchers { + values := convertMatcherToCtyValues(matcher) + + appendBlockWithCtyValues(resource, "matcher", []string{}, values) + } + + if alert.MetricThreshold != nil { + values := convertMetricThresholdToCtyValues(*alert.MetricThreshold) + + appendBlockWithCtyValues(resource, "metric_threshold_config", []string{}, values) + } + + if alert.Threshold != nil { + values := convertThresholdToCtyValues(*alert.Threshold) + + appendBlockWithCtyValues(resource, "threshold_config", []string{}, values) + } + + for i := 0; i < len(alert.Notifications); i++ { + values := convertNotificationToCtyValues(&alert.Notifications[i]) + + appendBlockWithCtyValues(resource, "notification", []string{}, values) + } + + return string(f.Bytes()) +} + +func outputAlertConfigurationResourceImport(label string, alert *matlas.AlertConfiguration) string { + return fmt.Sprintf("terraform import mongodbatlas_alert_configuration.%s %s-%s\n", label, alert.GroupID, alert.ID) +} + +func convertMatcherToCtyValues(matcher matlas.Matcher) map[string]cty.Value { + return map[string]cty.Value{ + "field_name": cty.StringVal(matcher.FieldName), + "operator": cty.StringVal(matcher.Operator), + "value": cty.StringVal(matcher.Value), + } +} + +func convertMetricThresholdToCtyValues(metric matlas.MetricThreshold) map[string]cty.Value { + return map[string]cty.Value{ + "metric_name": cty.StringVal(metric.MetricName), + "operator": cty.StringVal(metric.Operator), + "threshold": cty.NumberFloatVal(metric.Threshold), + "units": cty.StringVal(metric.Units), + "mode": cty.StringVal(metric.Mode), + } +} + +func convertThresholdToCtyValues(threshold matlas.Threshold) map[string]cty.Value { + return map[string]cty.Value{ + "operator": 
cty.StringVal(threshold.Operator), + "units": cty.StringVal(threshold.Units), + "threshold": cty.NumberFloatVal(threshold.Threshold), + } +} + +func convertNotificationToCtyValues(notification *matlas.Notification) map[string]cty.Value { + values := map[string]cty.Value{} + + if notification.ChannelName != "" { + values["channel_name"] = cty.StringVal(notification.ChannelName) + } + + if notification.DatadogRegion != "" { + values["datadog_region"] = cty.StringVal(notification.DatadogRegion) + } + + if notification.EmailAddress != "" { + values["email_address"] = cty.StringVal(notification.EmailAddress) + } + + if notification.FlowName != "" { + values["flow_name"] = cty.StringVal(notification.FlowName) + } + + if notification.IntervalMin > 0 { + values["interval_min"] = cty.NumberIntVal(int64(notification.IntervalMin)) + } + + if notification.MobileNumber != "" { + values["mobile_number"] = cty.StringVal(notification.MobileNumber) + } + + if notification.OpsGenieRegion != "" { + values["ops_genie_region"] = cty.StringVal(notification.OpsGenieRegion) + } + + if notification.OrgName != "" { + values["org_name"] = cty.StringVal(notification.OrgName) + } + + if notification.TeamID != "" { + values["team_id"] = cty.StringVal(notification.TeamID) + } + + if notification.TeamName != "" { + values["team_name"] = cty.StringVal(notification.TeamName) + } + + if notification.TypeName != "" { + values["type_name"] = cty.StringVal(notification.TypeName) + } + + if notification.Username != "" { + values["username"] = cty.StringVal(notification.Username) + } + + if notification.DelayMin != nil && *notification.DelayMin > 0 { + values["delay_min"] = cty.NumberIntVal(int64(*notification.DelayMin)) + } + + if notification.EmailEnabled != nil && *notification.EmailEnabled { + values["email_enabled"] = cty.BoolVal(*notification.EmailEnabled) + } + + if notification.SMSEnabled != nil && *notification.SMSEnabled { + values["sms_enabled"] = cty.BoolVal(*notification.SMSEnabled) + } + + 
+    if len(notification.Roles) > 0 {
+        roles := make([]cty.Value, 0)
+
+        for _, r := range notification.Roles {
+            if r != "" {
+                roles = append(roles, cty.StringVal(r))
+            }
+        }
+
+        values["roles"] = cty.TupleVal(roles)
+    }
+
+    return values
+}
diff --git a/mongodbatlas/data_source_mongodbatlas_alert_configurations.go b/mongodbatlas/data_source_mongodbatlas_alert_configurations.go
new file mode 100644
index 0000000000..9fa7720b50
--- /dev/null
+++ b/mongodbatlas/data_source_mongodbatlas_alert_configurations.go
@@ -0,0 +1,150 @@
+package mongodbatlas
+
+import (
+    "context"
+    "fmt"
+
+    "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
+    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
+    matlas "go.mongodb.org/atlas/mongodbatlas"
+)
+
+func resourceListOptions() *schema.Resource {
+    return &schema.Resource{
+        Schema: map[string]*schema.Schema{
+            "page_num": {
+                Type:     schema.TypeInt,
+                Optional: true,
+                Default:  0,
+            },
+            "items_per_page": {
+                Type:     schema.TypeInt,
+                Optional: true,
+                Default:  100,
+            },
+            "include_count": {
+                Type:     schema.TypeBool,
+                Optional: true,
+                Default:  false,
+            },
+        },
+    }
+}
+
+func readListOptions(listOptionsArr []interface{}) *matlas.ListOptions {
+    var listOptions map[string]interface{}
+
+    if len(listOptionsArr) > 0 {
+        listOptions = listOptionsArr[0].(map[string]interface{})
+    } else {
+        listOptions = map[string]interface{}{
+            "page_num":       0,
+            "items_per_page": 100,
+            "include_count":  false,
+        }
+    }
+
+    return &matlas.ListOptions{
+        PageNum:      listOptions["page_num"].(int),
+        ItemsPerPage: listOptions["items_per_page"].(int),
+        IncludeCount: listOptions["include_count"].(bool),
+    }
+}
+
+func dataSourceMongoDBAtlasAlertConfigurations() *schema.Resource {
+    return &schema.Resource{
+        ReadContext: dataSourceMongoDBAtlasAlertConfigurationsRead,
+        Schema: map[string]*schema.Schema{
+            "project_id": {
+                Type:     schema.TypeString,
+                Required: true,
+                ForceNew: true,
+            },
+            "list_options": {
+                Type:     schema.TypeList,
+                Optional: true,
+                Elem:     resourceListOptions(),
+            },
+            "results": {
+                Type:     schema.TypeList,
+                Computed: true,
+                Elem:     dataSourceMongoDBAtlasAlertConfiguration(),
+            },
+            "total_count": {
+                Type:     schema.TypeInt,
+                Computed: true,
+            },
+            "output_type": {
+                Type:     schema.TypeList,
+                Optional: true,
+                Elem: &schema.Schema{
+                    Type:         schema.TypeString,
+                    ValidateFunc: validation.StringInSlice([]string{"resource_hcl", "resource_import"}, false),
+                },
+            },
+        },
+    }
+}
+
+func dataSourceMongoDBAtlasAlertConfigurationsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
+    // Get client connection.
+    conn := meta.(*MongoDBClient).Atlas
+    projectID := d.Get("project_id").(string)
+    listOptions := d.Get("list_options").([]interface{})
+
+    alerts, _, err := conn.AlertConfigurations.List(ctx, projectID, readListOptions(listOptions))
+
+    if err != nil {
+        return diag.FromErr(fmt.Errorf(errorReadAlertConf, err))
+    }
+
+    results := flattenAlertConfigurations(ctx, conn, alerts, d)
+
+    if err := d.Set("results", results); err != nil {
+        return diag.FromErr(fmt.Errorf(errorAlertConfSetting, "results", projectID, err))
+    }
+
+    if err := d.Set("list_options", listOptions); err != nil {
+        return diag.FromErr(fmt.Errorf(errorAlertConfSetting, "list_options", projectID, err))
+    }
+
+    d.SetId(encodeStateID(map[string]string{
+        "project_id": projectID,
+    }))
+
+    return nil
+}
+
+func flattenAlertConfigurations(ctx context.Context, conn *matlas.Client, alerts []matlas.AlertConfiguration, d *schema.ResourceData) []map[string]interface{} {
+    var outputConfigurations []interface{}
+
+    results := make([]map[string]interface{}, 0)
+
+    if output := d.Get("output_type"); output != nil {
+        for _, o := range output.([]interface{}) {
+            outputConfigurations = append(outputConfigurations, map[string]interface{}{
+                "type": o.(string),
+            })
+        }
+    }
+
+    for i := 0; i < len(alerts); i++ {
+        label := fmt.Sprintf("%s_%d", alerts[i].EventTypeName, i)
+
+        results = append(results, map[string]interface{}{
+            "alert_configuration_id":  alerts[i].ID,
+            "event_type":              alerts[i].EventTypeName,
+            "created":                 alerts[i].Created,
+            "updated":                 alerts[i].Updated,
+            "enabled":                 alerts[i].Enabled,
+            "matcher":                 flattenAlertConfigurationMatchers(alerts[i].Matchers),
+            "metric_threshold_config": flattenAlertConfigurationMetricThresholdConfig(alerts[i].MetricThreshold),
+            "threshold_config":        flattenAlertConfigurationThresholdConfig(alerts[i].Threshold),
+            "notification":            flattenAlertConfigurationNotifications(d, alerts[i].Notifications),
+            "output":                  computeAlertConfigurationOutput(&alerts[i], outputConfigurations, label),
+        })
+    }
+
+    return results
+}
diff --git a/mongodbatlas/data_source_mongodbatlas_alert_configurations_test.go b/mongodbatlas/data_source_mongodbatlas_alert_configurations_test.go
new file mode 100644
index 0000000000..8cd264a995
--- /dev/null
+++ b/mongodbatlas/data_source_mongodbatlas_alert_configurations_test.go
@@ -0,0 +1,88 @@
+package mongodbatlas
+
+import (
+    "context"
+    "fmt"
+    "os"
+    "strconv"
+    "testing"
+
+    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
+    "github.com/hashicorp/terraform-plugin-sdk/v2/terraform"
+    matlas "go.mongodb.org/atlas/mongodbatlas"
+)
+
+func TestAccConfigDSAlertConfigurations_basic(t *testing.T) {
+    var (
+        dataSourceName = "data.mongodbatlas_alert_configurations.test"
+        projectID      = os.Getenv("MONGODB_ATLAS_PROJECT_ID")
+    )
+
+    resource.Test(t, resource.TestCase{
+        PreCheck:          func() { testAccPreCheck(t) },
+        ProviderFactories: testAccProviderFactories,
+        Steps: []resource.TestStep{
+            {
+                Config: testAccDSMongoDBAtlasAlertConfigurations(projectID),
+                Check: resource.ComposeTestCheckFunc(
+                    testAccCheckMongoDBAtlasAlertConfigurationsCount(dataSourceName),
+                    resource.TestCheckResourceAttrSet(dataSourceName, "project_id"),
+                ),
+            },
+        },
+    })
+}
+
+func testAccDSMongoDBAtlasAlertConfigurations(projectID string) string {
+    return fmt.Sprintf(`
+        data "mongodbatlas_alert_configurations" "test" {
+            project_id = "%s"
+
+            list_options {
+                page_num = 0
+            }
+        }
+    `, projectID)
+}
+
+func testAccCheckMongoDBAtlasAlertConfigurationsCount(resourceName string) resource.TestCheckFunc {
+    return func(s *terraform.State) error {
+        conn := testAccProvider.Meta().(*MongoDBClient).Atlas
+
+        rs, ok := s.RootModule().Resources[resourceName]
+        if !ok {
+            return fmt.Errorf("not found: %s", resourceName)
+        }
+
+        if rs.Primary.ID == "" {
+            return fmt.Errorf("no ID is set")
+        }
+
+        ids := decodeStateID(rs.Primary.ID)
+        projectID := ids["project_id"]
+
+        alertResp, _, err := conn.AlertConfigurations.List(context.Background(), projectID, &matlas.ListOptions{
+            PageNum:      0,
+            ItemsPerPage: 100,
+            IncludeCount: true,
+        })
+
+        if err != nil {
+            return fmt.Errorf("the Alert Configurations List for project (%s) could not be read", projectID)
+        }
+
+        resultsNumber := rs.Primary.Attributes["results.#"]
+        var dataSourceResultsCount int
+
+        if dataSourceResultsCount, err = strconv.Atoi(resultsNumber); err != nil {
+            return fmt.Errorf("%s results count is somehow not a number %s", resourceName, resultsNumber)
+        }
+
+        apiResultsCount := len(alertResp)
+        if dataSourceResultsCount != apiResultsCount {
+            return fmt.Errorf("%s results count (%v) did not match that of current Alert Configurations (%d)", resourceName, dataSourceResultsCount, apiResultsCount)
+        }
+
+        return nil
+    }
+}
diff --git a/mongodbatlas/data_source_mongodbatlas_api_key.go b/mongodbatlas/data_source_mongodbatlas_api_key.go
new file mode 100644
index 0000000000..eafc807ee3
--- /dev/null
+++ b/mongodbatlas/data_source_mongodbatlas_api_key.go
@@ -0,0 +1,69 @@
+package mongodbatlas
+
+import (
+    "context"
+    "fmt"
+
+    "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
+    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
+    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+)
+
+func dataSourceMongoDBAtlasAPIKey() *schema.Resource {
+    return &schema.Resource{
+        ReadContext: dataSourceMongoDBAtlasAPIKeyRead,
+        Schema: map[string]*schema.Schema{
+            "org_id": {
+                Type:     schema.TypeString,
+                Required: true,
+            },
+            "api_key_id": {
+                Type:     schema.TypeString,
+                Required: true,
+            },
+            "description": {
+                Type:     schema.TypeString,
+                Computed: true,
+            },
+            "public_key": {
+                Type:     schema.TypeString,
+                Computed: true,
+            },
+            "role_names": {
+                Type:     schema.TypeSet,
+                Computed: true,
+                Elem: &schema.Schema{
+                    Type: schema.TypeString,
+                },
+            },
+        },
+    }
+}
+
+func dataSourceMongoDBAtlasAPIKeyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
+    // Get client connection.
+    conn := meta.(*MongoDBClient).Atlas
+
+    orgID := d.Get("org_id").(string)
+    apiKeyID := d.Get("api_key_id").(string)
+    apiKey, _, err := conn.APIKeys.Get(ctx, orgID, apiKeyID)
+    if err != nil {
+        return diag.FromErr(fmt.Errorf("error getting api key information: %s", err))
+    }
+
+    if err := d.Set("description", apiKey.Desc); err != nil {
+        return diag.FromErr(fmt.Errorf("error setting `description`: %s", err))
+    }
+
+    if err := d.Set("public_key", apiKey.PublicKey); err != nil {
+        return diag.FromErr(fmt.Errorf("error setting `public_key`: %s", err))
+    }
+
+    if err := d.Set("role_names", flattenOrgAPIKeyRoles(orgID, apiKey.Roles)); err != nil {
+        return diag.FromErr(fmt.Errorf("error setting `roles`: %s", err))
+    }
+
+    d.SetId(resource.UniqueId())
+
+    return nil
+}
diff --git a/mongodbatlas/data_source_mongodbatlas_api_key_test.go b/mongodbatlas/data_source_mongodbatlas_api_key_test.go
new file mode 100644
index 0000000000..dd0e21a275
--- /dev/null
+++ b/mongodbatlas/data_source_mongodbatlas_api_key_test.go
@@ -0,0 +1,55 @@
+package mongodbatlas
+
+import (
+    "fmt"
+    "os"
+    "testing"
+
+    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest"
+    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
+)
+
+func TestAccConfigDSAPIKey_basic(t *testing.T) {
+    resourceName := "mongodbatlas_api_key.test"
+    dataSourceName := "data.mongodbatlas_api_key.test"
+    orgID := os.Getenv("MONGODB_ATLAS_ORG_ID")
+    description := fmt.Sprintf("test-acc-api_key-%s", acctest.RandString(5))
+    roleName := "ORG_MEMBER"
+
+    resource.Test(t, resource.TestCase{
+        PreCheck:          func() { testAccPreCheck(t) },
+        ProviderFactories: testAccProviderFactories,
+        //CheckDestroy: testAccCheckMongoDBAtlasNetworkPeeringDestroy,
+        Steps: []resource.TestStep{
+            {
+                Config: testAccDSMongoDBAtlasAPIKeyConfig(orgID, description, roleName),
+                Check: resource.ComposeTestCheckFunc(
+                    // Test for Resource
+                    testAccCheckMongoDBAtlasAPIKeyExists(resourceName),
+                    resource.TestCheckResourceAttrSet(resourceName, "org_id"),
+                    resource.TestCheckResourceAttrSet(resourceName, "description"),
+                    resource.TestCheckResourceAttr(resourceName, "org_id", orgID),
+                    resource.TestCheckResourceAttr(resourceName, "description", description),
+                    // Test for Data source
+                    resource.TestCheckResourceAttrSet(dataSourceName, "org_id"),
+                    resource.TestCheckResourceAttrSet(dataSourceName, "description"),
+                ),
+            },
+        },
+    })
+}
+
+func testAccDSMongoDBAtlasAPIKeyConfig(orgID, description, roleNames string) string {
+    return fmt.Sprintf(`
+        resource "mongodbatlas_api_key" "test" {
+            org_id      = "%s"
+            description = "%s"
+            role_names  = ["%s"]
+        }
+
+        data "mongodbatlas_api_key" "test" {
+            org_id     = "${mongodbatlas_api_key.test.org_id}"
+            api_key_id = "${mongodbatlas_api_key.test.api_key_id}"
+        }
+    `, orgID, description, roleNames)
+}
diff --git a/mongodbatlas/data_source_mongodbatlas_api_keys.go b/mongodbatlas/data_source_mongodbatlas_api_keys.go
new file mode 100644
index 0000000000..70ab25cc4e
--- /dev/null
+++ b/mongodbatlas/data_source_mongodbatlas_api_keys.go
@@ -0,0 +1,83 @@
+package mongodbatlas
+
+import (
+    "context"
+    "fmt"
+
+    "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
+    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
+    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+
+    matlas "go.mongodb.org/atlas/mongodbatlas"
+)
+
+func dataSourceMongoDBAtlasAPIKeys() *schema.Resource {
+    return &schema.Resource{
+        ReadContext: dataSourceMongoDBAtlasAPIKeysRead,
+        Schema: map[string]*schema.Schema{
+            "org_id": {
+                Type:     schema.TypeString,
+                Required: true,
+            },
+            "page_num": {
+                Type:     schema.TypeInt,
+                Optional: true,
+            },
+            "items_per_page": {
+                Type:     schema.TypeInt,
+                Optional: true,
+            },
+            "results": {
+                Type:     schema.TypeList,
+                Computed: true,
+                Elem: &schema.Resource{
+                    Schema: map[string]*schema.Schema{
+                        "description": {
+                            Type:     schema.TypeString,
+                            Computed: true,
+                        },
+                        "api_key_id": {
+                            Type:     schema.TypeString,
+                            Computed: true,
+                        },
+                        "public_key": {
+                            Type:     schema.TypeString,
+                            Computed: true,
+                        },
+                        "role_names": {
+                            Type:     schema.TypeSet,
+                            Computed: true,
+                            Elem: &schema.Schema{
+                                Type: schema.TypeString,
+                            },
+                        },
+                    },
+                },
+            },
+        },
+    }
+}
+
+func dataSourceMongoDBAtlasAPIKeysRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
+    // Get client connection.
+    conn := meta.(*MongoDBClient).Atlas
+    options := &matlas.ListOptions{
+        PageNum:      d.Get("page_num").(int),
+        ItemsPerPage: d.Get("items_per_page").(int),
+    }
+
+    orgID := d.Get("org_id").(string)
+
+    apiKeys, _, err := conn.APIKeys.List(ctx, orgID, options)
+    if err != nil {
+        return diag.FromErr(fmt.Errorf("error getting api keys information: %s", err))
+    }
+
+    if err := d.Set("results", flattenOrgAPIKeys(ctx, conn, orgID, apiKeys)); err != nil {
+        return diag.FromErr(fmt.Errorf("error setting `results`: %s", err))
+    }
+
+    d.SetId(resource.UniqueId())
+
+    return nil
+}
diff --git a/mongodbatlas/data_source_mongodbatlas_api_keys_test.go b/mongodbatlas/data_source_mongodbatlas_api_keys_test.go
new file mode 100644
index 0000000000..eb846dbcd0
--- /dev/null
+++ b/mongodbatlas/data_source_mongodbatlas_api_keys_test.go
@@ -0,0 +1,56 @@
+package mongodbatlas
+
+import (
+    "fmt"
+    "os"
+    "testing"
+
+    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest"
+    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
+)
+
+func TestAccConfigDSAPIKeys_basic(t *testing.T) {
+    resourceName := "mongodbatlas_api_key.test"
+    dataSourceName := "data.mongodbatlas_api_keys.test"
+    orgID := os.Getenv("MONGODB_ATLAS_ORG_ID")
+    description := fmt.Sprintf("test-acc-api_key-%s", acctest.RandString(5))
+    roleName := "ORG_MEMBER"
+
+    resource.Test(t, resource.TestCase{
+        PreCheck:          func() { testAccPreCheck(t) },
+        ProviderFactories: testAccProviderFactories,
+        //CheckDestroy: testAccCheckMongoDBAtlasNetworkPeeringDestroy,
+        Steps: []resource.TestStep{
+            {
+                Config: testAccDSMongoDBAtlasAPIKeysConfig(orgID, description, roleName),
+                Check: resource.ComposeTestCheckFunc(
+                    // Test for Resource
+                    testAccCheckMongoDBAtlasAPIKeyExists(resourceName),
+                    resource.TestCheckResourceAttrSet(resourceName, "org_id"),
+                    resource.TestCheckResourceAttrSet(resourceName, "description"),
+
+                    resource.TestCheckResourceAttr(resourceName, "org_id", orgID),
+                    resource.TestCheckResourceAttr(resourceName, "description", description),
+
+                    // Test for Data source
+                    resource.TestCheckResourceAttrSet(dataSourceName, "org_id"),
+                    resource.TestCheckResourceAttrSet(dataSourceName, "results.#"),
+                ),
+            },
+        },
+    })
+}
+
+func testAccDSMongoDBAtlasAPIKeysConfig(orgID, description, roleNames string) string {
+    return fmt.Sprintf(`
+        resource "mongodbatlas_api_key" "test" {
+            org_id      = "%s"
+            description = "%s"
+            role_names  = ["%s"]
+        }
+
+        data "mongodbatlas_api_keys" "test" {
+            org_id = "${mongodbatlas_api_key.test.org_id}"
+        }
+    `, orgID, description, roleNames)
+}
diff --git a/mongodbatlas/data_source_mongodbatlas_cloud_backup_schedule.go b/mongodbatlas/data_source_mongodbatlas_cloud_backup_schedule.go
index f26040c95b..134cefbc74 100644
--- a/mongodbatlas/data_source_mongodbatlas_cloud_backup_schedule.go
+++ b/mongodbatlas/data_source_mongodbatlas_cloud_backup_schedule.go
@@ -25,6 +25,37 @@ func dataSourceMongoDBAtlasCloudBackupSchedule() *schema.Resource {
             Type:     schema.TypeString,
             Computed: true,
         },
+        "copy_settings": {
+            Type:     schema.TypeList,
+            Computed: true,
+            Elem: &schema.Resource{
+                Schema: map[string]*schema.Schema{
+                    "cloud_provider": {
+                        Type:     schema.TypeString,
+                        Computed: true,
+                    },
+                    "frequencies": {
+                        Type:     schema.TypeSet,
+                        Computed: true,
+                        Elem: &schema.Schema{
+                            Type: schema.TypeString,
+                        },
+                    },
+                    "region_name": {
+                        Type:     schema.TypeString,
+                        Computed: true,
+                    },
+                    "replication_spec_id": {
+                        Type:     schema.TypeString,
+                        Computed: true,
+                    },
+                    "should_copy_oplogs": {
+                        Type:     schema.TypeBool,
+                        Computed: true,
+                    },
+                },
+            },
+        },
         "next_snapshot": {
             Type:     schema.TypeString,
             Computed: true,
@@ -245,6 +276,10 @@ func dataSourceMongoDBAtlasCloudBackupScheduleRead(ctx context.Context, d *schem
         return diag.Errorf(errorSnapshotBackupScheduleSetting, "policy_item_monthly", clusterName, err)
     }

+    if err := d.Set("copy_settings", flattenCopySettings(backupPolicy.CopySettings)); err != nil {
+        return diag.Errorf(errorSnapshotBackupScheduleSetting, "copy_settings", clusterName, err)
+    }
+
     if err := d.Set("export", flattenExport(backupPolicy)); err != nil {
         return diag.Errorf(errorSnapshotBackupScheduleSetting, "export", clusterName, err)
     }
diff --git a/mongodbatlas/data_source_mongodbatlas_cloud_provider_snapshot.go b/mongodbatlas/data_source_mongodbatlas_cloud_provider_snapshot.go
index 2140313b9f..36abbe4b32 100644
--- a/mongodbatlas/data_source_mongodbatlas_cloud_provider_snapshot.go
+++ b/mongodbatlas/data_source_mongodbatlas_cloud_provider_snapshot.go
@@ -63,7 +63,7 @@ func dataSourceMongoDBAtlasCloudProviderSnapshot() *schema.Resource {
             Computed: true,
         },
     },
-    DeprecationMessage: "This data source is deprecated. Please transition to mongodbatlas_cloud_backup_snapshot as soon as possible",
+    DeprecationMessage: "This data source is deprecated, and will be removed in v1.9 release. Please transition to mongodbatlas_cloud_backup_snapshot as soon as possible",
 }
 }
diff --git a/mongodbatlas/data_source_mongodbatlas_cloud_provider_snapshot_backup_policy.go b/mongodbatlas/data_source_mongodbatlas_cloud_provider_snapshot_backup_policy.go
index 7934b059a6..8db4b12457 100644
--- a/mongodbatlas/data_source_mongodbatlas_cloud_provider_snapshot_backup_policy.go
+++ b/mongodbatlas/data_source_mongodbatlas_cloud_provider_snapshot_backup_policy.go
@@ -85,7 +85,7 @@ func dataSourceMongoDBAtlasCloudProviderSnapshotBackupPolicy() *schema.Resource
         },
     },
 },
-    DeprecationMessage: "This data source is deprecated. Please transition to mongodbatlas_cloud_backup_schedule as soon as possible",
+    DeprecationMessage: "This data source is deprecated, and will be removed in v1.9 release. Please transition to mongodbatlas_cloud_backup_schedule as soon as possible",
 }
 }
diff --git a/mongodbatlas/data_source_mongodbatlas_cloud_provider_snapshot_restore_job.go b/mongodbatlas/data_source_mongodbatlas_cloud_provider_snapshot_restore_job.go
index ea237c6b2c..0e286cf6da 100644
--- a/mongodbatlas/data_source_mongodbatlas_cloud_provider_snapshot_restore_job.go
+++ b/mongodbatlas/data_source_mongodbatlas_cloud_provider_snapshot_restore_job.go
@@ -89,7 +89,7 @@ func dataSourceMongoDBAtlasCloudProviderSnapshotRestoreJob() *schema.Resource {
             Computed: true,
         },
     },
-    DeprecationMessage: "This data source is deprecated. Please transition to mongodbatlas_cloud_backup_snapshot_restore_job as soon as possible",
+    DeprecationMessage: "This data source is deprecated, and will be removed in v1.9 release. Please transition to mongodbatlas_cloud_backup_snapshot_restore_job as soon as possible",
 }
 }
diff --git a/mongodbatlas/data_source_mongodbatlas_cloud_provider_snapshot_restore_jobs.go b/mongodbatlas/data_source_mongodbatlas_cloud_provider_snapshot_restore_jobs.go
index 77a57af40f..34819f1e8d 100644
--- a/mongodbatlas/data_source_mongodbatlas_cloud_provider_snapshot_restore_jobs.go
+++ b/mongodbatlas/data_source_mongodbatlas_cloud_provider_snapshot_restore_jobs.go
@@ -107,7 +107,7 @@ func dataSourceMongoDBAtlasCloudProviderSnapshotRestoreJobs() *schema.Resource {
             Computed: true,
         },
     },
-    DeprecationMessage: "This data source is deprecated. Please transition to mongodbatlas_cloud_backup_snapshot_restore_jobs as soon as possible",
+    DeprecationMessage: "This data source is deprecated, and will be removed in v1.9 release. Please transition to mongodbatlas_cloud_backup_snapshot_restore_jobs as soon as possible",
 }
 }
diff --git a/mongodbatlas/data_source_mongodbatlas_cloud_provider_snapshots.go b/mongodbatlas/data_source_mongodbatlas_cloud_provider_snapshots.go
index 390a016428..8877a80b21 100644
--- a/mongodbatlas/data_source_mongodbatlas_cloud_provider_snapshots.go
+++ b/mongodbatlas/data_source_mongodbatlas_cloud_provider_snapshots.go
@@ -84,7 +84,7 @@ func dataSourceMongoDBAtlasCloudProviderSnapshots() *schema.Resource {
             Computed: true,
         },
     },
-    DeprecationMessage: "This data source is deprecated. Please transition to mongodbatlas_cloud_backup_snapshots as soon as possible",
+    DeprecationMessage: "This data source is deprecated, and will be removed in v1.9 release. Please transition to mongodbatlas_cloud_backup_snapshots as soon as possible",
 }
 }
diff --git a/mongodbatlas/data_source_mongodbatlas_custom_db_roles_test.go b/mongodbatlas/data_source_mongodbatlas_custom_db_roles_test.go
index bc8770dfc1..5003e0f75a 100644
--- a/mongodbatlas/data_source_mongodbatlas_custom_db_roles_test.go
+++ b/mongodbatlas/data_source_mongodbatlas_custom_db_roles_test.go
@@ -37,7 +37,7 @@ func TestAccConfigDSCustomDBRoles_basic(t *testing.T) {

                     // Test for Data source
                     resource.TestCheckResourceAttrSet(dataSourceName, "project_id"),
-                    resource.TestCheckResourceAttr(dataSourceName, "results.#", "1"),
+                    resource.TestCheckResourceAttrSet(dataSourceName, "results.#"),
                 ),
             },
         },
diff --git a/mongodbatlas/data_source_mongodbatlas_org_id.go b/mongodbatlas/data_source_mongodbatlas_org_id.go
new file mode 100644
index 0000000000..310465d3c0
--- /dev/null
+++ b/mongodbatlas/data_source_mongodbatlas_org_id.go
@@ -0,0 +1,51 @@
+package mongodbatlas
+
+import (
+    "context"
+    "strings"
+
+    "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
+    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+
+    matlas "go.mongodb.org/atlas/mongodbatlas"
+)
+
+func dataSourceMongoDBAtlasOrgID() *schema.Resource {
+    return &schema.Resource{
+        ReadContext: dataSourceMongoDBAtlasOrgIDRead,
+        Schema: map[string]*schema.Schema{
+            "org_id": {
+                Type:     schema.TypeString,
+                Computed: true,
+            },
+        },
+    }
+}
+
+func dataSourceMongoDBAtlasOrgIDRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
+    // Get client connection.
+    conn := meta.(*MongoDBClient).Atlas
+
+    options := &matlas.ListOptions{}
+    apiKeyOrgList, _, err := conn.Root.List(ctx, options)
+    if err != nil {
+        return diag.Errorf("error getting API Key's org assigned (%s)", err)
+    }
+
+    if err := d.Set("org_id", apiKeyOrgList.APIKey.Roles[0].OrgID); err != nil {
+        return diag.Errorf(errorProjectSetting, `org_id`, apiKeyOrgList.APIKey.ID, err)
+    }
+
+    for _, role := range apiKeyOrgList.APIKey.Roles {
+        if strings.HasPrefix(role.RoleName, "ORG_") {
+            d.SetId(apiKeyOrgList.APIKey.Roles[0].OrgID)
+        }
+    }
+
+    return nil
+}
diff --git a/mongodbatlas/data_source_mongodbatlas_org_id_test.go b/mongodbatlas/data_source_mongodbatlas_org_id_test.go
new file mode 100644
index 0000000000..2a556dbb92
--- /dev/null
+++ b/mongodbatlas/data_source_mongodbatlas_org_id_test.go
@@ -0,0 +1,43 @@
+package mongodbatlas
+
+import (
+    "fmt"
+    "os"
+    "testing"
+
+    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest"
+    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
+)
+
+func TestAccConfigDSOrgID_basic(t *testing.T) {
+    var (
+        dataSourceName = "data.mongodbatlas_roles_org_id.test"
+        orgID          = os.Getenv("MONGODB_ATLAS_ORG_ID")
+        name           = fmt.Sprintf("test-acc-%s@mongodb.com", acctest.RandString(10))
+        initialRole    = []string{"ORG_OWNER"}
+    )
+
+    resource.Test(t, resource.TestCase{
+        PreCheck:          func() { testAccPreCheck(t) },
+        ProviderFactories: testAccProviderFactories,
+        CheckDestroy:      testAccCheckMongoDBAtlasOrgInvitationDestroy,
+        Steps: []resource.TestStep{
+            {
+                Config: testAccDataSourceMongoDBAtlasOrgIDConfig(orgID, name, initialRole),
+                Check: resource.ComposeTestCheckFunc(
+                    resource.TestCheckResourceAttrSet(dataSourceName, "org_id"),
+                ),
+            },
+        },
+    })
+}
+
+func testAccDataSourceMongoDBAtlasOrgIDConfig(orgID, username string, roles []string) string {
+    return (`
+        data "mongodbatlas_roles_org_id" "test" {
+        }
+
+        output "org_id" {
+            value = data.mongodbatlas_roles_org_id.test.org_id
+        }`)
+}
diff --git a/mongodbatlas/data_source_mongodbatlas_project_api_key.go b/mongodbatlas/data_source_mongodbatlas_project_api_key.go
new file mode 100644
index 0000000000..9779a31a62
--- /dev/null
+++ b/mongodbatlas/data_source_mongodbatlas_project_api_key.go
@@ -0,0 +1,81 @@
+package mongodbatlas
+
+import (
+    "context"
+    "fmt"
+
+    "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
+    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
+    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+)
+
+func dataSourceMongoDBAtlasProjectAPIKey() *schema.Resource {
+    return &schema.Resource{
+        ReadContext: dataSourceMongoDBAtlasProjectAPIKeyRead,
+        Schema: map[string]*schema.Schema{
+            "project_id": {
+                Type:     schema.TypeString,
+                Required: true,
+            },
+            "api_key_id": {
+                Type:     schema.TypeString,
+                Required: true,
+            },
+            "description": {
+                Type:     schema.TypeString,
+                Computed: true,
+            },
+            "public_key": {
+                Type:     schema.TypeString,
+                Computed: true,
+            },
+            "private_key": {
+                Type:     schema.TypeString,
+                Computed: true,
+            },
+            "role_names": {
+                Type:     schema.TypeSet,
+                Computed: true,
+                Elem: &schema.Schema{
+                    Type: schema.TypeString,
+                },
+            },
+        },
+    }
+}
+
+func dataSourceMongoDBAtlasProjectAPIKeyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
+    // Get client connection.
+    conn := meta.(*MongoDBClient).Atlas
+
+    projectID := d.Get("project_id").(string)
+    apiKeyID := d.Get("api_key_id").(string)
+    projectAPIKeys, _, err := conn.ProjectAPIKeys.List(ctx, projectID, nil)
+    if err != nil {
+        return diag.FromErr(fmt.Errorf("error getting api key information: %s", err))
+    }
+
+    for _, val := range projectAPIKeys {
+        if val.ID == apiKeyID {
+            if err := d.Set("description", val.Desc); err != nil {
+                return diag.FromErr(fmt.Errorf("error setting `description`: %s", err))
+            }
+
+            if err := d.Set("public_key", val.PublicKey); err != nil {
+                return diag.FromErr(fmt.Errorf("error setting `public_key`: %s", err))
+            }
+
+            if err := d.Set("private_key", val.PrivateKey); err != nil {
+                return diag.FromErr(fmt.Errorf("error setting `private_key`: %s", err))
+            }
+
+            if err := d.Set("role_names", flattenProjectAPIKeyRoles(projectID, val.Roles)); err != nil {
+                return diag.FromErr(fmt.Errorf("error setting `roles`: %s", err))
+            }
+        }
+    }
+
+    d.SetId(resource.UniqueId())
+
+    return nil
+}
diff --git a/mongodbatlas/data_source_mongodbatlas_project_api_key_test.go b/mongodbatlas/data_source_mongodbatlas_project_api_key_test.go
new file mode 100644
index 0000000000..a0eeb83cc5
--- /dev/null
+++ b/mongodbatlas/data_source_mongodbatlas_project_api_key_test.go
@@ -0,0 +1,54 @@
+package mongodbatlas
+
+import (
+    "fmt"
+    "os"
+    "testing"
+
+    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest"
+    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
+)
+
+func TestAccConfigDSProjectAPIKey_basic(t *testing.T) {
+    resourceName := "mongodbatlas_project_api_key.test"
+    dataSourceName := "data.mongodbatlas_project_api_key.test"
+    projectID := os.Getenv("MONGODB_ATLAS_PROJECT_ID")
+    description := fmt.Sprintf("test-acc-project-api_key-%s", acctest.RandString(5))
+    roleName := "GROUP_OWNER"
+
+    resource.Test(t, resource.TestCase{
+        PreCheck:          func() { testAccPreCheck(t) },
+        ProviderFactories: testAccProviderFactories,
+        CheckDestroy:      testAccCheckMongoDBAtlasNetworkPeeringDestroy,
+        Steps: []resource.TestStep{
+            {
+                Config: testAccDSMongoDBAtlasProjectAPIKeyConfig(projectID, description, roleName),
+                Check: resource.ComposeTestCheckFunc(
+                    // Test for Resource
+                    resource.TestCheckResourceAttrSet(resourceName, "project_id"),
+                    resource.TestCheckResourceAttrSet(resourceName, "description"),
+                    resource.TestCheckResourceAttr(resourceName, "project_id", projectID),
+                    resource.TestCheckResourceAttr(resourceName, "description", description),
+                    // Test for Data source
+                    resource.TestCheckResourceAttrSet(dataSourceName, "project_id"),
+                    resource.TestCheckResourceAttrSet(dataSourceName, "description"),
+                ),
+            },
+        },
+    })
+}
+
+func testAccDSMongoDBAtlasProjectAPIKeyConfig(projectID, description, roleNames string) string {
+    return fmt.Sprintf(`
+        resource "mongodbatlas_project_api_key" "test" {
+            project_id  = %[1]q
+            description = %[2]q
+            role_names  = [%[3]q]
+        }
+
+        data "mongodbatlas_project_api_key" "test" {
+            project_id = %[1]q
+            api_key_id = "${mongodbatlas_project_api_key.test.api_key_id}"
+        }
+    `, projectID, description, roleNames)
+}
diff --git a/mongodbatlas/data_source_mongodbatlas_project_api_keys.go b/mongodbatlas/data_source_mongodbatlas_project_api_keys.go
new file mode 100644
index 0000000000..6ea06b8e3e
--- /dev/null
+++ b/mongodbatlas/data_source_mongodbatlas_project_api_keys.go
@@ -0,0 +1,87 @@
+package mongodbatlas
+
+import (
+    "context"
+    "fmt"
+
+    "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
+    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
+    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+
+    matlas "go.mongodb.org/atlas/mongodbatlas"
+)
+
+func dataSourceMongoDBAtlasProjectAPIKeys() *schema.Resource {
+    return &schema.Resource{
+        ReadContext: dataSourceMongoDBAtlasProjectAPIKeysRead,
+        Schema: map[string]*schema.Schema{
+            "project_id": {
+                Type:     schema.TypeString,
+                Required: true,
+            },
+            "page_num": {
+                Type:     schema.TypeInt,
+                Optional: true,
+            },
+            "items_per_page": {
+                Type:     schema.TypeInt,
+                Optional: true,
+            },
+            "results": {
+                Type:     schema.TypeList,
+                Computed: true,
+                Elem: &schema.Resource{
+                    Schema: map[string]*schema.Schema{
+                        "description": {
+                            Type:     schema.TypeString,
+                            Computed: true,
+                        },
+                        "api_key_id": {
+                            Type:     schema.TypeString,
+                            Computed: true,
+                        },
+                        "public_key": {
+                            Type:     schema.TypeString,
+                            Computed: true,
+                        },
+                        "private_key": {
+                            Type:     schema.TypeString,
+                            Computed: true,
+                        },
+                        "role_names": {
+                            Type:     schema.TypeSet,
+                            Computed: true,
+                            Elem: &schema.Schema{
+                                Type: schema.TypeString,
+                            },
+                        },
+                    },
+                },
+            },
+        },
+    }
+}
+
+func dataSourceMongoDBAtlasProjectAPIKeysRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
+    // Get client connection.
+    conn := meta.(*MongoDBClient).Atlas
+    options := &matlas.ListOptions{
+        PageNum:      d.Get("page_num").(int),
+        ItemsPerPage: d.Get("items_per_page").(int),
+    }
+
+    projectID := d.Get("project_id").(string)
+
+    apiKeys, _, err := conn.ProjectAPIKeys.List(ctx, projectID, options)
+    if err != nil {
+        return diag.FromErr(fmt.Errorf("error getting api keys information: %s", err))
+    }
+
+    if err := d.Set("results", flattenProjectAPIKeys(ctx, conn, projectID, apiKeys)); err != nil {
+        return diag.FromErr(fmt.Errorf("error setting `results`: %s", err))
+    }
+
+    d.SetId(resource.UniqueId())
+
+    return nil
+}
diff --git a/mongodbatlas/data_source_mongodbatlas_project_api_keys_test.go b/mongodbatlas/data_source_mongodbatlas_project_api_keys_test.go
new file mode 100644
index 0000000000..db276b636e
--- /dev/null
+++ b/mongodbatlas/data_source_mongodbatlas_project_api_keys_test.go
@@ -0,0 +1,55 @@
+package mongodbatlas
+
+import (
+    "fmt"
+    "os"
+    "testing"
+
+    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest"
+    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
+)
+
+func TestAccConfigDSProjectAPIKeys_basic(t *testing.T) {
+    resourceName := "mongodbatlas_project_api_key.test"
+    dataSourceName :=
"data.mongodbatlas_project_api_keys.test" + projectID := os.Getenv("MONGODB_ATLAS_PROJECT_ID") + description := fmt.Sprintf("test-acc-project-api_key-%s", acctest.RandString(5)) + roleName := "GROUP_OWNER" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + ProviderFactories: testAccProviderFactories, + CheckDestroy: testAccCheckMongoDBAtlasNetworkPeeringDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDSMongoDBAtlasProjectAPIKeysConfig(projectID, description, roleName), + Check: resource.ComposeTestCheckFunc( + // Test for Resource + resource.TestCheckResourceAttrSet(resourceName, "project_id"), + resource.TestCheckResourceAttrSet(resourceName, "description"), + + resource.TestCheckResourceAttr(resourceName, "project_id", projectID), + resource.TestCheckResourceAttr(resourceName, "description", description), + + // Test for Data source + resource.TestCheckResourceAttrSet(dataSourceName, "project_id"), + resource.TestCheckResourceAttrSet(dataSourceName, "results.#"), + ), + }, + }, + }) +} + +func testAccDSMongoDBAtlasProjectAPIKeysConfig(projectID, description, roleNames string) string { + return fmt.Sprintf(` + resource "mongodbatlas_project_api_key" "test" { + project_id = %[1]q + description = %[2]q + role_names = [%[3]q] + } + + data "mongodbatlas_project_api_keys" "test" { + project_id = %[1]q + } + `, projectID, description, roleNames) +} diff --git a/mongodbatlas/data_source_mongodbatlas_third_party_integration.go b/mongodbatlas/data_source_mongodbatlas_third_party_integration.go index 1279463632..caf5c8d2c3 100644 --- a/mongodbatlas/data_source_mongodbatlas_third_party_integration.go +++ b/mongodbatlas/data_source_mongodbatlas_third_party_integration.go @@ -144,7 +144,7 @@ func dataSourceMongoDBAtlasThirdPartyIntegrationRead(ctx context.Context, d *sch return diag.FromErr(fmt.Errorf("error getting third party integration for type %s %w", queryType, err)) } - fieldMap := integrationToSchema(integration) + fieldMap := 
integrationToSchema(d, integration) for property, value := range fieldMap { if err = d.Set(property, value); err != nil { diff --git a/mongodbatlas/data_source_mongodbatlas_third_party_integrations.go b/mongodbatlas/data_source_mongodbatlas_third_party_integrations.go index 2f693be83f..abe9a33b04 100644 --- a/mongodbatlas/data_source_mongodbatlas_third_party_integrations.go +++ b/mongodbatlas/data_source_mongodbatlas_third_party_integrations.go @@ -37,7 +37,7 @@ func dataSourceMongoDBAtlasThirdPartyIntegrationsRead(ctx context.Context, d *sc return diag.FromErr(fmt.Errorf("error getting third party integration list: %s", err)) } - if err = d.Set("results", flattenIntegrations(integrations, projectID)); err != nil { + if err = d.Set("results", flattenIntegrations(d, integrations, projectID)); err != nil { return diag.FromErr(fmt.Errorf("error setting results for third party integrations %s", err)) } @@ -46,7 +46,7 @@ func dataSourceMongoDBAtlasThirdPartyIntegrationsRead(ctx context.Context, d *sc return nil } -func flattenIntegrations(integrations *matlas.ThirdPartyIntegrations, projectID string) (list []map[string]interface{}) { +func flattenIntegrations(d *schema.ResourceData, integrations *matlas.ThirdPartyIntegrations, projectID string) (list []map[string]interface{}) { if len(integrations.Results) == 0 { return } @@ -54,7 +54,7 @@ func flattenIntegrations(integrations *matlas.ThirdPartyIntegrations, projectID list = make([]map[string]interface{}, 0, len(integrations.Results)) for _, integration := range integrations.Results { - service := integrationToSchema(integration) + service := integrationToSchema(d, integration) service["project_id"] = projectID list = append(list, service) } @@ -62,27 +62,59 @@ func flattenIntegrations(integrations *matlas.ThirdPartyIntegrations, projectID return } -func integrationToSchema(integration *matlas.ThirdPartyIntegration) map[string]interface{} { +func integrationToSchema(d *schema.ResourceData, integration 
*matlas.ThirdPartyIntegration) map[string]interface{} { + integrationSchema := schemaToIntegration(d) + if integrationSchema.LicenseKey == "" { + integrationSchema.LicenseKey = integration.LicenseKey + } + if integrationSchema.WriteToken == "" { + integrationSchema.WriteToken = integration.WriteToken + } + if integrationSchema.ReadToken == "" { + integrationSchema.ReadToken = integration.ReadToken + } + if integrationSchema.APIKey == "" { + integrationSchema.APIKey = integration.APIKey + } + if integrationSchema.ServiceKey == "" { + integrationSchema.ServiceKey = integration.ServiceKey + } + if integrationSchema.APIToken == "" { + integrationSchema.APIToken = integration.APIToken + } + if integrationSchema.RoutingKey == "" { + integrationSchema.RoutingKey = integration.RoutingKey + } + if integrationSchema.Secret == "" { + integrationSchema.Secret = integration.Secret + } + if integrationSchema.Password == "" { + integrationSchema.Password = integration.Password + } + if integrationSchema.UserName == "" { + integrationSchema.UserName = integration.UserName + } + out := map[string]interface{}{ "type": integration.Type, - "license_key": integration.LicenseKey, + "license_key": integrationSchema.LicenseKey, "account_id": integration.AccountID, - "write_token": integration.WriteToken, - "read_token": integration.ReadToken, - "api_key": integration.APIKey, + "write_token": integrationSchema.WriteToken, + "read_token": integrationSchema.ReadToken, + "api_key": integrationSchema.APIKey, "region": integration.Region, - "service_key": integration.ServiceKey, - "api_token": integration.APIToken, + "service_key": integrationSchema.ServiceKey, + "api_token": integrationSchema.APIToken, "team_name": integration.TeamName, "channel_name": integration.ChannelName, - "routing_key": integration.RoutingKey, + "routing_key": integrationSchema.RoutingKey, "flow_name": integration.FlowName, "org_name": integration.OrgName, "url": integration.URL, - "secret": integration.Secret, + "secret": 
integrationSchema.Secret, "microsoft_teams_webhook_url": integration.MicrosoftTeamsWebhookURL, - "user_name": integration.UserName, - "password": integration.Password, + "user_name": integrationSchema.UserName, + "password": integrationSchema.Password, "service_discovery": integration.ServiceDiscovery, "scheme": integration.Scheme, "enabled": integration.Enabled, diff --git a/mongodbatlas/provider.go b/mongodbatlas/provider.go index a65caf3486..e5af0afba5 100644 --- a/mongodbatlas/provider.go +++ b/mongodbatlas/provider.go @@ -22,11 +22,13 @@ import ( "github.com/aws/aws-sdk-go/aws/endpoints" "github.com/aws/aws-sdk-go/aws/session" "github.com/aws/aws-sdk-go/service/secretsmanager" + "github.com/hashicorp/hcl/v2/hclwrite" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/mwielbut/pointy" "github.com/spf13/cast" + "github.com/zclconf/go-cty/cty" matlas "go.mongodb.org/atlas/mongodbatlas" ) @@ -35,6 +37,10 @@ var ( baseURL = "" ) +const ( + endPointSTSDefault = "https://sts.amazonaws.com" +) + type SecretData struct { PublicKey string `json:"public_key"` PrivateKey string `json:"private_key"` @@ -145,6 +151,13 @@ func getDataSourcesMap() map[string]*schema.Resource { "mongodbatlas_custom_db_roles": dataSourceMongoDBAtlasCustomDBRoles(), "mongodbatlas_database_user": dataSourceMongoDBAtlasDatabaseUser(), "mongodbatlas_database_users": dataSourceMongoDBAtlasDatabaseUsers(), + "mongodbatlas_api_key": dataSourceMongoDBAtlasAPIKey(), + "mongodbatlas_api_keys": dataSourceMongoDBAtlasAPIKeys(), + "mongodbatlas_access_list_api_key": dataSourceMongoDBAtlasAccessListAPIKey(), + "mongodbatlas_access_list_api_keys": dataSourceMongoDBAtlasAccessListAPIKeys(), + "mongodbatlas_project_api_key": dataSourceMongoDBAtlasProjectAPIKey(), + "mongodbatlas_project_api_keys": dataSourceMongoDBAtlasProjectAPIKeys(), + "mongodbatlas_roles_org_id": 
dataSourceMongoDBAtlasOrgID(), "mongodbatlas_project": dataSourceMongoDBAtlasProject(), "mongodbatlas_projects": dataSourceMongoDBAtlasProjects(), "mongodbatlas_cluster": dataSourceMongoDBAtlasCluster(), @@ -163,6 +176,7 @@ func getDataSourcesMap() map[string]*schema.Resource { "mongodbatlas_teams": dataSourceMongoDBAtlasTeam(), "mongodbatlas_global_cluster_config": dataSourceMongoDBAtlasGlobalCluster(), "mongodbatlas_alert_configuration": dataSourceMongoDBAtlasAlertConfiguration(), + "mongodbatlas_alert_configurations": dataSourceMongoDBAtlasAlertConfigurations(), "mongodbatlas_x509_authentication_database_user": dataSourceMongoDBAtlasX509AuthDBUser(), "mongodbatlas_private_endpoint_regional_mode": dataSourceMongoDBAtlasPrivateEndpointRegionalMode(), "mongodbatlas_privatelink_endpoint": dataSourceMongoDBAtlasPrivateLinkEndpoint(), @@ -215,6 +229,9 @@ func getDataSourcesMap() map[string]*schema.Resource { func getResourcesMap() map[string]*schema.Resource { resourcesMap := map[string]*schema.Resource{ "mongodbatlas_advanced_cluster": resourceMongoDBAtlasAdvancedCluster(), + "mongodbatlas_api_key": resourceMongoDBAtlasAPIKey(), + "mongodbatlas_access_list_api_key": resourceMongoDBAtlasAccessListAPIKey(), + "mongodbatlas_project_api_key": resourceMongoDBAtlasProjectAPIKey(), "mongodbatlas_custom_db_role": resourceMongoDBAtlasCustomDBRole(), "mongodbatlas_database_user": resourceMongoDBAtlasDatabaseUser(), "mongodbatlas_project": resourceMongoDBAtlasProject(), @@ -308,7 +325,7 @@ func providerConfigure(ctx context.Context, d *schema.ResourceData) (interface{} func configureCredentialsSTS(config *Config, secret, region, awsAccessKeyID, awsSecretAccessKey, awsSessionToken, endpoint string) (Config, error) { ep, err := endpoints.GetSTSRegionalEndpoint("regional") if err != nil { - fmt.Printf("GetSTSRegionalEndpoint error: %s", err) + log.Printf("GetSTSRegionalEndpoint error: %s", err) return *config, err } @@ -317,7 +334,7 @@ func configureCredentialsSTS(config *Config, 
secret, region, awsAccessKeyID, aws if service == endpoints.StsServiceID { if endpoint == "" { return endpoints.ResolvedEndpoint{ - URL: "https://sts.amazonaws.com", + URL: endPointSTSDefault, SigningRegion: region, }, nil } @@ -343,17 +360,17 @@ func configureCredentialsSTS(config *Config, secret, region, awsAccessKeyID, aws _, err = sess.Config.Credentials.Get() if err != nil { - fmt.Printf("Session get credentials error: %s", err) + log.Printf("Session get credentials error: %s", err) return *config, err } _, err = creds.Get() if err != nil { - fmt.Printf("STS get credentials error: %s", err) + log.Printf("STS get credentials error: %s", err) return *config, err } secretString, err := secretsManagerGetSecretValue(sess, &aws.Config{Credentials: creds, Region: aws.String(region)}, secret) if err != nil { - fmt.Printf("Get Secrets error: %s", err) + log.Printf("Get Secrets error: %s", err) return *config, err } @@ -387,25 +404,24 @@ func secretsManagerGetSecretValue(sess *session.Session, creds *aws.Config, secr if aerr, ok := err.(awserr.Error); ok { switch aerr.Code() { case secretsmanager.ErrCodeResourceNotFoundException: - fmt.Println(secretsmanager.ErrCodeResourceNotFoundException, aerr.Error()) + log.Println(secretsmanager.ErrCodeResourceNotFoundException, aerr.Error()) case secretsmanager.ErrCodeInvalidParameterException: - fmt.Println(secretsmanager.ErrCodeInvalidParameterException, aerr.Error()) + log.Println(secretsmanager.ErrCodeInvalidParameterException, aerr.Error()) case secretsmanager.ErrCodeInvalidRequestException: - fmt.Println(secretsmanager.ErrCodeInvalidRequestException, aerr.Error()) + log.Println(secretsmanager.ErrCodeInvalidRequestException, aerr.Error()) case secretsmanager.ErrCodeDecryptionFailure: - fmt.Println(secretsmanager.ErrCodeDecryptionFailure, aerr.Error()) + log.Println(secretsmanager.ErrCodeDecryptionFailure, aerr.Error()) case secretsmanager.ErrCodeInternalServiceError: - fmt.Println(secretsmanager.ErrCodeInternalServiceError, 
aerr.Error()) + log.Println(secretsmanager.ErrCodeInternalServiceError, aerr.Error()) default: - fmt.Println(aerr.Error()) + log.Println(aerr.Error()) } } else { - fmt.Println(err.Error()) + log.Println(err.Error()) } return "", err } - fmt.Println(result) return *result.SecretString, err } @@ -572,6 +588,27 @@ func HashCodeString(s string) int { return 0 } +func appendBlockWithCtyValues(body *hclwrite.Body, name string, labels []string, values map[string]cty.Value) { + if len(values) == 0 { + return + } + + keys := make([]string, 0, len(values)) + + for key := range values { + keys = append(keys, key) + } + + sort.Strings(keys) + + body.AppendNewline() + block := body.AppendNewBlock(name, labels).Body() + + for _, k := range keys { + block.SetAttributeValue(k, values[k]) + } +} + // assumeRoleSchema From aws provider.go func assumeRoleSchema() *schema.Schema { return &schema.Schema{ diff --git a/mongodbatlas/resource_mongodbatlas_access_list_api_key.go b/mongodbatlas/resource_mongodbatlas_access_list_api_key.go new file mode 100644 index 0000000000..8ff1cad4e5 --- /dev/null +++ b/mongodbatlas/resource_mongodbatlas_access_list_api_key.go @@ -0,0 +1,227 @@ +package mongodbatlas + +import ( + "context" + "errors" + "fmt" + "net" + "net/http" + "strings" + + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + matlas "go.mongodb.org/atlas/mongodbatlas" +) + +func resourceMongoDBAtlasAccessListAPIKey() *schema.Resource { + return &schema.Resource{ + CreateContext: resourceMongoDBAtlasAccessListAPIKeyCreate, + ReadContext: resourceMongoDBAtlasAccessListAPIKeyRead, + UpdateContext: resourceMongoDBAtlasAccessListAPIKeyUpdate, + DeleteContext: resourceMongoDBAtlasAccessListAPIKeyDelete, + Importer: &schema.ResourceImporter{ + StateContext: resourceMongoDBAtlasAccessListAPIKeyImportState, + }, + Schema: map[string]*schema.Schema{ + "org_id": { + 
Type: schema.TypeString, + Required: true, + }, + "api_key_id": { + Type: schema.TypeString, + Required: true, + }, + "cidr_block": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ConflictsWith: []string{"ip_address"}, + ValidateFunc: func(i interface{}, k string) (s []string, es []error) { + v, ok := i.(string) + if !ok { + es = append(es, fmt.Errorf("expected type of %s to be string", k)) + return + } + + _, ipnet, err := net.ParseCIDR(v) + if err != nil { + es = append(es, fmt.Errorf("expected %s to contain a valid CIDR, got: %s with err: %s", k, v, err)) + return + } + + if ipnet == nil || v != ipnet.String() { + es = append(es, fmt.Errorf("expected %s to contain a valid network CIDR, expected %s, got %s", k, ipnet, v)) + return + } + return + }, + }, + "ip_address": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ConflictsWith: []string{"cidr_block"}, + ValidateFunc: validation.IsIPAddress, + }, + }, + } +} + +func resourceMongoDBAtlasAccessListAPIKeyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*MongoDBClient).Atlas + orgID := d.Get("org_id").(string) + apiKeyID := d.Get("api_key_id").(string) + IPAddress := d.Get("ip_address").(string) + CIDRBlock := d.Get("cidr_block").(string) + + var entry string + + switch { + case CIDRBlock != "": + parts := strings.SplitN(CIDRBlock, "/", 2) + if parts[1] == "32" { + entry = parts[0] + } else { + entry = CIDRBlock + } + case IPAddress != "": + entry = IPAddress + default: + entry = IPAddress + } + + createReq := matlas.AccessListAPIKeysReq{} + createReq.CidrBlock = CIDRBlock + createReq.IPAddress = IPAddress + + createRequest := []*matlas.AccessListAPIKeysReq{} + createRequest = append(createRequest, &createReq) + + _, resp, err := conn.AccessListAPIKeys.Create(ctx, orgID, apiKeyID, createRequest) + if err != nil { + if resp != nil && resp.StatusCode == http.StatusNotFound { + d.SetId("") 
+ return nil + } + + return diag.FromErr(fmt.Errorf("error creating API key: %s", err)) + } + + d.SetId(encodeStateID(map[string]string{ + "org_id": orgID, + "api_key_id": apiKeyID, + "entry": entry, + })) + + return resourceMongoDBAtlasAccessListAPIKeyRead(ctx, d, meta) +} + +func resourceMongoDBAtlasAccessListAPIKeyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + // Get client connection. + conn := meta.(*MongoDBClient).Atlas + ids := decodeStateID(d.Id()) + orgID := ids["org_id"] + apiKeyID := ids["api_key_id"] + + apiKey, _, err := conn.AccessListAPIKeys.Get(ctx, orgID, apiKeyID, strings.ReplaceAll(ids["entry"], "/", "%2F")) + if err != nil { + return diag.FromErr(fmt.Errorf("error getting API key information: %s", err)) + } + + if err := d.Set("api_key_id", apiKeyID); err != nil { + return diag.FromErr(fmt.Errorf("error setting `api_key_id`: %s", err)) + } + + if err := d.Set("ip_address", apiKey.IPAddress); err != nil { + return diag.FromErr(fmt.Errorf("error setting `ip_address`: %s", err)) + } + + if err := d.Set("cidr_block", apiKey.CidrBlock); err != nil { + return diag.FromErr(fmt.Errorf("error setting `cidr_block`: %s", err)) + } + + d.SetId(encodeStateID(map[string]string{ + "org_id": orgID, + "api_key_id": apiKeyID, + "entry": ids["entry"], + })) + + return nil +} + +func resourceMongoDBAtlasAccessListAPIKeyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + return resourceMongoDBAtlasAccessListAPIKeyRead(ctx, d, meta) +} + +func resourceMongoDBAtlasAccessListAPIKeyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*MongoDBClient).Atlas + ids := decodeStateID(d.Id()) + orgID := ids["org_id"] + apiKeyID := ids["api_key_id"] + + _, err := conn.AccessListAPIKeys.Delete(ctx, orgID, apiKeyID, strings.ReplaceAll(ids["entry"], "/", "%2F")) + if err != nil { + return diag.FromErr(fmt.Errorf("error deleting API key: %s", err)) + }
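The create handler above collapses a `/32` CIDR block to its bare IP before encoding it into the composite state ID, while other CIDR blocks and plain IP addresses pass through unchanged. A minimal stdlib-only sketch of that rule (the helper name `normalizeEntry` is mine, not the provider's):

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// normalizeEntry mirrors the entry-derivation logic in the create handler:
// a /32 CIDR collapses to its bare IP, any other valid network CIDR is kept
// verbatim, and a plain IP address passes through unchanged. The ParseCIDR
// round-trip check matches the resource's cidr_block ValidateFunc, which
// rejects host addresses like "10.0.0.1/24".
func normalizeEntry(cidrBlock, ipAddress string) (string, error) {
	if cidrBlock != "" {
		if _, ipnet, err := net.ParseCIDR(cidrBlock); err != nil || cidrBlock != ipnet.String() {
			return "", fmt.Errorf("invalid network CIDR: %q", cidrBlock)
		}
		parts := strings.SplitN(cidrBlock, "/", 2)
		if parts[1] == "32" {
			return parts[0], nil
		}
		return cidrBlock, nil
	}
	return ipAddress, nil
}

func main() {
	e, _ := normalizeEntry("179.154.226.10/32", "")
	fmt.Println(e) // 179.154.226.10
	e, _ = normalizeEntry("10.0.0.0/24", "")
	fmt.Println(e) // 10.0.0.0/24
	e, _ = normalizeEntry("", "203.0.113.5")
	fmt.Println(e) // 203.0.113.5
}
```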
+ return nil +} + +func resourceMongoDBAtlasAccessListAPIKeyImportState(ctx context.Context, d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + conn := meta.(*MongoDBClient).Atlas + + parts := strings.SplitN(d.Id(), "-", 3) + if len(parts) != 3 { + return nil, errors.New("import format error: to import an API key use the format {org_id}-{api_key_id}-{ip_address}") + } + + orgID := parts[0] + apiKeyID := parts[1] + entry := parts[2] + + r, _, err := conn.AccessListAPIKeys.Get(ctx, orgID, apiKeyID, strings.ReplaceAll(entry, "/", "%2F")) + if err != nil { + return nil, fmt.Errorf("couldn't import API key %s in org %s, error: %s", apiKeyID, orgID, err) + } + + if err := d.Set("org_id", orgID); err != nil { + return nil, fmt.Errorf("error setting `org_id`: %s", err) + } + + if err := d.Set("ip_address", r.IPAddress); err != nil { + return nil, fmt.Errorf("error setting `ip_address`: %s", err) + } + + if err := d.Set("cidr_block", r.CidrBlock); err != nil { + return nil, fmt.Errorf("error setting `cidr_block`: %s", err) + } + + d.SetId(encodeStateID(map[string]string{ + "org_id": orgID, + "api_key_id": apiKeyID, + "entry": entry, + })) + + return []*schema.ResourceData{d}, nil +} + +func flattenAccessListAPIKeys(ctx context.Context, conn *matlas.Client, orgID string, accessListAPIKeys []*matlas.AccessListAPIKey) []map[string]interface{} { + var results []map[string]interface{} + + if len(accessListAPIKeys) > 0 { + results = make([]map[string]interface{}, len(accessListAPIKeys)) + for k, accessListAPIKey := range accessListAPIKeys { + results[k] = map[string]interface{}{ + "ip_address": accessListAPIKey.IPAddress, + "cidr_block": accessListAPIKey.CidrBlock, + "created": accessListAPIKey.Created, + "access_count": accessListAPIKey.Count, + "last_used": accessListAPIKey.LastUsed, + "last_used_address": accessListAPIKey.LastUsedAddress, + } + } + } + return results +} diff --git a/mongodbatlas/resource_mongodbatlas_access_list_api_key_test.go 
b/mongodbatlas/resource_mongodbatlas_access_list_api_key_test.go new file mode 100644 index 0000000000..632ef20e38 --- /dev/null +++ b/mongodbatlas/resource_mongodbatlas_access_list_api_key_test.go @@ -0,0 +1,201 @@ +package mongodbatlas + +import ( + "context" + "fmt" + "os" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" +) + +func TestAccProjectRSAccesslistAPIKey_SettingIPAddress(t *testing.T) { + resourceName := "mongodbatlas_access_list_api_key.test" + orgID := os.Getenv("MONGODB_ATLAS_ORG_ID") + ipAddress := fmt.Sprintf("179.154.226.%d", acctest.RandIntRange(0, 255)) + description := fmt.Sprintf("test-acc-access_list-api_key-%s", acctest.RandString(5)) + updatedIPAddress := fmt.Sprintf("179.154.228.%d", acctest.RandIntRange(0, 255)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + ProviderFactories: testAccProviderFactories, + CheckDestroy: testAccCheckMongoDBAtlasAccessListAPIKeyDestroy, + Steps: []resource.TestStep{ + { + Config: testAccMongoDBAtlasAccessListAPIKeyConfigSettingIPAddress(orgID, description, ipAddress), + Check: resource.ComposeTestCheckFunc( + testAccCheckMongoDBAtlasAccessListAPIKeyExists(resourceName), + resource.TestCheckResourceAttrSet(resourceName, "org_id"), + resource.TestCheckResourceAttrSet(resourceName, "ip_address"), + + resource.TestCheckResourceAttr(resourceName, "org_id", orgID), + resource.TestCheckResourceAttr(resourceName, "ip_address", ipAddress), + ), + }, + { + Config: testAccMongoDBAtlasAccessListAPIKeyConfigSettingIPAddress(orgID, description, updatedIPAddress), + Check: resource.ComposeTestCheckFunc( + testAccCheckMongoDBAtlasAccessListAPIKeyExists(resourceName), + resource.TestCheckResourceAttrSet(resourceName, "org_id"), + resource.TestCheckResourceAttrSet(resourceName, "ip_address"), + + 
resource.TestCheckResourceAttr(resourceName, "org_id", orgID), + resource.TestCheckResourceAttr(resourceName, "ip_address", updatedIPAddress), + ), + }, + }, + }) +} + +func TestAccProjectRSAccessListAPIKey_SettingCIDRBlock(t *testing.T) { + resourceName := "mongodbatlas_access_list_api_key.test" + orgID := os.Getenv("MONGODB_ATLAS_ORG_ID") + cidrBlock := fmt.Sprintf("179.154.226.%d/32", acctest.RandIntRange(0, 255)) + description := fmt.Sprintf("test-acc-access_list-api_key-%s", acctest.RandString(5)) + updatedCIDRBlock := fmt.Sprintf("179.154.228.%d/32", acctest.RandIntRange(0, 255)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + ProviderFactories: testAccProviderFactories, + CheckDestroy: testAccCheckMongoDBAtlasAccessListAPIKeyDestroy, + Steps: []resource.TestStep{ + { + Config: testAccMongoDBAtlasAccessListAPIKeyConfigSettingCIDRBlock(orgID, description, cidrBlock), + Check: resource.ComposeTestCheckFunc( + testAccCheckMongoDBAtlasAccessListAPIKeyExists(resourceName), + resource.TestCheckResourceAttrSet(resourceName, "org_id"), + resource.TestCheckResourceAttrSet(resourceName, "cidr_block"), + + resource.TestCheckResourceAttr(resourceName, "org_id", orgID), + resource.TestCheckResourceAttr(resourceName, "cidr_block", cidrBlock), + ), + }, + { + Config: testAccMongoDBAtlasAccessListAPIKeyConfigSettingCIDRBlock(orgID, description, updatedCIDRBlock), + Check: resource.ComposeTestCheckFunc( + testAccCheckMongoDBAtlasAccessListAPIKeyExists(resourceName), + resource.TestCheckResourceAttrSet(resourceName, "org_id"), + resource.TestCheckResourceAttrSet(resourceName, "cidr_block"), + + resource.TestCheckResourceAttr(resourceName, "org_id", orgID), + resource.TestCheckResourceAttr(resourceName, "cidr_block", updatedCIDRBlock), + ), + }, + }, + }) +} + +func TestAccProjectRSAccessListAPIKey_importBasic(t *testing.T) { + orgID := os.Getenv("MONGODB_ATLAS_ORG_ID") + ipAddress := fmt.Sprintf("179.154.226.%d", acctest.RandIntRange(0, 
255)) + resourceName := "mongodbatlas_access_list_api_key.test" + description := fmt.Sprintf("test-acc-access_list-api_key-%s", acctest.RandString(5)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + ProviderFactories: testAccProviderFactories, + CheckDestroy: testAccCheckMongoDBAtlasAccessListAPIKeyDestroy, + Steps: []resource.TestStep{ + { + Config: testAccMongoDBAtlasAccessListAPIKeyConfigSettingIPAddress(orgID, description, ipAddress), + }, + { + ResourceName: resourceName, + ImportStateIdFunc: testAccCheckMongoDBAtlasAccessListAPIKeyImportStateIDFunc(resourceName), + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCheckMongoDBAtlasAccessListAPIKeyExists(resourceName string) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := testAccProvider.Meta().(*MongoDBClient).Atlas + + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("not found: %s", resourceName) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("no ID is set") + } + + ids := decodeStateID(rs.Primary.ID) + + _, _, err := conn.AccessListAPIKeys.Get(context.Background(), ids["org_id"], ids["api_key_id"], ids["entry"]) + if err != nil { + return fmt.Errorf("access list API key (%s) does not exist", ids["api_key_id"]) + } + + return nil + } +} + +func testAccCheckMongoDBAtlasAccessListAPIKeyDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*MongoDBClient).Atlas + + for _, rs := range s.RootModule().Resources { + if rs.Type != "mongodbatlas_access_list_api_key" { + continue + } + + ids := decodeStateID(rs.Primary.ID) + + _, _, err := conn.AccessListAPIKeys.Get(context.Background(), ids["org_id"], ids["api_key_id"], ids["entry"]) + if err == nil { + return fmt.Errorf("access list API key (%s) still exists", ids["api_key_id"]) + } + } + + return nil +} + +func testAccCheckMongoDBAtlasAccessListAPIKeyImportStateIDFunc(resourceName string) resource.ImportStateIdFunc 
{ + return func(s *terraform.State) (string, error) { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return "", fmt.Errorf("not found: %s", resourceName) + } + + ids := decodeStateID(rs.Primary.ID) + + return fmt.Sprintf("%s-%s-%s", ids["org_id"], ids["api_key_id"], ids["entry"]), nil + } +} + +func testAccMongoDBAtlasAccessListAPIKeyConfigSettingIPAddress(orgID, description, ipAddress string) string { + return fmt.Sprintf(` + + resource "mongodbatlas_api_key" "test" { + org_id = %[1]q + description = %[2]q + role_names = ["ORG_MEMBER","ORG_BILLING_ADMIN"] + } + + resource "mongodbatlas_access_list_api_key" "test" { + org_id = %[1]q + ip_address = %[3]q + api_key_id = mongodbatlas_api_key.test.api_key_id + } + `, orgID, description, ipAddress) +} +func testAccMongoDBAtlasAccessListAPIKeyConfigSettingCIDRBlock(orgID, description, cidrBlock string) string { + return fmt.Sprintf(` + + resource "mongodbatlas_api_key" "test" { + org_id = %[1]q + description = %[2]q + role_names = ["ORG_MEMBER","ORG_BILLING_ADMIN"] + } + + resource "mongodbatlas_access_list_api_key" "test" { + org_id = %[1]q + api_key_id = mongodbatlas_api_key.test.api_key_id + cidr_block = %[3]q + } + `, orgID, description, cidrBlock) +} diff --git a/mongodbatlas/resource_mongodbatlas_advanced_cluster.go b/mongodbatlas/resource_mongodbatlas_advanced_cluster.go index cc1480871c..4c5d73f61d 100644 --- a/mongodbatlas/resource_mongodbatlas_advanced_cluster.go +++ b/mongodbatlas/resource_mongodbatlas_advanced_cluster.go @@ -19,6 +19,7 @@ import ( "github.com/mwielbut/pointy" "github.com/spf13/cast" matlas "go.mongodb.org/atlas/mongodbatlas" + "golang.org/x/exp/slices" ) type acCtxKey string @@ -45,6 +46,14 @@ func resourceMongoDBAtlasAdvancedCluster() *schema.Resource { Importer: &schema.ResourceImporter{ StateContext: resourceMongoDBAtlasAdvancedClusterImportState, }, + SchemaVersion: 1, + StateUpgraders: []schema.StateUpgrader{ + { + Type: 
resourceMongoDBAtlasAdvancedClusterResourceV0().CoreConfigSchema().ImpliedType(), + Upgrade: resourceMongoDBAtlasAdvancedClusterStateUpgradeV0, + Version: 0, + }, + }, Schema: map[string]*schema.Schema{ "project_id": { Type: schema.TypeString, @@ -61,10 +70,33 @@ func resourceMongoDBAtlasAdvancedCluster() *schema.Resource { Computed: true, }, "bi_connector": { - Type: schema.TypeList, - Optional: true, - Computed: true, - MaxItems: 1, + Type: schema.TypeList, + Optional: true, + ConflictsWith: []string{"bi_connector_config"}, + Deprecated: "use bi_connector_config instead", + Computed: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Optional: true, + Computed: true, + }, + "read_preference": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + }, + }, + }, + "bi_connector_config": { + Type: schema.TypeList, + Optional: true, + ConflictsWith: []string{"bi_connector"}, + Computed: true, + MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "enabled": { @@ -151,7 +183,7 @@ func resourceMongoDBAtlasAdvancedCluster() *schema.Resource { Computed: true, }, "replication_specs": { - Type: schema.TypeSet, + Type: schema.TypeList, Required: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ @@ -166,7 +198,7 @@ func resourceMongoDBAtlasAdvancedCluster() *schema.Resource { ValidateFunc: validation.IntBetween(1, 50), }, "region_configs": { - Type: schema.TypeSet, + Type: schema.TypeList, Required: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ @@ -206,6 +238,41 @@ func resourceMongoDBAtlasAdvancedCluster() *schema.Resource { }, }, }, + "analytics_auto_scaling": { + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "disk_gb_enabled": { + Type: schema.TypeBool, + Optional: true, + Computed: true, + }, + "compute_enabled": { + Type: 
schema.TypeBool, + Optional: true, + Computed: true, + }, + "compute_scale_down_enabled": { + Type: schema.TypeBool, + Optional: true, + Computed: true, + }, + "compute_min_instance_size": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "compute_max_instance_size": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + }, + }, + }, "backing_provider_name": { Type: schema.TypeString, Optional: true, @@ -241,7 +308,7 @@ func resourceMongoDBAtlasAdvancedCluster() *schema.Resource { }, }, }, - Set: replicationSpecsHashSet, + // Set: replicationSpecsHashSet, }, "root_cert_type": { Type: schema.TypeString, @@ -310,7 +377,7 @@ func resourceMongoDBAtlasAdvancedClusterCreate(ctx context.Context, d *schema.Re request := &matlas.AdvancedCluster{ Name: d.Get("name").(string), ClusterType: cast.ToString(d.Get("cluster_type")), - ReplicationSpecs: expandAdvancedReplicationSpecs(d.Get("replication_specs").(*schema.Set).List()), + ReplicationSpecs: expandAdvancedReplicationSpecs(d.Get("replication_specs").([]interface{})), } if v, ok := d.GetOk("backup_enabled"); ok { @@ -323,6 +390,13 @@ func resourceMongoDBAtlasAdvancedClusterCreate(ctx context.Context, d *schema.Re } request.BiConnector = biConnector } + if _, ok := d.GetOk("bi_connector_config"); ok { + biConnector, err := expandBiConnectorConfig(d) + if err != nil { + return diag.FromErr(fmt.Errorf(errorClusterAdvancedCreate, err)) + } + request.BiConnector = biConnector + } if v, ok := d.GetOk("disk_size_gb"); ok { request.DiskSizeGB = pointy.Float64(v.(float64)) } @@ -448,6 +522,10 @@ func resourceMongoDBAtlasAdvancedClusterRead(ctx context.Context, d *schema.Reso return diag.FromErr(fmt.Errorf(errorClusterAdvancedSetting, "bi_connector", clusterName, err)) } + if err := d.Set("bi_connector_config", flattenBiConnectorConfig(cluster.BiConnector)); err != nil { + return diag.FromErr(fmt.Errorf(errorClusterAdvancedSetting, "bi_connector_config", clusterName, err)) + } + if err := 
d.Set("cluster_type", cluster.ClusterType); err != nil { return diag.FromErr(fmt.Errorf(errorClusterAdvancedSetting, "cluster_type", clusterName, err)) } @@ -492,7 +570,7 @@ func resourceMongoDBAtlasAdvancedClusterRead(ctx context.Context, d *schema.Reso return diag.FromErr(fmt.Errorf(errorClusterAdvancedSetting, "pit_enabled", clusterName, err)) } - replicationSpecs, err := flattenAdvancedReplicationSpecs(ctx, cluster.ReplicationSpecs, d.Get("replication_specs").(*schema.Set).List(), d, conn) + replicationSpecs, err := flattenAdvancedReplicationSpecs(ctx, cluster.ReplicationSpecs, d.Get("replication_specs").([]interface{}), d, conn) if err != nil { return diag.FromErr(fmt.Errorf(errorClusterAdvancedSetting, "replication_specs", clusterName, err)) } @@ -581,6 +659,10 @@ func resourceMongoDBAtlasAdvancedClusterUpdate(ctx context.Context, d *schema.Re cluster.BackupEnabled = pointy.Bool(d.Get("backup_enabled").(bool)) } + if d.HasChange("bi_connector_config") { + cluster.BiConnector, _ = expandBiConnectorConfig(d) + } + if d.HasChange("bi_connector") { cluster.BiConnector, _ = expandBiConnectorConfig(d) } @@ -614,7 +696,7 @@ func resourceMongoDBAtlasAdvancedClusterUpdate(ctx context.Context, d *schema.Re } if d.HasChange("replication_specs") { - cluster.ReplicationSpecs = expandAdvancedReplicationSpecs(d.Get("replication_specs").(*schema.Set).List()) + cluster.ReplicationSpecs = expandAdvancedReplicationSpecs(d.Get("replication_specs").([]interface{})) } if d.HasChange("root_cert_type") { @@ -773,7 +855,7 @@ func expandAdvancedReplicationSpec(tfMap map[string]interface{}) *matlas.Advance apiObject := &matlas.AdvancedReplicationSpec{ NumShards: tfMap["num_shards"].(int), ZoneName: tfMap["zone_name"].(string), - RegionConfigs: expandRegionConfigs(tfMap["region_configs"].(*schema.Set).List()), + RegionConfigs: expandRegionConfigs(tfMap["region_configs"].([]interface{})), } return apiObject @@ -825,6 +907,9 @@ func expandRegionConfig(tfMap map[string]interface{}) 
*matlas.AdvancedRegionConf if v, ok := tfMap["auto_scaling"]; ok && len(v.([]interface{})) > 0 { apiObject.AutoScaling = expandRegionConfigAutoScaling(v.([]interface{})) } + if v, ok := tfMap["analytics_auto_scaling"]; ok && len(v.([]interface{})) > 0 { + apiObject.AnalyticsAutoScaling = expandRegionConfigAutoScaling(v.([]interface{})) + } if v, ok := tfMap["backing_provider_name"]; ok { apiObject.BackingProviderName = v.(string) } @@ -924,7 +1009,7 @@ func flattenAdvancedReplicationSpec(ctx context.Context, apiObject *matlas.Advan tfMap["num_shards"] = apiObject.NumShards tfMap["id"] = apiObject.ID if tfMapObject != nil { - object, containerIds, err := flattenAdvancedReplicationSpecRegionConfigs(ctx, apiObject.RegionConfigs, tfMapObject["region_configs"].(*schema.Set).List(), d, conn) + object, containerIds, err := flattenAdvancedReplicationSpecRegionConfigs(ctx, apiObject.RegionConfigs, tfMapObject["region_configs"].([]interface{}), d, conn) if err != nil { return nil, err } @@ -943,30 +1028,75 @@ func flattenAdvancedReplicationSpec(ctx context.Context, apiObject *matlas.Advan return tfMap, nil } -func flattenAdvancedReplicationSpecs(ctx context.Context, apiObjects []*matlas.AdvancedReplicationSpec, tfMapObjects []interface{}, +func doesAdvancedReplicationSpecMatchAPI(tfObject map[string]interface{}, apiObject *matlas.AdvancedReplicationSpec) bool { + return tfObject["id"] == apiObject.ID || (tfObject["id"] == nil && tfObject["zone_name"] == apiObject.ZoneName) +} + +func flattenAdvancedReplicationSpecs(ctx context.Context, rawAPIObjects []*matlas.AdvancedReplicationSpec, tfMapObjects []interface{}, d *schema.ResourceData, conn *matlas.Client) ([]map[string]interface{}, error) { + var apiObjects []*matlas.AdvancedReplicationSpec + + for _, advancedReplicationSpec := range rawAPIObjects { + if advancedReplicationSpec != nil { + apiObjects = append(apiObjects, advancedReplicationSpec) + } + } + if len(apiObjects) == 0 { return nil, nil } - var tfList 
[]map[string]interface{} + tfList := make([]map[string]interface{}, len(apiObjects)) + wasAPIObjectUsed := make([]bool, len(apiObjects)) - for i, apiObject := range apiObjects { - if apiObject == nil { - continue + for i := 0; i < len(tfList); i++ { + var tfMapObject map[string]interface{} + + if len(tfMapObjects) > i { + tfMapObject = tfMapObjects[i].(map[string]interface{}) } + for j := 0; j < len(apiObjects); j++ { + if wasAPIObjectUsed[j] { + continue + } + + if !doesAdvancedReplicationSpecMatchAPI(tfMapObject, apiObjects[j]) { + continue + } + + advancedReplicationSpec, err := flattenAdvancedReplicationSpec(ctx, apiObjects[j], tfMapObject, d, conn) + + if err != nil { + return nil, err + } + + tfList[i] = advancedReplicationSpec + wasAPIObjectUsed[j] = true + break + } + } + + for i, tfo := range tfList { var tfMapObject map[string]interface{} + if tfo != nil { + continue + } + if len(tfMapObjects) > i { tfMapObject = tfMapObjects[i].(map[string]interface{}) } - advancedReplicationSpec, err := flattenAdvancedReplicationSpec(ctx, apiObject, tfMapObject, d, conn) + j := slices.IndexFunc(wasAPIObjectUsed, func(isUsed bool) bool { return !isUsed }) + advancedReplicationSpec, err := flattenAdvancedReplicationSpec(ctx, apiObjects[j], tfMapObject, d, conn) + if err != nil { return nil, err } - tfList = append(tfList, advancedReplicationSpec) + + tfList[i] = advancedReplicationSpec + wasAPIObjectUsed[j] = true } return tfList, nil @@ -991,11 +1121,15 @@ func flattenAdvancedReplicationSpecRegionConfig(apiObject *matlas.AdvancedRegion if v, ok := tfMapObject["auto_scaling"]; ok && len(v.([]interface{})) > 0 { tfMap["auto_scaling"] = flattenAdvancedReplicationSpecAutoScaling(apiObject.AutoScaling) } + if v, ok := tfMapObject["analytics_auto_scaling"]; ok && len(v.([]interface{})) > 0 { + tfMap["analytics_auto_scaling"] = flattenAdvancedReplicationSpecAutoScaling(apiObject.AnalyticsAutoScaling) + } } else { tfMap["analytics_specs"] = 
flattenAdvancedReplicationSpecRegionConfigSpec(apiObject.AnalyticsSpecs, apiObject.ProviderName, nil) tfMap["electable_specs"] = flattenAdvancedReplicationSpecRegionConfigSpec(apiObject.ElectableSpecs, apiObject.ProviderName, nil) tfMap["read_only_specs"] = flattenAdvancedReplicationSpecRegionConfigSpec(apiObject.ReadOnlySpecs, apiObject.ProviderName, nil) tfMap["auto_scaling"] = flattenAdvancedReplicationSpecAutoScaling(apiObject.AutoScaling) + tfMap["analytics_auto_scaling"] = flattenAdvancedReplicationSpecAutoScaling(apiObject.AnalyticsAutoScaling) } tfMap["region_name"] = apiObject.RegionName @@ -1171,7 +1305,7 @@ func replicationSpecsHashSet(v interface{}) int { var buf bytes.Buffer m := v.(map[string]interface{}) buf.WriteString(fmt.Sprintf("%d", m["num_shards"].(int))) - buf.WriteString(fmt.Sprintf("%+v", m["region_configs"].(*schema.Set))) + buf.WriteString(fmt.Sprintf("%+v", m["region_configs"].([]interface{}))) buf.WriteString(m["zone_name"].(string)) return schema.HashString(buf.String()) } @@ -1182,8 +1316,8 @@ func getUpgradeRequest(d *schema.ResourceData) *matlas.Cluster { } cs, us := d.GetChange("replication_specs") - currentSpecs := expandAdvancedReplicationSpecs(cs.(*schema.Set).List()) - updatedSpecs := expandAdvancedReplicationSpecs(us.(*schema.Set).List()) + currentSpecs := expandAdvancedReplicationSpecs(cs.([]interface{})) + updatedSpecs := expandAdvancedReplicationSpecs(us.([]interface{})) if len(currentSpecs) != 1 || len(updatedSpecs) != 1 || len(currentSpecs[0].RegionConfigs) != 1 || len(updatedSpecs[0].RegionConfigs) != 1 { return nil diff --git a/mongodbatlas/resource_mongodbatlas_advanced_cluster_migrate.go b/mongodbatlas/resource_mongodbatlas_advanced_cluster_migrate.go new file mode 100644 index 0000000000..3480666f61 --- /dev/null +++ b/mongodbatlas/resource_mongodbatlas_advanced_cluster_migrate.go @@ -0,0 +1,250 @@ +package mongodbatlas + +import ( + "bytes" + "context" + "time" + + 
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" +) + +func resourceMongoDBAtlasAdvancedClusterResourceV0() *schema.Resource { + return &schema.Resource{ + Schema: map[string]*schema.Schema{ + "project_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "cluster_id": { + Type: schema.TypeString, + Computed: true, + }, + "backup_enabled": { + Type: schema.TypeBool, + Optional: true, + Computed: true, + }, + "bi_connector": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Optional: true, + Computed: true, + }, + "read_preference": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + }, + }, + }, + "cluster_type": { + Type: schema.TypeString, + Required: true, + }, + "connection_strings": clusterConnectionStringsSchema(), + "create_date": { + Type: schema.TypeString, + Computed: true, + }, + "disk_size_gb": { + Type: schema.TypeFloat, + Optional: true, + Computed: true, + }, + "encryption_at_rest_provider": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "labels": { + Type: schema.TypeSet, + Optional: true, + Set: func(v interface{}) int { + var buf bytes.Buffer + m := v.(map[string]interface{}) + buf.WriteString(m["key"].(string)) + buf.WriteString(m["value"].(string)) + return HashCodeString(buf.String()) + }, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "key": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "value": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + }, + }, + }, + "mongo_db_major_version": { + Type: schema.TypeString, + Optional: true, + Computed: true, + StateFunc: formatMongoDBMajorVersion, + }, + "mongo_db_version": { + Type: schema.TypeString, + Computed: true, + }, + "name": { + Type: 
schema.TypeString, + Required: true, + ForceNew: true, + }, + "paused": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "pit_enabled": { + Type: schema.TypeBool, + Optional: true, + Computed: true, + }, + "replication_specs": { + Type: schema.TypeSet, + Required: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "id": { + Type: schema.TypeString, + Computed: true, + }, + "num_shards": { + Type: schema.TypeInt, + Optional: true, + Default: 1, + ValidateFunc: validation.IntBetween(1, 50), + }, + "region_configs": { + Type: schema.TypeSet, + Required: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "analytics_specs": advancedClusterRegionConfigsSpecsSchema(), + "auto_scaling": { + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "disk_gb_enabled": { + Type: schema.TypeBool, + Optional: true, + Computed: true, + }, + "compute_enabled": { + Type: schema.TypeBool, + Optional: true, + Computed: true, + }, + "compute_scale_down_enabled": { + Type: schema.TypeBool, + Optional: true, + Computed: true, + }, + "compute_min_instance_size": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "compute_max_instance_size": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + }, + }, + }, + "backing_provider_name": { + Type: schema.TypeString, + Optional: true, + }, + "electable_specs": advancedClusterRegionConfigsSpecsSchema(), + "priority": { + Type: schema.TypeInt, + Required: true, + }, + "provider_name": { + Type: schema.TypeString, + Required: true, + }, + "read_only_specs": advancedClusterRegionConfigsSpecsSchema(), + "region_name": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + "container_id": { + Type: schema.TypeMap, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + Computed: true, + }, + "zone_name": { + Type: schema.TypeString, + Optional: true, + 
Default: "ZoneName managed by Terraform", + }, + }, + }, + Set: replicationSpecsHashSet, + }, + "root_cert_type": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "state_name": { + Type: schema.TypeString, + Computed: true, + }, + "termination_protection_enabled": { + Type: schema.TypeBool, + Optional: true, + Computed: true, + }, + "version_release_system": { + Type: schema.TypeString, + Optional: true, + Default: "LTS", + ValidateFunc: validation.StringInSlice([]string{"LTS", "CONTINUOUS"}, false), + }, + "advanced_configuration": clusterAdvancedConfigurationSchema(), + }, + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(3 * time.Hour), + Update: schema.DefaultTimeout(3 * time.Hour), + Delete: schema.DefaultTimeout(3 * time.Hour), + }, + } +} + +func resourceMongoDBAtlasAdvancedClusterStateUpgradeV0(ctx context.Context, rawState map[string]interface{}, meta interface{}) (map[string]interface{}, error) { + return migrateBIConnectorConfig(rawState), nil +} + +func migrateBIConnectorConfig(rawState map[string]interface{}) map[string]interface{} { + rawState["bi_connector_config"] = rawState["bi_connector"] + rawState["bi_connector"] = nil + return rawState +} diff --git a/mongodbatlas/resource_mongodbatlas_advanced_cluster_migration_test.go b/mongodbatlas/resource_mongodbatlas_advanced_cluster_migration_test.go new file mode 100644 index 0000000000..dfd4082294 --- /dev/null +++ b/mongodbatlas/resource_mongodbatlas_advanced_cluster_migration_test.go @@ -0,0 +1,132 @@ +package mongodbatlas + +import ( + "fmt" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" +) + +func TestAccClusterRSAdvancedClusterMigrateState_empty_advancedConfig(t *testing.T) { + v0State := map[string]interface{}{ + "project_id": "test-id", + "name": "test-cluster", + "cluster_type": "REPLICASET", + "replication_specs": []interface{}{ + map[string]interface{}{ + "region_configs": []interface{}{ + map[string]interface{}{ + 
"electable_specs": []interface{}{ + map[string]interface{}{ + "instance_size": "M30", + "node_count": 3, + }, + }, + "provider_name": "AWS", + "region_name": "US_EAST_1", + "priority": 7, + }, + }, + }, + }, + "bi_connector": []interface{}{ + map[string]interface{}{ + "enabled": 1, + "read_preference": "secondary", + }, + }, + } + + v0Config := terraform.NewResourceConfigRaw(v0State) + diags := resourceMongoDBAtlasAdvancedClusterResourceV0().Validate(v0Config) + + if len(diags) > 0 { + t.Error("test precondition failed - invalid mongodb cluster v0 config") + + return + } + + // test migrate function + v1State := migrateBIConnectorConfig(v0State) + + v1Config := terraform.NewResourceConfigRaw(v1State) + diags = resourceMongoDBAtlasAdvancedCluster().Validate(v1Config) + if len(diags) > 0 { + fmt.Println(diags) + t.Error("migrated cluster advanced config is invalid") + + return + } +} + +func TestAccClusterRSAdvancedClusterV0StateUpgrade_ReplicationSpecs(t *testing.T) { + v0State := map[string]interface{}{ + "project_id": "test-id", + "name": "test-cluster", + "cluster_type": "REPLICASET", + "backup_enabled": true, + "disk_size_gb": 256, + "replication_specs": []interface{}{ + map[string]interface{}{ + "zone_name": "Test Zone", + "region_configs": []interface{}{ + map[string]interface{}{ + "priority": 7, + "provider_name": "AWS", + "region_name": "US_EAST_1", + "electable_specs": []interface{}{ + map[string]interface{}{ + "instance_size": "M30", + "node_count": 3, + }, + }, + "read_only_specs": []interface{}{ + map[string]interface{}{ + "disk_iops": 0, + "instance_size": "M30", + "node_count": 0, + }, + }, + "auto_scaling": []interface{}{ + map[string]interface{}{ + "compute_enabled": true, + "compute_max_instance_size": "M60", + "compute_min_instance_size": "M30", + "compute_scale_down_enabled": true, + "disk_gb_enabled": false, + }, + }, + }, + }, + }, + }, + } + + v0Config := terraform.NewResourceConfigRaw(v0State) + diags := 
resourceMongoDBAtlasAdvancedClusterResourceV0().Validate(v0Config) + + if len(diags) > 0 { + fmt.Println(diags) + t.Error("test precondition failed - invalid mongodb cluster v0 config") + + return + } + + // test migrate function + v1State := migrateBIConnectorConfig(v0State) + + v1Config := terraform.NewResourceConfigRaw(v1State) + diags = resourceMongoDBAtlasAdvancedCluster().Validate(v1Config) + if len(diags) > 0 { + fmt.Println(diags) + t.Error("migrated advanced cluster replication_specs invalid") + + return + } + + if len(v1State["replication_specs"].([]interface{})) != len(v0State["replication_specs"].([]interface{})) { + t.Error("migrated replication specs did not contain the same number of elements") + + return + } +} diff --git a/mongodbatlas/resource_mongodbatlas_advanced_cluster_test.go b/mongodbatlas/resource_mongodbatlas_advanced_cluster_test.go index 6e5b06ff22..5eb25d5e00 100644 --- a/mongodbatlas/resource_mongodbatlas_advanced_cluster_test.go +++ b/mongodbatlas/resource_mongodbatlas_advanced_cluster_test.go @@ -433,6 +433,50 @@ func TestAccClusterRSAdvancedClusterConfig_ReplicationSpecsAutoScaling(t *testin }) } +func TestAccClusterRSAdvancedClusterConfig_ReplicationSpecsAnalyticsAutoScaling(t *testing.T) { + var ( + cluster matlas.AdvancedCluster + resourceName = "mongodbatlas_advanced_cluster.test" + projectID = os.Getenv("MONGODB_ATLAS_PROJECT_ID") + rName = acctest.RandomWithPrefix("test-acc") + rNameUpdated = acctest.RandomWithPrefix("test-acc") + autoScaling = &matlas.AutoScaling{ + Compute: &matlas.Compute{Enabled: pointy.Bool(false), MaxInstanceSize: ""}, + DiskGBEnabled: pointy.Bool(true), + } + autoScalingUpdated = &matlas.AutoScaling{ + Compute: &matlas.Compute{Enabled: pointy.Bool(true), MaxInstanceSize: "M20"}, + DiskGBEnabled: pointy.Bool(true), + } + ) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + ProviderFactories: testAccProviderFactories, + CheckDestroy: 
testAccCheckMongoDBAtlasAdvancedClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccMongoDBAtlasAdvancedClusterConfigReplicationSpecsAnalyticsAutoScaling(projectID, rName, autoScaling), + Check: resource.ComposeTestCheckFunc( + testAccCheckMongoDBAtlasAdvancedClusterExists(resourceName, &cluster), + testAccCheckMongoDBAtlasAdvancedClusterAttributes(&cluster, rName), + resource.TestCheckResourceAttrSet(resourceName, "replication_specs.0.region_configs.#"), + testAccCheckMongoDBAtlasAdvancedClusterAnalyticsScaling(&cluster, *autoScaling.Compute.Enabled), + ), + }, + { + Config: testAccMongoDBAtlasAdvancedClusterConfigReplicationSpecsAnalyticsAutoScaling(projectID, rNameUpdated, autoScalingUpdated), + Check: resource.ComposeTestCheckFunc( + testAccCheckMongoDBAtlasAdvancedClusterExists(resourceName, &cluster), + testAccCheckMongoDBAtlasAdvancedClusterAttributes(&cluster, rNameUpdated), + resource.TestCheckResourceAttrSet(resourceName, "replication_specs.0.region_configs.#"), + testAccCheckMongoDBAtlasAdvancedClusterAnalyticsScaling(&cluster, *autoScalingUpdated.Compute.Enabled), + ), + }, + }, + }) +} + func testAccCheckMongoDBAtlasAdvancedClusterExists(resourceName string, cluster *matlas.AdvancedCluster) resource.TestCheckFunc { return func(s *terraform.State) error { conn := testAccProvider.Meta().(*MongoDBClient).Atlas @@ -479,6 +523,16 @@ func testAccCheckMongoDBAtlasAdvancedClusterScaling(cluster *matlas.AdvancedClus } } + +func testAccCheckMongoDBAtlasAdvancedClusterAnalyticsScaling(cluster *matlas.AdvancedCluster, computeEnabled bool) resource.TestCheckFunc { + return func(s *terraform.State) error { + if *cluster.ReplicationSpecs[0].RegionConfigs[0].AnalyticsAutoScaling.Compute.Enabled != computeEnabled { + return fmt.Errorf("compute_enabled: %t", *cluster.ReplicationSpecs[0].RegionConfigs[0].AnalyticsAutoScaling.Compute.Enabled) + } + + return nil + } +} + func testAccCheckMongoDBAtlasAdvancedClusterDestroy(s *terraform.State) error { + conn :=
testAccProvider.Meta().(*MongoDBClient).Atlas @@ -755,3 +809,37 @@ resource "mongodbatlas_advanced_cluster" "test" { `, projectID, name, *p.Compute.Enabled, *p.DiskGBEnabled, p.Compute.MaxInstanceSize) } + +func testAccMongoDBAtlasAdvancedClusterConfigReplicationSpecsAnalyticsAutoScaling(projectID, name string, p *matlas.AutoScaling) string { + return fmt.Sprintf(` +resource "mongodbatlas_advanced_cluster" "test" { + project_id = %[1]q + name = %[2]q + cluster_type = "REPLICASET" + + replication_specs { + region_configs { + electable_specs { + instance_size = "M10" + node_count = 3 + } + analytics_specs { + instance_size = "M10" + node_count = 1 + } + analytics_auto_scaling { + compute_enabled = %[3]t + disk_gb_enabled = %[4]t + compute_max_instance_size = %[5]q + } + provider_name = "AWS" + priority = 7 + region_name = "US_EAST_1" + } + } + + +} + + `, projectID, name, *p.Compute.Enabled, *p.DiskGBEnabled, p.Compute.MaxInstanceSize) +} diff --git a/mongodbatlas/resource_mongodbatlas_alert_configuration.go b/mongodbatlas/resource_mongodbatlas_alert_configuration.go index c496997d5e..c6a5dc0a57 100644 --- a/mongodbatlas/resource_mongodbatlas_alert_configuration.go +++ b/mongodbatlas/resource_mongodbatlas_alert_configuration.go @@ -689,7 +689,17 @@ func flattenAlertConfigurationThresholdConfig(m *matlas.Threshold) []interface{} } func expandAlertConfigurationNotification(d *schema.ResourceData) ([]matlas.Notification, error) { - notifications := make([]matlas.Notification, len(d.Get("notification").([]interface{}))) + notificationCount := 0 + + if notifications, ok := d.GetOk("notification"); ok { + notificationCount = len(notifications.([]interface{})) + } + + notifications := make([]matlas.Notification, notificationCount) + + if notificationCount == 0 { + return notifications, nil + } for i, value := range d.Get("notification").([]interface{}) { v := value.(map[string]interface{}) @@ -746,6 +756,8 @@ func flattenAlertConfigurationNotifications(d 
*schema.ResourceData, notification notifications[i].ServiceKey = notificationsSchema[i].ServiceKey notifications[i].VictorOpsAPIKey = notificationsSchema[i].VictorOpsAPIKey notifications[i].VictorOpsRoutingKey = notificationsSchema[i].VictorOpsRoutingKey + notifications[i].WebhookURL = notificationsSchema[i].WebhookURL + notifications[i].WebhookSecret = notificationsSchema[i].WebhookSecret } } diff --git a/mongodbatlas/resource_mongodbatlas_api_key.go b/mongodbatlas/resource_mongodbatlas_api_key.go new file mode 100644 index 0000000000..d44f59e525 --- /dev/null +++ b/mongodbatlas/resource_mongodbatlas_api_key.go @@ -0,0 +1,217 @@ +package mongodbatlas + +import ( + "context" + "errors" + "fmt" + "net/http" + "strings" + + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + matlas "go.mongodb.org/atlas/mongodbatlas" +) + +func resourceMongoDBAtlasAPIKey() *schema.Resource { + return &schema.Resource{ + CreateContext: resourceMongoDBAtlasAPIKeyCreate, + ReadContext: resourceMongoDBAtlasAPIKeyRead, + UpdateContext: resourceMongoDBAtlasAPIKeyUpdate, + DeleteContext: resourceMongoDBAtlasAPIKeyDelete, + Importer: &schema.ResourceImporter{ + StateContext: resourceMongoDBAtlasAPIKeyImportState, + }, + Schema: map[string]*schema.Schema{ + "org_id": { + Type: schema.TypeString, + Required: true, + }, + "api_key_id": { + Type: schema.TypeString, + Computed: true, + }, + "description": { + Type: schema.TypeString, + Required: true, + }, + "public_key": { + Type: schema.TypeString, + Computed: true, + }, + "private_key": { + Type: schema.TypeString, + Computed: true, + Sensitive: true, + }, + "role_names": { + Type: schema.TypeSet, + Required: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + }, + } +} + +func resourceMongoDBAtlasAPIKeyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*MongoDBClient).Atlas + orgID := d.Get("org_id").(string) 
+ createRequest := new(matlas.APIKeyInput) + + createRequest.Desc = d.Get("description").(string) + + createRequest.Roles = expandStringList(d.Get("role_names").(*schema.Set).List()) + + apiKey, resp, err := conn.APIKeys.Create(ctx, orgID, createRequest) + if err != nil { + if resp != nil && resp.StatusCode == http.StatusNotFound { + d.SetId("") + return nil + } + + return diag.FromErr(fmt.Errorf("error creating API key: %s", err)) + } + + if err := d.Set("private_key", apiKey.PrivateKey); err != nil { + return diag.FromErr(fmt.Errorf("error setting `private_key`: %s", err)) + } + + d.SetId(encodeStateID(map[string]string{ + "org_id": orgID, + "api_key_id": apiKey.ID, + })) + + return resourceMongoDBAtlasAPIKeyRead(ctx, d, meta) +} + +func resourceMongoDBAtlasAPIKeyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + // Get client connection. + conn := meta.(*MongoDBClient).Atlas + ids := decodeStateID(d.Id()) + orgID := ids["org_id"] + apiKeyID := ids["api_key_id"] + + apiKey, _, err := conn.APIKeys.Get(ctx, orgID, apiKeyID) + if err != nil { + return diag.FromErr(fmt.Errorf("error getting api key information: %s", err)) + } + + if err := d.Set("api_key_id", apiKey.ID); err != nil { + return diag.FromErr(fmt.Errorf("error setting `api_key_id`: %s", err)) + } + + if err := d.Set("description", apiKey.Desc); err != nil { + return diag.FromErr(fmt.Errorf("error setting `description`: %s", err)) + } + + if err := d.Set("public_key", apiKey.PublicKey); err != nil { + return diag.FromErr(fmt.Errorf("error setting `public_key`: %s", err)) + } + + if err := d.Set("role_names", flattenOrgAPIKeyRoles(orgID, apiKey.Roles)); err != nil { + return diag.FromErr(fmt.Errorf("error setting `role_names`: %s", err)) + } + + return nil +} + +func resourceMongoDBAtlasAPIKeyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*MongoDBClient).Atlas + ids := decodeStateID(d.Id()) + orgID := ids["org_id"] +
apiKeyID := ids["api_key_id"] + + updateRequest := new(matlas.APIKeyInput) + + if d.HasChange("description") || d.HasChange("role_names") { + updateRequest.Desc = d.Get("description").(string) + + updateRequest.Roles = expandStringList(d.Get("role_names").(*schema.Set).List()) + + _, _, err := conn.APIKeys.Update(ctx, orgID, apiKeyID, updateRequest) + if err != nil { + return diag.FromErr(fmt.Errorf("error updating API key: %s", err)) + } + } + + return resourceMongoDBAtlasAPIKeyRead(ctx, d, meta) +} + +func resourceMongoDBAtlasAPIKeyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*MongoDBClient).Atlas + ids := decodeStateID(d.Id()) + orgID := ids["org_id"] + apiKeyID := ids["api_key_id"] + + _, err := conn.APIKeys.Delete(ctx, orgID, apiKeyID) + if err != nil { + return diag.FromErr(fmt.Errorf("error deleting API key: %s", err)) + } + return nil +} + +func resourceMongoDBAtlasAPIKeyImportState(ctx context.Context, d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + conn := meta.(*MongoDBClient).Atlas + + parts := strings.SplitN(d.Id(), "-", 2) + if len(parts) != 2 { + return nil, errors.New("import format error: to import an API key use the format {org_id}-{api_key_id}") + } + + orgID := parts[0] + apiKeyID := parts[1] + + r, _, err := conn.APIKeys.Get(ctx, orgID, apiKeyID) + if err != nil { + return nil, fmt.Errorf("couldn't import API key %s in org %s, error: %s", apiKeyID, orgID, err) + } + + if err := d.Set("description", r.Desc); err != nil { + return nil, fmt.Errorf("error setting `description`: %s", err) + } + + if err := d.Set("public_key", r.PublicKey); err != nil { + return nil, fmt.Errorf("error setting `public_key`: %s", err) + } + + d.SetId(encodeStateID(map[string]string{ + "org_id": orgID, + "api_key_id": r.ID, + })) + + return []*schema.ResourceData{d}, nil +} + +func flattenOrgAPIKeys(ctx context.Context, conn *matlas.Client, orgID string, apiKeys []matlas.APIKey)
[]map[string]interface{} { + var results []map[string]interface{} + + if len(apiKeys) > 0 { + results = make([]map[string]interface{}, len(apiKeys)) + for k, apiKey := range apiKeys { + results[k] = map[string]interface{}{ + "api_key_id": apiKey.ID, + "description": apiKey.Desc, + "public_key": apiKey.PublicKey, + "role_names": flattenOrgAPIKeyRoles(orgID, apiKey.Roles), + } + } + } + return results +} + +func flattenOrgAPIKeyRoles(orgID string, apiKeyRoles []matlas.AtlasRole) []string { + if len(apiKeyRoles) == 0 { + return nil + } + + flattenedOrgRoles := []string{} + + for _, role := range apiKeyRoles { + if strings.HasPrefix(role.RoleName, "ORG_") && role.OrgID == orgID { + flattenedOrgRoles = append(flattenedOrgRoles, role.RoleName) + } + } + + return flattenedOrgRoles +} diff --git a/mongodbatlas/resource_mongodbatlas_api_key_test.go b/mongodbatlas/resource_mongodbatlas_api_key_test.go new file mode 100644 index 0000000000..99de741d1c --- /dev/null +++ b/mongodbatlas/resource_mongodbatlas_api_key_test.go @@ -0,0 +1,144 @@ +package mongodbatlas + +import ( + "context" + "fmt" + "os" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" +) + +func TestAccConfigRSAPIKey_Basic(t *testing.T) { + var ( + resourceName = "mongodbatlas_api_key.test" + orgID = os.Getenv("MONGODB_ATLAS_ORG_ID") + description = fmt.Sprintf("test-acc-api_key-%s", acctest.RandString(5)) + descriptionUpdate = fmt.Sprintf("test-acc-api_key-%s", acctest.RandString(5)) + roleName = "ORG_MEMBER" + roleNameUpdated = "ORG_BILLING_ADMIN" + ) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + ProviderFactories: testAccProviderFactories, + CheckDestroy: testAccCheckMongoDBAtlasAPIKeyDestroy, + Steps: []resource.TestStep{ + { + Config: testAccMongoDBAtlasAPIKeyConfigBasic(orgID, description, roleName), + Check: 
resource.ComposeTestCheckFunc( + testAccCheckMongoDBAtlasAPIKeyExists(resourceName), + resource.TestCheckResourceAttrSet(resourceName, "org_id"), + resource.TestCheckResourceAttrSet(resourceName, "description"), + + resource.TestCheckResourceAttr(resourceName, "org_id", orgID), + resource.TestCheckResourceAttr(resourceName, "description", description), + ), + }, + { + Config: testAccMongoDBAtlasAPIKeyConfigBasic(orgID, descriptionUpdate, roleNameUpdated), + Check: resource.ComposeTestCheckFunc( + testAccCheckMongoDBAtlasAPIKeyExists(resourceName), + resource.TestCheckResourceAttrSet(resourceName, "org_id"), + resource.TestCheckResourceAttrSet(resourceName, "description"), + + resource.TestCheckResourceAttr(resourceName, "org_id", orgID), + resource.TestCheckResourceAttr(resourceName, "description", descriptionUpdate), + ), + }, + }, + }) +} + +func TestAccConfigRSAPIKey_importBasic(t *testing.T) { + var ( + resourceName = "mongodbatlas_api_key.test" + orgID = os.Getenv("MONGODB_ATLAS_ORG_ID") + description = fmt.Sprintf("test-acc-import-api_key-%s", acctest.RandString(5)) + roleName = "ORG_MEMBER" + ) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + ProviderFactories: testAccProviderFactories, + CheckDestroy: testAccCheckMongoDBAtlasAPIKeyDestroy, + Steps: []resource.TestStep{ + { + Config: testAccMongoDBAtlasAPIKeyConfigBasic(orgID, description, roleName), + }, + { + ResourceName: resourceName, + ImportStateIdFunc: testAccCheckMongoDBAtlasAPIKeyImportStateIDFunc(resourceName), + ImportState: true, + ImportStateVerify: false, + }, + }, + }) +} + +func testAccCheckMongoDBAtlasAPIKeyExists(resourceName string) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := testAccProvider.Meta().(*MongoDBClient).Atlas + + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("not found: %s", resourceName) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("no ID is set") + } + + ids := 
decodeStateID(rs.Primary.ID) + + _, _, err := conn.APIKeys.Get(context.Background(), ids["org_id"], ids["api_key_id"]) + if err != nil { + return fmt.Errorf("API Key (%s) does not exist", ids["api_key_id"]) + } + + return nil + } +} + +func testAccCheckMongoDBAtlasAPIKeyDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*MongoDBClient).Atlas + + for _, rs := range s.RootModule().Resources { + if rs.Type != "mongodbatlas_api_key" { + continue + } + + ids := decodeStateID(rs.Primary.ID) + + _, _, err := conn.APIKeys.Get(context.Background(), ids["org_id"], ids["api_key_id"]) + if err == nil { + return fmt.Errorf("API Key (%s) still exists", ids["api_key_id"]) + } + } + + return nil +} + +func testAccCheckMongoDBAtlasAPIKeyImportStateIDFunc(resourceName string) resource.ImportStateIdFunc { + return func(s *terraform.State) (string, error) { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return "", fmt.Errorf("not found: %s", resourceName) + } + + return fmt.Sprintf("%s-%s", rs.Primary.Attributes["org_id"], rs.Primary.Attributes["api_key_id"]), nil + } +} + +func testAccMongoDBAtlasAPIKeyConfigBasic(orgID, description, roleNames string) string { + return fmt.Sprintf(` + resource "mongodbatlas_api_key" "test" { + org_id = "%s" + description = "%s" + + role_names = ["%s"] + } + `, orgID, description, roleNames) +} diff --git a/mongodbatlas/resource_mongodbatlas_cloud_backup_schedule.go b/mongodbatlas/resource_mongodbatlas_cloud_backup_schedule.go index a95e7ce1f8..014f1b0b87 100644 --- a/mongodbatlas/resource_mongodbatlas_cloud_backup_schedule.go +++ b/mongodbatlas/resource_mongodbatlas_cloud_backup_schedule.go @@ -60,6 +60,42 @@ func resourceMongoDBAtlasCloudBackupSchedule() *schema.Resource { Optional: true, Computed: true, }, + "copy_settings": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "cloud_provider": { + Type: schema.TypeString, + Optional: true, + Computed: true,
+ }, + "frequencies": { + Type: schema.TypeSet, + Optional: true, + Computed: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + "region_name": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "replication_spec_id": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "should_copy_oplogs": { + Type: schema.TypeBool, + Optional: true, + Computed: true, + }, + }, + }, + }, "export": { Type: schema.TypeList, MaxItems: 1, @@ -289,10 +325,6 @@ func resourceMongoDBAtlasCloudBackupScheduleRead(ctx context.Context, d *schema. return diag.Errorf(errorSnapshotBackupScheduleSetting, "restore_window_days", clusterName, err) } - if err := d.Set("update_snapshots", backupPolicy.UpdateSnapshots); err != nil { - return diag.Errorf(errorSnapshotBackupScheduleSetting, "update_snapshots", clusterName, err) - } - if err := d.Set("next_snapshot", backupPolicy.NextSnapshot); err != nil { return diag.Errorf(errorSnapshotBackupScheduleSetting, "next_snapshot", clusterName, err) } @@ -329,6 +361,10 @@ func resourceMongoDBAtlasCloudBackupScheduleRead(ctx context.Context, d *schema. 
return diag.Errorf(errorSnapshotBackupScheduleSetting, "policy_item_monthly", clusterName, err) } + if err := d.Set("copy_settings", flattenCopySettings(backupPolicy.CopySettings)); err != nil { + return diag.Errorf(errorSnapshotBackupScheduleSetting, "copy_settings", clusterName, err) + } + return nil } @@ -414,6 +450,11 @@ func cloudBackupScheduleCreateOrUpdate(ctx context.Context, conn *matlas.Client, var policiesItem []matlas.PolicyItem export := matlas.Export{} + req.CopySettings = []matlas.CopySetting{} + if v, ok := d.GetOk("copy_settings"); ok && len(v.([]interface{})) > 0 { + req.CopySettings = expandCopySettings(v.([]interface{})) + } + if v, ok := d.GetOk("policy_item_hourly"); ok { item := v.([]interface{}) itemObj := item[0].(map[string]interface{}) @@ -465,7 +506,7 @@ func cloudBackupScheduleCreateOrUpdate(ctx context.Context, conn *matlas.Client, export.ExportBucketID = itemObj["export_bucket_id"].(string) export.FrequencyType = itemObj["frequency_type"].(string) req.Export = nil - if *req.AutoExportEnabled { + if autoExportEnabled := d.Get("auto_export_enabled"); autoExportEnabled != nil && autoExportEnabled.(bool) { req.Export = &export } } @@ -531,3 +572,49 @@ func flattenExport(roles *matlas.CloudProviderSnapshotBackupPolicy) []map[string } return exportList } + +func flattenCopySettings(copySettingList []matlas.CopySetting) []map[string]interface{} { + copySettings := make([]map[string]interface{}, 0) + for _, v := range copySettingList { + copySettings = append(copySettings, map[string]interface{}{ + "cloud_provider": v.CloudProvider, + "frequencies": v.Frequencies, + "region_name": v.RegionName, + "replication_spec_id": v.ReplicationSpecID, + "should_copy_oplogs": v.ShouldCopyOplogs, + }) + } + return copySettings +} + +func expandCopySetting(tfMap map[string]interface{}) *matlas.CopySetting { + if tfMap == nil { + return nil + } + + copySetting := &matlas.CopySetting{ + CloudProvider: pointy.String(tfMap["cloud_provider"].(string)), + 
Frequencies: expandStringList(tfMap["frequencies"].(*schema.Set).List()), + RegionName: pointy.String(tfMap["region_name"].(string)), + ReplicationSpecID: pointy.String(tfMap["replication_spec_id"].(string)), + } + return copySetting +} + +func expandCopySettings(tfList []interface{}) []matlas.CopySetting { + if len(tfList) == 0 { + return nil + } + + var copySettings []matlas.CopySetting + + for _, tfMapRaw := range tfList { + tfMap, ok := tfMapRaw.(map[string]interface{}) + if !ok { + continue + } + apiObject := expandCopySetting(tfMap) + copySettings = append(copySettings, *apiObject) + } + return copySettings +} diff --git a/mongodbatlas/resource_mongodbatlas_cloud_backup_schedule_test.go b/mongodbatlas/resource_mongodbatlas_cloud_backup_schedule_test.go index 7a67b6e175..868b83769e 100644 --- a/mongodbatlas/resource_mongodbatlas_cloud_backup_schedule_test.go +++ b/mongodbatlas/resource_mongodbatlas_cloud_backup_schedule_test.go @@ -118,8 +118,6 @@ func TestAccBackupRSCloudBackupSchedule_basic(t *testing.T) { } func TestAccBackupRSCloudBackupSchedule_export(t *testing.T) { - t.Skip() // TODO: Address failures in v1.4.6 - var ( resourceName = "mongodbatlas_cloud_backup_schedule.schedule_test" projectID = os.Getenv("MONGODB_ATLAS_PROJECT_ID") @@ -223,6 +221,54 @@ func TestAccBackupRSCloudBackupSchedule_onepolicy(t *testing.T) { }) } +func TestAccBackupRSCloudBackupSchedule_copySettings(t *testing.T) { + var ( + resourceName = "mongodbatlas_cloud_backup_schedule.schedule_test" + projectID = os.Getenv("MONGODB_ATLAS_PROJECT_ID") + clusterName = fmt.Sprintf("test-acc-%s", acctest.RandString(10)) + ) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + ProviderFactories: testAccProviderFactories, + CheckDestroy: testAccCheckMongoDBAtlasCloudBackupScheduleDestroy, + Steps: []resource.TestStep{ + { + Config: testAccMongoDBAtlasCloudBackupScheduleCopySettingsConfig(projectID, clusterName, &matlas.CloudProviderSnapshotBackupPolicy{ 
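The `flattenCopySettings`/`expandCopySettings` pair above follows the usual SDKv2 round-trip pattern: flatten turns API structs into the generic `[]map[string]interface{}` shape that `d.Set` accepts for a `TypeList`, and expand performs the inverse when building an API request. A minimal self-contained sketch of the idea, using a hypothetical `Setting` struct in place of `matlas.CopySetting` (the real resource reads `frequencies` from a `*schema.Set`, not a plain slice):

```go
package main

import "fmt"

// Setting is a hypothetical stand-in for matlas.CopySetting.
type Setting struct {
	RegionName  string
	Frequencies []string
}

// flatten: API structs -> the generic shape d.Set expects for a TypeList.
func flatten(in []Setting) []interface{} {
	out := make([]interface{}, 0, len(in))
	for _, s := range in {
		out = append(out, map[string]interface{}{
			"region_name": s.RegionName,
			"frequencies": s.Frequencies,
		})
	}
	return out
}

// expand: the generic config shape -> API structs (inverse of flatten).
func expand(in []interface{}) []Setting {
	var out []Setting
	for _, raw := range in {
		m, ok := raw.(map[string]interface{})
		if !ok {
			continue // skip malformed entries, mirroring expandCopySettings
		}
		out = append(out, Setting{
			RegionName:  m["region_name"].(string),
			Frequencies: m["frequencies"].([]string),
		})
	}
	return out
}

func main() {
	orig := []Setting{{RegionName: "US_EAST_1", Frequencies: []string{"HOURLY", "DAILY"}}}
	got := expand(flatten(orig))
	fmt.Println(got[0].RegionName, len(got[0].Frequencies))
}
```

Keeping the two directions symmetric is what makes `copy_settings` survive a full read-plan-apply cycle without spurious diffs.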
+ ReferenceHourOfDay: pointy.Int64(3), + ReferenceMinuteOfHour: pointy.Int64(45), + RestoreWindowDays: pointy.Int64(4), + }), + Check: resource.ComposeTestCheckFunc( + testAccCheckMongoDBAtlasCloudBackupScheduleExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "project_id", projectID), + resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterName), + resource.TestCheckResourceAttr(resourceName, "reference_hour_of_day", "3"), + resource.TestCheckResourceAttr(resourceName, "reference_minute_of_hour", "45"), + resource.TestCheckResourceAttr(resourceName, "restore_window_days", "4"), + resource.TestCheckResourceAttr(resourceName, "policy_item_hourly.#", "1"), + resource.TestCheckResourceAttr(resourceName, "policy_item_daily.#", "1"), + resource.TestCheckResourceAttr(resourceName, "policy_item_weekly.#", "1"), + resource.TestCheckResourceAttr(resourceName, "policy_item_monthly.#", "1"), + resource.TestCheckResourceAttr(resourceName, "policy_item_hourly.0.frequency_interval", "1"), + resource.TestCheckResourceAttr(resourceName, "policy_item_hourly.0.retention_unit", "days"), + resource.TestCheckResourceAttr(resourceName, "policy_item_hourly.0.retention_value", "1"), + resource.TestCheckResourceAttr(resourceName, "policy_item_daily.0.frequency_interval", "1"), + resource.TestCheckResourceAttr(resourceName, "policy_item_daily.0.retention_unit", "days"), + resource.TestCheckResourceAttr(resourceName, "policy_item_daily.0.retention_value", "2"), + resource.TestCheckResourceAttr(resourceName, "policy_item_weekly.0.frequency_interval", "4"), + resource.TestCheckResourceAttr(resourceName, "policy_item_weekly.0.retention_unit", "weeks"), + resource.TestCheckResourceAttr(resourceName, "policy_item_weekly.0.retention_value", "3"), + resource.TestCheckResourceAttr(resourceName, "policy_item_monthly.0.frequency_interval", "5"), + resource.TestCheckResourceAttr(resourceName, "policy_item_monthly.0.retention_unit", "months"), + 
resource.TestCheckResourceAttr(resourceName, "policy_item_monthly.0.retention_value", "4"), + resource.TestCheckResourceAttr(resourceName, "copy_settings.0.cloud_provider", "AWS"), + resource.TestCheckResourceAttr(resourceName, "copy_settings.0.region_name", "US_EAST_1"), + ), + }, + }, + }) +} func TestAccBackupRSCloudBackupScheduleImport_basic(t *testing.T) { var ( resourceName = "mongodbatlas_cloud_backup_schedule.schedule_test" @@ -384,7 +430,7 @@ func testAccMongoDBAtlasCloudBackupScheduleConfigNoPolicies(projectID, clusterNa resource "mongodbatlas_cluster" "my_cluster" { project_id = "%s" name = "%s" - + // Provider Settings "block" provider_name = "AWS" provider_region_name = "EU_CENTRAL_1" @@ -408,7 +454,7 @@ func testAccMongoDBAtlasCloudBackupScheduleDefaultConfig(projectID, clusterName resource "mongodbatlas_cluster" "my_cluster" { project_id = "%s" name = "%s" - + // Provider Settings "block" provider_name = "AWS" provider_region_name = "EU_CENTRAL_1" @@ -448,12 +494,78 @@ func testAccMongoDBAtlasCloudBackupScheduleDefaultConfig(projectID, clusterName `, projectID, clusterName, *p.ReferenceHourOfDay, *p.ReferenceMinuteOfHour, *p.RestoreWindowDays) } -func testAccMongoDBAtlasCloudBackupScheduleOnePolicyConfig(projectID, clusterName string, p *matlas.CloudProviderSnapshotBackupPolicy) string { +func testAccMongoDBAtlasCloudBackupScheduleCopySettingsConfig(projectID, clusterName string, p *matlas.CloudProviderSnapshotBackupPolicy) string { return fmt.Sprintf(` resource "mongodbatlas_cluster" "my_cluster" { project_id = "%s" name = "%s" + cluster_type = "REPLICASET" + replication_specs { + num_shards = 1 + regions_config { + region_name = "US_EAST_2" + electable_nodes = 3 + priority = 7 + read_only_nodes = 0 + } + } + // Provider Settings "block" + provider_name = "AWS" + provider_region_name = "US_EAST_2" + provider_instance_size_name = "M10" + cloud_backup = true //enable cloud provider snapshots + } + + resource "mongodbatlas_cloud_backup_schedule" 
"schedule_test" { + project_id = mongodbatlas_cluster.my_cluster.project_id + cluster_name = mongodbatlas_cluster.my_cluster.name + + reference_hour_of_day = %d + reference_minute_of_hour = %d + restore_window_days = %d + + policy_item_hourly { + frequency_interval = 1 + retention_unit = "days" + retention_value = 1 + } + policy_item_daily { + frequency_interval = 1 + retention_unit = "days" + retention_value = 2 + } + policy_item_weekly { + frequency_interval = 4 + retention_unit = "weeks" + retention_value = 3 + } + policy_item_monthly { + frequency_interval = 5 + retention_unit = "months" + retention_value = 4 + } + copy_settings { + cloud_provider = "AWS" + frequencies = ["HOURLY", + "DAILY", + "WEEKLY", + "MONTHLY", + "ON_DEMAND"] + region_name = "US_EAST_1" + replication_spec_id = mongodbatlas_cluster.my_cluster.replication_specs.*.id[0] + should_copy_oplogs = false + } + } + `, projectID, clusterName, *p.ReferenceHourOfDay, *p.ReferenceMinuteOfHour, *p.RestoreWindowDays) +} + +func testAccMongoDBAtlasCloudBackupScheduleOnePolicyConfig(projectID, clusterName string, p *matlas.CloudProviderSnapshotBackupPolicy) string { + return fmt.Sprintf(` + resource "mongodbatlas_cluster" "my_cluster" { + project_id = "%s" + name = "%s" + // Provider Settings "block" provider_name = "AWS" provider_region_name = "EU_CENTRAL_1" @@ -483,7 +595,7 @@ func testAccMongoDBAtlasCloudBackupScheduleNewPoliciesConfig(projectID, clusterN resource "mongodbatlas_cluster" "my_cluster" { project_id = "%s" name = "%s" - + // Provider Settings "block" provider_name = "AWS" provider_region_name = "EU_CENTRAL_1" @@ -498,7 +610,7 @@ func testAccMongoDBAtlasCloudBackupScheduleNewPoliciesConfig(projectID, clusterN reference_hour_of_day = %d reference_minute_of_hour = %d restore_window_days = %d - + policy_item_hourly { frequency_interval = 2 retention_unit = "days" @@ -554,7 +666,7 @@ func testAccMongoDBAtlasCloudBackupScheduleAdvancedPoliciesConfig(projectID, clu resource "mongodbatlas_cluster" 
"my_cluster" { project_id = "%s" name = "%s" - + // Provider Settings "block" provider_name = "AWS" provider_region_name = "EU_CENTRAL_1" @@ -569,7 +681,7 @@ func testAccMongoDBAtlasCloudBackupScheduleAdvancedPoliciesConfig(projectID, clu reference_hour_of_day = %d reference_minute_of_hour = %d restore_window_days = %d - + policy_item_hourly { frequency_interval = 2 retention_unit = "days" @@ -620,7 +732,7 @@ provider "aws" { resource "mongodbatlas_cluster" "my_cluster" { project_id = %[1]q name = %[2]q - + // Provider Settings "block" provider_name = "AWS" provider_region_name = "US_WEST_2" @@ -628,7 +740,7 @@ resource "mongodbatlas_cluster" "my_cluster" { cloud_backup = true //enable cloud provider snapshots depends_on = ["mongodbatlas_cloud_backup_snapshot_export_bucket.test"] } - + resource "mongodbatlas_cloud_backup_schedule" "schedule_test" { project_id = mongodbatlas_cluster.my_cluster.project_id cluster_name = mongodbatlas_cluster.my_cluster.name @@ -636,18 +748,18 @@ resource "mongodbatlas_cloud_backup_schedule" "schedule_test" { reference_hour_of_day = 20 reference_minute_of_hour = "05" restore_window_days = 4 - + policy_item_daily { frequency_interval = 1 retention_unit = "days" retention_value = 4 } export { - export_bucket_id = mongodbatlas_cloud_backup_snapshot_export_bucket.test.export_bucket_id - frequency_type = "daily" + export_bucket_id = mongodbatlas_cloud_backup_snapshot_export_bucket.test.export_bucket_id + frequency_type = "daily" } } - + resource "aws_s3_bucket" "backup" { bucket = "${local.mongodbatlas_project_id}-s3-mongodb-backups" force_destroy = true @@ -655,33 +767,33 @@ resource "aws_s3_bucket" "backup" { object_lock_enabled = "Enabled" } } - + resource "mongodbatlas_cloud_provider_access_setup" "setup_only" { project_id = %[1]q provider_name = "AWS" } - + resource "mongodbatlas_cloud_provider_access_authorization" "auth_role" { project_id = %[1]q role_id = mongodbatlas_cloud_provider_access_setup.setup_only.role_id - + aws { 
iam_assumed_role_arn = aws_iam_role.test_role.arn } } - + resource "mongodbatlas_cloud_backup_snapshot_export_bucket" "test" { project_id = %[1]q - + iam_role_id = mongodbatlas_cloud_provider_access_authorization.auth_role.role_id bucket_name = aws_s3_bucket.backup.bucket cloud_provider = "AWS" } - + resource "aws_iam_role_policy" "test_policy" { name = %[1]q role = aws_iam_role.test_role.id - + policy = <<-EOF { "Version": "2012-10-17", @@ -695,10 +807,10 @@ resource "aws_iam_role_policy" "test_policy" { } EOF } - + resource "aws_iam_role" "test_role" { name = %[4]q - + assume_role_policy = <= 0 { + res.OplogMinRetentionHours = pointy.Float64(cast.ToFloat64(p["oplog_min_retention_hours"])) + } else { + log.Printf(errorClusterSetting, `oplog_min_retention_hours`, "", cast.ToString(minRetentionHours)) + } + } + return res } @@ -1446,6 +1454,7 @@ func flattenProcessArgs(p *matlas.ProcessArgs) []interface{} { "minimum_enabled_tls_protocol": p.MinimumEnabledTLSProtocol, "no_table_scan": cast.ToBool(p.NoTableScan), "oplog_size_mb": p.OplogSizeMB, + "oplog_min_retention_hours": p.OplogMinRetentionHours, "sample_size_bi_connector": p.SampleSizeBIConnector, "sample_refresh_interval_bi_connector": p.SampleRefreshIntervalBIConnector, }, @@ -1709,6 +1718,11 @@ func clusterAdvancedConfigurationSchema() *schema.Schema { Optional: true, Computed: true, }, + "oplog_min_retention_hours": { + Type: schema.TypeInt, + Optional: true, + Computed: true, + }, "sample_size_bi_connector": { Type: schema.TypeInt, Optional: true, diff --git a/mongodbatlas/resource_mongodbatlas_custom_db_role.go b/mongodbatlas/resource_mongodbatlas_custom_db_role.go index cb73e7ed79..53cb65bdb0 100644 --- a/mongodbatlas/resource_mongodbatlas_custom_db_role.go +++ b/mongodbatlas/resource_mongodbatlas_custom_db_role.go @@ -8,6 +8,7 @@ import ( "net/http" "regexp" "strings" + "sync" "time" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" @@ -101,7 +102,13 @@ func resourceMongoDBAtlasCustomDBRole() 
*schema.Resource { } } +var ( + customRoleLock sync.Mutex +) + func resourceMongoDBAtlasCustomDBRoleCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + customRoleLock.Lock() + defer customRoleLock.Unlock() conn := meta.(*MongoDBClient).Atlas projectID := d.Get("project_id").(string) @@ -180,6 +187,8 @@ func resourceMongoDBAtlasCustomDBRoleRead(ctx context.Context, d *schema.Resourc } func resourceMongoDBAtlasCustomDBRoleUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + customRoleLock.Lock() + defer customRoleLock.Unlock() conn := meta.(*MongoDBClient).Atlas ids := decodeStateID(d.Id()) projectID := ids["project_id"] diff --git a/mongodbatlas/resource_mongodbatlas_custom_db_role_test.go b/mongodbatlas/resource_mongodbatlas_custom_db_role_test.go index 223807ec9e..be3fee27ce 100644 --- a/mongodbatlas/resource_mongodbatlas_custom_db_role_test.go +++ b/mongodbatlas/resource_mongodbatlas_custom_db_role_test.go @@ -412,7 +412,6 @@ func TestAccConfigRSCustomDBRoles_MultipleCustomRoles(t *testing.T) { } func TestAccConfigRSCustomDBRoles_MultipleResources(t *testing.T) { - t.Skip() // The error seems appear to be similar to whitelist behavior, skip it temporally var ( resourceName = "mongodbatlas_custom_db_role.test" projectID = os.Getenv("MONGODB_ATLAS_PROJECT_ID") diff --git a/mongodbatlas/resource_mongodbatlas_ldap_configuration.go b/mongodbatlas/resource_mongodbatlas_ldap_configuration.go index 9504e9531f..014fb4ed98 100644 --- a/mongodbatlas/resource_mongodbatlas_ldap_configuration.go +++ b/mongodbatlas/resource_mongodbatlas_ldap_configuration.go @@ -7,6 +7,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/mwielbut/pointy" matlas "go.mongodb.org/atlas/mongodbatlas" ) @@ -104,35 +105,35 @@ func resourceMongoDBAtlasLDAPConfigurationCreate(ctx context.Context, d *schema. 
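The `customRoleLock` added above serializes custom-role Create and Update calls across all resource instances in the same provider process, since Terraform runs independent resources concurrently. A minimal sketch of the pattern with hypothetical names (`roles` stands in for the remote Atlas state that must not be mutated by two writers at once):

```go
package main

import (
	"fmt"
	"sync"
)

// Package-level lock, mirroring customRoleLock in the resource above.
var roleLock sync.Mutex

// roles stands in for remote state that must not be mutated concurrently.
var roles []string

// createRole serializes all writers: concurrent Terraform operations on
// different resource instances still run one at a time.
func createRole(name string) {
	roleLock.Lock()
	defer roleLock.Unlock()
	roles = append(roles, name)
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			createRole(fmt.Sprintf("role-%d", i))
		}(i)
	}
	wg.Wait()
	fmt.Println(len(roles)) // 10
}
```

A package-level mutex only guards a single process; it does not protect against two separate `terraform apply` runs, which is an accepted limitation of this approach.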
ldap := &matlas.LDAP{} if v, ok := d.GetOk("authentication_enabled"); ok { - ldap.AuthenticationEnabled = v.(bool) + ldap.AuthenticationEnabled = pointy.Bool(v.(bool)) } if v, ok := d.GetOk("authorization_enabled"); ok { - ldap.AuthorizationEnabled = v.(bool) + ldap.AuthorizationEnabled = pointy.Bool(v.(bool)) } if v, ok := d.GetOk("hostname"); ok { - ldap.Hostname = v.(string) + ldap.Hostname = pointy.String(v.(string)) } if v, ok := d.GetOk("port"); ok { - ldap.Port = v.(int) + ldap.Port = pointy.Int(v.(int)) } if v, ok := d.GetOk("bind_username"); ok { - ldap.BindUsername = v.(string) + ldap.BindUsername = pointy.String(v.(string)) } if v, ok := d.GetOk("bind_password"); ok { - ldap.BindPassword = v.(string) + ldap.BindPassword = pointy.String(v.(string)) } if v, ok := d.GetOk("ca_certificate"); ok { - ldap.CaCertificate = v.(string) + ldap.CaCertificate = pointy.String(v.(string)) } if v, ok := d.GetOk("authz_query_template"); ok { - ldap.AuthzQueryTemplate = v.(string) + ldap.AuthzQueryTemplate = pointy.String(v.(string)) } if v, ok := d.GetOk("user_to_dn_mapping"); ok { @@ -201,35 +202,35 @@ func resourceMongoDBAtlasLDAPConfigurationUpdate(ctx context.Context, d *schema. 
ldap := &matlas.LDAP{} if d.HasChange("authentication_enabled") { - ldap.AuthenticationEnabled = d.Get("").(bool) + ldap.AuthenticationEnabled = pointy.Bool(d.Get("authentication_enabled").(bool)) } if d.HasChange("authorization_enabled") { - ldap.AuthorizationEnabled = d.Get("authorization_enabled").(bool) + ldap.AuthorizationEnabled = pointy.Bool(d.Get("authorization_enabled").(bool)) } if d.HasChange("hostname") { - ldap.Hostname = d.Get("hostname").(string) + ldap.Hostname = pointy.String(d.Get("hostname").(string)) } if d.HasChange("port") { - ldap.Port = d.Get("port").(int) + ldap.Port = pointy.Int(d.Get("port").(int)) } if d.HasChange("bind_username") { - ldap.BindUsername = d.Get("bind_username").(string) + ldap.BindUsername = pointy.String(d.Get("bind_username").(string)) } if d.HasChange("bind_password") { - ldap.BindPassword = d.Get("bind_password").(string) + ldap.BindPassword = pointy.String(d.Get("bind_password").(string)) } if d.HasChange("ca_certificate") { - ldap.CaCertificate = d.Get("ca_certificate").(string) + ldap.CaCertificate = pointy.String(d.Get("ca_certificate").(string)) } if d.HasChange("authz_query_template") { - ldap.AuthzQueryTemplate = d.Get("authz_query_template").(string) + ldap.AuthzQueryTemplate = pointy.String(d.Get("authz_query_template").(string)) } if d.HasChange("user_to_dn_mapping") { diff --git a/mongodbatlas/resource_mongodbatlas_ldap_verify.go b/mongodbatlas/resource_mongodbatlas_ldap_verify.go index 4dda10b26d..756f7ca249 100644 --- a/mongodbatlas/resource_mongodbatlas_ldap_verify.go +++ b/mongodbatlas/resource_mongodbatlas_ldap_verify.go @@ -10,6 +10,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/mwielbut/pointy" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" matlas "go.mongodb.org/atlas/mongodbatlas" @@ -118,22 +119,22 @@ func resourceMongoDBAtlasLDAPVerifyCreate(ctx context.Context, d *schema.Resourc ldapReq := &matlas.LDAP{} if
v, ok := d.GetOk("hostname"); ok { - ldapReq.Hostname = v.(string) + ldapReq.Hostname = pointy.String(v.(string)) } if v, ok := d.GetOk("port"); ok { - ldapReq.Port = v.(int) + ldapReq.Port = pointy.Int(v.(int)) } if v, ok := d.GetOk("bind_username"); ok { - ldapReq.BindUsername = v.(string) + ldapReq.BindUsername = pointy.String(v.(string)) } if v, ok := d.GetOk("bind_password"); ok { - ldapReq.BindPassword = v.(string) + ldapReq.BindPassword = pointy.String(v.(string)) } if v, ok := d.GetOk("ca_certificate"); ok { - ldapReq.CaCertificate = v.(string) + ldapReq.CaCertificate = pointy.String(v.(string)) } if v, ok := d.GetOk("authz_query_template"); ok { - ldapReq.AuthzQueryTemplate = v.(string) + ldapReq.AuthzQueryTemplate = pointy.String(v.(string)) } ldap, _, err := conn.LDAPConfigurations.Verify(ctx, projectID, ldapReq) diff --git a/mongodbatlas/resource_mongodbatlas_org_invitation.go b/mongodbatlas/resource_mongodbatlas_org_invitation.go index af4a28c427..6dde3dc2c1 100644 --- a/mongodbatlas/resource_mongodbatlas_org_invitation.go +++ b/mongodbatlas/resource_mongodbatlas_org_invitation.go @@ -80,54 +80,56 @@ func resourceMongoDBAtlasOrgInvitationRead(ctx context.Context, d *schema.Resour username := ids["username"] invitationID := ids["invitation_id"] - orgInvitation, _, err := conn.Organizations.Invitation(ctx, orgID, invitationID) - if err != nil { - // case 404 - // deleted in the backend case - - if strings.Contains(err.Error(), "404") { - accepted, _ := validateOrgInvitationAlreadyAccepted(ctx, meta.(*MongoDBClient), username, orgID) - if !accepted { - d.SetId("") + if orgID != invitationID { + orgInvitation, _, err := conn.Organizations.Invitation(ctx, orgID, invitationID) + if err != nil { + // case 404 + // deleted in the backend case + + if strings.Contains(err.Error(), "404") { + accepted, _ := validateOrgInvitationAlreadyAccepted(ctx, meta.(*MongoDBClient), username, orgID) + if accepted { + d.SetId("") + return nil + } + return nil } - return nil - 
} - return diag.Errorf("error getting Organization Invitation information: %s", err) - } + return diag.Errorf("error getting Organization Invitation information: %s", err) + } - if err := d.Set("username", orgInvitation.Username); err != nil { - return diag.Errorf("error getting `username` for Organization Invitation (%s): %s", d.Id(), err) - } + if err := d.Set("username", orgInvitation.Username); err != nil { + return diag.Errorf("error getting `username` for Organization Invitation (%s): %s", d.Id(), err) + } - if err := d.Set("org_id", orgInvitation.OrgID); err != nil { - return diag.Errorf("error getting `username` for Organization Invitation (%s): %s", d.Id(), err) - } + if err := d.Set("org_id", orgInvitation.OrgID); err != nil { + return diag.Errorf("error getting `username` for Organization Invitation (%s): %s", d.Id(), err) + } - if err := d.Set("invitation_id", orgInvitation.ID); err != nil { - return diag.Errorf("error getting `invitation_id` for Organization Invitation (%s): %s", d.Id(), err) - } + if err := d.Set("invitation_id", orgInvitation.ID); err != nil { + return diag.Errorf("error getting `invitation_id` for Organization Invitation (%s): %s", d.Id(), err) + } - if err := d.Set("expires_at", orgInvitation.ExpiresAt); err != nil { - return diag.Errorf("error getting `expires_at` for Organization Invitation (%s): %s", d.Id(), err) - } + if err := d.Set("expires_at", orgInvitation.ExpiresAt); err != nil { + return diag.Errorf("error getting `expires_at` for Organization Invitation (%s): %s", d.Id(), err) + } - if err := d.Set("created_at", orgInvitation.CreatedAt); err != nil { - return diag.Errorf("error getting `created_at` for Organization Invitation (%s): %s", d.Id(), err) - } + if err := d.Set("created_at", orgInvitation.CreatedAt); err != nil { + return diag.Errorf("error getting `created_at` for Organization Invitation (%s): %s", d.Id(), err) + } - if err := d.Set("inviter_username", orgInvitation.InviterUsername); err != nil { - return 
diag.Errorf("error getting `inviter_username` for Organization Invitation (%s): %s", d.Id(), err) - } + if err := d.Set("inviter_username", orgInvitation.InviterUsername); err != nil { + return diag.Errorf("error getting `inviter_username` for Organization Invitation (%s): %s", d.Id(), err) + } - if err := d.Set("teams_ids", orgInvitation.TeamIDs); err != nil { - return diag.Errorf("error getting `teams_ids` for Organization Invitation (%s): %s", d.Id(), err) - } + if err := d.Set("teams_ids", orgInvitation.TeamIDs); err != nil { + return diag.Errorf("error getting `teams_ids` for Organization Invitation (%s): %s", d.Id(), err) + } - if err := d.Set("roles", orgInvitation.Roles); err != nil { - return diag.Errorf("error getting `roles` for Organization Invitation (%s): %s", d.Id(), err) + if err := d.Set("roles", orgInvitation.Roles); err != nil { + return diag.Errorf("error getting `roles` for Organization Invitation (%s): %s", d.Id(), err) + } } - d.SetId(encodeStateID(map[string]string{ "username": username, "org_id": orgID, @@ -148,17 +150,25 @@ func resourceMongoDBAtlasOrgInvitationCreate(ctx context.Context, d *schema.Reso Username: d.Get("username").(string), } - invitationRes, _, err := conn.Organizations.InviteUser(ctx, orgID, invitationReq) - if err != nil { - return diag.Errorf("error creating Organization invitation for user %s: %s", d.Get("username").(string), err) - } - - d.SetId(encodeStateID(map[string]string{ - "username": invitationRes.Username, - "org_id": invitationRes.OrgID, - "invitation_id": invitationRes.ID, - })) + accepted, _ := validateOrgInvitationAlreadyAccepted(ctx, meta.(*MongoDBClient), invitationReq.Username, orgID) + if accepted { + d.SetId(encodeStateID(map[string]string{ + "username": invitationReq.Username, + "org_id": orgID, + "invitation_id": orgID, + })) + } else { + invitationRes, _, err := conn.Organizations.InviteUser(ctx, orgID, invitationReq) + if err != nil { + return diag.Errorf("error creating Organization invitation 
for user %s: %s", d.Get("username").(string), err) + } + d.SetId(encodeStateID(map[string]string{ + "username": invitationRes.Username, + "org_id": invitationRes.OrgID, + "invitation_id": invitationRes.ID, + })) + } return resourceMongoDBAtlasOrgInvitationRead(ctx, d, meta) } @@ -169,11 +179,25 @@ func resourceMongoDBAtlasOrgInvitationDelete(ctx context.Context, d *schema.Reso username := ids["username"] invitationID := ids["invitation_id"] - _, err := conn.Organizations.DeleteInvitation(ctx, orgID, invitationID) + _, _, err := conn.Organizations.Invitation(ctx, orgID, invitationID) + if err != nil { + // case 404 + // deleted in the backend case + + if strings.Contains(err.Error(), "404") { + accepted, _ := validateOrgInvitationAlreadyAccepted(ctx, meta.(*MongoDBClient), username, orgID) + if accepted { + d.SetId("") + return nil + } + return nil + } + } + _, err = conn.Organizations.DeleteInvitation(ctx, orgID, invitationID) if err != nil { return diag.Errorf("error deleting Organization invitation for user %s: %s", username, err) } - + d.SetId("") return nil } diff --git a/mongodbatlas/resource_mongodbatlas_private_ip_mode.go b/mongodbatlas/resource_mongodbatlas_private_ip_mode.go index b72ea102f6..8a95d280dd 100644 --- a/mongodbatlas/resource_mongodbatlas_private_ip_mode.go +++ b/mongodbatlas/resource_mongodbatlas_private_ip_mode.go @@ -39,6 +39,7 @@ func resourceMongoDBAtlasPrivateIPMode() *schema.Resource { Required: true, }, }, + DeprecationMessage: "This resource is deprecated, and will be removed in v1.9 release. 
Please transition to Multiple Horizons connection strings as soon as possible", } } diff --git a/mongodbatlas/resource_mongodbatlas_project_api_key.go b/mongodbatlas/resource_mongodbatlas_project_api_key.go new file mode 100644 index 0000000000..5364492617 --- /dev/null +++ b/mongodbatlas/resource_mongodbatlas_project_api_key.go @@ -0,0 +1,257 @@ +package mongodbatlas + +import ( + "context" + "errors" + "fmt" + "log" + "net/http" + "strings" + + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + matlas "go.mongodb.org/atlas/mongodbatlas" +) + +func resourceMongoDBAtlasProjectAPIKey() *schema.Resource { + return &schema.Resource{ + CreateContext: resourceMongoDBAtlasProjectAPIKeyCreate, + ReadContext: resourceMongoDBAtlasProjectAPIKeyRead, + UpdateContext: resourceMongoDBAtlasProjectAPIKeyUpdate, + DeleteContext: resourceMongoDBAtlasProjectAPIKeyDelete, + Importer: &schema.ResourceImporter{ + StateContext: resourceMongoDBAtlasProjectAPIKeyImportState, + }, + Schema: map[string]*schema.Schema{ + "project_id": { + Type: schema.TypeString, + Required: true, + }, + "api_key_id": { + Type: schema.TypeString, + Computed: true, + }, + "description": { + Type: schema.TypeString, + Required: true, + }, + "public_key": { + Type: schema.TypeString, + Computed: true, + }, + "private_key": { + Type: schema.TypeString, + Computed: true, + Sensitive: true, + }, + "role_names": { + Type: schema.TypeSet, + Required: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + }, + } +} + +func resourceMongoDBAtlasProjectAPIKeyCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*MongoDBClient).Atlas + projectID := d.Get("project_id").(string) + createRequest := new(matlas.APIKeyInput) + + createRequest.Desc = d.Get("description").(string) + + createRequest.Roles = expandStringList(d.Get("role_names").(*schema.Set).List()) + + apiKey, resp, err := 
conn.ProjectAPIKeys.Create(ctx, projectID, createRequest) + if err != nil { + if resp != nil && resp.StatusCode == http.StatusNotFound { + d.SetId("") + return nil + } + + return diag.FromErr(fmt.Errorf("error creating API key: %s", err)) + } + + if err := d.Set("public_key", apiKey.PublicKey); err != nil { + return diag.FromErr(fmt.Errorf("error setting `public_key`: %s", err)) + } + + if err := d.Set("private_key", apiKey.PrivateKey); err != nil { + return diag.FromErr(fmt.Errorf("error setting `private_key`: %s", err)) + } + + d.SetId(encodeStateID(map[string]string{ + "project_id": projectID, + "api_key_id": apiKey.ID, + })) + + return resourceMongoDBAtlasProjectAPIKeyRead(ctx, d, meta) +} + +func resourceMongoDBAtlasProjectAPIKeyRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + // Get client connection. + conn := meta.(*MongoDBClient).Atlas + ids := decodeStateID(d.Id()) + projectID := ids["project_id"] + apiKeyID := ids["api_key_id"] + + projectAPIKeys, _, err := conn.ProjectAPIKeys.List(ctx, projectID, nil) + if err != nil { + return diag.FromErr(fmt.Errorf("error getting api key information: %s", err)) + } + for _, val := range projectAPIKeys { + if val.ID == apiKeyID { + if err := d.Set("api_key_id", val.ID); err != nil { + return diag.FromErr(fmt.Errorf("error setting `api_key_id`: %s", err)) + } + + if err := d.Set("description", val.Desc); err != nil { + return diag.FromErr(fmt.Errorf("error setting `description`: %s", err)) + } + + if err := d.Set("public_key", val.PublicKey); err != nil { + return diag.FromErr(fmt.Errorf("error setting `public_key`: %s", err)) + } + + if err := d.Set("role_names", flattenProjectAPIKeyRoles(projectID, val.Roles)); err != nil { + return diag.FromErr(fmt.Errorf("error setting `role_names`: %s", err)) + } + } + } + + if err := d.Set("project_id", projectID); err != nil { + return diag.FromErr(fmt.Errorf("error setting `project_id`: %s", err)) + } + + d.SetId(encodeStateID(map[string]string{
+ "project_id": projectID, + "api_key_id": apiKeyID, + })) + + return nil +} + +func resourceMongoDBAtlasProjectAPIKeyUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*MongoDBClient).Atlas + ids := decodeStateID(d.Id()) + projectID := ids["project_id"] + apiKeyID := ids["api_key_id"] + + updateRequest := new(matlas.AssignAPIKey) + + if d.HasChange("role_names") { + updateRequest.Roles = expandStringList(d.Get("role_names").(*schema.Set).List()) + + _, err := conn.ProjectAPIKeys.Assign(ctx, projectID, apiKeyID, updateRequest) + if err != nil { + return diag.FromErr(fmt.Errorf("error updating API key: %s", err)) + } + } + + return resourceMongoDBAtlasProjectAPIKeyRead(ctx, d, meta) +} + +func resourceMongoDBAtlasProjectAPIKeyDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*MongoDBClient).Atlas + ids := decodeStateID(d.Id()) + projectID := ids["project_id"] + apiKeyID := ids["api_key_id"] + var orgID string + + projectAPIKeys, _, err := conn.ProjectAPIKeys.List(ctx, projectID, nil) + if err != nil { + return diag.FromErr(fmt.Errorf("error getting api key information: %s", err)) + } + + for _, val := range projectAPIKeys { + if val.ID == apiKeyID { + for _, role := range val.Roles { + if strings.HasPrefix(role.RoleName, "ORG_") { + orgID = val.Roles[0].OrgID + } + } + } + } + + _, err = conn.ProjectAPIKeys.Unassign(ctx, projectID, apiKeyID) + if err != nil { + return diag.FromErr(fmt.Errorf("error deleting project api key: %s", err)) + } + _, err = conn.APIKeys.Delete(ctx, orgID, apiKeyID) + if err != nil { + log.Printf("[WARN] unable to delete Key (%s): %s\n", apiKeyID, err) + } + + d.SetId("") + return nil +} + +func resourceMongoDBAtlasProjectAPIKeyImportState(ctx context.Context, d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + conn := meta.(*MongoDBClient).Atlas + + parts := strings.SplitN(d.Id(), "-", 2) + if len(parts) != 
2 { + return nil, errors.New("import format error: to import an API key use the format {project_id}-{api_key_id}") + } + + projectID := parts[0] + apiKeyID := parts[1] + + projectAPIKeys, _, err := conn.ProjectAPIKeys.List(ctx, projectID, nil) + if err != nil { + return nil, fmt.Errorf("couldn't import api key %s in project %s, error: %s", apiKeyID, projectID, err) + } + for _, val := range projectAPIKeys { + if val.ID == apiKeyID { + if err := d.Set("description", val.Desc); err != nil { + return nil, fmt.Errorf("error setting `description`: %s", err) + } + + if err := d.Set("public_key", val.PublicKey); err != nil { + return nil, fmt.Errorf("error setting `public_key`: %s", err) + } + + d.SetId(encodeStateID(map[string]string{ + "project_id": projectID, + "api_key_id": val.ID, + })) + } + } + return []*schema.ResourceData{d}, nil +} + +func flattenProjectAPIKeys(ctx context.Context, conn *matlas.Client, projectID string, apiKeys []matlas.APIKey) []map[string]interface{} { + var results []map[string]interface{} + + if len(apiKeys) > 0 { + results = make([]map[string]interface{}, len(apiKeys)) + for k, apiKey := range apiKeys { + results[k] = map[string]interface{}{ + "api_key_id": apiKey.ID, + "description": apiKey.Desc, + "public_key": apiKey.PublicKey, + "private_key": apiKey.PrivateKey, + "role_names": flattenProjectAPIKeyRoles(projectID, apiKey.Roles), + } + } + } + return results +} + +func flattenProjectAPIKeyRoles(projectID string, apiKeyRoles []matlas.AtlasRole) []string { + if len(apiKeyRoles) == 0 { + return nil + } + + flattenedRoles := []string{} + + for _, role := range apiKeyRoles { + if strings.HasPrefix(role.RoleName, "GROUP_") && role.GroupID == projectID { + flattenedRoles = append(flattenedRoles, role.RoleName) + } + } + + return flattenedRoles +} diff --git a/mongodbatlas/resource_mongodbatlas_project_api_key_test.go b/mongodbatlas/resource_mongodbatlas_project_api_key_test.go new file mode 100644 index 0000000000..7f80a2d12d --- 
/dev/null +++ b/mongodbatlas/resource_mongodbatlas_project_api_key_test.go @@ -0,0 +1,111 @@ +package mongodbatlas + +import ( + "context" + "fmt" + "os" + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" +) + +func TestAccConfigRSProjectAPIKey_Basic(t *testing.T) { + var ( + resourceName = "mongodbatlas_project_api_key.test" + projectID = os.Getenv("MONGODB_ATLAS_PROJECT_ID") + description = fmt.Sprintf("test-acc-project-api_key-%s", acctest.RandString(5)) + roleName = "GROUP_OWNER" + ) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + ProviderFactories: testAccProviderFactories, + CheckDestroy: testAccCheckMongoDBAtlasProjectAPIKeyDestroy, + Steps: []resource.TestStep{ + { + Config: testAccMongoDBAtlasProjectAPIKeyConfigBasic(projectID, description, roleName), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttrSet(resourceName, "project_id"), + resource.TestCheckResourceAttrSet(resourceName, "description"), + + resource.TestCheckResourceAttr(resourceName, "project_id", projectID), + resource.TestCheckResourceAttr(resourceName, "description", description), + ), + }, + }, + }) +} + +func TestAccConfigRSProjectAPIKey_importBasic(t *testing.T) { + var ( + resourceName = "mongodbatlas_project_api_key.test" + projectID = os.Getenv("MONGODB_ATLAS_PROJECT_ID") + description = fmt.Sprintf("test-acc-import-project-api_key-%s", acctest.RandString(5)) + roleName = "GROUP_OWNER" + ) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + ProviderFactories: testAccProviderFactories, + CheckDestroy: testAccCheckMongoDBAtlasProjectAPIKeyDestroy, + Steps: []resource.TestStep{ + { + Config: testAccMongoDBAtlasProjectAPIKeyConfigBasic(projectID, description, roleName), + }, + { + ResourceName: resourceName, + ImportStateIdFunc: 
testAccCheckMongoDBAtlasProjectAPIKeyImportStateIDFunc(resourceName), + ImportState: true, + ImportStateVerify: false, + }, + }, + }) +} + +func testAccCheckMongoDBAtlasProjectAPIKeyDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*MongoDBClient).Atlas + + for _, rs := range s.RootModule().Resources { + if rs.Type != "mongodbatlas_project_api_key" { + continue + } + + ids := decodeStateID(rs.Primary.ID) + + projectAPIKeys, _, err := conn.ProjectAPIKeys.List(context.Background(), ids["project_id"], nil) + if err != nil { + return nil + } + + for _, val := range projectAPIKeys { + if val.ID == ids["api_key_id"] { + return fmt.Errorf("Project API Key (%s) still exists", ids["api_key_id"]) + } + } + } + + return nil +} + +func testAccCheckMongoDBAtlasProjectAPIKeyImportStateIDFunc(resourceName string) resource.ImportStateIdFunc { + return func(s *terraform.State) (string, error) { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return "", fmt.Errorf("not found: %s", resourceName) + } + + return fmt.Sprintf("%s-%s", rs.Primary.Attributes["project_id"], rs.Primary.Attributes["api_key_id"]), nil + } +} + +func testAccMongoDBAtlasProjectAPIKeyConfigBasic(projectID, description, roleNames string) string { + return fmt.Sprintf(` + resource "mongodbatlas_project_api_key" "test" { + project_id = %[1]q + description = %[2]q + role_names = [%[3]q] + } + `, projectID, description, roleNames) +} diff --git a/mongodbatlas/resource_mongodbatlas_search_index.go b/mongodbatlas/resource_mongodbatlas_search_index.go index 7b7222aec9..8ef26102c2 100644 --- a/mongodbatlas/resource_mongodbatlas_search_index.go +++ b/mongodbatlas/resource_mongodbatlas_search_index.go @@ -180,7 +180,7 @@ func resourceMongoDBAtlasSearchIndexUpdate(ctx context.Context, d *schema.Resour } if d.HasChange("collection_name") { - searchIndex.CollectionName = d.Get("collectionName").(string) + searchIndex.CollectionName = d.Get("collection_name").(string) } if 
d.HasChange("database") { @@ -192,7 +192,7 @@ func resourceMongoDBAtlasSearchIndexUpdate(ctx context.Context, d *schema.Resour } if d.HasChange("search_analyzer") { - searchIndex.SearchAnalyzer = d.Get("searchAnalyzer").(string) + searchIndex.SearchAnalyzer = d.Get("search_analyzer").(string) } if d.HasChange("mappings_dynamic") { diff --git a/mongodbatlas/resource_mongodbatlas_search_index_test.go b/mongodbatlas/resource_mongodbatlas_search_index_test.go index f4271cb366..739954a037 100644 --- a/mongodbatlas/resource_mongodbatlas_search_index_test.go +++ b/mongodbatlas/resource_mongodbatlas_search_index_test.go @@ -162,39 +162,35 @@ func testAccMongoDBAtlasSearchIndexConfig(projectID, clusterName string) string project_id = "%[1]s" name = "%[2]s" disk_size_gb = 10 - + cluster_type = "REPLICASET" replication_specs { - num_shards = 1 - regions_config { - region_name = "US_EAST_2" - electable_nodes = 3 - priority = 7 - read_only_nodes = 0 - } + num_shards = 1 + regions_config { + region_name = "US_EAST_2" + electable_nodes = 3 + priority = 7 + read_only_nodes = 0 + } } - backup_enabled = false auto_scaling_disk_gb_enabled = false - + // Provider Settings "block" provider_name = "AWS" provider_instance_size_name = "M10" - } resource "mongodbatlas_search_index" "test" { - project_id = mongodbatlas_cluster.aws_conf.project_id - cluster_name = mongodbatlas_cluster.aws_conf.name - analyzer = "lucene.simple" - collection_name = "collection_test" - database = "database_test" + project_id = mongodbatlas_cluster.aws_conf.project_id + cluster_name = mongodbatlas_cluster.aws_conf.name + analyzer = "lucene.simple" + collection_name = "collection_test" + database = "database_test" mappings_dynamic = "true" - name = "name_test" - search_analyzer = "lucene.standard" + name = "name_test" + search_analyzer = "lucene.standard" } - - `, projectID, clusterName) } @@ -207,13 +203,13 @@ func testAccMongoDBAtlasSearchIndexConfigAdvanced(projectID, clusterName string) cluster_type = 
"REPLICASET" replication_specs { - num_shards = 1 - regions_config { - region_name = "US_EAST_2" - electable_nodes = 3 - priority = 7 - read_only_nodes = 0 - } + num_shards = 1 + regions_config { + region_name = "US_EAST_2" + electable_nodes = 3 + priority = 7 + read_only_nodes = 0 + } } backup_enabled = false @@ -226,69 +222,77 @@ func testAccMongoDBAtlasSearchIndexConfigAdvanced(projectID, clusterName string) } resource "mongodbatlas_search_index" "test" { - project_id = mongodbatlas_cluster.aws_conf.project_id - cluster_name = mongodbatlas_cluster.aws_conf.name + project_id = mongodbatlas_cluster.aws_conf.project_id + cluster_name = mongodbatlas_cluster.aws_conf.name - analyzer = "lucene.simple" - collection_name = "collection_test" - database = "database_test" + analyzer = "lucene.simple" + collection_name = "collection_test" + database = "database_test" mappings_dynamic = false - mappings_fields = <<-EOF - { - "address": { - "type": "document", - "fields": { - "city": { - "type": "string", - "analyzer": "lucene.simple", - "ignoreAbove": 255 - }, - "state": { - "type": "string", - "analyzer": "lucene.english" - } + mappings_fields = <<-EOF + { + "address":{ + "type":"document", + "fields":{ + "city":{ + "type":"string", + "analyzer":"lucene.simple", + "ignoreAbove":255 + }, + "state":{ + "type":"string", + "analyzer":"lucene.english" + } } - }, - "company": { - "type": "string", - "analyzer": "lucene.whitespace", - "multi": { - "mySecondaryAnalyzer": { - "type": "string", - "analyzer": "lucene.french" - } + }, + "company":{ + "type":"string", + "analyzer":"lucene.whitespace", + "multi":{ + "mySecondaryAnalyzer":{ + "type":"string", + "analyzer":"lucene.french" + } } - }, - "employees": { - "type": "string", - "analyzer": "lucene.standard" - } + }, + "employees":{ + "type":"string", + "analyzer":"lucene.standard" } - EOF - name = "name_test" - search_analyzer = "lucene.standard" - analyzers = <<-EOF - [{ - "name": "index_analyzer_test_name", - "charFilters": [{ 
- "type": "mapping", - "mappings": {"\\" : "/"} - }], - "tokenizer": [{ - "type": "nGram", - "minGram": 2, - "maxGram": 5 - }], - "tokenFilters": [{ - "type": "length", - "min": 20, - "max": 33 - }] - }] + } + EOF + name = "name_test" + search_analyzer = "lucene.standard" + analyzers = <<-EOF + [ + { + "name":"index_analyzer_test_name", + "charFilters":[ + { + "type":"mapping", + "mappings":{ + "\\":"/" + } + } + ], + "tokenizer":[ + { + "type":"nGram", + "minGram":2, + "maxGram":5s + } + ], + "tokenFilters":[ + { + "type":"length", + "min":20, + "max":33 + } + ] + } + ] EOF } - - `, projectID, clusterName) } @@ -298,44 +302,42 @@ func testAccMongoDBAtlasSearchIndexConfigSynonyms(projectID, clusterName string) project_id = "%[1]s" name = "%[2]s" disk_size_gb = 10 - + cluster_type = "REPLICASET" replication_specs { - num_shards = 1 - regions_config { - region_name = "US_EAST_2" - electable_nodes = 3 - priority = 7 - read_only_nodes = 0 - } + num_shards = 1 + regions_config { + region_name = "US_EAST_2" + electable_nodes = 3 + priority = 7 + read_only_nodes = 0 + } } - + backup_enabled = false auto_scaling_disk_gb_enabled = false - + // Provider Settings "block" provider_name = "AWS" provider_instance_size_name = "M10" - + } - + resource "mongodbatlas_search_index" "test" { - project_id = mongodbatlas_cluster.test_cluster.project_id - cluster_name = mongodbatlas_cluster.test_cluster.name - analyzer = "lucene.standard" - collection_name = "collection_test" - database = "database_test" + project_id = mongodbatlas_cluster.test_cluster.project_id + cluster_name = mongodbatlas_cluster.test_cluster.name + analyzer = "lucene.standard" + collection_name = "collection_test" + database = "database_test" mappings_dynamic = "true" - name = "name_test" - search_analyzer = "lucene.standard" + name = "name_test" + search_analyzer = "lucene.standard" synonyms { - analyzer = "lucene.simple" - name = "synonym_test" + analyzer = "lucene.simple" + name = "synonym_test" source_collection 
= "collection_test" } } - - `, projectID, clusterName) } diff --git a/mongodbatlas/resource_mongodbatlas_third_party_integration.go b/mongodbatlas/resource_mongodbatlas_third_party_integration.go index 2ec5bf57eb..e068356017 100644 --- a/mongodbatlas/resource_mongodbatlas_third_party_integration.go +++ b/mongodbatlas/resource_mongodbatlas_third_party_integration.go @@ -55,6 +55,7 @@ func resourceMongoDBAtlasThirdPartyIntegration() *schema.Resource { Required: true, ForceNew: true, ValidateFunc: validation.StringInSlice(integrationTypes, false), + Deprecated: "This field type has values (NEW_RELIC, FLOWDOCK) that are deprecated and will be removed in 1.9.0 release ", }, "license_key": { Type: schema.TypeString, @@ -206,7 +207,7 @@ func resourceMongoDBAtlasThirdPartyIntegrationRead(ctx context.Context, d *schem return diag.FromErr(fmt.Errorf("error getting third party integration resource info %s %w", integrationType, err)) } - integrationMap := integrationToSchema(integration) + integrationMap := integrationToSchema(d, integration) for key, val := range integrationMap { if err := d.Set(key, val); err != nil { diff --git a/mongodbatlas/resource_mongodbatlas_x509_authentication_database_user.go b/mongodbatlas/resource_mongodbatlas_x509_authentication_database_user.go index 511e91b397..92afca1181 100644 --- a/mongodbatlas/resource_mongodbatlas_x509_authentication_database_user.go +++ b/mongodbatlas/resource_mongodbatlas_x509_authentication_database_user.go @@ -102,7 +102,7 @@ func resourceMongoDBAtlasX509AuthDBUserCreate(ctx context.Context, d *schema.Res projectID := d.Get("project_id").(string) username := d.Get("username").(string) - var currentCertificate string + var serialNumber string if expirationMonths, ok := d.GetOk("months_until_expiration"); ok { res, _, err := conn.X509AuthDBUsers.CreateUserCertificate(ctx, projectID, username, expirationMonths.(int)) @@ -110,7 +110,10 @@ func resourceMongoDBAtlasX509AuthDBUserCreate(ctx context.Context, d *schema.Res 
return diag.FromErr(fmt.Errorf(errorX509AuthDBUsersCreate, username, projectID, err)) } - currentCertificate = res.Certificate + serialNumber = cast.ToString(res.ID) + if err := d.Set("current_certificate", cast.ToString(res.Certificate)); err != nil { + return diag.FromErr(fmt.Errorf(errorX509AuthDBUsersSetting, "current_certificate", username, err)) + } } else { customerX509Cas := d.Get("customer_x509_cas").(string) _, _, err := conn.X509AuthDBUsers.SaveConfiguration(ctx, projectID, &matlas.CustomerX509{Cas: customerX509Cas}) @@ -120,9 +123,9 @@ func resourceMongoDBAtlasX509AuthDBUserCreate(ctx context.Context, d *schema.Res } d.SetId(encodeStateID(map[string]string{ - "project_id": projectID, - "username": username, - "current_certificate": currentCertificate, + "project_id": projectID, + "username": username, + "serial_number": serialNumber, })) return resourceMongoDBAtlasX509AuthDBUserRead(ctx, d, meta) @@ -134,11 +137,11 @@ func resourceMongoDBAtlasX509AuthDBUserRead(ctx context.Context, d *schema.Resou ids := decodeStateID(d.Id()) projectID := ids["project_id"] username := ids["username"] - currentCertificate := ids["current_certificate"] var ( certificates []matlas.UserCertificate err error + serialNumber string ) if username != "" { @@ -152,16 +155,21 @@ func resourceMongoDBAtlasX509AuthDBUserRead(ctx context.Context, d *schema.Resou } return diag.FromErr(fmt.Errorf(errorX509AuthDBUsersRead, username, projectID, err)) } - } - - if err := d.Set("current_certificate", cast.ToString(currentCertificate)); err != nil { - return diag.FromErr(fmt.Errorf(errorX509AuthDBUsersSetting, "current_certificate", username, err)) + for _, val := range certificates { + serialNumber = cast.ToString(val.ID) + } } if err := d.Set("certificates", flattenCertificates(certificates)); err != nil { return diag.FromErr(fmt.Errorf(errorX509AuthDBUsersSetting, "certificates", username, err)) } + d.SetId(encodeStateID(map[string]string{ + "project_id": projectID, + "username": 
username, + "serial_number": serialNumber, + })) + return nil } diff --git a/website/docs/d/access_list_api_key.html.markdown b/website/docs/d/access_list_api_key.html.markdown new file mode 100644 index 0000000000..0d7466d409 --- /dev/null +++ b/website/docs/d/access_list_api_key.html.markdown @@ -0,0 +1,67 @@ +--- +layout: "mongodbatlas" +page_title: "MongoDB Atlas: access_list_api_key" +sidebar_current: "docs-mongodbatlas-datasource-access-list-api-key" +description: |- + Provides an Access List API Key resource. +--- + +# Data Source: mongodbatlas_access_list_api_key + +`mongodbatlas_access_list_api_key` describes an Access List API Key entry resource. The access list grants access from IPs, CIDRs) to clusters within the Project. + +-> **NOTE:** Groups and projects are synonymous terms. You may find `groupId` in the official documentation. + +~> **IMPORTANT:** +When you remove an entry from the access list, existing connections from the removed address(es) may remain open for a variable amount of time. How much time passes before Atlas closes the connection depends on several factors, including how the connection was established, the particular behavior of the application or driver using the address, and the connection protocol (e.g., TCP or UDP). This is particularly important to consider when changing an existing IP address or CIDR block as they cannot be updated via the Provider (comments can however), hence a change will force the destruction and recreation of entries. 
+ + ## Example Usage + + ### Using CIDR Block +```terraform +resource "mongodbatlas_access_list_api_key" "test" { + org_id = "" + cidr_block = "1.2.3.4/32" + api_key_id = "a29120e123cd" +} + +data "mongodbatlas_access_list_api_key" "test" { + org_id = mongodbatlas_access_list_api_key.test.org_id + cidr_block = mongodbatlas_access_list_api_key.test.cidr_block + api_key_id = mongodbatlas_access_list_api_key.test.api_key_id +} +``` + +### Using IP Address +```terraform +resource "mongodbatlas_access_list_api_key" "test" { + org_id = "" + ip_address = "2.3.4.5" + api_key_id = "a29120e123cd" +} + +data "mongodbatlas_access_list_api_key" "test" { + org_id = mongodbatlas_access_list_api_key.test.org_id + ip_address = mongodbatlas_access_list_api_key.test.ip_address + api_key_id = mongodbatlas_access_list_api_key.test.api_key_id +} +``` + +## Argument Reference + +* `org_id` - (Required) Unique identifier for the Organization whose access list entries you want to retrieve. +* `cidr_block` - (Optional) Range of IP addresses in CIDR notation to be added to the access list. +* `ip_address` - (Optional) Single IP address to be added to the access list. +* `api_key_id` - (Required) Unique identifier for the Organization API Key for which you want to retrieve an access list entry. + +-> **NOTE:** One of the following attributes must be set: `cidr_block` or `ip_address`. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - Unique identifier used by Terraform for internal management; it can also be used to import the resource. +* `comment` - Comment associated with the access list entry. 
+ +For more information see: [MongoDB Atlas API Reference.](https://docs.atlas.mongodb.com/reference/api/access-lists/) diff --git a/website/docs/d/access_list_api_keys.html.markdown b/website/docs/d/access_list_api_keys.html.markdown new file mode 100644 index 0000000000..6d110eeb81 --- /dev/null +++ b/website/docs/d/access_list_api_keys.html.markdown @@ -0,0 +1,75 @@ +--- +layout: "mongodbatlas" +page_title: "MongoDB Atlas: access_list_api_keys" +sidebar_current: "docs-mongodbatlas-datasource-access-list-api-keys" +description: |- + Describes Access List API Key entries. +--- + +# Data Source: mongodbatlas_access_list_api_keys + +`mongodbatlas_access_list_api_keys` describes all Access List API Key entries for an Organization API key. The access list restricts the IP addresses and CIDR blocks from which the Organization API key can be used. + +-> **NOTE:** Groups and projects are synonymous terms. You may find `groupId` in the official documentation. + +~> **IMPORTANT:** +When you remove an entry from the access list, existing connections from the removed address(es) may remain open for a variable amount of time. How much time passes before Atlas closes the connection depends on several factors, including how the connection was established, the particular behavior of the application or driver using the address, and the connection protocol (e.g., TCP or UDP). This is particularly important to consider when changing an existing IP address or CIDR block, as these cannot be updated via the Provider (comments, however, can); a change will therefore force the destruction and recreation of entries. 
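+The Example Usage section below was carried over from the singular `mongodbatlas_access_list_api_key` page. A minimal sketch of the plural data source itself (the argument names are assumed to mirror the singular form plus the paging options documented below, and `var.org_id` / `var.api_key_id` are placeholders):

```terraform
# Hedged sketch: lists every access list entry for one Organization API key.
data "mongodbatlas_access_list_api_keys" "all" {
  org_id     = var.org_id
  api_key_id = var.api_key_id

  # Optional paging, as documented below.
  page_num       = 1
  items_per_page = 100
}
```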
+ + ## Example Usage + + ### Using CIDR Block +```terraform +resource "mongodbatlas_access_list_api_key" "test" { + org_id = "" + cidr_block = "1.2.3.4/32" + api_key_id = "a29120e123cd" +} + +data "mongodbatlas_access_list_api_key" "test" { + org_id = mongodbatlas_access_list_api_key.test.org_id + cidr_block = mongodbatlas_access_list_api_key.test.cidr_block + api_key_id = mongodbatlas_access_list_api_key.test.api_key_id +} +``` + +### Using IP Address +```terraform +resource "mongodbatlas_access_list_api_key" "test" { + org_id = "" + ip_address = "2.3.4.5" + api_key_id = "a29120e123cd" +} + +data "mongodbatlas_access_list_api_key" "test" { + org_id = mongodbatlas_access_list_api_key.test.org_id + ip_address = mongodbatlas_access_list_api_key.test.ip_address + api_key_id = mongodbatlas_access_list_api_key.test.api_key_id +} +``` + + +## Argument Reference + +* `page_num` - (Optional) The page to return. Defaults to `1`. +* `items_per_page` - (Optional) Number of items to return per page, up to a maximum of 500. Defaults to `100`. + +* `id` - Autogenerated Unique ID for this data source. +* `results` - A list where each element represents an access list entry. + +### API Keys +* `org_id` - (Required) Unique identifier for the Organization whose access list entries you want to retrieve. +* `cidr_block` - (Optional) Range of IP addresses in CIDR notation to be added to the access list. +* `ip_address` - (Optional) Single IP address to be added to the access list. +* `api_key_id` - (Required) Unique identifier for the Organization API Key for which you want to retrieve an access list entry. + +-> **NOTE:** One of the following attributes must be set: `cidr_block` or `ip_address`. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - Unique identifier used by Terraform for internal management; it can also be used to import the resource. +* `comment` - Comment associated with the access list entry. 
+ +For more information see: [MongoDB Atlas API Reference.](https://docs.atlas.mongodb.com/reference/api/access-lists/) diff --git a/website/docs/d/advanced_cluster.html.markdown b/website/docs/d/advanced_cluster.html.markdown index 01627c19a0..413b46bf9b 100644 --- a/website/docs/d/advanced_cluster.html.markdown +++ b/website/docs/d/advanced_cluster.html.markdown @@ -53,7 +53,7 @@ data "mongodbatlas_advanced_cluster" "example" { In addition to all arguments above, the following attributes are exported: * `id` - The cluster ID. -* `bi_connector` - Configuration settings applied to BI Connector for Atlas on this cluster. See [below](#bi_connector). +* `bi_connector_config` - Configuration settings applied to BI Connector for Atlas on this cluster. See [below](#bi_connector_config). **NOTE** Prior versions of the provider exposed this parameter as `bi_connector`. * `cluster_type` - Type of the cluster that you want to create. * `disk_size_gb` - Capacity, in gigabytes, of the host's root volume. * `encryption_at_rest_provider` - Possible values are AWS, GCP, AZURE or NONE. @@ -67,7 +67,7 @@ In addition to all arguments above, the following attributes are exported: * `advanced_configuration` - Get the advanced configuration options. See [Advanced Configuration](#advanced-configuration) below for more details. -### bi_connector +### bi_connector_config Specifies BI Connector for Atlas configuration. @@ -94,6 +94,7 @@ Key-value pairs that tag and categorize the cluster. Each key and value has a ma * `analytics_specs` - Hardware specifications for [analytics nodes](https://docs.atlas.mongodb.com/reference/faq/deployment/#std-label-analytics-nodes-overview) needed in the region. See [below](#specs) * `auto_scaling` - Configuration for the Collection of settings that configures auto-scaling information for the cluster. See [below](#auto_scaling) +* `analytics_auto_scaling` - Configuration for the Collection of settings that configures analytics-auto-scaling information for the cluster. 
See [below](#analytics_auto_scaling) * `backing_provider_name` - Cloud service provider on which you provision the host for a multi-tenant cluster. * `electable_specs` - Hardware specifications for electable nodes in the region. * `priority` - Election priority of the region. @@ -118,6 +119,13 @@ Key-value pairs that tag and categorize the cluster. Each key and value has a ma * `compute_min_instance_size` - Minimum instance size to which your cluster can automatically scale (such as M10). * `compute_max_instance_size` - Maximum instance size to which your cluster can automatically scale (such as M40). +### analytics_auto_scaling + +* `disk_gb_enabled` - Flag that indicates whether this cluster enables disk auto-scaling. +* `compute_enabled` - Flag that indicates whether instance size auto-scaling is enabled. +* `compute_scale_down_enabled` - Flag that indicates whether the instance size may scale down. +* `compute_min_instance_size` - Minimum instance size to which your cluster can automatically scale (such as M10). +* `compute_max_instance_size` - Maximum instance size to which your cluster can automatically scale (such as M40). #### Advanced Configuration * `default_read_concern` - [Default level of acknowledgment requested from MongoDB for read operations](https://docs.mongodb.com/manual/reference/read-concern/) set for this cluster. MongoDB 4.4 clusters default to [available](https://docs.mongodb.com/manual/reference/read-concern-available/). @@ -132,6 +140,7 @@ Key-value pairs that tag and categorize the cluster. Each key and value has a ma * `no_table_scan` - When true, the cluster disables the execution of any query that requires a collection scan to return results. When false, the cluster allows the execution of those operations. * `oplog_size_mb` - The custom oplog size of the cluster. Without a value that indicates that the cluster uses the default oplog size calculated by Atlas. 
+* `oplog_min_retention_hours` - Minimum retention window for cluster's oplog expressed in hours. A value of null indicates that the cluster uses the default minimum oplog window that MongoDB Cloud calculates. * `sample_size_bi_connector` - Number of documents per database to sample when gathering schema information. Defaults to 100. Available only for Atlas deployments in which BI Connector for Atlas is enabled. * `sample_refresh_interval_bi_connector` - Interval in seconds at which the mongosqld process re-samples data to create its relational schema. The default value is 300. The specified value must be a positive integer. Available only for Atlas deployments in which BI Connector for Atlas is enabled. @@ -159,7 +168,7 @@ In addition to all arguments above, the following attributes are exported: - `connection_strings.private_endpoint.#.srv_connection_string` - Private-endpoint-aware `mongodb+srv://` connection string for this private endpoint. The `mongodb+srv` protocol tells the driver to look up the seed list of hosts in DNS . Atlas synchronizes this list with the nodes in a cluster. If the connection string uses this URI format, you don't need to: Append the seed list or Change the URI if the nodes change. Use this URI format if your driver supports it. If it doesn't, use `connection_strings.private_endpoint[n].connection_string` - `connection_strings.private_endpoint.#.type` - Type of MongoDB process that you connect to with the connection strings. Atlas returns `MONGOD` for replica sets, or `MONGOS` for sharded clusters. - `connection_strings.private_endpoint.#.endpoints` - Private endpoint through which you connect to Atlas when you use `connection_strings.private_endpoint[n].connection_string` or `connection_strings.private_endpoint[n].srv_connection_string` - - `connection_strings.private_endoint.#.endpoints.#.endpoint_id` - Unique identifier of the private endpoint. 
+ - `connection_strings.private_endpoint.#.endpoints.#.endpoint_id` - Unique identifier of the private endpoint. - `connection_strings.private_endpoint.#.endpoints.#.provider_name` - Cloud provider to which you deployed the private endpoint. Atlas returns `AWS` or `AZURE`. - `connection_strings.private_endpoint.#.endpoints.#.region` - Region to which you deployed the private endpoint. * `paused` - Flag that indicates whether the cluster is paused or not. diff --git a/website/docs/d/advanced_clusters.html.markdown b/website/docs/d/advanced_clusters.html.markdown index f1147420a9..0d9a42c29b 100644 --- a/website/docs/d/advanced_clusters.html.markdown +++ b/website/docs/d/advanced_clusters.html.markdown @@ -55,7 +55,7 @@ In addition to all arguments above, the following attributes are exported: ### Advanced Cluster -* `bi_connector` - Configuration settings applied to BI Connector for Atlas on this cluster. See [below](#bi_connector). +* `bi_connector_config` - Configuration settings applied to BI Connector for Atlas on this cluster. See [below](#bi_connector_config). **NOTE** Prior versions of the provider exposed this parameter as `bi_connector`. * `cluster_type` - Type of the cluster that you want to create. * `disk_size_gb` - Capacity, in gigabytes, of the host's root volume. * `encryption_at_rest_provider` - Possible values are AWS, GCP, AZURE or NONE. @@ -69,7 +69,7 @@ In addition to all arguments above, the following attributes are exported: * `advanced_configuration` - Get the advanced configuration options. See [Advanced Configuration](#advanced-configuration) below for more details. -### bi_connector +### bi_connector_config Specifies BI Connector for Atlas configuration. @@ -96,6 +96,7 @@ Key-value pairs that tag and categorize the cluster. Each key and value has a ma * `analytics_specs` - Hardware specifications for [analytics nodes](https://docs.atlas.mongodb.com/reference/faq/deployment/#std-label-analytics-nodes-overview) needed in the region. 
See [below](#specs) * `auto_scaling` - Configuration for the Collection of settings that configures auto-scaling information for the cluster. See [below](#auto_scaling) +* `analytics_auto_scaling` - Configuration for the Collection of settings that configures analytics-auto-scaling information for the cluster. See [below](#analytics_auto_scaling) * `backing_provider_name` - Cloud service provider on which you provision the host for a multi-tenant cluster. * `electable_specs` - Hardware specifications for electable nodes in the region. * `priority` - Election priority of the region. @@ -120,6 +121,14 @@ Key-value pairs that tag and categorize the cluster. Each key and value has a ma * `compute_min_instance_size` - Minimum instance size to which your cluster can automatically scale (such as M10). * `compute_max_instance_size` - Maximum instance size to which your cluster can automatically scale (such as M40). +### analytics_auto_scaling + +* `disk_gb_enabled` - Flag that indicates whether this cluster enables disk auto-scaling. +* `compute_enabled` - Flag that indicates whether instance size auto-scaling is enabled. +* `compute_scale_down_enabled` - Flag that indicates whether the instance size may scale down. +* `compute_min_instance_size` - Minimum instance size to which your cluster can automatically scale (such as M10). +* `compute_max_instance_size` - Maximum instance size to which your cluster can automatically scale (such as M40). + #### Advanced Configuration * `default_read_concern` - [Default level of acknowledgment requested from MongoDB for read operations](https://docs.mongodb.com/manual/reference/read-concern/) set for this cluster. MongoDB 4.4 clusters default to [available](https://docs.mongodb.com/manual/reference/read-concern-available/). @@ -134,6 +143,7 @@ Key-value pairs that tag and categorize the cluster. 
Each key and value has a ma * `no_table_scan` - When true, the cluster disables the execution of any query that requires a collection scan to return results. When false, the cluster allows the execution of those operations. * `oplog_size_mb` - The custom oplog size of the cluster. Without a value that indicates that the cluster uses the default oplog size calculated by Atlas. +* `oplog_min_retention_hours` - Minimum retention window for cluster's oplog expressed in hours. A value of null indicates that the cluster uses the default minimum oplog window that MongoDB Cloud calculates. * `sample_size_bi_connector` - Number of documents per database to sample when gathering schema information. Defaults to 100. Available only for Atlas deployments in which BI Connector for Atlas is enabled. * `sample_refresh_interval_bi_connector` - Interval in seconds at which the mongosqld process re-samples data to create its relational schema. The default value is 300. The specified value must be a positive integer. Available only for Atlas deployments in which BI Connector for Atlas is enabled. @@ -162,7 +172,7 @@ In addition to all arguments above, the following attributes are exported: - `connection_strings.private_endpoint.#.srv_connection_string` - Private-endpoint-aware `mongodb+srv://` connection string for this private endpoint. The `mongodb+srv` protocol tells the driver to look up the seed list of hosts in DNS . Atlas synchronizes this list with the nodes in a cluster. If the connection string uses this URI format, you don't need to: Append the seed list or Change the URI if the nodes change. Use this URI format if your driver supports it. If it doesn't, use `connection_strings.private_endpoint[n].connection_string` - `connection_strings.private_endpoint.#.type` - Type of MongoDB process that you connect to with the connection strings. Atlas returns `MONGOD` for replica sets, or `MONGOS` for sharded clusters. 
- `connection_strings.private_endpoint.#.endpoints` - Private endpoint through which you connect to Atlas when you use `connection_strings.private_endpoint[n].connection_string` or `connection_strings.private_endpoint[n].srv_connection_string` - - `connection_strings.private_endoint.#.endpoints.#.endpoint_id` - Unique identifier of the private endpoint. + - `connection_strings.private_endpoint.#.endpoints.#.endpoint_id` - Unique identifier of the private endpoint. - `connection_strings.private_endpoint.#.endpoints.#.provider_name` - Cloud provider to which you deployed the private endpoint. Atlas returns `AWS` or `AZURE`. - `connection_strings.private_endpoint.#.endpoints.#.region` - Region to which you deployed the private endpoint. * `paused` - Flag that indicates whether the cluster is paused or not. diff --git a/website/docs/d/alert_configuration.html.markdown b/website/docs/d/alert_configuration.html.markdown index 786c5fbf1e..752c6d31be 100644 --- a/website/docs/d/alert_configuration.html.markdown +++ b/website/docs/d/alert_configuration.html.markdown @@ -87,16 +87,36 @@ data "mongodbatlas_alert_configuration" "test" { } ``` +Use this data source to generate resource HCL and an import statement. This is useful if you have a specific alert_configuration_id and want to manage the resource as-is in state. To import all alerts, refer to the documentation on [data_source_mongodbatlas_alert_configurations](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/data-sources/alert_configurations) +``` +data "mongodbatlas_alert_configuration" "test" { + project_id = var.project_id + alert_configuration_id = var.alert_configuration_id + + output { + type = "resource_hcl" + label = "test" + } + + output { + type = "resource_import" + label = "test" + } +} +``` + ## Argument Reference * `project_id` - (Required) The ID of the project where the alert configuration will be created. * `alert_configuration_id` - (Required) Unique identifier for the alert configuration.
+* `output` - (Optional) List of formatted output requested for this alert configuration +* `output.#.type` - (Required) If the output is requested, you must specify its type. The formatted output is computed as `output.#.value`. The following are the supported types: +- `resource_hcl`: This string is used to define the resource as it exists in MongoDB Atlas. +- `resource_import`: This string is used to import the existing resource into the state file. ## Attributes Reference In addition to all arguments above, the following attributes are exported: - -* `group_id` - Unique identifier of the project that owns this alert configuration. * `created` - Timestamp in ISO 8601 date and time format in UTC when this alert configuration was created. * `updated` - Timestamp in ISO 8601 date and time format in UTC when this alert configuration was last updated. * `enabled` - If set to true, the alert configuration is enabled. If enabled is not exported it is set to false. diff --git a/website/docs/d/alert_configurations.html.markdown b/website/docs/d/alert_configurations.html.markdown new file mode 100644 index 0000000000..4a903d7ceb --- /dev/null +++ b/website/docs/d/alert_configurations.html.markdown @@ -0,0 +1,74 @@ +--- +layout: "mongodbatlas" +page_title: "MongoDB Atlas: Alert Configurations" +sidebar_current: "docs-mongodbatlas-datasource-alert-configurations" +description: |- + Describes all Alert Configurations in a Project. +--- + +# Data Source: mongodbatlas_alert_configurations + +`mongodbatlas_alert_configurations` describes all Alert Configurations by the provided project_id. The data source requires your Project ID. + +-> **NOTE:** Groups and projects are synonymous terms. You may find group_id in the official documentation.
+ +## Example Usage + +```terraform +data "mongodbatlas_alert_configurations" "import" { + project_id = var.project_id + + output_type = ["resource_hcl", "resource_import"] +} + +locals { + alerts = data.mongodbatlas_alert_configurations.import.results + + outputs = flatten([ + for i, alert in local.alerts : + alert.output == null ? [] : alert.output + ]) + + output_values = compact([for i, o in local.outputs : o.value]) +} + +output "alert_output" { + value = join("\n", local.output_values) +} +``` + +Refer to the following for a full example on using this data_source as a tool to import all resources: +* [atlas-alert-configurations](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/atlas-alert-configurations) + +## Argument Reference + +* `project_id` - (Required) The unique ID for the project to get the alert configurations. +* `list_options` - (Optional) Arguments that dictate how many and which results are returned by the data source +* `list_options.page_num` - Which page of results to retrieve (default to first page) +* `list_options.items_per_page` - How many alerts to retrieve per page (default 100) +* `list_options.include_count` - Whether to include total count of results in the response (default false) +* `output_type` - (Optional) List of requested string formatted output to be included on each individual result. Options are `resource_hcl` and `resource_import`. Available to make it easy to gather resource statements for existing alert configurations, and corresponding import statements to import said resource state into the statefile. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `total_count` - Total count of results +* `results` - A list of alert configurations for the project_id, constrained by the `list_options`. 
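The paging arguments above can be combined with `output_type`. As a minimal sketch (assuming `list_options` is expressed as a nested block mirroring the documented `list_options.*` arguments; the labels and variable names are illustrative):

```terraform
data "mongodbatlas_alert_configurations" "paged" {
  project_id = var.project_id

  # Assumed shape: list_options as a nested block with the
  # documented list_options.* arguments.
  list_options {
    page_num       = 1
    items_per_page = 50
    include_count  = true
  }
}

output "alert_total_count" {
  value = data.mongodbatlas_alert_configurations.paged.total_count
}
```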
+ +### Alert Configuration + +* `project_id` - The ID of the project where the alert configuration exists +* `alert_configuration_id` - The ID of the alert configuration +* `created` - Timestamp in ISO 8601 date and time format in UTC when this alert configuration was created. +* `updated` - Timestamp in ISO 8601 date and time format in UTC when this alert configuration was last updated. +* `enabled` - If set to true, the alert configuration is enabled. If enabled is not exported it is set to false. +* `event_type` - The type of event that will trigger an alert. +* `matcher` - Rules to apply when matching an object against this alert configuration +* `metric_threshold_config` - The threshold that causes an alert to be triggered. Required if `event_type_name` is `OUTSIDE_METRIC_THRESHOLD` or `OUTSIDE_SERVERLESS_METRIC_THRESHOLD` +* `threshold_config` - Threshold that triggers an alert. Required if `event_type_name` is any value other than `OUTSIDE_METRIC_THRESHOLD` or `OUTSIDE_SERVERLESS_METRIC_THRESHOLD`. +* `notifications` - List of notifications to send when an alert condition is detected. +* `output` - Requested output string format for the alert configuration + +For more information see: [MongoDB Atlas API Reference.](https://docs.atlas.mongodb.com/reference/api/alert-configurations/) +Or refer to the individual resource or data_source documentation on alert configuration. \ No newline at end of file diff --git a/website/docs/d/api_key.html.markdown b/website/docs/d/api_key.html.markdown new file mode 100644 index 0000000000..925df52e42 --- /dev/null +++ b/website/docs/d/api_key.html.markdown @@ -0,0 +1,54 @@ +--- +layout: "mongodbatlas" +page_title: "MongoDB Atlas: api_key" +sidebar_current: "docs-mongodbatlas-datasource-api-key" +description: |- + Describes an API Key. +--- + +# Data Source: mongodbatlas_api_key + +`mongodbatlas_api_key` describes a MongoDB Atlas API Key. This represents an API Key that has been created.
+ +~> **IMPORTANT WARNING:** Creating, Reading, Updating, or Deleting Atlas API Keys may expose sensitive organizational secrets to Terraform State. For best security practices, consider storing sensitive API Key secrets instead via the [HashiCorp Vault MongoDB Atlas Secrets Engine](https://developer.hashicorp.com/vault/docs/secrets/mongodbatlas). + +-> **NOTE:** You may find org_id in the official documentation. + +## Example Usage + +### Using org_id attribute to query +```terraform +resource "mongodbatlas_api_key" "test" { + description = "key-name" + org_id = "" + role_names = ["ORG_READ_ONLY"] +} + +data "mongodbatlas_api_key" "test" { + org_id = "${mongodbatlas_api_key.test.org_id}" + api_key_id = "${mongodbatlas_api_key.test.api_key_id}" +} +``` + +## Argument Reference + +* `org_id` - (Required) The unique ID for the organization. +* `api_key_id` - (Required) Unique identifier for the organization API key. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `org_id` - Unique identifier for the organization whose API keys you want to retrieve. Use the /orgs endpoint to retrieve all organizations to which the authenticated user has access. +* `description` - Description of this Organization API key. +* `public_key` - Public key for this Organization API key. +* `private_key` - Private key for this Organization API key. +* `role_names` - Name of the role. This resource returns all the roles the user has in Atlas. +The following are valid roles: + * `ORG_OWNER` + * `ORG_GROUP_CREATOR` + * `ORG_BILLING_ADMIN` + * `ORG_READ_ONLY` + * `ORG_MEMBER` + +See [MongoDB Atlas API - API Key](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Programmatic-API-Keys/operation/returnOneOrganizationApiKey) - Documentation for more information.
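As a usage sketch building on the example above (output names are illustrative), the documented attributes can be surfaced through outputs; marking `private_key` as `sensitive` keeps the secret out of CLI output, though it still lands in state per the warning above:

```terraform
output "api_key_public_key" {
  value = data.mongodbatlas_api_key.test.public_key
}

output "api_key_private_key" {
  # Sensitive: redacted in plan/apply output, but still stored in state.
  value     = data.mongodbatlas_api_key.test.private_key
  sensitive = true
}
```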
diff --git a/website/docs/d/api_keys.html.markdown b/website/docs/d/api_keys.html.markdown new file mode 100644 index 0000000000..3779889025 --- /dev/null +++ b/website/docs/d/api_keys.html.markdown @@ -0,0 +1,56 @@ +--- +layout: "mongodbatlas" +page_title: "MongoDB Atlas: api_keys" +sidebar_current: "docs-mongodbatlas-api-keys" +description: |- + Describes API Keys. +--- + +# Data Source: mongodbatlas_api_keys + +`mongodbatlas_api_keys` describes all API Keys. This represents API Keys that have been created. + +~> **IMPORTANT WARNING:** Creating, Reading, Updating, or Deleting Atlas API Keys may expose sensitive organizational secrets to Terraform State. Consider storing sensitive API Key secrets instead via the [HashiCorp Vault MongoDB Atlas Secrets Engine](https://developer.hashicorp.com/vault/docs/secrets/mongodbatlas). + +-> **NOTE:** Groups and projects are synonymous terms. You may find `groupId` in the official documentation. + +## Example Usage + +```terraform +resource "mongodbatlas_api_key" "test" { + description = "key-name" + org_id = "" + role_names = ["ORG_READ_ONLY"] +} + +data "mongodbatlas_api_keys" "test" { + page_num = 1 + items_per_page = 5 +} +``` + +## Argument Reference +* `page_num` - (Optional) The page to return. Defaults to `1`. +* `items_per_page` - (Optional) Number of items to return per page, up to a maximum of 500. Defaults to `100`. + +## Attributes Reference + +* `id` - Autogenerated Unique ID for this data source. +* `results` - A list where each element represents an API Key. + +### API Keys + +* `org_id` - Unique identifier for the organization whose API keys you want to retrieve. Use the /orgs endpoint to retrieve all organizations to which the authenticated user has access. +* `api_key_id` - Unique identifier for the API key you want to retrieve. Use the /orgs/{ORG-ID}/apiKeys endpoint to retrieve all API keys to which the authenticated user has access for the specified organization.
+* `description` - Description of this Organization API key. +* `role_names` - Name of the role. This resource returns all the roles the user has in Atlas. + +The following are valid roles: + * `ORG_OWNER` + * `ORG_GROUP_CREATOR` + * `ORG_BILLING_ADMIN` + * `ORG_READ_ONLY` + * `ORG_MEMBER` + +See [MongoDB Atlas API - API Keys](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Programmatic-API-Keys/operation/returnAllOrganizationApiKeys) - Documentation for more information. diff --git a/website/docs/d/cloud_backup_schedule.html.markdown b/website/docs/d/cloud_backup_schedule.html.markdown index c09f3d94f6..2a7a2fe965 100644 --- a/website/docs/d/cloud_backup_schedule.html.markdown +++ b/website/docs/d/cloud_backup_schedule.html.markdown @@ -69,36 +69,45 @@ In addition to all arguments above, the following attributes are exported: ### Export * `export_bucket_id` - Unique identifier of the mongodbatlas_cloud_backup_snapshot_export_bucket export_bucket_id value. * `frequency_type` - Frequency associated with the export snapshot item. + ### Policy Item Hourly -* * `id` - Unique identifier of the backup policy item. -* `frequency_type` - Frequency associated with the backup policy item. -* `frequency_interval` - Desired frequency of the new backup policy item specified by `frequency_type`. -* `retention_unit` - Scope of the backup policy item: days, weeks, or months. +* `frequency_type` - Frequency associated with the backup policy item. For hourly policies, the frequency type is defined as `hourly`. Note that this is a read-only value and not required in plan files - its value is implied from the policy resource type. +* `frequency_interval` - Desired frequency of the new backup policy item specified by `frequency_type` (hourly in this case). The supported values for hourly policies are `1`, `2`, `4`, `6`, `8` or `12` hours. Note that `12` hours is the only accepted value for NVMe clusters. 
+* `retention_unit` - Scope of the backup policy item: `days`, `weeks`, or `months`. * `retention_value` - Value to associate with `retention_unit`. ### Policy Item Daily -* * `id` - Unique identifier of the backup policy item. -* `frequency_type` - Frequency associated with the backup policy item. -* `frequency_interval` - Desired frequency of the new backup policy item specified by `frequency_type`. -* `retention_unit` - Scope of the backup policy item: days, weeks, or months. -* `retention_value` - Value to associate with `retention_unit`. +* `frequency_type` - Frequency associated with the backup policy item. For daily policies, the frequency type is defined as `daily`. Note that this is a read-only value and not required in plan files - its value is implied from the policy resource type. +* `frequency_interval` - Desired frequency of the new backup policy item specified by `frequency_type` (daily in this case). The only supported value for daily policies is `1` day. +* `retention_unit` - Scope of the backup policy item: `days`, `weeks`, or `months`. +* `retention_value` - Value to associate with `retention_unit`. Note that for less frequent policy items, Atlas requires that you specify a retention period greater than or equal to the retention period specified for more frequent policy items. For example: If the hourly policy item specifies a retention of two days, the daily retention policy must specify two days or greater. ### Policy Item Weekly -* * `id` - Unique identifier of the backup policy item. -* `frequency_type` - Frequency associated with the backup policy item. -* `frequency_interval` - Desired frequency of the new backup policy item specified by `frequency_type`. -* `retention_unit` - Scope of the backup policy item: days, weeks, or months. -* `retention_value` - Value to associate with `retention_unit`. +* `frequency_type` - Frequency associated with the backup policy item. For weekly policies, the frequency type is defined as `weekly`. 
Note that this is a read-only value and not required in plan files - its value is implied from the policy resource type. +* `frequency_interval` - Desired frequency of the new backup policy item specified by `frequency_type` (weekly in this case). The supported values for weekly policies are `1` through `7`, where `1` represents Monday and `7` represents Sunday. +* `retention_unit` - Scope of the backup policy item: `days`, `weeks`, or `months`. +* `retention_value` - Value to associate with `retention_unit`. Weekly policy must have retention of at least 7 days or 1 week. Note that for less frequent policy items, Atlas requires that you specify a retention period greater than or equal to the retention period specified for more frequent policy items. For example: If the daily policy item specifies a retention of two weeks, the weekly retention policy must specify two weeks or greater. ### Policy Item Monthly -* * `id` - Unique identifier of the backup policy item. -* `frequency_type` - Frequency associated with the backup policy item. -* `frequency_interval` - Desired frequency of the new backup policy item specified by `frequency_type`. -* `retention_unit` - Scope of the backup policy item: days, weeks, or months. -* `retention_value` - Value to associate with `retention_unit`. +* `frequency_type` - Frequency associated with the backup policy item. For monthly policies, the frequency type is defined as `monthly`. Note that this is a read-only value and not required in plan files - its value is implied from the policy resource type. +* `frequency_interval` - Desired frequency of the new backup policy item specified by `frequency_type` (monthly in this case). The supported values for monthly policies are + * `1` through `28` where the number represents the day of the month i.e. `1` is the first of the month and `5` is the fifth day of the month. + * `40` represents the last day of the month (depending on the month).
+* `retention_unit` - Scope of the backup policy item: `days`, `weeks`, or `months`. +* `retention_value` - Value to associate with `retention_unit`. Monthly policy must have retention days of at least 31 days or 5 weeks or 1 month. Note that for less frequent policy items, Atlas requires that you specify a retention period greater than or equal to the retention period specified for more frequent policy items. For example: If the weekly policy item specifies a retention of two weeks, the monthly retention policy must specify two weeks or greater. + +### Snapshot Distribution + +* `cloud_provider` - Human-readable label that identifies the cloud provider that stores the snapshot copy, i.e. "AWS", "AZURE", "GCP" +* `frequencies` - List that describes which types of snapshots to copy, i.e. "HOURLY", "DAILY", "WEEKLY", "MONTHLY", "ON_DEMAND" +* `region_name` - Target region to copy snapshots belonging to replicationSpecId to. Please supply the 'Atlas Region' which can be found under https://www.mongodb.com/docs/atlas/reference/cloud-providers/ 'regions' link +* `replication_spec_id` - Unique 24-hexadecimal digit string that identifies the replication object for a zone in a cluster. For global clusters, there can be multiple zones to choose from. For sharded clusters and replica set clusters, there is only one zone in the cluster. To find the Replication Spec Id, do a GET request to Return One Cluster in One Project and consult the replicationSpecs array https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#operation/returnOneCluster +* `should_copy_oplogs` - Flag that indicates whether to copy the oplogs to the target region. You can use the oplogs to perform point-in-time restores. + +**Note** The parameter deleteCopiedBackups is not supported in Terraform; please leverage the Atlas Admin API or Atlas CLI instead to manage the lifecycle of backup snapshot copies.
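As a hedged sketch of reading these policy items back (assuming the data source takes `project_id` and `cluster_name`, matching the resource of the same name, and that the monthly items are exported under `policy_item_monthly` following the section headings above):

```terraform
data "mongodbatlas_cloud_backup_schedule" "test" {
  project_id   = var.project_id
  cluster_name = var.cluster_name
}

output "monthly_policy_items" {
  # Inspect the monthly policy items described above.
  value = data.mongodbatlas_cloud_backup_schedule.test.policy_item_monthly
}
```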
For more information see: [MongoDB Atlas API Reference.](https://docs.atlas.mongodb.com/reference/api/cloud-backup/schedule/get-all-schedules/) \ No newline at end of file diff --git a/website/docs/d/cluster.html.markdown b/website/docs/d/cluster.html.markdown index 1bd999eb0d..e5f0d2a5bf 100644 --- a/website/docs/d/cluster.html.markdown +++ b/website/docs/d/cluster.html.markdown @@ -94,7 +94,7 @@ In addition to all arguments above, the following attributes are exported: - `connection_strings.private_endpoint.#.srv_connection_string` - Private-endpoint-aware `mongodb+srv://` connection string for this private endpoint. - `connection_strings.private_endpoint.#.type` - Type of MongoDB process that you connect to with the connection strings. Atlas returns `MONGOD` for replica sets, or `MONGOS` for sharded clusters. - `connection_strings.private_endpoint.#.endpoints` - Private endpoint through which you connect to Atlas when you use `connection_strings.private_endpoint[n].connection_string` or `connection_strings.private_endpoint[n].srv_connection_string` - - `connection_strings.private_endoint.#.endpoints.#.endpoint_id` - Unique identifier of the private endpoint. + - `connection_strings.private_endpoint.#.endpoints.#.endpoint_id` - Unique identifier of the private endpoint. - `connection_strings.private_endpoint.#.endpoints.#.provider_name` - Cloud provider to which you deployed the private endpoint. Atlas returns `AWS` or `AZURE`. - `connection_strings.private_endpoint.#.endpoints.#.region` - Region to which you deployed the private endpoint. * `disk_size_gb` - Indicates the size in gigabytes of the server’s root volume (AWS/GCP Only). @@ -213,6 +213,7 @@ Contains a key-value pair that tags that the cluster was created by a Terraform * `no_table_scan` - When true, the cluster disables the execution of any query that requires a collection scan to return results. When false, the cluster allows the execution of those operations. 
* `oplog_size_mb` - The custom oplog size of the cluster. Without a value that indicates that the cluster uses the default oplog size calculated by Atlas. +* `oplog_min_retention_hours` - Minimum retention window for cluster's oplog expressed in hours. A value of null indicates that the cluster uses the default minimum oplog window that MongoDB Cloud calculates. * `sample_size_bi_connector` - Number of documents per database to sample when gathering schema information. Defaults to 100. Available only for Atlas deployments in which BI Connector for Atlas is enabled. * `sample_refresh_interval_bi_connector` - Interval in seconds at which the mongosqld process re-samples data to create its relational schema. The default value is 300. The specified value must be a positive integer. Available only for Atlas deployments in which BI Connector for Atlas is enabled. diff --git a/website/docs/d/clusters.html.markdown b/website/docs/d/clusters.html.markdown index fd661d1009..e91f4fbd9a 100644 --- a/website/docs/d/clusters.html.markdown +++ b/website/docs/d/clusters.html.markdown @@ -96,7 +96,7 @@ In addition to all arguments above, the following attributes are exported: - `connection_strings.private_endpoint.#.srv_connection_string` - Private-endpoint-aware `mongodb+srv://` connection string for this private endpoint. - `connection_strings.private_endpoint.#.type` - Type of MongoDB process that you connect to with the connection strings. Atlas returns `MONGOD` for replica sets, or `MONGOS` for sharded clusters. - `connection_strings.private_endpoint.#.endpoints` - Private endpoint through which you connect to Atlas when you use `connection_strings.private_endpoint[n].connection_string` or `connection_strings.private_endpoint[n].srv_connection_string` - - `connection_strings.private_endoint.#.endpoints.#.endpoint_id` - Unique identifier of the private endpoint. + - `connection_strings.private_endpoint.#.endpoints.#.endpoint_id` - Unique identifier of the private endpoint. 
- `connection_strings.private_endpoint.#.endpoints.#.provider_name` - Cloud provider to which you deployed the private endpoint. Atlas returns `AWS` or `AZURE`. - `connection_strings.private_endpoint.#.endpoints.#.region` - Region to which you deployed the private endpoint. * `disk_size_gb` - Indicates the size in gigabytes of the server’s root volume (AWS/GCP Only). @@ -213,6 +213,7 @@ Contains a key-value pair that tags that the cluster was created by a Terraform * `no_table_scan` - When true, the cluster disables the execution of any query that requires a collection scan to return results. When false, the cluster allows the execution of those operations. * `oplog_size_mb` - The custom oplog size of the cluster. Without a value that indicates that the cluster uses the default oplog size calculated by Atlas. +* `oplog_min_retention_hours` - Minimum retention window for cluster's oplog expressed in hours. A value of null indicates that the cluster uses the default minimum oplog window that MongoDB Cloud calculates. * `sample_size_bi_connector` - Number of documents per database to sample when gathering schema information. Defaults to 100. Available only for Atlas deployments in which BI Connector for Atlas is enabled. * `sample_refresh_interval_bi_connector` - Interval in seconds at which the mongosqld process re-samples data to create its relational schema. The default value is 300. The specified value must be a positive integer. Available only for Atlas deployments in which BI Connector for Atlas is enabled. 
diff --git a/website/docs/d/project.html.markdown b/website/docs/d/project.html.markdown index beadf19fe8..6a40fde0f6 100644 --- a/website/docs/d/project.html.markdown +++ b/website/docs/d/project.html.markdown @@ -16,9 +16,12 @@ description: |- ### Using project_id attribute to query ```terraform +data "mongodbatlas_roles_org_id" "test" { +} + resource "mongodbatlas_project" "test" { name = "project-name" - org_id = "" + org_id = data.mongodbatlas_roles_org_id.test.org_id teams { team_id = "5e0fa8c99ccf641c722fe645" @@ -75,8 +78,8 @@ In addition to all arguments above, the following attributes are exported: * `name` - The name of the project you want to create. (Cannot be changed via this Provider after creation.) * `org_id` - The ID of the organization you want to create the project within. -*`cluster_count` - The number of Atlas clusters deployed in the project. -*`created` - The ISO-8601-formatted timestamp of when Atlas created the project. +* `cluster_count` - The number of Atlas clusters deployed in the project. +* `created` - The ISO-8601-formatted timestamp of when Atlas created the project. * `teams.#.team_id` - The unique identifier of the team you want to associate with the project. The team and project must share the same parent organization. * `teams.#.role_names` - Each string in the array represents a project role assigned to the team. Every user associated with the team inherits these roles. The following are valid roles: @@ -105,4 +108,4 @@ The following are valid roles: * `region_usage_restrictions` - If GOV_REGIONS_ONLY the project can be used for government regions only, otherwise defaults to standard regions. For more information see [MongoDB Atlas for Government](https://www.mongodb.com/docs/atlas/government/api/#creating-a-project). 
-See [MongoDB Atlas API - Project](https://docs.atlas.mongodb.com/reference/api/project-get-one/) - [and MongoDB Atlas API - Teams](https://docs.atlas.mongodb.com/reference/api/project-get-teams/) Documentation for more information. +See [MongoDB Atlas API - Project](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Projects) - [and MongoDB Atlas API - Teams](https://docs.atlas.mongodb.com/reference/api/project-get-teams/) Documentation for more information. diff --git a/website/docs/d/project_api_key.html.markdown b/website/docs/d/project_api_key.html.markdown new file mode 100644 index 0000000000..f5fa982b11 --- /dev/null +++ b/website/docs/d/project_api_key.html.markdown @@ -0,0 +1,55 @@ +--- +layout: "mongodbatlas" +page_title: "MongoDB Atlas: project_api_key" +sidebar_current: "docs-mongodbatlas-datasource-project-api-key" +description: |- + Describes a Project API Key. +--- + +# Data Source: mongodbatlas_project_api_key + +`mongodbatlas_project_api_key` describes a MongoDB Atlas Project API Key. This represents a Project API Key that has been created. + +~> **IMPORTANT WARNING:** Creating, Reading, Updating, or Deleting Atlas API Keys may expose sensitive organizational secrets to Terraform State. For best security practices, consider storing sensitive API Key secrets instead via the [HashiCorp Vault MongoDB Atlas Secrets Engine](https://developer.hashicorp.com/vault/docs/secrets/mongodbatlas). + +-> **NOTE:** You may find project_id in the official documentation. + +## Example Usage + +### Using project_id attribute to query +```terraform +resource "mongodbatlas_project_api_key" "test" { + description = "key-name" + project_id = "" + role_names = ["GROUP_READ_ONLY"] +} + +data "mongodbatlas_project_api_key" "test" { + project_id = "${mongodbatlas_project_api_key.test.project_id}" + api_key_id = "${mongodbatlas_project_api_key.test.api_key_id}" +} +``` + +## Argument Reference + +* `project_id` - (Required) The unique ID for the project. +* `api_key_id` - (Required) Unique identifier for the project API key.
+ +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `project_id` - Unique identifier for the project whose API keys you want to retrieve. Use the /groups endpoint to retrieve all projects to which the authenticated user has access. +* `description` - Description of this Project API key. +* `public_key` - Public key for this Project API key. +* `private_key` - Private key for this Project API key. +* `role_names` - Name of the role. This resource returns all the roles the user has in Atlas. +The following are valid roles: + * `GROUP_OWNER` + * `GROUP_READ_ONLY` + * `GROUP_DATA_ACCESS_ADMIN` + * `GROUP_DATA_ACCESS_READ_WRITE` + * `GROUP_DATA_ACCESS_READ_ONLY` + * `GROUP_CLUSTER_MANAGER` + +See [MongoDB Atlas API - API Key](https://www.mongodb.com/docs/atlas/reference/api/projectApiKeys/get-all-apiKeys-in-one-project/) - Documentation for more information. diff --git a/website/docs/d/project_api_keys.html.markdown b/website/docs/d/project_api_keys.html.markdown new file mode 100644 index 0000000000..0f59fb210f --- /dev/null +++ b/website/docs/d/project_api_keys.html.markdown @@ -0,0 +1,57 @@ +--- +layout: "mongodbatlas" +page_title: "MongoDB Atlas: project_api_keys" +sidebar_current: "docs-mongodbatlas-project-api-keys" +description: |- + Describes Project API Keys. +--- + +# Data Source: mongodbatlas_project_api_keys + +`mongodbatlas_project_api_keys` describes all Project API Keys. This represents Project API Keys that have been created. + +~> **IMPORTANT WARNING:** Creating, Reading, Updating, or Deleting Atlas API Keys may expose sensitive organizational secrets to Terraform State. Consider storing sensitive API Key secrets instead via the [HashiCorp Vault MongoDB Atlas Secrets Engine](https://developer.hashicorp.com/vault/docs/secrets/mongodbatlas). + +-> **NOTE:** Groups and projects are synonymous terms. You may find `groupId` in the official documentation.
+
+## Example Usage
+
+```terraform
+resource "mongodbatlas_project_api_key" "test" {
+  description = "key-name"
+  project_id  = ""
+  role_names  = ["GROUP_READ_ONLY"]
+}
+
+data "mongodbatlas_project_api_keys" "test" {
+  page_num       = 1
+  items_per_page = 5
+  project_id     = ""
+}
+```
+
+## Argument Reference
+
+* `project_id` - (Required) The unique ID for the project.
+* `page_num` - (Optional) The page to return. Defaults to `1`.
+* `items_per_page` - (Optional) Number of items to return per page, up to a maximum of 500. Defaults to `100`.
+
+## Attributes Reference
+
+In addition to all arguments above, the following attributes are exported:
+
+* `id` - Autogenerated Unique ID for this data source.
+* `results` - A list where each element represents a Project API Key.
+
+### API Keys
+
+* `project_id` - Unique identifier for the project whose API keys you want to retrieve. Use the /groups endpoint to retrieve all projects to which the authenticated user has access.
+* `api_key_id` - Unique identifier for the API key. Use the /orgs/{ORG-ID}/apiKeys endpoint to retrieve all API keys to which the authenticated user has access for the specified organization.
+* `description` - Description of this Project API key.
+* `role_names` - Name of the role. This resource returns all the roles the user has in Atlas.
+The following are valid roles:
+  * `GROUP_OWNER`
+  * `GROUP_READ_ONLY`
+  * `GROUP_DATA_ACCESS_ADMIN`
+  * `GROUP_DATA_ACCESS_READ_WRITE`
+  * `GROUP_DATA_ACCESS_READ_ONLY`
+  * `GROUP_CLUSTER_MANAGER`
+
+See [MongoDB Atlas API - API Keys](https://www.mongodb.com/docs/atlas/reference/api/projectApiKeys/get-all-apiKeys-in-one-project/) Documentation for more information.
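The `results` list documented above can be consumed with a splat expression; a minimal sketch (the data source reference and output name here are illustrative):

```terraform
# Illustrative only: surface the description of every Project API Key returned in `results`.
data "mongodbatlas_project_api_keys" "test" {
  project_id = ""
}

output "project_api_key_descriptions" {
  value = data.mongodbatlas_project_api_keys.test.results[*].description
}
```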
diff --git a/website/docs/d/projects.html.markdown b/website/docs/d/projects.html.markdown index f64d252690..fd5e9e7ec8 100644 --- a/website/docs/d/projects.html.markdown +++ b/website/docs/d/projects.html.markdown @@ -15,9 +15,12 @@ description: |- ## Example Usage ```terraform +data "mongodbatlas_roles_org_id" "test" { +} + resource "mongodbatlas_project" "test" { name = "project-name" - org_id = "" + org_id = data.mongodbatlas_roles_org_id.test.org_id teams { team_id = "5e0fa8c99ccf641c722fe645" @@ -55,8 +58,8 @@ data "mongodbatlas_projects" "test" { * `name` - The name of the project you want to create. (Cannot be changed via this Provider after creation.) * `org_id` - The ID of the organization you want to create the project within. -*`cluster_count` - The number of Atlas clusters deployed in the project. -*`created` - The ISO-8601-formatted timestamp of when Atlas created the project. +* `cluster_count` - The number of Atlas clusters deployed in the project. +* `created` - The ISO-8601-formatted timestamp of when Atlas created the project. * `teams.#.team_id` - The unique identifier of the team you want to associate with the project. The team and project must share the same parent organization. * `teams.#.role_names` - Each string in the array represents a project role assigned to the team. Every user associated with the team inherits these roles. The following are valid roles: @@ -83,4 +86,4 @@ The following are valid roles: * `is_schema_advisor_enabled` - Flag that indicates whether to enable Schema Advisor for the project. If enabled, you receive customized recommendations to optimize your data model and enhance performance. Disable this setting to disable schema suggestions in the [Performance Advisor](https://www.mongodb.com/docs/atlas/performance-advisor/#std-label-performance-advisor) and the [Data Explorer](https://www.mongodb.com/docs/atlas/atlas-ui/#std-label-atlas-ui). 
* `region_usage_restrictions` - If GOV_REGIONS_ONLY the project can be used for government regions only, otherwise defaults to standard regions. For more information see [MongoDB Atlas for Government](https://www.mongodb.com/docs/atlas/government/api/#creating-a-project).
-See [MongoDB Atlas API - Projects](https://docs.atlas.mongodb.com/reference/api/project-get-all/) - [and MongoDB Atlas API - Teams](https://docs.atlas.mongodb.com/reference/api/project-get-teams/) Documentation for more information.
+See [MongoDB Atlas API - Projects](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Projects) - [and MongoDB Atlas API - Teams](https://docs.atlas.mongodb.com/reference/api/project-get-teams/) Documentation for more information.
diff --git a/website/docs/d/roles_org_id.html.markdown b/website/docs/d/roles_org_id.html.markdown
new file mode 100644
index 0000000000..20da960236
--- /dev/null
+++ b/website/docs/d/roles_org_id.html.markdown
@@ -0,0 +1,35 @@
+---
+layout: "mongodbatlas"
+page_title: "MongoDB Atlas: roles_org_id"
+sidebar_current: "docs-mongodbatlas-datasource-roles-org-id"
+description: |-
+  Describes a Roles Org ID.
+---
+
+# Data Source: mongodbatlas_roles_org_id
+
+`mongodbatlas_roles_org_id` describes the ID of the organization associated with the API key configured for the provider.
+
+## Example Usage
+
+### Retrieving the Org ID
+```terraform
+data "mongodbatlas_roles_org_id" "test" {
+}
+
+output "org_id" {
+  value = data.mongodbatlas_roles_org_id.test.org_id
+}
+```
+
+## Argument Reference
+
+* No parameters required
+
+## Attributes Reference
+
+In addition to all arguments above, the following attributes are exported:
+
+* `org_id` - The ID of the organization associated with the configured API key.
+
+See [MongoDB Atlas API - Role Org ID](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Root/operation/getSystemStatus) Documentation for more information.
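The `org_id` returned by this data source can feed other resources directly, avoiding a hard-coded organization ID; a minimal sketch (the project name is illustrative):

```terraform
# Illustrative only: create a project in the organization tied to the configured API key.
data "mongodbatlas_roles_org_id" "test" {
}

resource "mongodbatlas_project" "example" {
  name   = "project-name"
  org_id = data.mongodbatlas_roles_org_id.test.org_id
}
```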
diff --git a/website/docs/r/access_list_api_key.html.markdown b/website/docs/r/access_list_api_key.html.markdown
new file mode 100644
index 0000000000..5e8d6cf6e9
--- /dev/null
+++ b/website/docs/r/access_list_api_key.html.markdown
@@ -0,0 +1,58 @@
+---
+layout: "mongodbatlas"
+page_title: "MongoDB Atlas: access_list_api_key"
+sidebar_current: "docs-mongodbatlas-resource-access_list-api-key"
+description: |-
+  Provides an Access List API Key resource.
+---
+
+# Resource: mongodbatlas_access_list_api_key
+
+`mongodbatlas_access_list_api_key` provides an API Key Access List entry resource. Each entry grants API access from an IP address or CIDR block to the specified Organization API key.
+
+-> **NOTE:** Groups and projects are synonymous terms. You may find `groupId` in the official documentation.
+
+~> **IMPORTANT:**
+When you remove an entry from the access list, existing connections from the removed address(es) may remain open for a variable amount of time. How much time passes before Atlas closes the connection depends on several factors, including how the connection was established, the particular behavior of the application or driver using the address, and the connection protocol (e.g., TCP or UDP). This is particularly important to consider when changing an existing IP address or CIDR block as they cannot be updated via the Provider (comments can however), hence a change will force the destruction and recreation of entries.
+
+~> **IMPORTANT WARNING:** Creating, Reading, Updating, or Deleting Atlas API Keys may expose sensitive organizational secrets to Terraform State. Consider storing sensitive API Key secrets instead via the HashiCorp Vault MongoDB Atlas Secrets Engine.
+
+## Example Usage
+
+### Using CIDR Block
+```terraform
+resource "mongodbatlas_access_list_api_key" "test" {
+  org_id     = ""
+  cidr_block = "1.2.3.4/32"
+  api_key_id = "a29120e123cd"
+}
+```
+
+### Using IP Address
+```terraform
+resource "mongodbatlas_access_list_api_key" "test" {
+  org_id     = ""
+  ip_address = "2.3.4.5"
+  api_key_id = "a29120e123cd"
+}
+```
+
+## Argument Reference
+
+* `org_id` - (Required) Unique identifier for the organization to which you want to add one or more access list entries.
+* `cidr_block` - (Optional) Range of IP addresses in CIDR notation to be added to the access list. Your access list entry can include only one `cidrBlock`, or one `ipAddress`.
+* `ip_address` - (Optional) Single IP address to be added to the access list.
+* `api_key_id` - (Required) Unique identifier for the Organization API Key for which you want to create a new access list entry.
+
+-> **NOTE:** One of the following attributes must be set: `cidr_block` or `ip_address`.
+
+## Import
+
+IP Access List entries can be imported using the `org_id`, `api_key_id` and `cidr_block` or `ip_address`, e.g.
+
+```
+$ terraform import mongodbatlas_access_list_api_key.test 5d0f1f74cf09a29120e123cd-a29120e123cd-10.242.88.0/21
+```
+
+For more information see: [MongoDB Atlas API Reference.](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Programmatic-API-Keys/operation/createAccessListEntriesForOneOrganizationApiKey)
diff --git a/website/docs/r/advanced_cluster.html.markdown b/website/docs/r/advanced_cluster.html.markdown
index 2335eb9702..213d97c59a 100644
--- a/website/docs/r/advanced_cluster.html.markdown
+++ b/website/docs/r/advanced_cluster.html.markdown
@@ -194,7 +194,9 @@ Refer to the following for full endpoint service connection string examples:
This parameter defaults to false.
-* `bi_connector` - (Optional) Configuration settings applied to BI Connector for Atlas on this cluster.
The MongoDB Connector for Business Intelligence for Atlas (BI Connector) is only available for M10 and larger clusters. The BI Connector is a powerful tool which provides users SQL-based access to their MongoDB databases. As a result, the BI Connector performs operations which may be CPU and memory intensive. Given the limited hardware resources on M10 and M20 cluster tiers, you may experience performance degradation of the cluster when enabling the BI Connector. If this occurs, upgrade to an M30 or larger cluster or disable the BI Connector. See [below](#bi_connector).
+**NOTE:** A prior version of the provider exposed this parameter as `bi_connector`. State will be migrated to the new value automatically; you only need to update the parameter name in your Terraform configuration.
+
+* `bi_connector_config` - (Optional) Configuration settings applied to BI Connector for Atlas on this cluster. The MongoDB Connector for Business Intelligence for Atlas (BI Connector) is only available for M10 and larger clusters. The BI Connector is a powerful tool which provides users SQL-based access to their MongoDB databases. As a result, the BI Connector performs operations which may be CPU and memory intensive. Given the limited hardware resources on M10 and M20 cluster tiers, you may experience performance degradation of the cluster when enabling the BI Connector. If this occurs, upgrade to an M30 or larger cluster or disable the BI Connector. See [below](#bi_connector_config).
* `cluster_type` - (Required) Type of the cluster that you want to create. Accepted values include:
  - `REPLICASET` Replica set
@@ -220,16 +222,16 @@ This parameter defaults to false.
* `timeouts` - (Optional) The duration of time to wait for Cluster to be created, updated, or deleted. The timeout value is defined by a signed sequence of decimal numbers with a time unit suffix such as: `1h45m`, `300s`, `10m`, .... The valid time units are: `ns`, `us` (or `µs`), `ms`, `s`, `m`, `h`. The default timeout for Private Endpoint create & delete is `3h`.
Learn more about timeouts [here](https://www.terraform.io/plugin/sdkv2/resources/retries-and-customizable-timeouts).
-### bi_connector
+### bi_connector_config

Specifies BI Connector for Atlas configuration.

-  ```terraform
-  bi_connector {
+```terraform
+bi_connector_config {
  enabled = true
-  read_preference = secondary
+  read_preference = "secondary"
 }
-  ```
+```

* `enabled` - (Optional) Specifies whether or not BI Connector for Atlas is enabled on the cluster.
*
@@ -272,6 +274,8 @@ Include **desired options** within advanced_configuration:

* `no_table_scan` - (Optional) When true, the cluster disables the execution of any query that requires a collection scan to return results. When false, the cluster allows the execution of those operations.
* `oplog_size_mb` - (Optional) The custom oplog size of the cluster. Without a value, the cluster uses the default oplog size calculated by Atlas.
+* `oplog_min_retention_hours` - (Optional) Minimum retention window for cluster's oplog expressed in hours. A value of null indicates that the cluster uses the default minimum oplog window that MongoDB Cloud calculates.
+* **Note** A minimum oplog retention is required when seeking to change a cluster's class to Local NVMe SSD. To learn more and for latest guidance see [`oplogMinRetentionHours`](https://www.mongodb.com/docs/manual/core/replica-set-oplog/#std-label-replica-set-minimum-oplog-size)
* `sample_size_bi_connector` - (Optional) Number of documents per database to sample when gathering schema information. Defaults to 100. Available only for Atlas deployments in which BI Connector for Atlas is enabled.
* `sample_refresh_interval_bi_connector` - (Optional) Interval in seconds at which the mongosqld process re-samples data to create its relational schema. The default value is 300. The specified value must be a positive integer. Available only for Atlas deployments in which BI Connector for Atlas is enabled.
@@ -334,6 +338,7 @@ replication_specs {
* `analytics_specs` - (Optional) Hardware specifications for [analytics nodes](https://docs.atlas.mongodb.com/reference/faq/deployment/#std-label-analytics-nodes-overview) needed in the region. Analytics nodes handle analytic data such as reporting queries from BI Connector for Atlas. Analytics nodes are read-only and can never become the [primary](https://docs.atlas.mongodb.com/reference/glossary/#std-term-primary). If you don't specify this parameter, no analytics nodes deploy to this region. See [below](#specs)
* `auto_scaling` - (Optional) Collection of settings that configures auto-scaling information for the cluster. The values for the `auto_scaling` parameter must be the same for every item in the `replication_specs` array. See [below](#auto_scaling)
+* `analytics_auto_scaling` - (Optional) Collection of settings that configures analytics auto-scaling information for the cluster. The values for the `analytics_auto_scaling` parameter must be the same for every item in the `replication_specs` array. See [below](#analytics_auto_scaling)
* `backing_provider_name` - (Optional) Cloud service provider on which you provision the host for a multi-tenant cluster. Use this only when a `provider_name` is `TENANT` and `instance_size` of a specs is `M2` or `M5`.
* `electable_specs` - (Optional) Hardware specifications for electable nodes in the region. Electable nodes can become the [primary](https://docs.atlas.mongodb.com/reference/glossary/#std-term-primary) and can enable local reads. If you do not specify this option, no electable nodes are deployed to the region. See [below](#specs)
* `priority` - (Optional) Election priority of the region. For regions with only read-only nodes, set this value to 0.
@@ -376,6 +381,23 @@ After adding the `lifecycle` block to explicitly change `instance_size` comment out the `lifecycle` block and run `terraform apply`. Please be sure to uncomment the `lifecycle` block once done to prevent any accidental changes.
* `compute_max_instance_size` - (Optional) Maximum instance size to which your cluster can automatically scale (such as M40). Atlas requires this parameter if `replication_specs.#.region_configs.#.auto_scaling.0.compute_enabled` is true.
+
+### analytics_auto_scaling
+
+* `disk_gb_enabled` - (Optional) Flag that indicates whether this cluster enables disk auto-scaling. This parameter defaults to true.
+* `compute_enabled` - (Optional) Flag that indicates whether instance size auto-scaling is enabled. This parameter defaults to false.
+
+~> **IMPORTANT:** If `compute_enabled` is true, then Atlas will automatically scale up to the maximum provided and down to the minimum, if provided.
+This can cause the value of `instance_size` returned to be different from what is specified in the Terraform config; if you then apply a plan without noting this, Terraform will scale the cluster back down to the original `instance_size` value.
+To prevent this, use a `lifecycle` customization, e.g.:
+`lifecycle {
+  ignore_changes = [instance_size]
+}`
+To explicitly change `instance_size` after adding the `lifecycle` block, comment out the block, run `terraform apply`, then uncomment it to prevent any accidental changes.
+
+* `compute_scale_down_enabled` - (Optional) Flag that indicates whether the instance size may scale down. Atlas requires this parameter if `replication_specs.#.region_configs.#.analytics_auto_scaling.0.compute_enabled` is true. If you enable this option, specify a value for `replication_specs.#.region_configs.#.analytics_auto_scaling.0.compute_min_instance_size`.
+* `compute_min_instance_size` - (Optional) Minimum instance size to which your cluster can automatically scale (such as M10).
Atlas requires this parameter if `replication_specs.#.region_configs.#.analytics_auto_scaling.0.compute_scale_down_enabled` is true. +* `compute_max_instance_size` - (Optional) Maximum instance size to which your cluster can automatically scale (such as M40). Atlas requires this parameter if `replication_specs.#.region_configs.#.analytics_auto_scaling.0.compute_enabled` is true. + ## Attributes Reference In addition to all arguments above, the following attributes are exported: @@ -400,7 +422,7 @@ In addition to all arguments above, the following attributes are exported: - `connection_strings.private_endpoint.#.srv_connection_string` - Private-endpoint-aware `mongodb+srv://` connection string for this private endpoint. The `mongodb+srv` protocol tells the driver to look up the seed list of hosts in DNS . Atlas synchronizes this list with the nodes in a cluster. If the connection string uses this URI format, you don't need to: Append the seed list or Change the URI if the nodes change. Use this URI format if your driver supports it. If it doesn't, use `connection_strings.private_endpoint[n].connection_string` - `connection_strings.private_endpoint.#.type` - Type of MongoDB process that you connect to with the connection strings. Atlas returns `MONGOD` for replica sets, or `MONGOS` for sharded clusters. - `connection_strings.private_endpoint.#.endpoints` - Private endpoint through which you connect to Atlas when you use `connection_strings.private_endpoint[n].connection_string` or `connection_strings.private_endpoint[n].srv_connection_string` - - `connection_strings.private_endoint.#.endpoints.#.endpoint_id` - Unique identifier of the private endpoint. + - `connection_strings.private_endpoint.#.endpoints.#.endpoint_id` - Unique identifier of the private endpoint. - `connection_strings.private_endpoint.#.endpoints.#.provider_name` - Cloud provider to which you deployed the private endpoint. Atlas returns `AWS` or `AZURE`. 
- `connection_strings.private_endpoint.#.endpoints.#.region` - Region to which you deployed the private endpoint.
* `state_name` - Current state of the cluster. The possible states are:
diff --git a/website/docs/r/api_key.html.markdown b/website/docs/r/api_key.html.markdown
new file mode 100644
index 0000000000..7402503e55
--- /dev/null
+++ b/website/docs/r/api_key.html.markdown
@@ -0,0 +1,64 @@
+---
+layout: "mongodbatlas"
+page_title: "MongoDB Atlas: api_key"
+sidebar_current: "docs-mongodbatlas-resource-api-key"
+description: |-
+  Provides an API Key resource.
+---
+
+# Resource: mongodbatlas_api_key
+
+`mongodbatlas_api_key` provides an Organization API Key resource. This allows an Organization API Key to be created.
+
+~> **IMPORTANT WARNING:** Creating, Reading, Updating, or Deleting Atlas API Keys may expose sensitive organizational secrets to Terraform State. Consider storing sensitive API Key secrets instead via the [HashiCorp Vault MongoDB Atlas Secrets Engine](https://developer.hashicorp.com/vault/docs/secrets/mongodbatlas).
+
+## Example Usage
+
+```terraform
+resource "mongodbatlas_api_key" "test" {
+  description = "key-name"
+  org_id      = ""
+  role_names  = ["ORG_READ_ONLY"]
+}
+```
+
+## Argument Reference
+
+* `org_id` - (Required) Unique identifier for the organization in which you want to create the API key. Use the /orgs endpoint to retrieve all organizations to which the authenticated user has access.
+* `description` - (Required) Description of this Organization API key.
+* `role_names` - (Required) Name of the role. This resource returns all the roles the user has in Atlas.
+The following are valid roles:
+  * `ORG_OWNER`
+  * `ORG_GROUP_CREATOR`
+  * `ORG_BILLING_ADMIN`
+  * `ORG_READ_ONLY`
+  * `ORG_MEMBER`
+
+~> **NOTE:** Projects created by API Keys must belong to an existing organization.
+
+### Programmatic API Keys
+api_keys allows one to assign an existing organization programmatic API key to a Project. The api_keys attribute is optional.
+
+* `api_key_id` - (Required) The unique identifier of the Programmatic API key you want to associate with the Project. The Programmatic API key and Project must share the same parent organization. Note: this is not the `publicKey` of the Programmatic API key but the `id` of the key. See [Programmatic API Keys](https://docs.atlas.mongodb.com/reference/api/apiKeys/) for more.
+
+* `role_names` - (Required) List of Project roles that the Programmatic API key needs to have. Ensure you provide at least one role and that all roles are valid for the Project. You must specify an array even if you are only associating a single role with the Programmatic API key.
+  The following are valid roles:
+  * `GROUP_OWNER`
+  * `GROUP_READ_ONLY`
+  * `GROUP_DATA_ACCESS_ADMIN`
+  * `GROUP_DATA_ACCESS_READ_WRITE`
+  * `GROUP_DATA_ACCESS_READ_ONLY`
+  * `GROUP_CLUSTER_MANAGER`
+
+## Attributes Reference
+
+In addition to all arguments above, the following attributes are exported:
+
+* `api_key_id` - Unique identifier for this Organization API key.
+
+## Import
+
+API Keys must be imported using the org ID and API Key ID, e.g.
+
+```
+$ terraform import mongodbatlas_api_key.test 5d09d6a59ccf6445652a444a-6576974933969669
+```
+See [MongoDB Atlas API - API Key](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Programmatic-API-Keys/operation/createOneOrganizationApiKey) Documentation for more information.
diff --git a/website/docs/r/cloud_backup_schedule.html.markdown b/website/docs/r/cloud_backup_schedule.html.markdown
index b06300db21..885f18eabb 100644
--- a/website/docs/r/cloud_backup_schedule.html.markdown
+++ b/website/docs/r/cloud_backup_schedule.html.markdown
@@ -10,9 +10,9 @@ description: |-
`mongodbatlas_cloud_backup_schedule` provides a cloud backup schedule resource. The resource lets you create, read, update and delete a cloud backup schedule.

--> **NOTE:** Groups and projects are synonymous terms. You may find `groupId` in the official documentation.
+-> **NOTE** Groups and projects are synonymous terms. You may find `groupId` in the official documentation.

--> **API Key Access List**: This resource requires an Atlas API Access Key List to utilize this feature. This means to manage this resources you must have the IP address or CIDR block that the Terraform connection is coming from added to the Atlas API Key Access List of the Atlas API key you are using. See [Resources that require API Key List](https://www.mongodb.com/docs/atlas/configure-api-access/#use-api-resources-that-require-an-access-list) for details.
+-> **API Key Access List** This resource requires an Atlas API Key Access List to utilize this feature. This means that to manage this resource you must have the IP address or CIDR block that the Terraform connection is coming from added to the Atlas API Key Access List of the Atlas API key you are using. See [Resources that require API Key List](https://www.mongodb.com/docs/atlas/configure-api-access/#use-api-resources-that-require-an-access-list) for details.

In the Terraform MongoDB Atlas Provider 1.0.0 we have re-architected the way in which Cloud Backup Policies are managed with Terraform to significantly reduce the complexity. Due to this change we've provided multiple examples below to help express how this new resource functions.
@@ -113,22 +113,23 @@ resource "mongodbatlas_cloud_backup_schedule" "test" { // This will now add the desired policy items to the existing mongodbatlas_cloud_backup_schedule resource policy_item_hourly { - frequency_interval = 1 + frequency_interval = 1 #accepted values = 1, 2, 4, 6, 8, 12 -> every n hours retention_unit = "days" retention_value = 1 } policy_item_daily { - frequency_interval = 1 + frequency_interval = 1 #accepted values = 1 -> every 1 day retention_unit = "days" retention_value = 2 } policy_item_weekly { - frequency_interval = 4 + frequency_interval = 4 # accepted values = 1 to 7 -> every 1=Monday,2=Tuesday,3=Wednesday,4=Thursday,5=Friday,6=Saturday,7=Sunday day of the week retention_unit = "weeks" retention_value = 3 } policy_item_monthly { - frequency_interval = 5 + frequency_interval = 5 # accepted values = 1 to 28 -> 1 to 28 every nth day of the month + # accepted values = 40 -> every last day of the month retention_unit = "months" retention_value = 4 } @@ -136,6 +137,51 @@ resource "mongodbatlas_cloud_backup_schedule" "test" { } ``` +## Example Usage - Create a Cluster with Cloud Backup Enabled with Snapshot Distribution + +You can enable `cloud_backup` in the Cluster resource and then use the `cloud_backup_schedule` resource with a basic policy for Cloud Backup. 
+ +```terraform +resource "mongodbatlas_cluster" "my_cluster" { + project_id = "" + name = "clusterTest" + disk_size_gb = 5 + + //Provider Settings "block" + provider_name = "AWS" + provider_region_name = "US_EAST_2" + provider_instance_size_name = "M10" + cloud_backup = true // must be enabled in order to use cloud_backup_schedule resource +} + +resource "mongodbatlas_cloud_backup_schedule" "test" { + project_id = mongodbatlas_cluster.my_cluster.project_id + cluster_name = mongodbatlas_cluster.my_cluster.name + + reference_hour_of_day = 3 + reference_minute_of_hour = 45 + restore_window_days = 4 + + policy_item_daily { + frequency_interval = 1 + retention_unit = "days" + retention_value = 14 + } + + copy_settings { + cloud_provider = "AWS" + frequencies = ["HOURLY", + "DAILY", + "WEEKLY", + "MONTHLY", + "ON_DEMAND"] + region_name = "US_EAST_1" + replication_spec_id = mongodbatlas_cluster.my_cluster.replication_specs.*.id[0] + should_copy_oplogs = false + } + +} +``` ## Argument Reference * `project_id` - (Required) The unique identifier of the project for the Atlas cluster. @@ -143,7 +189,10 @@ resource "mongodbatlas_cloud_backup_schedule" "test" { * `reference_hour_of_day` - (Optional) UTC Hour of day between 0 and 23, inclusive, representing which hour of the day that Atlas takes snapshots for backup policy items. * `reference_minute_of_hour` - (Optional) UTC Minutes after `reference_hour_of_day` that Atlas takes snapshots for backup policy items. Must be between 0 and 59, inclusive. * `restore_window_days` - (Optional) Number of days back in time you can restore to with point-in-time accuracy. Must be a positive, non-zero integer. -* `update_snapshots` - (Optional) Specify true to apply the retention changes in the updated backup policy to snapshots that Atlas took previously. +* `update_snapshots` - (Optional) Specify true to apply the retention changes in the updated backup policy to snapshots that Atlas took previously. 
+
+  **Note:** The API does not return updated values for this parameter; this is a behavior of the MongoDB Atlas Admin API itself, not Terraform. For more details about this resource see: https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Cloud-Backup-Schedule
+
* `policy_item_hourly` - (Optional) Hourly policy item
* `policy_item_daily` - (Optional) Daily policy item
* `policy_item_weekly` - (Optional) Weekly policy item
@@ -158,29 +207,42 @@ resource "mongodbatlas_cloud_backup_schedule" "test" {
* `frequency_type` - Frequency associated with the export snapshot item.
### Policy Item Hourly
-*
-* `frequency_interval` - (Required) Desired frequency of the new backup policy item specified by `frequency_type`.
-* `retention_unit` - (Required) Scope of the backup policy item: days, weeks, or months.
-* `retention_value` - (Required) Value to associate with `retention_unit`.
+* `id` - Unique identifier of the backup policy item.
+* `frequency_type` - Frequency associated with the backup policy item. For hourly policies, the frequency type is defined as `hourly`. Note that this is a read-only value and not required in plan files - its value is implied from the policy resource type.
+* `frequency_interval` - Desired frequency of the new backup policy item specified by `frequency_type` (hourly in this case). The supported values for hourly policies are `1`, `2`, `4`, `6`, `8` or `12` hours. Note that `12` hours is the only accepted value for NVMe clusters.
+* `retention_unit` - Scope of the backup policy item: `days`, `weeks`, or `months`.
+* `retention_value` - Value to associate with `retention_unit`.
### Policy Item Daily
-*
-* `frequency_interval` - (Required) Desired frequency of the new backup policy item specified by `frequency_type`.
-* `retention_unit` - (Required) Scope of the backup policy item: days, weeks, or months.
-* `retention_value` - (Required) Value to associate with `retention_unit`.
+* `id` - Unique identifier of the backup policy item. +* `frequency_type` - Frequency associated with the backup policy item. For daily policies, the frequency type is defined as `daily`. Note that this is a read-only value and not required in plan files - its value is implied from the policy resource type. +* `frequency_interval` - Desired frequency of the new backup policy item specified by `frequency_type` (daily in this case). The only supported value for daily policies is `1` day. +* `retention_unit` - Scope of the backup policy item: `days`, `weeks`, or `months`. +* `retention_value` - Value to associate with `retention_unit`. Note that for less frequent policy items, Atlas requires that you specify a retention period greater than or equal to the retention period specified for more frequent policy items. For example: If the hourly policy item specifies a retention of two days, the daily retention policy must specify two days or greater. ### Policy Item Weekly -* -* `frequency_interval` - (Required) Desired frequency of the new backup policy item specified by `frequency_type`. -* `retention_unit` - (Required) Scope of the backup policy item: days, weeks, or months. -* `retention_value` - (Required) Value to associate with `retention_unit`. +* `id` - Unique identifier of the backup policy item. +* `frequency_type` - Frequency associated with the backup policy item. For weekly policies, the frequency type is defined as `weekly`. Note that this is a read-only value and not required in plan files - its value is implied from the policy resource type. +* `frequency_interval` - Desired frequency of the new backup policy item specified by `frequency_type` (weekly in this case). The supported values for weekly policies are `1` through `7`, where `1` represents Monday and `7` represents Sunday. +* `retention_unit` - Scope of the backup policy item: `days`, `weeks`, or `months`. +* `retention_value` - Value to associate with `retention_unit`. 
Weekly policy must have retention of at least 7 days or 1 week. Note that for less frequent policy items, Atlas requires that you specify a retention period greater than or equal to the retention period specified for more frequent policy items. For example: If the daily policy item specifies a retention of two weeks, the weekly retention policy must specify two weeks or greater.
### Policy Item Monthly
+* `id` - Unique identifier of the backup policy item.
+* `frequency_type` - Frequency associated with the backup policy item. For monthly policies, the frequency type is defined as `monthly`. Note that this is a read-only value and not required in plan files - its value is implied from the policy resource type.
+* `frequency_interval` - Desired frequency of the new backup policy item specified by `frequency_type` (monthly in this case). The supported values for monthly policies are
+  * `1` through `28` where the number represents the day of the month i.e. `1` is the first of the month and `5` is the fifth day of the month.
+  * `40` represents the last day of the month (depending on the month).
+* `retention_unit` - Scope of the backup policy item: `days`, `weeks`, or `months`.
+* `retention_value` - Value to associate with `retention_unit`. Monthly policy must have retention of at least 31 days or 5 weeks or 1 month. Note that for less frequent policy items, Atlas requires that you specify a retention period greater than or equal to the retention period specified for more frequent policy items. For example: If the weekly policy item specifies a retention of two weeks, the monthly retention policy must specify two weeks or greater.
+
+### Snapshot Distribution
*
-* `frequency_interval` - (Required) Desired frequency of the new backup policy item specified by `frequency_type`.
-* `retention_unit` - (Required) Scope of the backup policy item: days, weeks, or months.
-* `retention_value` - (Required) Value to associate with `retention_unit`.
- +* `cloud_provider` - (Required) Human-readable label that identifies the cloud provider that stores the snapshot copy, e.g. "AWS", "AZURE", "GCP". +* `frequencies` - (Required) List that describes which types of snapshots to copy, e.g. "HOURLY", "DAILY", "WEEKLY", "MONTHLY", "ON_DEMAND". +* `region_name` - (Required) Target region to copy snapshots belonging to replicationSpecId to. Supply the 'Atlas Region', which can be found under the 'regions' link at https://www.mongodb.com/docs/atlas/reference/cloud-providers/ +* `replication_spec_id` - (Required) Unique 24-hexadecimal digit string that identifies the replication object for a zone in a cluster. For global clusters, there can be multiple zones to choose from. For sharded clusters and replica set clusters, there is only one zone in the cluster. To find the Replication Spec Id, send a GET request to the Return One Cluster in One Project endpoint and consult the replicationSpecs array: https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#operation/returnOneCluster +* `should_copy_oplogs` - (Required) Flag that indicates whether to copy the oplogs to the target region. You can use the oplogs to perform point-in-time restores. ## Attributes Reference diff --git a/website/docs/r/cluster.html.markdown b/website/docs/r/cluster.html.markdown index cf7d948f66..12868713f7 100644 --- a/website/docs/r/cluster.html.markdown +++ b/website/docs/r/cluster.html.markdown @@ -435,12 +435,12 @@ replication_specs { Specifies BI Connector for Atlas configuration.
- ```terraform - bi_connector = { - enabled = true - read_preference = secondary - } - ``` +```terraform +bi_connector_config { + enabled = true + read_preference = "secondary" +} +``` * `enabled` - (Optional) Specifies whether or not BI Connector for Atlas is enabled on the cluster. * @@ -483,6 +483,8 @@ Include **desired options** within advanced_configuration: * `no_table_scan` - (Optional) When true, the cluster disables the execution of any query that requires a collection scan to return results. When false, the cluster allows the execution of those operations. * `oplog_size_mb` - (Optional) The custom oplog size of the cluster. If unset, the cluster uses the default oplog size calculated by Atlas. +* `oplog_min_retention_hours` - (Optional) Minimum retention window for the cluster's oplog, expressed in hours. A value of null indicates that the cluster uses the default minimum oplog window that MongoDB Cloud calculates. +* **Note** A minimum oplog retention is required when seeking to change a cluster's class to Local NVMe SSD. To learn more and for the latest guidance, see [`oplogMinRetentionHours`](https://www.mongodb.com/docs/manual/core/replica-set-oplog/#std-label-replica-set-minimum-oplog-size). * `sample_size_bi_connector` - (Optional) Number of documents per database to sample when gathering schema information. Defaults to 100. Available only for Atlas deployments in which BI Connector for Atlas is enabled. * `sample_refresh_interval_bi_connector` - (Optional) Interval in seconds at which the mongosqld process re-samples data to create its relational schema. The default value is 300. The specified value must be a positive integer. Available only for Atlas deployments in which BI Connector for Atlas is enabled.
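The advanced-configuration options described in this hunk can be combined in a single `advanced_configuration` block. A minimal sketch, assuming a pre-existing cluster resource; the values are illustrative and the other required cluster arguments (project, provider settings, etc.) are omitted:

```terraform
resource "mongodbatlas_cluster" "example" {
  # ... required cluster arguments elided for brevity ...

  advanced_configuration {
    no_table_scan             = true # disallow collection-scan queries
    oplog_size_mb             = 2048 # illustrative custom size; omit to use the Atlas default
    oplog_min_retention_hours = 24   # illustrative value; required before moving to Local NVMe SSD
  }
}
```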
diff --git a/website/docs/r/encryption_at_rest.html.markdown b/website/docs/r/encryption_at_rest.html.markdown index 969a770dba..d83ea2937d 100644 --- a/website/docs/r/encryption_at_rest.html.markdown +++ b/website/docs/r/encryption_at_rest.html.markdown @@ -143,5 +143,12 @@ Refer to the example in the [official github repository](https://github.com/mong * `service_account_key` - String-formatted JSON object containing GCP KMS credentials from your GCP account. * `key_version_resource_id` - The Key Version Resource ID from your GCP account. +## Import -For more information see: [MongoDB Atlas API Reference for Encryption at Rest using Customer Key Management.](https://docs.atlas.mongodb.com/reference/api/encryption-at-rest/) +Encryption at Rest Settings can be imported using project ID, in the format `project_id`, e.g. + +``` +$ terraform import mongodbatlas_encryption_at_rest.example 1112222b3bf99403840e8934 +``` + +For more information see: [MongoDB Atlas API Reference for Encryption at Rest using Customer Key Management.](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Encryption-at-Rest-using-Customer-Key-Management) diff --git a/website/docs/r/federated_settings_org_config.html.markdown b/website/docs/r/federated_settings_org_config.html.markdown index c471eccdd3..8af3ae8663 100644 --- a/website/docs/r/federated_settings_org_config.html.markdown +++ b/website/docs/r/federated_settings_org_config.html.markdown @@ -49,7 +49,7 @@ In addition to all arguments above, the following attributes are exported: FederatedSettingsOrgConfig must be imported using federation_settings_id-org_id, e.g. 
``` -$ terraform import mongodbatlas_federated_settings_org_config.org_connection 6287a663c7f7f7f71c441c6c-627a96837f7f7f7e306f14-628ae97f7f7468ea3727 +$ terraform import mongodbatlas_federated_settings_org_config.org_connection 627a9687f7f7f7f774de306f14-627a9683ea7ff7f74de306f14 ``` For more information see: [MongoDB Atlas API Reference.](https://www.mongodb.com/docs/atlas/reference/api/federation-configuration/) diff --git a/website/docs/r/project.html.markdown b/website/docs/r/project.html.markdown index 145a0af72a..1857cec8db 100644 --- a/website/docs/r/project.html.markdown +++ b/website/docs/r/project.html.markdown @@ -15,9 +15,12 @@ description: |- ## Example Usage ```terraform +data "mongodbatlas_roles_org_id" "test" { +} + resource "mongodbatlas_project" "test" { name = "project-name" - org_id = "" + org_id = data.mongodbatlas_roles_org_id.test.org_id project_owner_id = "" teams { @@ -104,4 +107,4 @@ Project must be imported using project ID, e.g. ``` $ terraform import mongodbatlas_project.my_project 5d09d6a59ccf6445652a444a ``` -For more information see: [MongoDB Atlas API Reference.](https://docs.atlas.mongodb.com/reference/api/projects/) - [and MongoDB Atlas API - Teams](https://docs.atlas.mongodb.com/reference/api/teams/) Documentation for more information. +For more information, see the [MongoDB Atlas Admin API Projects](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Projects) and [MongoDB Atlas Admin API Teams](https://docs.atlas.mongodb.com/reference/api/teams/) documentation. diff --git a/website/docs/r/project_api_key.html.markdown b/website/docs/r/project_api_key.html.markdown new file mode 100644 index 0000000000..655683959d --- /dev/null +++ b/website/docs/r/project_api_key.html.markdown @@ -0,0 +1,59 @@ +--- +layout: "mongodbatlas" +page_title: "MongoDB Atlas: project_api_key" +sidebar_current: "docs-mongodbatlas-resource-project-api-key" +description: |- + Provides a Project API Key resource.
+--- + +# Resource: mongodbatlas_project_api_key + +`mongodbatlas_project_api_key` provides a Project API Key resource. This allows a Project API Key to be created. + +~> **IMPORTANT WARNING:** Creating, Reading, Updating, or Deleting Atlas API Keys may expose sensitive organizational secrets to Terraform State. Consider storing sensitive API Key secrets instead via the [HashiCorp Vault MongoDB Atlas Secrets Engine](https://developer.hashicorp.com/vault/docs/secrets/mongodbatlas). + +## Example Usage + +```terraform +resource "mongodbatlas_project_api_key" "test" { + description = "key-name" + project_id = "" + role_names = ["GROUP_OWNER"] +} +``` + +## Argument Reference + +* `project_id` - Unique identifier for the project whose API keys you want to retrieve. Use the /orgs endpoint to retrieve all organizations to which the authenticated user has access. +* `description` - Description of this Project API key. +* `role_names` - (Required) List of Project roles that the Programmatic API key needs to have. Ensure you provide at least one role and that all roles are valid for the Project. You must specify an array even if you are only associating a single role with the Programmatic API key. + The following are valid roles: + * `GROUP_OWNER` + * `GROUP_READ_ONLY` + * `GROUP_DATA_ACCESS_ADMIN` + * `GROUP_DATA_ACCESS_READ_WRITE` + * `GROUP_DATA_ACCESS_READ_ONLY` + * `GROUP_CLUSTER_MANAGER` + +~> **NOTE:** Projects created by API Keys must belong to an existing organization. + +### Programmatic API Keys +`api_keys` allows you to assign an existing organization Programmatic API key to a Project. The `api_keys` attribute is optional. + +* `api_key_id` - (Required) The unique identifier of the Programmatic API key you want to associate with the Project. The Programmatic API key and Project must share the same parent organization. Note: this is not the `publicKey` of the Programmatic API key but the `id` of the key.
See [Programmatic API Keys](https://docs.atlas.mongodb.com/reference/api/apiKeys/) for more. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `api_key_id` - Unique identifier for this Project API key. + +## Import + +API Keys must be imported using the org ID and API Key ID, in the format `org_id-api_key_id`, e.g. + +``` +$ terraform import mongodbatlas_project_api_key.test 5d09d6a59ccf6445652a444a-6576974933969669 +``` +See the [MongoDB Atlas API - API Key](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Programmatic-API-Keys/operation/createAndAssignOneOrganizationApiKeyToOneProject) documentation for more information. diff --git a/website/docs/r/third_party_integration.markdown b/website/docs/r/third_party_integration.markdown index 1a112643c4..e83d2b96e7 100644 --- a/website/docs/r/third_party_integration.markdown +++ b/website/docs/r/third_party_integration.markdown @@ -12,6 +12,8 @@ description: |- -> **NOTE:** Groups and projects are synonymous terms. You may find `groupId` in the official documentation. +-> **WARNING:** The `type` field has values (NEW_RELIC, FLOWDOCK) that are deprecated and will be removed in the 1.9.0 release. + -> **NOTE:** Slack integrations now use the OAuth2 verification method and must be initially configured, or updated from a legacy integration, through the Atlas third-party service integrations page. Legacy tokens will soon no longer be supported. [Read more about Slack setup](https://docs.atlas.mongodb.com/tutorial/third-party-service-integrations/) ~> **IMPORTANT** Each project can only have one configuration per {INTEGRATION-TYPE}.
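Given the deprecation of the NEW_RELIC and FLOWDOCK types, a minimal sketch of a non-deprecated integration may help; the project ID and API key below are placeholders:

```terraform
resource "mongodbatlas_third_party_integration" "test_datadog" {
  project_id = "<PROJECT-ID>" # placeholder: your Atlas project ID
  type       = "DATADOG"      # a non-deprecated integration type
  api_key    = "<API-KEY>"    # placeholder: your Datadog API key
  region     = "US"
}
```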