doc: Defining migration guide for Independent Shard Scaling feature #2434
Merged

Commits (6):

- f47081c define 1.18.0 migration guide (AgustinBettati)
- fa10655 small fixes (AgustinBettati)
- 292d736 Update website/docs/guides/1.18.0-upgrade-guide.html.markdown (AgustinBettati)
- f4c50e4 Update website/docs/guides/1.18.0-upgrade-guide.html.markdown (AgustinBettati)
- 4fcb403 addressing comments and suggestions (AgustinBettati)
- 455a6c1 rename and move files anticipating new structure (AgustinBettati)
website/docs/guides/1.18.0-upgrade-guide.html.markdown (30 additions)
---
page_title: "Upgrade Guide 1.18.0"
---

# MongoDB Atlas Provider 1.18.0: Upgrade and Information Guide

The Terraform MongoDB Atlas Provider version 1.18.0 has a number of new and exciting features.

**New Resources, Data Sources, and Features:**

- Sharded and geo-sharded clusters defined with `mongodbatlas_advanced_cluster` can now scale the instance size and disk IOPS independently for each individual shard. For more details and migration guidelines, please see [advanced_cluster - Migration to new sharding schema and leveraging Independent Shard Scaling](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/advanced-cluster-new-sharding-schema). As part of these changes, two new attributes have been added:
    - New attribute `replication_specs.*.zone_id` in the `mongodbatlas_advanced_cluster` resource and data sources identifies the zone of each `replication_specs` object.
    - New attribute `use_replication_spec_per_shard` in the `mongodbatlas_advanced_cluster` data sources configures whether users want to obtain a `replication_specs` object for each shard, as shown in the sketch after this list.
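
As a minimal sketch of the new data source attributes (the project variable and cluster name here are placeholders), enabling `use_replication_spec_per_shard` returns one `replication_specs` element per shard, each exposing its `zone_id`:

```
data "mongodbatlas_advanced_cluster" "example" {
  project_id                     = var.project_id
  name                           = "ShardedCluster"
  # opt in to receiving one replication_specs element per individual shard
  use_replication_spec_per_shard = true
}

# each element now describes a single shard and exposes its zone_id
output "zone_ids" {
  value = data.mongodbatlas_advanced_cluster.example.replication_specs[*].zone_id
}
```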

**Deprecations:**

- Deprecations in `mongodbatlas_advanced_cluster` resource and data sources:
    - `replication_specs.*.num_shards`: The `replication_specs` list now supports defining an object for each individual shard. This new schema is preferred over using the `num_shards` attribute. For more details and migration guidelines, please see [advanced_cluster - Migration to new sharding schema and leveraging Independent Shard Scaling](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/advanced-cluster-new-sharding-schema).
    - `disk_size_gb`: The same attribute is now defined under `replication_specs.*.region_configs.*.(electable_specs|analytics_specs|read_only_specs).disk_size_gb`. Moving the root-level value into the existing inner specs causes no change in the underlying cluster (see the sketch after this list). The motivation behind this change in location is to align with the new API schema and to facilitate future features related to independent storage size scaling.
    - `replication_specs.*.id`: This attribute was used by the `mongodbatlas_cloud_backup_schedule` resource to identify cluster zones. As of 1.18.0, the `mongodbatlas_cloud_backup_schedule` resource can reference cluster zones using the new `zone_id` attribute.
    - `advanced_configuration.default_read_concern`: MongoDB 5.0 and later clusters default to `local`. To use a custom read concern level, please refer to your driver documentation.
    - `advanced_configuration.fail_index_key_too_long`: This attribute only applies to older versions of MongoDB (the parameter was removed in 4.4).
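
To illustrate the `disk_size_gb` relocation, here is a minimal before/after sketch (the resource name and values are placeholders); applying the "after" form to an already-deployed cluster produces no change to the underlying cluster:

```
resource "mongodbatlas_advanced_cluster" "example" {
  project_id   = var.project_id
  name         = "ClusterExample"
  cluster_type = "REPLICASET"

  # before (deprecated): disk_size_gb defined at the root level
  # disk_size_gb = 100

  replication_specs {
    region_configs {
      electable_specs {
        instance_size = "M10"
        node_count    = 3
        # after: the same value moves into the inner specs
        disk_size_gb  = 100
      }
      provider_name = "AWS"
      priority      = 7
      region_name   = "US_EAST_1"
    }
  }
}
```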

### Helpful Links

* [Report bugs](https://github.com/mongodb/terraform-provider-mongodbatlas/issues)
* [Request Features](https://feedback.mongodb.com/forums/924145-atlas?category_id=370723)
* [Contact Support](https://docs.atlas.mongodb.com/support/) covered by MongoDB Atlas support plans, Developer and above.

website/docs/guides/advanced-cluster-new-sharding-schema.html.markdown (344 additions)

---
page_title: "advanced_cluster - Migration to new sharding schema and leveraging Independent Shard Scaling"
---

**Objective**: Guide users in migrating their existing `advanced_cluster` configurations to the new sharding schema introduced in version `1.18.0`. A section is also included describing how Independent Shard Scaling can be used once the new sharding schema is adopted. Existing sharding configurations will continue to work, but deprecation messages will be shown if the new sharding schema is not used.

- [Overview of schema changes](#overview)
- [Migrating existing advanced_cluster type SHARDED](#migration-sharded)
- [Migrating existing advanced_cluster type GEOSHARDED](#migration-geosharded)
- [Migrating existing advanced_cluster type REPLICASET](#migration-replicaset)
- [Leveraging Independent Shard Scaling](#leveraging-iss)

<a id="overview"></a>
# Overview of schema changes

The `replication_specs` attribute has been modified so that each individual shard of a cluster can be represented by a unique replication spec element. As such, when using the new sharding schema, the existing `num_shards` attribute is no longer defined; instead, the number of shards is defined by the number of `replication_specs` elements.

<a id="migration-sharded"></a>
## Migrating existing advanced_cluster type SHARDED

Consider the following configuration of a SHARDED cluster using the deprecated `num_shards`:
```
resource "mongodbatlas_advanced_cluster" "test" {
  project_id   = var.project_id
  name         = "SymmetricShardedCluster"
  cluster_type = "SHARDED"

  replication_specs {
    # a deprecation warning will be encountered for using num_shards
    num_shards = 2
    region_configs {
      electable_specs {
        instance_size = "M30"
        disk_iops     = 3000
        node_count    = 3
      }
      provider_name = "AWS"
      priority      = 7
      region_name   = "EU_WEST_1"
    }
  }
}
```

To update this configuration to the new schema, remove the use of `num_shards` and add an identical `replication_specs` element for each shard. Note that these two changes must be made at the same time.

```
resource "mongodbatlas_advanced_cluster" "test" {
  project_id   = var.project_id
  name         = "SymmetricShardedCluster"
  cluster_type = "SHARDED"

  replication_specs { # first shard
    region_configs {
      electable_specs {
        instance_size = "M30"
        disk_iops     = 3000
        node_count    = 3
      }
      provider_name = "AWS"
      priority      = 7
      region_name   = "EU_WEST_1"
    }
  }

  replication_specs { # second shard
    region_configs {
      electable_specs {
        instance_size = "M30"
        disk_iops     = 3000
        node_count    = 3
      }
      provider_name = "AWS"
      priority      = 7
      region_name   = "EU_WEST_1"
    }
  }
}
```

This updated configuration will trigger a Terraform update plan. However, the underlying cluster will not undergo any changes after the apply, as both configurations represent a sharded cluster composed of 2 shards.

<a id="migration-geosharded"></a>
## Migrating existing advanced_cluster type GEOSHARDED

Consider the following configuration of a GEOSHARDED cluster using the deprecated `num_shards`:

```
resource "mongodbatlas_advanced_cluster" "test" {
  project_id   = var.project_id
  name         = "GeoShardedCluster"
  cluster_type = "GEOSHARDED"

  replication_specs {
    zone_name  = "zone n1"
    num_shards = 2
    region_configs {
      electable_specs {
        instance_size = "M10"
        node_count    = 3
      }
      provider_name = "AWS"
      priority      = 7
      region_name   = "US_EAST_1"
    }
  }

  replication_specs {
    zone_name  = "zone n2"
    num_shards = 2
    region_configs {
      electable_specs {
        instance_size = "M10"
        node_count    = 3
      }
      provider_name = "AWS"
      priority      = 7
      region_name   = "EU_WEST_1"
    }
  }
}
```

To update this configuration to the new schema, remove the use of `num_shards` and add an identical `replication_specs` element for each shard. Note that these two changes must be made at the same time.

```
resource "mongodbatlas_advanced_cluster" "test" {
  project_id   = var.project_id
  name         = "GeoShardedCluster"
  cluster_type = "GEOSHARDED"

  replication_specs { # first shard for zone n1
    zone_name = "zone n1"
    region_configs {
      electable_specs {
        instance_size = "M10"
        node_count    = 3
      }
      provider_name = "AWS"
      priority      = 7
      region_name   = "US_EAST_1"
    }
  }

  replication_specs { # second shard for zone n1
    zone_name = "zone n1"
    region_configs {
      electable_specs {
        instance_size = "M10"
        node_count    = 3
      }
      provider_name = "AWS"
      priority      = 7
      region_name   = "US_EAST_1"
    }
  }

  replication_specs { # first shard for zone n2
    zone_name = "zone n2"
    region_configs {
      electable_specs {
        instance_size = "M10"
        node_count    = 3
      }
      provider_name = "AWS"
      priority      = 7
      region_name   = "EU_WEST_1"
    }
  }

  replication_specs { # second shard for zone n2
    zone_name = "zone n2"
    region_configs {
      electable_specs {
        instance_size = "M10"
        node_count    = 3
      }
      provider_name = "AWS"
      priority      = 7
      region_name   = "EU_WEST_1"
    }
  }
}
```

This updated configuration will trigger a Terraform update plan. However, the underlying cluster will not undergo any changes after the apply, as both configurations represent a geo-sharded cluster with 2 zones and 2 shards in each one.

<a id="migration-replicaset"></a>
## Migrating existing advanced_cluster type REPLICASET

-> **NOTE:** Please consider the following complementary documentation, which provides details on transitioning from a replica set to a sharded cluster: https://www.mongodb.com/docs/atlas/scale-cluster/#convert-a-replica-set-to-a-sharded-cluster.

Consider the following replica set configuration:
```
resource "mongodbatlas_advanced_cluster" "test" {
  project_id   = var.project_id
  name         = "ReplicaSetTransition"
  cluster_type = "REPLICASET"

  replication_specs {
    region_configs {
      electable_specs {
        instance_size = "M10"
        node_count    = 3
      }
      provider_name = "AZURE"
      priority      = 7
      region_name   = "US_EAST"
    }
  }
}
```

To transition a replica set to a sharded cluster, 2 separate updates must be applied. First, update the `cluster_type` to SHARDED and apply this change to the cluster.

```
resource "mongodbatlas_advanced_cluster" "test" {
  project_id   = var.project_id
  name         = "ReplicaSetTransition"
  cluster_type = "SHARDED"

  replication_specs {
    region_configs {
      electable_specs {
        instance_size = "M10"
        node_count    = 3
      }
      provider_name = "AZURE"
      priority      = 7
      region_name   = "US_EAST"
    }
  }
}
```

Once the cluster type has been adjusted accordingly, we can proceed to add a new shard using the new schema:

```
resource "mongodbatlas_advanced_cluster" "test" {
  project_id   = var.project_id
  name         = "ReplicaSetTransition"
  cluster_type = "SHARDED"

  replication_specs { # first shard
    region_configs {
      electable_specs {
        instance_size = "M10"
        node_count    = 3
      }
      provider_name = "AZURE"
      priority      = 7
      region_name   = "US_EAST"
    }
  }

  replication_specs { # second shard
    region_configs {
      electable_specs {
        instance_size = "M10"
        node_count    = 3
      }
      provider_name = "AZURE"
      priority      = 7
      region_name   = "US_EAST"
    }
  }
}
```

<a id="leveraging-iss"></a>
## Leveraging Independent Shard Scaling

The new sharding schema must be used: each shard must be represented with a unique `replication_specs` element, and `num_shards` must not be used, as illustrated in the following example.

```
resource "mongodbatlas_advanced_cluster" "test" {
  project_id   = var.project_id
  name         = "ShardedCluster"
  cluster_type = "SHARDED"

  replication_specs { # first shard
    region_configs {
      electable_specs {
        instance_size = "M30"
        node_count    = 3
      }
      provider_name = "AWS"
      priority      = 7
      region_name   = "EU_WEST_1"
    }
  }

  replication_specs { # second shard
    region_configs {
      electable_specs {
        instance_size = "M30"
        node_count    = 3
      }
      provider_name = "AWS"
      priority      = 7
      region_name   = "EU_WEST_1"
    }
  }
}
```

With each shard's `replication_specs` defined independently, we can now define distinct `instance_size` and `disk_iops` (AWS only) values for each shard in the cluster. In the following example, we define an upgraded instance size of M40 for only the first shard in the cluster.

Consider reviewing the Metrics Dashboard in the MongoDB Atlas UI (e.g. https://cloud.mongodb.com/v2/<PROJECT-ID>#/clusters/detail/ShardedCluster) for insight into how each shard within your cluster is currently performing; this will inform any shard-specific resource allocation changes you might require.

```
resource "mongodbatlas_advanced_cluster" "test" {
  project_id   = var.project_id
  name         = "ShardedCluster"
  cluster_type = "SHARDED"

  replication_specs { # first shard upgraded to M40
    region_configs {
      electable_specs {
        instance_size = "M40"
        node_count    = 3
      }
      provider_name = "AWS"
      priority      = 7
      region_name   = "EU_WEST_1"
    }
  }

  replication_specs { # second shard preserves M30
    region_configs {
      electable_specs {
        instance_size = "M30"
        node_count    = 3
      }
      provider_name = "AWS"
      priority      = 7
      region_name   = "EU_WEST_1"
    }
  }
}
```
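
Since disk IOPS can also be scaled per shard on AWS, here is a further sketch (assuming the chosen instance sizes and disk sizes support the requested provisioned IOPS in the region; the values are illustrative only) that sets a distinct `disk_iops` value for each shard as well:

```
resource "mongodbatlas_advanced_cluster" "iops_example" {
  project_id   = var.project_id
  name         = "ShardedClusterIops"
  cluster_type = "SHARDED"

  replication_specs { # first shard with higher provisioned IOPS
    region_configs {
      electable_specs {
        instance_size = "M40"
        disk_iops     = 6000
        node_count    = 3
      }
      provider_name = "AWS"
      priority      = 7
      region_name   = "EU_WEST_1"
    }
  }

  replication_specs { # second shard with baseline IOPS
    region_configs {
      electable_specs {
        instance_size = "M30"
        disk_iops     = 3000
        node_count    = 3
      }
      provider_name = "AWS"
      priority      = 7
      region_name   = "EU_WEST_1"
    }
  }
}
```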

Review comment: Does this mean that if a cluster is currently deployed with a value of "X" set to `disk_size_gb`, and that same value is set in the new field location, the cluster configuration will not change?

Reply: that is correct.