Commit 6072d51 (parent 10044af)

doc: Defining migration guide for Independent Shard Scaling feature (#2434)

* define 1.18.0 migration guide
* small fixes
* Update website/docs/guides/1.18.0-upgrade-guide.html.markdown
* addressing comments and suggestions
* rename and move files anticipating to new structure

Co-authored-by: maastha <[email protected]>

Showing 2 changed files with 374 additions and 0 deletions.
---
page_title: "Upgrade Guide 1.18.0"
---

# MongoDB Atlas Provider 1.18.0: Upgrade and Information Guide

The Terraform MongoDB Atlas Provider version 1.18.0 has a number of new and exciting features.

**New Resources, Data Sources, and Features:**

- Sharded and geo-sharded clusters defined with `mongodbatlas_advanced_cluster` can now scale the instance size and disk IOPS independently for each individual shard. For more details and migration guidelines, please reference [advanced_cluster - Migration to new sharding schema and leveraging Independent Shard Scaling](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/advanced-cluster-new-sharding-schema). As part of these changes, 2 new attributes have been added:
    - New attribute `replication_specs.*.zone_id` in the `mongodbatlas_advanced_cluster` resource and data sources identifies the zone of each `replication_specs` object.
    - New attribute `use_replication_spec_per_shard` in the `mongodbatlas_advanced_cluster` data sources specifies whether to obtain a `replication_specs` object for each individual shard.
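
The new data source flag can be sketched as follows. This is a minimal illustration based on the attribute names in this guide; the project and cluster references are placeholders:

```
# Sketch: request one replication_specs object per individual shard
# (use_replication_spec_per_shard is new in 1.18.0).
data "mongodbatlas_advanced_cluster" "example" {
  project_id = var.project_id
  name       = "ShardedCluster"

  use_replication_spec_per_shard = true
}

# With the flag enabled, the number of returned replication_specs
# elements corresponds to the number of shards.
output "shard_count" {
  value = length(data.mongodbatlas_advanced_cluster.example.replication_specs)
}
```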

**Deprecations:**

- Deprecations in the `mongodbatlas_advanced_cluster` resource and data sources:
    - `replication_specs.*.num_shards`: The `replication_specs` list now supports defining an object for each individual shard. This new schema is favoured over the `num_shards` attribute. For more details and migration guidelines, please reference [advanced_cluster - Migration to new sharding schema and leveraging Independent Shard Scaling](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/advanced-cluster-new-sharding-schema).
    - `disk_size_gb`: The same attribute is now defined under `replication_specs.*.region_configs.*.(electable_specs|analytics_specs|read_only_specs).disk_size_gb`. Moving the value from the root of the resource into the existing inner specs makes no change to the underlying cluster. The motivation behind this change of location is to align with the new API schema and to facilitate future features related to independent storage size scaling.
    - `replication_specs.*.id`: This attribute was used by the `mongodbatlas_cloud_backup_schedule` resource to identify cluster zones. As of 1.18.0, the `mongodbatlas_cloud_backup_schedule` resource can reference cluster zones using the new `zone_id` attribute.
    - `advanced_configuration.default_read_concern`: MongoDB 5.0 and later clusters default to `local`. To use a custom read concern level, please refer to your driver documentation.
    - `advanced_configuration.fail_index_key_too_long`: This attribute only applies to older versions of MongoDB (removed in 4.4).
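
The `disk_size_gb` relocation can be sketched as below. The values are illustrative; per the deprecation note above, moving the value from the root into the inner specs produces no change to the underlying cluster:

```
resource "mongodbatlas_advanced_cluster" "example" {
  project_id   = var.project_id
  name         = "ShardedCluster"
  cluster_type = "SHARDED"

  # deprecated root-level location:
  # disk_size_gb = 100

  replication_specs {
    region_configs {
      electable_specs {
        instance_size = "M30"
        node_count    = 3
        disk_size_gb  = 100 # new location under the inner specs
      }
      provider_name = "AWS"
      priority      = 7
      region_name   = "EU_WEST_1"
    }
  }
}
```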

### Helpful Links

* [Report bugs](https://github.com/mongodb/terraform-provider-mongodbatlas/issues)

* [Request Features](https://feedback.mongodb.com/forums/924145-atlas?category_id=370723)

* [Contact Support](https://docs.atlas.mongodb.com/support/) covered by MongoDB Atlas support plans, Developer and above.
---
page_title: "advanced_cluster - Migration to new sharding schema and leveraging Independent Shard Scaling"
---

**Objective**: Guide users to migrate their existing advanced_cluster configurations to the new sharding schema introduced in version `1.18.0`. Additionally, a section describes how Independent Shard Scaling can be used once the new sharding schema is adopted. Existing sharding configurations will continue to work, but will emit deprecation messages if they do not use the new sharding schema.

- [Overview of schema changes](#overview)
- [Migrating existing advanced_cluster type SHARDED](#migration-sharded)
- [Migrating existing advanced_cluster type GEOSHARDED](#migration-geosharded)
- [Migrating existing advanced_cluster type REPLICASET](#migration-replicaset)
- [Leveraging Independent Shard Scaling](#leveraging-iss)

<a id="overview"></a>
# Overview of schema changes

The `replication_specs` attribute can now represent each individual shard of a cluster with a unique replication spec element. As such, when using the new sharding schema, the existing attribute `num_shards` is no longer defined; instead, the number of shards is defined by the number of `replication_specs` elements.

<a id="migration-sharded"></a>
## Migrating existing advanced_cluster type SHARDED

Consider the following configuration of a SHARDED cluster using the deprecated `num_shards`:
```
resource "mongodbatlas_advanced_cluster" "test" {
  project_id   = var.project_id
  name         = "SymmetricShardedCluster"
  cluster_type = "SHARDED"
  replication_specs {
    # deprecation warning will be encountered for using num_shards
    num_shards = 2
    region_configs {
      electable_specs {
        instance_size = "M30"
        disk_iops     = 3000
        node_count    = 3
      }
      provider_name = "AWS"
      priority      = 7
      region_name   = "EU_WEST_1"
    }
  }
}
```

To update this configuration to the new schema, remove the use of `num_shards` and add an identical `replication_specs` element for each shard. Note that these 2 changes must be made at the same time.

```
resource "mongodbatlas_advanced_cluster" "test" {
  project_id   = var.project_id
  name         = "SymmetricShardedCluster"
  cluster_type = "SHARDED"
  replication_specs { # first shard
    region_configs {
      electable_specs {
        instance_size = "M30"
        disk_iops     = 3000
        node_count    = 3
      }
      provider_name = "AWS"
      priority      = 7
      region_name   = "EU_WEST_1"
    }
  }
  replication_specs { # second shard
    region_configs {
      electable_specs {
        instance_size = "M30"
        disk_iops     = 3000
        node_count    = 3
      }
      provider_name = "AWS"
      priority      = 7
      region_name   = "EU_WEST_1"
    }
  }
}
```

This updated configuration will trigger a Terraform update plan. However, applying it makes no changes to the underlying cluster, as both configurations represent a sharded cluster composed of 2 shards.
<a id="migration-geosharded"></a> | ||
## Migrating existing advanced_cluster type GEOSHARDED | ||
|
||
Considering the following configuration of a GEOSHARDED cluster using the deprecated `num_shards`: | ||
|
||
``` | ||
resource "mongodbatlas_advanced_cluster" "test" { | ||
project_id = var.project_id | ||
name = "GeoShardedCluster" | ||
cluster_type = "GEOSHARDED" | ||
replication_specs { | ||
zone_name = "zone n1" | ||
num_shards = 2 | ||
region_configs { | ||
electable_specs { | ||
instance_size = "M10" | ||
node_count = 3 | ||
} | ||
provider_name = "AWS" | ||
priority = 7 | ||
region_name = "US_EAST_1" | ||
} | ||
} | ||
replication_specs { | ||
zone_name = "zone n2" | ||
num_shards = 2 | ||
region_configs { | ||
electable_specs { | ||
instance_size = "M10" | ||
node_count = 3 | ||
} | ||
provider_name = "AWS" | ||
priority = 7 | ||
region_name = "EU_WEST_1" | ||
} | ||
} | ||
} | ||
``` | ||
|
||
In order to update our configuration to the new schema, we will remove the use of `num_shards` and add a new identical `replication_specs` element for each shard. Note that these 2 changes must be done at the same time. | ||
|
||
``` | ||
resource "mongodbatlas_advanced_cluster" "test" { | ||
project_id = var.project_id | ||
name = "GeoShardedCluster" | ||
cluster_type = "GEOSHARDED" | ||
replication_specs { # first shard for zone n1 | ||
zone_name = "zone n1" | ||
region_configs { | ||
electable_specs { | ||
instance_size = "M10" | ||
node_count = 3 | ||
} | ||
provider_name = "AWS" | ||
priority = 7 | ||
region_name = "US_EAST_1" | ||
} | ||
} | ||
replication_specs { # second shard for zone n1 | ||
zone_name = "zone n1" | ||
region_configs { | ||
electable_specs { | ||
instance_size = "M10" | ||
node_count = 3 | ||
} | ||
provider_name = "AWS" | ||
priority = 7 | ||
region_name = "US_EAST_1" | ||
} | ||
} | ||
replication_specs { # first shard for zone n2 | ||
zone_name = "zone n2" | ||
region_configs { | ||
electable_specs { | ||
instance_size = "M10" | ||
node_count = 3 | ||
} | ||
provider_name = "AWS" | ||
priority = 7 | ||
region_name = "EU_WEST_1" | ||
} | ||
} | ||
replication_specs { # second shard for zone n2 | ||
zone_name = "zone n2" | ||
region_configs { | ||
electable_specs { | ||
instance_size = "M10" | ||
node_count = 3 | ||
} | ||
provider_name = "AWS" | ||
priority = 7 | ||
region_name = "EU_WEST_1" | ||
} | ||
} | ||
} | ||
``` | ||
|
||
|
||
|
||
This updated configuration will trigger a terraform update plan. However, the underlying cluster will not face any changes after the apply, as both configurations represent a geo sharded cluster with 2 zones and 2 shards in each one. | ||
|
||
<a id="migration-replicaset"></a> | ||
## Migrating existing advanced_cluster type REPLICASET | ||
|
||
-> **NOTE:** Please consider the following complementary documentation providing details on transitioning from a replicaset to a sharded cluster: https://www.mongodb.com/docs/atlas/scale-cluster/#convert-a-replica-set-to-a-sharded-cluster. | ||
|
||
Considering the following replica set configuration: | ||
``` | ||
resource "mongodbatlas_advanced_cluster" "test" { | ||
project_id = var.project_id | ||
name = "ReplicaSetTransition" | ||
cluster_type = "REPLICASET" | ||
replication_specs { | ||
region_configs { | ||
electable_specs { | ||
instance_size = "M10" | ||
node_count = 3 | ||
} | ||
provider_name = "AZURE" | ||
priority = 7 | ||
region_name = "US_EAST" | ||
} | ||
} | ||
} | ||
``` | ||
|
||
To transition a replica set to sharded cluster 2 separate updates must be applied. First, update the `cluster_type` to SHARDED, and apply this change to the cluster. | ||
|
||
``` | ||
resource "mongodbatlas_advanced_cluster" "test" { | ||
project_id = var.project_id | ||
name = "ReplicaSetTransition" | ||
cluster_type = "SHARDED" | ||
replication_specs { | ||
region_configs { | ||
electable_specs { | ||
instance_size = "M10" | ||
node_count = 3 | ||
} | ||
provider_name = "AZURE" | ||
priority = 7 | ||
region_name = "US_EAST" | ||
} | ||
} | ||
} | ||
``` | ||
|
||
Once the cluster type is adjusted accordingly, we can proceed to add a new shard using the new schema: | ||
|
||
``` | ||
resource "mongodbatlas_advanced_cluster" "test" { | ||
project_id = var.project_id | ||
name = "ReplicaSetTransition" | ||
cluster_type = "SHARDED" | ||
replication_specs { # first shard | ||
region_configs { | ||
electable_specs { | ||
instance_size = "M10" | ||
node_count = 3 | ||
} | ||
provider_name = "AZURE" | ||
priority = 7 | ||
region_name = "US_EAST" | ||
} | ||
} | ||
replication_specs { # second shard | ||
region_configs { | ||
electable_specs { | ||
instance_size = "M10" | ||
node_count = 3 | ||
} | ||
provider_name = "AZURE" | ||
priority = 7 | ||
region_name = "US_EAST" | ||
} | ||
} | ||
} | ||
``` | ||
|
||
<a id="leveraging-iss"></a> | ||
## Leveraging Independent Shard Scaling | ||
|
||
The new sharding schema must be used. Each shard must be represented with a unique replication_specs element and `num_shards` must not be used, as illustrated in the following example. | ||
|
||
``` | ||
resource "mongodbatlas_advanced_cluster" "test" { | ||
project_id = var.project_id | ||
name = "ShardedCluster" | ||
cluster_type = "SHARDED" | ||
replication_specs { # first shard | ||
region_configs { | ||
electable_specs { | ||
instance_size = "M30" | ||
node_count = 3 | ||
} | ||
provider_name = "AWS" | ||
priority = 7 | ||
region_name = "EU_WEST_1" | ||
} | ||
} | ||
replication_specs { # second shard | ||
region_configs { | ||
electable_specs { | ||
instance_size = "M30" | ||
node_count = 3 | ||
} | ||
provider_name = "AWS" | ||
priority = 7 | ||
region_name = "EU_WEST_1" | ||
} | ||
} | ||
} | ||
``` | ||
|
||
With each shard's `replication_specs` defined independently, we can now define distinct `instance_size`, and `disk_iops` (only for AWS) values for each shard in the cluster. In the following example, we define an upgraded instance size of M40 only for the first shard in the cluster. | ||
|
||
Consider reviewing the Metrics Dashboard in the MongoDB Atlas UI (e.g. https://cloud.mongodb.com/v2/<PROJECT-ID>#/clusters/detail/ShardedCluster) for insight into how each shard within your cluster is currently performing, which will inform any shard-specific resource allocation changes you might require. | ||
|
||
```
resource "mongodbatlas_advanced_cluster" "test" {
  project_id   = var.project_id
  name         = "ShardedCluster"
  cluster_type = "SHARDED"
  replication_specs { # first shard upgraded to M40
    region_configs {
      electable_specs {
        instance_size = "M40"
        node_count    = 3
      }
      provider_name = "AWS"
      priority      = 7
      region_name   = "EU_WEST_1"
    }
  }
  replication_specs { # second shard preserves M30
    region_configs {
      electable_specs {
        instance_size = "M30"
        node_count    = 3
      }
      provider_name = "AWS"
      priority      = 7
      region_name   = "EU_WEST_1"
    }
  }
}
```
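
In the same way, `disk_iops` (AWS only) can differ per shard. The sketch below is illustrative; the IOPS values are assumptions and must respect the valid IOPS range for the chosen instance size:

```
resource "mongodbatlas_advanced_cluster" "test" {
  project_id   = var.project_id
  name         = "ShardedCluster"
  cluster_type = "SHARDED"
  replication_specs { # first shard with higher provisioned IOPS
    region_configs {
      electable_specs {
        instance_size = "M30"
        disk_iops     = 4000
        node_count    = 3
      }
      provider_name = "AWS"
      priority      = 7
      region_name   = "EU_WEST_1"
    }
  }
  replication_specs { # second shard keeps baseline IOPS
    region_configs {
      electable_specs {
        instance_size = "M30"
        disk_iops     = 3000
        node_count    = 3
      }
      provider_name = "AWS"
      priority      = 7
      region_name   = "EU_WEST_1"
    }
  }
}
```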