Unable to restore Snapshot into new Redshift cluster with different configuration #11367

Closed
ghost opened this issue Dec 19, 2019 · 8 comments · Fixed by #13203
Labels
bug Addresses a defect in current functionality. service/redshift Issues and PRs that pertain to the redshift service.
Milestone

Comments


ghost commented Dec 19, 2019

This issue was originally opened by @hdryx as hashicorp/terraform#23720. It was migrated here as a result of the provider split. The original body of the issue is below.


Hi,

This may be a bug in Terraform when using Redshift. I have created a snapshot of a Redshift cluster with this configuration: 8 nodes of dc2.large.

Now I'm trying to restore the snapshot into a new cluster with a different configuration (2 nodes of dc2.8xlarge), but I'm getting this error:

Error: InvalidParameterValue: Snapshot cluster-redshift-dev-snapshot-manual-18122019 does not fit into cluster of node type dc2.8xlarge with 8 node(s) due to technical limitations. Try to restore to a cluster with a different node configuration.
status code: 400, request id: f7b03b91-223e-11ea-b863-2593ae69f79f

Terraform Version

0.12.18

Terraform Configuration Files

This is the code of the module that creates the cluster:

resource "aws_redshift_cluster" "redshift" {
  cluster_identifier                  = "${var.cluster_name}"
  database_name                       = var.db_name
  port                                = var.db_port
  master_username                     = var.db_master_username
  master_password                     = random_string.db_password.result
  node_type                           = var.node_type
  cluster_type                        = var.cluster_type
  number_of_nodes                     = var.number_of_nodes
  automated_snapshot_retention_period = var.automated_snapshot_retention_period
  skip_final_snapshot                 = true
  encrypted                           = true
  kms_key_id                          = var.kms_key_id

  publicly_accessible       = var.publicly_accessible
  vpc_security_group_ids    = var.security_groups
  cluster_subnet_group_name = aws_redshift_subnet_group.subnet.id
  iam_roles                 = var.iam_roles

  logging {
    enable        = true
    bucket_name   = var.logs_bucket_name
    s3_key_prefix = "redshift/${var.cluster_name}"
  }

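  # Restore from a snapshot when an identifier is supplied; otherwise create a new, empty cluster.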
  snapshot_identifier = var.snapshot_identifier != "" ? var.snapshot_identifier : null
  owner_account       = var.owner_account != "" ? var.owner_account : null

  lifecycle { ignore_changes = [tags.DateTimeTag] }
  tags = "${merge(var.resource_tagging, map("Name", "${var.cluster_name}"))}"
}


And this is how I'm calling the module:

module "redshift_test_restore" {
  source = "../00_modules/redshift"

  cluster_name       = var.redshift_cluster_name_test_restore
  db_name            = var.db_name
  db_port            = var.redshift_cluster_port
  db_master_username = var.db_master_username
  node_type          = "dc2.8xlarge"
  cluster_type       = var.cluster_type
  number_of_nodes    = "2"
  kms_key_id         = ""

  publicly_accessible = "false"
  subnet_ids          = [data.aws_subnet.private_1.id, data.aws_subnet.private_2.id]
  security_groups     = [data.aws_security_group.sg_redshift_cluster.id]
  iam_roles           = [aws_iam_role.redshift_role.arn]

  logs_bucket_name    = data.aws_s3_bucket.logs.id

  snapshot_identifier = "cluster-redshift-dev-snapshot-manual-18122019"
  owner_account       = var.owner_account

  resource_tagging = var.resource_tagging
  environment      = var.environment
  project          = var.project
}

Debug Output

Crash Output

Error: InvalidParameterValue: Snapshot cluster-redshift-dev-snapshot-manual-18122019 does not fit into cluster of node type dc2.8xlarge with 8 node(s) due to technical limitations. Try to restore to a cluster with a different node configuration.
status code: 400, request id: f7b03b91-223e-11ea-b863-2593ae69f79f

Expected Behavior

I was expecting the creation of a new cluster with the snapshot restored.

Steps to Reproduce

  1. terraform init
  2. terraform apply

It looks like Terraform is applying the value of number_of_nodes stored in the snapshot instead of the value passed to the module.
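
For clarity, here is a minimal sketch of the restore that the module call above is intended to produce (the resource name and cluster identifier below are illustrative, not taken from the report). The expectation is that node_type and number_of_nodes override the values recorded in the snapshot:

resource "aws_redshift_cluster" "restored" {
  # Expected result: a 2-node dc2.8xlarge cluster populated from the snapshot,
  # not a cluster sized like the snapshot's original 8-node dc2.large source.
  cluster_identifier  = "cluster-redshift-test-restore"
  node_type           = "dc2.8xlarge"
  number_of_nodes     = 2
  snapshot_identifier = "cluster-redshift-dev-snapshot-manual-18122019"
  skip_final_snapshot = true
}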

@ghost ghost added the service/redshift Issues and PRs that pertain to the redshift service. label Dec 19, 2019
@github-actions github-actions bot added the needs-triage Waiting for first response or review from a maintainer. label Dec 19, 2019

hdryx commented Dec 26, 2019

I can confirm that this is probably a Terraform bug: today I tried to manually create a new cluster from the snapshot in the AWS console, and the Redshift cluster was created successfully, as expected.


hdryx commented Jan 16, 2020

Any news about this issue?


jdmchone commented Mar 5, 2020

Any update on this? I'm not getting an error message, but when I choose to restore from a snapshot, it uses the snapshot cluster's number_of_nodes instead of the value I set in the Terraform configuration.

@marksergeant

Confirming that I'm seeing the same issue here as well.

In addition, the 'due to technical limitations' wording in the error is a red herring: this is not an AWS limitation, only a Terraform limitation.

@justinretzolk
Member

Hey y'all 👋 Thank you for taking the time to file this, and for the additional discussion around it. Given that there have been a number of Terraform and AWS provider releases since the last update, can anyone confirm whether you're still experiencing this behavior?

@justinretzolk justinretzolk added waiting-response Maintainers are waiting on response from community or contributor. and removed needs-triage Waiting for first response or review from a maintainer. labels Nov 18, 2021
@acavnar-ibm


Hi Justin,

I can confirm this is still happening with the latest version of the provider.

@github-actions github-actions bot removed the waiting-response Maintainers are waiting on response from community or contributor. label Mar 17, 2022
@justinretzolk justinretzolk added the bug Addresses a defect in current functionality. label Mar 17, 2022
@github-actions github-actions bot added this to the v4.9.0 milestone Mar 29, 2022

github-actions bot commented Apr 7, 2022

This functionality has been released in v4.9.0 of the Terraform AWS Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.
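
For anyone upgrading, here is a minimal sketch of a provider version constraint that picks up the release mentioned above (the lower bound is an assumption based on the v4.9.0 milestone attached to this issue):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      # v4.9.0 is the milestone this issue was closed against.
      version = ">= 4.9.0"
    }
  }
}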

For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!


github-actions bot commented May 8, 2022

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators May 8, 2022