resource/aws_elasticache_replication_group: computed member_clusters attribute not updated #17161

Closed
gdavison opened this issue Jan 19, 2021 · 3 comments · Fixed by #17201

@gdavison (Contributor)

When updating an aws_elasticache_replication_group by adding or removing cache clusters using number_cache_clusters, the computed attribute member_clusters is not updated.

This is likely related to the Terraform Plugin SDK issue hashicorp/terraform-plugin-sdk#195.

Terraform CLI and Terraform AWS Provider Version

Terraform v0.13.4
+ provider registry.terraform.io/hashicorp/aws v3.24.1
Terraform v0.14.4
+ provider registry.terraform.io/hashicorp/aws v3.24.1

Affected Resource

  • aws_elasticache_replication_group

Terraform Configuration Files


resource "aws_elasticache_replication_group" "test" {
  automatic_failover_enabled    = false
  node_type                     = "cache.t3.medium"
  number_cache_clusters         = var. number_cache_clusters
  replication_group_id          = "gmd-test-elasticache"
  replication_group_description = "Terraform Acceptance Testing - number_cache_clusters"
  subnet_group_name             = aws_elasticache_subnet_group.test.name
}

output "number_cache_clusters"{
    value = aws_elasticache_replication_group.test.number_cache_clusters
}

output "member_clusters"{
    value = aws_elasticache_replication_group.test.member_clusters
}

data "aws_availability_zones" "available" {
  state = "available"

  filter {
    name   = "opt-in-status"
    values = ["opt-in-not-required"]
  }
}

resource "aws_vpc" "test" {
  cidr_block = "192.168.0.0/16"
}

resource "aws_subnet" "test" {
  count = 2

  availability_zone = data.aws_availability_zones.available.names[count.index]
  cidr_block        = "192.168.${count.index}.0/24"
  vpc_id            = aws_vpc.test.id
}

resource "aws_elasticache_subnet_group" "test" {
  name       = "gmd-test-elasticache"
  subnet_ids = aws_subnet.test[*].id
}

$ terraform apply -var 'number_cache_clusters=2'
...
Apply complete! Resources: 5 added, 0 changed, 0 destroyed.

Outputs:

member_clusters = toset([
  "gmd-test-elasticache-001",
  "gmd-test-elasticache-002",
])
number_cache_clusters = 2

$ terraform apply -var 'number_cache_clusters=4'
...
Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

Outputs:

member_clusters = toset([
  "gmd-test-elasticache-001",
  "gmd-test-elasticache-002",
])
number_cache_clusters = 4

Expected Behavior

The contents of member_clusters should be updated to reflect the added or removed cache clusters.

Actual Behavior

The contents of member_clusters are not updated, though number_cache_clusters (which is set from the length of the same API response used to populate member_clusters) is.

Calling terraform refresh updates state to the correct values.
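
For context on why a refresh fixes the state: both attributes are derived from the same DescribeReplicationGroups response in the resource's Read function. Below is a minimal sketch of that pattern, assuming Plugin SDK v2 and AWS SDK for Go v1; the package, function name, and the way the client is passed in are simplifications, not the provider's actual code.

// Sketch only: a simplified stand-in for the resource's Read function.
package sketch

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/elasticache"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

func readReplicationGroup(d *schema.ResourceData, conn *elasticache.ElastiCache) error {
	resp, err := conn.DescribeReplicationGroups(&elasticache.DescribeReplicationGroupsInput{
		ReplicationGroupId: aws.String(d.Id()),
	})
	if err != nil {
		return err
	}
	rg := resp.ReplicationGroups[0]

	// Both attributes come from the same MemberClusters field, which is
	// why a later terraform refresh brings them back into agreement.
	if err := d.Set("member_clusters", aws.StringValueSlice(rg.MemberClusters)); err != nil {
		return err
	}
	return d.Set("number_cache_clusters", len(rg.MemberClusters))
}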

@ghost ghost added service/ec2 Issues and PRs that pertain to the ec2 service. service/elasticache Issues and PRs that pertain to the elasticache service. labels Jan 19, 2021
@gdavison gdavison added bug Addresses a defect in current functionality. upstream-terraform Addresses functionality related to the Terraform core binary. and removed service/ec2 Issues and PRs that pertain to the ec2 service. labels Jan 19, 2021
bflad (Contributor) commented Jan 19, 2021

Terraform core requires that provider resources signal during a plan when a Computed attribute will have an updated value; otherwise it assumes the value will remain the same. Without that in place, if the resource's Read function is set up correctly and calls d.Set() on the attribute, the new value should at least get picked up by a second terraform refresh or terraform apply.

To prevent the need for a double apply, our only option today is to use CustomizeDiff to modify the plan and signal to Terraform core that a Computed attribute's value will change as a side effect during the same plan. ResourceDiff supports setting either a known new value (diff.SetNew()) or an unknown value (diff.SetNewComputed()). E.g., to do it "manually":

CustomizeDiff: func(ctx context.Context, diff *schema.ResourceDiff, meta interface{}) error {
	// Changing the cluster count changes group membership, so mark
	// member_clusters as unknown ("known after apply") in the plan.
	if diff.HasChange("number_cache_clusters") {
		return diff.SetNewComputed("member_clusters")
	}

	return nil
},

Or the customdiff package provides some helpers:

CustomizeDiff: customdiff.Sequence(
	customdiff.ComputedIf("member_clusters", func(ctx context.Context, diff *schema.ResourceDiff, meta interface{}) bool {
		return diff.HasChange("number_cache_clusters")
	}),
),
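
For completeness, here is a sketch of how either variant plugs into the resource definition's CustomizeDiff field. The schema is heavily abbreviated and the function name is an assumption, not the provider's actual code:

// Sketch only: shows where CustomizeDiff sits in a resource definition.
package sketch

import (
	"context"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

func resourceReplicationGroupSketch() *schema.Resource {
	return &schema.Resource{
		// Create, Read, Update, Delete, and most of the schema elided.
		Schema: map[string]*schema.Schema{
			"number_cache_clusters": {Type: schema.TypeInt, Optional: true, Computed: true},
			"member_clusters": {
				Type:     schema.TypeSet,
				Computed: true,
				Elem:     &schema.Schema{Type: schema.TypeString},
			},
		},
		// ComputedIf returns a schema.CustomizeDiffFunc, so it can be
		// assigned directly; customdiff.Sequence is only needed when
		// composing several of these functions.
		CustomizeDiff: customdiff.ComputedIf("member_clusters", func(ctx context.Context, diff *schema.ResourceDiff, meta interface{}) bool {
			return diff.HasChange("number_cache_clusters")
		}),
	}
}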

@gdavison gdavison removed bug Addresses a defect in current functionality. upstream-terraform Addresses functionality related to the Terraform core binary. labels Jan 20, 2021
@gdavison gdavison self-assigned this Jan 21, 2021
@github-actions github-actions bot added this to the v3.26.0 milestone Jan 22, 2021
ghost commented Jan 28, 2021

This has been released in version 3.26.0 of the Terraform AWS provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.

For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template for triage. Thanks!

ghost commented Feb 22, 2021

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!

@ghost ghost locked as resolved and limited conversation to collaborators Feb 22, 2021