Hello! I have been running happily with terraform-provider-aws 1.0 for a while now. When doing a test upgrade to 1.2, I discovered that 1.1 or later wants to destroy and recreate many of my AWS ALB resources for what look like effectively no-op changes. Anonymized partial diffs follow:
-/+ module.cluster_region.aws_alb.foo (new resource required)
id: "arn:aws:elasticloadbalancing:us-west-2:123456789012:loadbalancer/app/bar/deadbeef" => <computed> (forces new resource)
[...]
arn: "arn:aws:elasticloadbalancing:us-west-2:123456789012:loadbalancer/app/bar/deadbeef" => <computed>
[...]
load_balancer_type: "" => "application" (forces new resource)
[...]
For aws_alb above, I created my resources before the load_balancer_type field existed, so I would hope that the provider could silently upgrade the value of that field in the state file to "application" without requiring a destroy & recreate cycle.
-/+ module.cluster_region.aws_alb_listener.foo (new resource required)
id: "arn:aws:elasticloadbalancing:us-west-2:123456789012:listener/app/foo/deadbeef/deadbeef" => <computed> (forces new resource)
arn: "arn:aws:elasticloadbalancing:us-west-2:123456789012:listener/app/foo/deadbeef/deadbeef" => <computed>
[...]
load_balancer_arn: "arn:aws:elasticloadbalancing:us-west-2:123456789012:loadbalancer/app/foo/deadbeef" => "${aws_alb.foo.id}" (forces new resource)
[...]
For aws_alb_listener above, the load_balancer_arn change triggered by the aws_alb change is understood, but is the id recomputation forcing a new resource an instance of #1626?
-/+ module.cluster_region.aws_alb_target_group.foo (new resource required)
id: "arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/foo/deadbeef" => <computed> (forces new resource)
arn: "arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/foo/deadbeef" => <computed>
[...]
target_type: "" => "instance" (forces new resource)
For aws_alb_target_group above, possibly #1626 again for id, but target_type was added after I created these resources, and again instance is the expected default. Can this be auto-upgraded?
-/+ module.cluster_region.aws_ecs_service.foo (new resource required)
id: "arn:aws:ecs:us-west-2:123456789012:service/foo" => <computed> (forces new resource)
cluster: "arn:aws:ecs:us-west-2:123456789012:cluster/foo" => "arn:aws:ecs:us-west-2:123456789012:cluster/foo"
[...]
load_balancer.763866410.target_group_arn: "arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/foo/deadbeef" => "" (forces new resource)
[...]
load_balancer.~4092294221.target_group_arn: "" => "${aws_alb_target_group.foo.id}" (forces new resource)
[...]
For aws_ecs_service above, the aws_alb_target_group change triggering a new resource here is understood, but the id change may be the same as the prior issues (#1626?).
Further notes:
This is present in all of my aws_alb{,_listener,_target_group} resources, not just the ones associated with ECS. I selected this as a representative sample to show the trace of all of the issues.
I think I could work around the load_balancer_type and target_type issues by hand-editing my Terraform state files to add the missing fields, but I am not sure what to do about the id changes. Also, I have N ALBs * Y Terraform state files to update, so it would be a lot of toil-y typing and testing just to get myself back to the same place; some automation would be helpful here.
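The hand-editing workaround above could be scripted. A minimal sketch, assuming the legacy (pre-0.12) JSON state layout with modules[].resources[].primary.attributes; the resource types and default values are taken from the diffs above, but the function name and structure are illustrative, not an existing tool:

```python
# Attributes that newer provider versions expect but that are absent from
# state created under provider 1.0; the values are the provider defaults
# shown in the plan output above.
MISSING_DEFAULTS = {
    "aws_alb": {"load_balancer_type": "application"},
    "aws_alb_target_group": {"target_type": "instance"},
}


def patch_state(state):
    """Fill in missing default-valued attributes in a parsed state dict.

    `state` is the dict form of a `terraform state pull` JSON document in
    the legacy layout, where modules[].resources is a mapping like
    {"aws_alb.foo": {"type": ..., "primary": {"attributes": {...}}}}.
    """
    changed = False
    for module in state.get("modules", []):
        for resource in module.get("resources", {}).values():
            defaults = MISSING_DEFAULTS.get(resource.get("type"))
            if not defaults:
                continue
            attrs = resource.setdefault("primary", {}).setdefault("attributes", {})
            for key, value in defaults.items():
                if not attrs.get(key):  # absent or empty string
                    attrs[key] = value
                    changed = True
    if changed:
        # Terraform rejects a pushed state whose serial has not advanced.
        state["serial"] = state.get("serial", 0) + 1
    return state
```

One would pull each state with `terraform state pull`, run it through this, inspect the diff by hand, and push with `terraform state push` -- but hand-editing state always carries risk, so verify on a copy first.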
Thanks for any thoughts!
OK n/m, I see that if I refresh my config, the state file gets updated, and all of these go away. 😜
Could we avoid this situation in the future by having each provider keep a monotonically increasing epoch number, incremented whenever a breaking change (or any change requiring a refresh) is committed, and recording the epoch number last used for each provider in the Terraform state? That way Terraform could suggest a refresh to reprocess with the new provider, rather than leaving you in this confusing state.
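The epoch check being proposed could look something like this sketch. Everything here (the function, the field names) is hypothetical, not an existing Terraform API:

```python
def stale_providers(state_epochs, provider_epochs):
    """Return providers whose epoch has advanced past the one recorded in
    state, i.e. those for which Terraform should suggest a refresh.

    Hypothetical sketch of the suggestion above. `state_epochs` maps
    provider name -> epoch recorded in the state file; `provider_epochs`
    maps provider name -> epoch of the currently installed provider.
    A provider with no recorded epoch (pre-epoch state) counts as stale.
    """
    stale = []
    for name, current in provider_epochs.items():
        if current > state_epochs.get(name, 0):
            stale.append(name)
    return stale
```

The provider SDK's per-resource SchemaVersion / state-migration hook serves a similar purpose, but per resource rather than per provider, and only when the provider author remembers to bump it.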