
terraform-provider-aws 1.1 and later want to destroy & recreate aws_alb{,_listener,_target_group} and aws_ecs_service #2150

Closed
handlerbot opened this issue Nov 1, 2017 · 3 comments

Comments

@handlerbot
Contributor

handlerbot commented Nov 1, 2017

Hello! I have been running happily with terraform-provider-aws 1.0 for a while now. When doing a test upgrade to 1.2, I discovered that 1.1 and later want to destroy and recreate many of my AWS ALB resources for what look like effectively no-op changes. Anonymized partial diffs follow:

-/+ module.cluster_region.aws_alb.foo (new resource required)
 id: "arn:aws:elasticloadbalancing:us-west-2:123456789012:loadbalancer/app/bar/deadbeef" => <computed> (forces new resource)
[...]
 arn: "arn:aws:elasticloadbalancing:us-west-2:123456789012:loadbalancer/app/bar/deadbeef" => <computed>
[...]
 load_balancer_type: "" => "application" (forces new resource)
[...]

For aws_alb above, I created my resources before the load_balancer_type field existed, so I would hope that the provider could silently upgrade the value of that field in the state file to "application" without requiring a destroy-and-recreate cycle.
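For context, here is roughly what the config for one of these looks like (a simplified sketch with placeholder names and references, not my literal config); note that load_balancer_type is not set anywhere, so "application" is purely the new provider default:

```hcl
# Sketch of a pre-1.1 ALB config; names and references are placeholders.
resource "aws_alb" "foo" {
  name            = "bar"
  internal        = false
  security_groups = ["${aws_security_group.foo.id}"]
  subnets         = ["${var.subnet_ids}"]

  # load_balancer_type did not exist when this was written; the provider
  # now defaults it to "application", but the old state recorded "",
  # hence the "forces new resource" diff above.
}
```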

-/+ module.cluster_region.aws_alb_listener.foo (new resource required)
 id: "arn:aws:elasticloadbalancing:us-west-2:123456789012:listener/app/foo/deadbeef/deadbeef" => <computed> (forces new resource)
 arn: "arn:aws:elasticloadbalancing:us-west-2:123456789012:listener/app/foo/deadbeef/deadbeef" => <computed>
[...]
 load_balancer_arn: "arn:aws:elasticloadbalancing:us-west-2:123456789012:loadbalancer/app/foo/deadbeef" => "${aws_alb.foo.id}" (forces new resource)
[...]

For aws_alb_listener above, the load_balancer_arn change triggered by the aws_alb change is understood, but is the id recomputation forcing a new resource an instance of #1626?

-/+ module.cluster_region.aws_alb_target_group.foo (new resource required)
 id: "arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/foo/deadbeef" => <computed> (forces new resource)
 arn: "arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/foo/deadbeef" => <computed>
[...]
 target_type: "" => "instance" (forces new resource)

For aws_alb_target_group above, the id change is possibly #1626 again, but target_type was added after I created these resources, and again "instance" is the expected default. Can this be auto-upgraded?
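Similarly, a simplified sketch of one of the target groups (placeholder values), with target_type likewise absent from the config:

```hcl
resource "aws_alb_target_group" "foo" {
  name     = "foo"
  port     = 8080
  protocol = "HTTP"
  vpc_id   = "${var.vpc_id}"

  # target_type is new in provider 1.1 and defaults to "instance", which
  # matches what these pre-existing target groups already are.
}
```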

-/+ module.cluster_region.aws_ecs_service.foo (new resource required)
 id: "arn:aws:ecs:us-west-2:123456789012:service/foo" => <computed> (forces new resource)
 cluster: "arn:aws:ecs:us-west-2:123456789012:cluster/foo" => "arn:aws:ecs:us-west-2:123456789012:cluster/foo"
[...]
 load_balancer.763866410.target_group_arn: "arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/foo/deadbeef" => "" (forces new resource)
[...]
 load_balancer.~4092294221.target_group_arn: "" => "${aws_alb_target_group.foo.id}" (forces new resource)
[...]

For aws_ecs_service above, the aws_alb_target_group change above triggering a new resource here is understood, but the id change may be the same as in the prior resources, i.e. #1626?

Further notes:

  1. This is present in all of my aws_alb{,_listener,_target_group} resources, not just the ones associated with ECS. I selected this set as a representative sample that shows all of the issues together.

  2. I think I could work around the load_balancer_type and target_type issues by hand-editing my Terraform state files to add the missing fields, but I am not sure what to do about the id changes. Also, I have N ALBs * Y Terraform state files to update, so it would be a lot of toil-y typing and testing just to get back to the same place; some automation would be helpful here.

Thanks for any thoughts!

@handlerbot
Contributor Author

OK, never mind: I see that if I run a refresh against my config, the state file gets updated and all of these changes go away. 😜
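For anyone else who hits this, the sequence that cleared it up for me was roughly the following, run in each configuration directory (i.e. against each state file):

```sh
# Refresh the state so the new attributes added in provider 1.1+
# (load_balancer_type, target_type, etc.) get populated from the real
# infrastructure instead of being left as "".
terraform refresh
terraform plan   # should no longer want to destroy/recreate the ALB resources
```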

Could we avoid this situation in the future by having each provider keep a monotonically increasing epoch number that gets incremented whenever a breaking change, or a change requiring a refresh, is committed, and recording the epoch number last used for each provider in the Terraform state? That way Terraform could suggest refreshing to reprocess the state with the new provider, rather than leaving users in this confusing situation.

@catsby
Contributor

catsby commented Nov 20, 2017

Thanks for the follow-up @handlerbot, going to close this for now.

@catsby catsby closed this as completed Nov 20, 2017
@ghost

ghost commented Apr 10, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!

@ghost ghost locked and limited conversation to collaborators Apr 10, 2020