
Lifecycle rules on launch config create cyclic dependency during 'destroy' #3294

Closed
gposton opened this issue Sep 21, 2015 · 12 comments

@gposton
Contributor

gposton commented Sep 21, 2015

It seems to me that lifecycle rules should be ignored during the 'destroy' action.

I have a terraform template that consists of an ASG and launch config (among other things).

Without lifecycle rules the initial 'apply' and a subsequent 'destroy' work as expected.

However, I am unable to update the AMI in the launch config, as I get this error:

* ResourceInUse: Cannot delete launch configuration consul because it is attached to    AutoScalingGroup consul
    status code: 400, request id: [119046e5-60a0-11e5-9e64-6167bb31c650]
Error applying plan:

1 error(s) occurred:

* ResourceInUse: Cannot delete launch configuration consul because it is attached to AutoScalingGroup consul
    status code: 400, request id: [119046e5-60a0-11e5-9e64-6167bb31c650]

So I added the lifecycle rule to the launch config.
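For reference, here is a minimal sketch of what the launch config looked like after that change (the resource names and AMI variable match the cycle output below; the instance type, image, and security group values are just illustrative placeholders):

resource "aws_launch_configuration" "consul_asg_conf" {
  # Unique name per AMI so a replacement config can be created before the old one is destroyed
  name            = "consul-${var.ami}"
  image_id        = "${var.ami}"
  instance_type   = "t2.micro"                                  # illustrative
  security_groups = ["${aws_security_group.consul_server.id}"]  # illustrative

  lifecycle {
    create_before_destroy = true
  }
}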

Now I can do the initial 'apply', and can also do another apply to change the AMI. Everything works as expected.... except 'destroy'. When I run a destroy, I now get the following error:

Error creating plan: 1 error(s) occurred:

* Cycle: aws_security_group.consul_elb, aws_elb.elb, aws_security_group.consul_server (destroy), terraform_remote_state.vpc (destroy), terraform_remote_state.vpc, aws_security_group.consul_server, aws_launch_configuration.consul_asg_conf, aws_autoscaling_group.consul_asg, aws_launch_configuration.consul_asg_conf (destroy), aws_iam_instance_profile.profile (destroy), aws_iam_role.role (destroy), aws_iam_role.role, aws_iam_instance_profile.profile

I can follow the documentation and add the lifecycle rule to the ASG as well. This makes everything run successfully from Terraform's perspective. However, this has unintended consequences.
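That documented approach looks roughly like this (a sketch only; the availability zone and sizes are placeholders, not the real values from my template):

resource "aws_autoscaling_group" "consul_asg" {
  name                 = "consul"
  launch_configuration = "${aws_launch_configuration.consul_asg_conf.name}"
  availability_zones   = ["us-east-1a"]   # placeholder
  min_size             = 3                # placeholder
  max_size             = 3                # placeholder

  lifecycle {
    create_before_destroy = true
  }
}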

When the lifecycle rule is not on the ASG, I can change the AMI in the launch config without the ASG being destroyed (and thus cycling my instances).

When the lifecycle rule is added to the ASG, both the ASG and the launch config are destroyed and re-created. This cycles my instances, which happens too quickly for all of our services to initialize and pass health checks.

I'd prefer the former scenario, where the ASG is not cycled. However, with that scenario (which works from an 'apply' perspective) I cannot run 'destroy' without introducing a cycle.

@gposton gposton changed the title from "Lifecycle rules on launch config" to "Lifecycle rules on launch config create cyclic dependency during 'destroy'" on Sep 21, 2015
@dpetzold
Contributor

+1

@rbachman

👍

@stack72
Contributor

stack72 commented Sep 21, 2015

@gposton are you giving your LaunchConf a name?

@gposton
Contributor Author

gposton commented Sep 22, 2015

@stack72 The name includes the AMI-ID, so it will be unique each time.

name          = "consul-${var.ami}"

@gposton
Contributor Author

gposton commented Sep 22, 2015

I updated the issue description... please see the last 4 paragraphs.

@gposton
Contributor Author

gposton commented Sep 22, 2015

I added a template that demonstrates this issue here: https://gist.github.com/gposton/0b51dea975d9250b6c99

Note that running the template allows you to update the AMI without cycling the instances in the ASG:

export TF_VAR_aws_access_key=YOUR ACCESS KEY
export TF_VAR_aws_secret_key=YOUR SECRET KEY
export TF_VAR_ami=ami-1627ad26
terraform apply
export TF_VAR_ami=ami-15c29b25
terraform apply
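(The TF_VAR_* exports map to variables declared in the template; a minimal sketch of those declarations, with an example region, would be:)

variable "aws_access_key" {}
variable "aws_secret_key" {}
variable "ami" {}

provider "aws" {
  access_key = "${var.aws_access_key}"
  secret_key = "${var.aws_secret_key}"
  region     = "us-east-1"   # example region only
}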

However, 'destroy' introduces a cycle:

aws_security_group.allow_all: Refreshing state... (ID: sg-0d62e969)
aws_route53_zone.ccointernal: Refreshing state... (ID: ZACQKLBGDNTBD)
aws_elb.elb: Refreshing state... (ID: test-elb)
aws_launch_configuration.launch_config: Refreshing state... (ID: test-launch_config-ami-1627ad26)
aws_route53_record.dns: Refreshing state... (ID: ZACQKLBGDNTBD_test.internal.com_CNAME)
aws_autoscaling_group.asg: Refreshing state... (ID: test-asg)
Error creating plan: 1 error(s) occurred:

* Cycle: aws_elb.elb, aws_autoscaling_group.asg, aws_launch_configuration.launch_config (destroy), aws_security_group.allow_all (destroy), aws_security_group.allow_all, aws_launch_configuration.launch_config

@timbunce

timbunce commented Oct 5, 2015

See also #2359.

@roderickrandolph

👍

@sstarcher

Anyone have a viable workaround for this?

@vancluever
Contributor

I've been able to reproduce this using a rather complex module-based config. It's hard to paste it all here, but it works just fine during updates and so on (after I applied create_before_destroy across the entire infrastructure). All I had to do to get it to destroy was roll back my modules to the previous versions, and destroy worked perfectly.

Just by looking at documented behaviour, the only meaningful lifecycle config option out of the three that would have any value in a destroy operation would be prevent_destroy.
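For context, the three options in question are the lifecycle meta-parameters (the values below are placeholders, just to show which settings exist):

lifecycle {
  create_before_destroy = true            # ordering of replacement on updates
  prevent_destroy       = true            # refuse any plan that would destroy the resource
  ignore_changes        = ["user_data"]   # placeholder attribute list
}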

Maybe, while walking dependencies during a terraform destroy operation, there could be a way to override a user-defined create_before_destroy to false?

@hashibot
Contributor

Hello! 🤖

This issue relates to an older version of Terraform that is no longer in active development, and because the area of Terraform it relates to has changed significantly since the issue was opened we suspect that the issue is either fixed or that the circumstances around it have changed enough that we'd need an updated issue report in order to reproduce and address it.

If you're still seeing this or a similar issue in the latest version of Terraform, please do feel free to open a new bug report! Please be sure to include all of the information requested in the template, even if it might seem redundant with the information already shared in this issue, because the internal details relating to this problem are likely to be different in the current version of Terraform.

Thanks!

@ghost

ghost commented Sep 27, 2019

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Sep 27, 2019