Destroy Action #649
Comments
Not currently, but we want to extend the provisioners to support running them at different points in the lifecycle, e.g. post-create, pre-destroy, etc. |
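For readers following along, here is a minimal sketch of what a destroy-time provisioner can look like, using the `when = destroy` form Terraform later adopted; nothing like this existed when the comment above was written, and the AMI and the `knife` cleanup commands are placeholders only.

```hcl
# Minimal sketch of a destroy-time provisioner (illustrative only).
resource "aws_instance" "web" {
  ami           = "ami-123456"   # placeholder AMI
  instance_type = "t2.micro"

  # Runs just before Terraform destroys the instance, e.g. to remove its
  # Chef node and client registration.
  provisioner "local-exec" {
    when    = destroy
    command = "knife node delete ${self.private_dns} -y; knife client delete ${self.private_dns} -y"
  }
}
```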
To contribute code, do I just fork the repo, write code, make sure it doesn't break anything and does what I expect, and then create a pull request, or are there other things to be done? |
@Nalum that and add some tests 😄. You can read more about it in the Contributing.md file at the root of the repo. Feel free to ping us if you have any questions! |
Something like this is a pretty major change, both code wise and in terms of UX and core interaction. So I would definitely start with a design doc to open up the discussion. I'd hate for you to do a bunch of work and have us disagree on it. /cc: @mitchellh |
Just a +1 to all of this. Was thinking yesterday that perhaps the cleanest way to implement the logical_mutate resource that was referenced in #580 would just be to run a provisioner later in the dependency graph based upon post-create, post-update, or post-destroy events. Along these lines, I'd also love to be able to run a provisioner based on the fact that multiple resources fired created, updated, or deleted events. I haven't dug down into the internals far enough to suggest how this might work but it's certainly something I'm hoping Terraform will support. |
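One way to approximate "run a provisioner when any of several resources change" is the `null_resource` pattern that comes up later in this thread. The sketch below is an assumption-laden illustration (resource names, the rebalance script, and the modern HCL2 syntax are all mine, not the commenter's).

```hcl
resource "aws_instance" "node" {
  count         = 3
  ami           = "ami-123456"   # placeholder AMI
  instance_type = "t2.micro"
}

# Replacing, adding, or removing any node changes the trigger value, which
# forces this null_resource to be recreated and its provisioner to run again.
resource "null_resource" "cluster_rebalance" {
  triggers = {
    node_ids = join(",", aws_instance.node[*].id)
  }

  provisioner "local-exec" {
    command = "./scripts/rebalance.sh"   # hypothetical helper script
  }
}
```

This reacts to create and replace events on the next apply, but a plain `terraform destroy` still won't trigger it, which is exactly the gap this issue is about.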
Is there an example or a previous design doc I can look at so I can follow your existing process? |
I've not really done any Design Docs before, I'm sure I'm missing quite a bit so let me know but here it is: https://docs.google.com/document/d/1fg5cxNDqWMDFioDXIpNbcSYiaOD7EGrnfhspfWvdkVg/edit |
@Nalum I think we need to address a few major concerns first. One is what partial failure (and tainting) means at these lifecycle points. The other is remote provisioning: remote provisioners only make sense for lifecycle points where the resource still exists. |
Isn't there another issue that asks for this exact same thing? /cc @armon |
@mitchellh If this is better discussed elsewhere I'll go there if I can. @armon I had thought of implementing it in the following way (see the design doc above for details):
Would you only allow one lifecycle value per provisioner, or could a provisioner run at multiple lifecycle points? Also, should I open that document up so anyone can edit it, or are you happy with it being in suggestion mode? I can add anyone to it who should be able to edit it. |
There is definitely an issue tracking this somewhere, but this looks about along the lines of what I was thinking. I think our point of discussion revolved around "what does tainting mean" in this scenario. |
@Nalum I would just open the doc up for edits as well. I think you bring up an interesting point: can you run a provisioner at multiple lifecycle points? I can't think of a case where you would want to, but maybe there is one... I think the partial-failure behavior at least for pre-create (abort), post-create (taint), and pre-destroy (taint) makes sense. Maybe by avoiding post-destroy we can avoid that complication. |
I've updated the doc and opened it up for anyone to edit. |
@armon @mitchellh Are you happy for me to go at this as defined in the doc? It will probably take me some time as I'm only getting into Go now. |
@Nalum FWIW I just read through your Google doc and I like the format defined there. |
I would think a much cleaner way to address this problem would be creating a Terraform resource that represents the entry in the Chef client list. Then the Chef client entry gets created and destroyed through the normal Terraform lifecycle.

Provisioners are a hack at best, and this highlights the reason why. If you think of your infrastructure as a state machine, then Terraform is based on modelling the desired state, and Terraform then selects the right CRUD operations from the providers to make the edges that get to that state. Provisioners don't model desired state, they model edges. Consequently Terraform doesn't know anything about how that provisioner altered state, and you have a bad time.

If you want a tool that models edges and not state, that tool already exists. It's called Ansible. Or Chef. Or Puppet. There's no reason you can't do both. I don't see any reason to dilute Terraform's model to support a use case that's already well-addressed by other tools. |
I agree with you about the possibility of using custom Terraform resources for things like "clean up the Chef client on instance destruction". Although, I do respectfully disagree with you about the need for a before-destroy provisioner specifically. I have a blog post largely written up about this idea as part of a grander scheme of other things we've been working on, but here's the part that's pertinent to this issue specifically.

Assume you have a cluster of stateful machines. An Elasticsearch cluster, for example. Anything that is clustered by default with a >1 replication factor fits this model quite nicely. Now, assume that we want to cut a new server image with Packer and roll the cluster so that all the machines are on the new image. There are a variety of good reasons to do this. Right now Terraform does not have a safe way to drain old AMIs one by one. It can either delete the whole cluster and re-create it, or stand up the new nodes and then delete the old nodes. I believe we require stronger guarantees about how our stateful instances get destroyed. For example, we should never delete all the nodes that contain a given block worth of data until we have confirmed that the data has been replicated to the new set of instances. Create-before-destroy doesn't give us that assurance. It merely waits for the new instance to be up before terminating the old instance.

Terraform is very close to providing the correct primitives for such graceful draining of stateful instances. It requires two things. First, the parallelism semaphore deep within terraform/context.go should be exposed up to the command-line UI as a "--parallelism" flag so you can enforce that a specific apply is done one node at a time. Right now parallelism is just hardcoded to a default of 10 items at a time if it isn't overridden, and the current implementation of what sets up the ContextOpts doesn't override it as far as I can tell. Second, stateful instances controlled by Terraform need to be able to gracefully decommission themselves and then spin until their blocks have been replicated to the other nodes before the next destroy starts. This decommission-and-block dance could be done within before-destroy provisioners on the instance resource. Either I or someone on our team will be PRing some of this work as we get to it, and I'd like to hopefully come to some sort of agreement on how it could be done that the community and HashiCorp are comfortable with.

Furthermore, tainting a stateful node may not be a strong enough assurance when it comes to the automated destruction of stateful instances. I believe a new "decommission-on-delete" flag may need to be created so that we can provide the worst-case assurance of flagging specific instances as ones that we want humans to follow up on and destroy, just in case. Decommissioning a resource would merely take it out of the state file, or move it into a decommissioned section and whine about how the user needs to clean these up on every terraform plan. This is because we don't want Terraform to accidentally cause a "we need to recover from cold backups" scenario because too many nodes were deleted too quickly. Decommission-on-delete would put the onus on the operator to be quite certain that the state was replicated off that node before initializing its destruction. I'll eventually publish my thoughts much more extensively about this and related things in a more general sort of way.
It did seem prudent to defend my stance that adding an on-destroy provisioner is not a dilution of Terraform's model and is instead an integral piece of the graceful rolling of stateful instances. As always, I'd be willing to make time to discuss these sorts of things on a hangout, or move this discussion to its own dedicated issue if needed. Upon re-reading this I realize this diatribe likely falls under the umbrella of "but someone on the internet is wrong". My apologies if it comes across as gruff in any way. This sort of thing has been a primary concern of mine for a while now. While this may not be the final solution for graceful rolling of stateful nodes with Terraform, I believe it's one possible way that we just haven't gotten around to yet. :) |
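To make the rolling-replacement idea above concrete, here is a hedged sketch combining a `-parallelism` limit with a destroy-time drain step of the kind proposed. The drain script, SSH user, and AMI are assumptions, and destroy-time provisioners did not exist when this comment was written.

```hcl
resource "aws_instance" "es_node" {
  count         = 3
  ami           = "ami-123456"   # placeholder for the new Packer-built image
  instance_type = "m4.large"

  lifecycle {
    create_before_destroy = true
  }

  # Hypothetical drain step: decommission the node and block until its shards
  # have replicated elsewhere, and only then let the destroy proceed.
  provisioner "remote-exec" {
    when   = destroy
    inline = ["sudo /usr/local/bin/es-drain-and-wait.sh"]   # assumed script

    connection {
      type = "ssh"
      host = self.private_ip
      user = "ubuntu"
    }
  }
}
```

Running the roll with `terraform apply -parallelism=1` then replaces nodes one at a time instead of ten in parallel.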
But it can also be done by making a resource that represents whatever it is you did when you provisioned the thing. This "provisioning" resource, which might represent some bit of configuration in a cluster (like a Chef client list, or whatever) gets created after the EC2 instance is created, and on destroy, the provisioning resource is destroyed before the instance is destroyed.
You are absolutely right. "Tainting" is just a half-baked, ambiguously defined state. Since it's also an unknown state, it's impossible to do anything automatically from this state without first resetting to a known state (for example by destroying the resource) or without making restrictions on how provisioners can work (for example requiring that they are idempotent).
But you already have that: it's the normal resource lifecycle. What you are describing with a "deprovisioning" mechanism is CRUD without the R or U, and an incomplete description of the known state (tainting). It's just a half-baked implementation of what Terraform already does, but one which can't recover from failures of any of the operations unless you make all operations idempotent, and if you do that, what you've done is reimplement Ansible. |
I agree this is a decent approach. This is definitely related to #386 One thing that @thegedge and I discussed is that provisioners almost seem like they should just be their own resource, rather than a nested attribute of some types of resources. |
To explain why I see them as a resource: they add data to the Chef server (environment, run list, etc.) that follows CRUD. To me this doesn't seem terribly different from, for example, any other resource that manages an entry in an external service. It really is a key component of our infrastructure at the moment, which fits the definition of a resource. The actual bootstrapping run could be part of the provisioning block, but the client/node setup in the Chef server is a resource. Would it make sense to have a chef-server provider? |
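A sketch of the "model the Chef registration as its own resource" idea; the `chef_node` resource and its attributes here are assumptions for illustration rather than a confirmed provider API.

```hcl
resource "aws_instance" "app" {
  ami           = "ami-123456"   # placeholder AMI
  instance_type = "t2.micro"
}

# Hypothetical resource representing the node's entry on the Chef server.
# Because it depends on the instance, Terraform creates it after the instance
# and destroys it before the instance, which is the ordering the cleanup needs.
resource "chef_node" "app" {
  name             = aws_instance.app.private_dns
  environment_name = "production"
  run_list         = ["recipe[base]", "recipe[app]"]
}
```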
Has there been any further movement on this? It's becoming a serious issue when coupled with #2720 |
+1 on this feature (mainly for ensuring Chef stays current). In the meantime, does someone have some kind of script to clean out expired data from Chef? |
I've taken to using a local provisioner to delete the Chef node and client. This is a horrible hack, but it at least lets me create new instances without having to manually delete the old ones. Unfortunately, it only runs when an instance is provisioned, so if you de-provision an instance, you'll still have stale entries in Chef's database. But at least I can recreate instances without worrying about what's in Chef. |
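For concreteness, a sketch of the create-time workaround described above, assuming the Chef node name matches the instance's Name tag; the names and commands are illustrative, not taken from the commenter's configuration.

```hcl
resource "aws_instance" "app" {
  ami           = "ami-123456"   # placeholder AMI
  instance_type = "t2.micro"

  tags = {
    Name = "app-server"   # assumed to also be the Chef node/client name
  }

  # Create-time cleanup only: remove any stale Chef node/client left over from
  # a previous incarnation, then bootstrap as usual. A plain destroy still
  # leaves stale entries behind, which is why this is only a stopgap.
  provisioner "local-exec" {
    command = "knife node delete ${self.tags.Name} -y || true; knife client delete ${self.tags.Name} -y || true"
  }
}
```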
👍 for this feature/enhancement |
+1. This would come in handy so Terraform could have Digitalocean make a snapshot of a server image instead of deleting it and building from scratch. |
+1. Our use case is unregistering app services from consul so that they are taken out of the load balancer before killing the server. |
+1 My use case is that some of the servers are joined to an Active Directory domain; it would be very handy to have them remove themselves from the domain pre-destroy. Without that, if I change a server in a way that results in a destroy and recreate, the bootstrap step that joins the domain fails because the account name already exists. |
+1 See #3605 |
+1 desperately need this 😀 |
Has there been any activity on this? |
Is this issue still blocked on a philosophical debate regarding whether this feature is appropriate? Or is it a matter of developing a refined product spec for how the feature should behave, and having the time to implement it? (I'm not taking sides or passing judgment, I'm just honestly asking). |
See, e.g., http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AutoScalingGroupLifecycle.html for how this works in AWS. |
+1 |
Hey guys, we (Box) are about to start design work on implementing a wrapper around "terraform destroy" for internal use. Are there any updates on this feature request? Edit: whoops, just saw apparentlymart's proposal. |
Don't let my complex proposal discourage a simple solution here... I was trying to collect together a number of use-cases, but my main target was the cases where we use null_resource to trigger actions on updates, rather than destroy actions. There are a few different ways to attack destroy actions in particular so it would be nice to have a few different approaches to think about. There are some great ideas in this thread that take a different approach to what I suggested. Trying out some of these ideas with real examples is probably the best way to converge on a solution. |
+1 |
I have mainly two use cases where this would be really helpful. |
How about /sbin/halt.local or equivalent? https://www.redhat.com/archives/rhl-list/2007-August/msg00314.html |
We have similar requirements where we need to manually de-register the Consul servers when we destroy the ASG they run in. |
+1 - this is a must-have for different stages - even if it's just rudimentary script-execution hooks at various points in the infra lifecycle. |
I am relatively new to Terraform, but building on @icebourg's example and #2831 one could combine a couple of resources with a ./bin/deregister-chef.sh script so that the script gets called after a destroy. |
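The configuration snippets from the comment above were lost from this transcript, so the following is only one possible shape of that wiring; the resource names, the trigger, and the behaviour of ./bin/deregister-chef.sh are all assumptions.

```hcl
resource "aws_instance" "app" {
  ami           = "ami-123456"   # placeholder AMI
  instance_type = "t2.micro"
}

# Re-created (and therefore re-run) whenever the instance is replaced, giving
# the script a chance to reconcile the Chef server after the old node is gone.
resource "null_resource" "chef_dereg" {
  triggers = {
    instance_id = aws_instance.app.id
  }

  provisioner "local-exec" {
    # Assumed to prune Chef nodes/clients that no longer match a live instance.
    command = "./bin/deregister-chef.sh"
  }
}
```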
Another use case: when removing a Kubernetes worker, it would be nice to move the pods off the node and remove it from the cluster before the resource is destroyed.
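The example configuration from this comment did not survive in this transcript; below is a hedged sketch of the idea using a destroy-time local-exec (a capability Terraform gained later). It assumes kubectl can reach the cluster from wherever Terraform runs, and that the droplet name matches the Kubernetes node name.

```hcl
resource "digitalocean_droplet" "worker" {
  name   = "k8s-worker-1"
  image  = "coreos-stable"   # placeholder image slug
  region = "nyc3"
  size   = "s-1vcpu-2gb"     # placeholder size slug

  # Evict pods and deregister the node before the droplet itself is destroyed.
  provisioner "local-exec" {
    when    = destroy
    command = "kubectl drain ${self.name} --ignore-daemonsets --delete-local-data && kubectl delete node ${self.name}"
  }
}
```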
Thanks @DocBradfordSoftware hermanjunge/kubernetes-digitalocean-terraform#21 |
I had the same issue except with Nomad. When Terraform would recreate the instance(s), the containers wouldn't migrate to the live instances until |
I also need it for Nomad and Kubernetes. |
I know there is a lot of discussion here but this is identical to #386 and there is an equally large amount of discussion there so I'm going to close this and centralize there. |
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further. |
Is there a way to have a resource run a command before it is destroyed?
Example: I have an aws_instance that, when provisioned, was added to a Chef server's node and client list; when this resource is destroyed, I want to remove it from the node and client list.