
When a cloudstack_instance is tainted, the corresponding cloudstack_nic(s) should also be tainted #5586

Closed
larstobi opened this issue Mar 11, 2016 · 10 comments


@larstobi

When I taint a cloudstack_instance that has a NIC attached to it, then after the instance has been recreated and the Terraform run has finished, the new instance has no second NIC attached. No error message is shown.

On the next Terraform run, Terraform will find that the NIC resource is missing, create a new one, and attach it. Again, no error message is shown. In between the two Terraform runs, the state is out of sync with reality.

In CloudStack a NIC can't exist on its own without a VM to attach to, and when a NIC's VM is destroyed, the NIC is automatically destroyed along with it.
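
For reference, a minimal sketch of the kind of configuration involved (all names and values below are placeholders, not my actual setup):

resource "cloudstack_instance" "server-1" {
    # placeholder values throughout
    name             = "server-1"
    service_offering = "small"
    template         = "ubuntu-14.04"
    network          = "network-1"
    zone             = "zone-1"
}

resource "cloudstack_nic" "second" {
    network         = "network-2"
    virtual_machine = "server-1"
}

Tainting cloudstack_instance.server-1 and running Terraform recreates the instance, but the cloudstack_nic resource is neither tainted nor re-attached in that same run.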

@svanharmelen
Contributor

@larstobi I currently have two PRs in the works that will fix this.

One (in this case the most important one, I think) is related to some of the input fields allowed in the CloudStack resources. We allow both names and IDs for some fields, but we should only allow IDs for things that are managed/created by other resources, as that makes sure the dependencies are graphed correctly by Terraform. I will reference the PR here; it will most likely make the next release, 0.6.15.

The other PR (slightly related, I guess) is about how Terraform handles tainted objects, but that one will likely land in the 0.7 release. I don't think you really need it to fix this problem, though.

You can expect the link to the first PR later today.

@svanharmelen
Contributor

@larstobi please see PR #6123, as this is the main PR that will fix this problem. Most likely you referenced the VM by its name, right? So something like:

resource "cloudstack_nic" "test" {
    network = "network-2"
    ipaddress = "192.168.1.1"
    virtual_machine = "server-1"
}

If you change that to use the ID of the VM, your problem will probably be solved. So something like this:

resource "cloudstack_nic" "test" {
    network = "network-2"
    ipaddress = "192.168.1.1"
    virtual_machine = "${cloudstack_instance.server-1.id}"
}

PR #6123 enforces using the parameters like this, making sure the graph can be built correctly. The PR to improve tainted resources is well on its way, but I guess it's not even needed for this use case.

@larstobi
Author

@svanharmelen yes, I do reference the VM by its name, like this:

resource "cloudstack_nic" "app" {
    count            = "${var.app_count}"
    network          = "${var.nfs_network}"
    virtual_machine  = "${element(cloudstack_instance.app.*.name, count.index)}"
}

Initially I wanted to use cloudstack_instance.app.*.id, but got errors that the VM could not be found when using the ID, whereas it was found when using the name instead. So I had to use the name. I meant to file a bug report about it, but never got around to doing it. Now it seems that's no longer necessary. :-)
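
So the plan is to switch it back to something like this (just a sketch, assuming the ID lookup now works):

resource "cloudstack_nic" "app" {
    count            = "${var.app_count}"
    network          = "${var.nfs_network}"
    # same as above, but referencing the instance ID instead of its name
    virtual_machine  = "${element(cloudstack_instance.app.*.id, count.index)}"
}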

I will try the new code and report back. Thank you!

@svanharmelen
Contributor

@larstobi interesting... The new code will not necessarily fix that, but it will enforce using the ID instead of the name. So if you have a use case that breaks when you use the ID instead of the name, please share some details so I can help debug the problem...

You should already be able to use the ID (and with that get a correct graph) with v0.6.14; the code in there should work, and that logic is not changed by PR #6123. So if you could test it again and let me know the result, that would be great.

Thx!

@svanharmelen
Contributor

@larstobi any chance you got to test this, perhaps with a build from master? Would be cool to know if we can close this one...

@larstobi
Author

I'm doing it now. :-)

@svanharmelen
Contributor

Cool, let me know the results and/or any findings you may have...

@larstobi
Author

@svanharmelen Tested, and working as advertised. 👍 Thanks!

@svanharmelen
Contributor

Nice! Thanks for the feedback!

@ghost

ghost commented Apr 26, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 26, 2020