Dependency cycle errors with modules #1637
I've been seeing this as well. Additionally if I do a
Interesting. @johnrengelman, could you share any of the TF code that errors out? Maybe @mitchellh, @phinze, or others can determine whether this is the same issue or a different one.
I'll try to replicate it today.
This will fail to graph after outputting the plan.
However, if I add a
@johnrengelman, your message is cut off at the end. FWIW, @phinze noted we should remove
@ketzacoatl yeah, I accidentally hit . Message updated.
Filed this officially as #1651 - stay tuned on that thread for a PR with my recent work here once I finish it up.
Great, that should help us all figure out what is what here. I am in over my head atm on it and will need to wait for further guidance from the wizards of Terraform.
Working with @ketzacoatl I was able to determine that this is another expression of the same Terraform bug we've been kicking around across several issues - Terraform needs to change the way it treats modules in the execution graph so that modules are only a UX-layer abstraction; the execution graph needs to be flattened out to a single layer to prevent these situations from being problematic.

Terraform generates "destroy nodes" for each of the resources, placing them at the logical point in the graph where destroys would happen for those nodes. But TF can't do this for a module, because it has an unknown number of resources inside of it. The cycle happens as Terraform tries to figure out how to perform its operations in the proper order, assuming a worst-case destroy-then-create. If you think of the module as just a security group, you'd want it to be:
But the problem is, from this view the graph only has "handle module" as an operation, giving us:
And hence the cycle. To work around the issue in the meantime, I'd recommend structuring your config to avoid cross-module references. So instead of directly accessing an output of one module from inside another, set it up as an input parameter instead and wire everything together at the top level.

Tagging this as a core bug and leaving it open for now - I may try to consolidate the "stuff that will be fixed when we flatten the module graph" issues eventually. 👌
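A rough sketch of the orderings involved (my own illustration, not the original diagrams): with individual resources, Terraform can interleave the create and destroy nodes correctly, but with the module collapsed into a single opaque node there is no valid ordering:

```
# Resources flattened into the graph (orderable):
create new SG  ->  point instances at new SG  ->  destroy old SG

# Module treated as one node ("handle module"):
handle module  ->  update instances  ->  handle module    # cycle
```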
Wow, that is intense. It is also confusing to me that Terraform defaults to a destroy-then-create action. Even in a worst-case scenario.. it is not always what the user would want, and it is not always necessary (given the circumstances imposed by the current state of the environment and the provider's APIs). That aside.. @phinze, I am confused what you mean by cross-module references. If I understand your suggestion, I should use an output from one module as an input to another, is that right? If so, I believe I am doing that with:
and
as examples from the code I pasted in this issue. Forgive me if I am not understanding correctly. I do not intend to reference one module from inside another, except through the variable/output interfaces - do you see a place in my code where I am not doing this? I read the description of the problem/solution as removing the namespacing the modules do - I don't think that is the way to go, but maybe I do not understand what you are describing.

@mitchellh, maybe you have a few spare cycles you can send our way to confirm we are on the right track?
It's true that in many cases
Sorry about the unclear explanation here - in your example the references causing the cycle are from the ELB and the LC in the "leaders" module to the

Here's a better way of describing my suggested workaround: avoid direct references from a resource block to a module. I think the easiest way to do this is by flattening out the module declarations at the top level. So in your case I'd try pulling the "leader-sg" module declaration out to "main.tf" and piping in the SG id as a variable to the "leaders" module instead of nesting them. It's definitely not ideal, but it should help you avoid cycles until we get the module graph fixed up in core.
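A minimal sketch of that workaround, reusing the module names from this thread (the `sg_id` output and `security_group_id` variable names are my assumptions for illustration, not the actual code):

```hcl
# main.tf -- both modules declared at the top level, no nesting

module "leader-sg" {
  source = "./leader-sg"
}

module "leaders" {
  source = "./leaders"

  # The SG id is piped in as a plain variable, so resources inside
  # "leaders" reference var.security_group_id rather than reaching
  # into another module directly.
  security_group_id = "${module.leader-sg.sg_id}"
}
```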
Ok, I understand now. I will shuffle the SG modules around so they are at the top level with the other modules. In regards to

EDIT: Thank you @phinze for ensuring I understand what is going on and how to work around the limitation for now!
Ah, now I remember why I did this to begin with.. I have two modules,

For now, I think my workaround is to put the security group resources directly in the leader/minion modules, rather than using a module for the SG. I will figure it out and report back. I guess it makes sense to close this issue if Terraform will be addressing multiple issues like this with structural changes in core.
I've been able to work around the issue here by not calling a module from within a module. All modules contain only resources, no nested modules. This is less than ideal, but workable. @phinze, do you have plans for consolidating one or more of these issues?
+1 to perhaps making a meta issue - there are a fair number of module "bugs / feature requests" floating about.
We were getting dependency cycle errors when applying the config without an existing .tfstate. We double-checked and we have no cycles at all. For reference, see:

* [What is the dependency cycle in this? #1475](hashicorp/terraform#1475)
* [Dependency cycle errors with modules #1637](hashicorp/terraform#1637)
Fixed by #1781
@mitchellh FYI, I'm experiencing this in 0.7.1 and 0.7.2. The code to reproduce is very similar to this issue: basically a top-level module that does some CIDR math and generates an output map using null_data_source (admittedly a hack on my part).
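For readers unfamiliar with the hack being described, a sketch of the general shape (my guess at the pattern, not the reporter's actual code): `null_data_source` simply echoes its `inputs` map back as `outputs`, which makes it a convenient place to compute a map of derived CIDRs.

```hcl
variable "base_cidr" {
  default = "10.0.0.0/16"
}

# Echoes the computed values back as a map via .outputs
data "null_data_source" "subnets" {
  inputs = {
    public  = "${cidrsubnet(var.base_cidr, 8, 0)}"   # 10.0.0.0/24
    private = "${cidrsubnet(var.base_cidr, 8, 1)}"   # 10.0.1.0/24
  }
}

output "subnet_map" {
  value = "${data.null_data_source.subnets.outputs}"
}
```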
I am still seeing this issue in 0.8.2. The modules are all at the top level, using input parameters as suggested by @phinze. Here is the main.tf that wires all the modules together:
Using this, I can do an initial apply and a destroy, but when I try to lc, it gives a cycle error.
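Not necessarily the fix for this particular cycle, but worth noting for launch configuration changes in general: replacing an LC defaults to destroy-then-create, which conflicts with an ASG that still references it. The usual pattern is `create_before_destroy` plus `name_prefix` (a sketch; the variable names are assumptions):

```hcl
resource "aws_launch_configuration" "lc" {
  # name_prefix instead of name, so the replacement LC can exist
  # alongside the old one without a name collision
  name_prefix   = "web-"
  image_id      = "${var.ami_id}"
  instance_type = "t2.micro"

  lifecycle {
    # Invert the default destroy-then-create ordering
    create_before_destroy = true
  }
}
```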
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further. |
I have a build plan which is failing to apply, citing a cycle/dependency error.

With v0.4.2, I would get the cycle error with `terraform plan`. I built terraform from `master` as of Sunday (this topic originally started in #1475), and reviewing the plan will now succeed:

...but errors out when applying the plan:

The code is currently in a private repo, but I have requested @phinze have access. See here and here. The high-level bit is replicated here:

I include this reference because, unless I am mistaken, it confirms there should be no cyclical dependency issue/error here. The `cminions-a` and `cminion-inbound-sg` modules depend on the `cleaders` module, which in turn depends on `cleader-inbound-sg`. This last module is simple and only uses variables (see the end of the code snippet above). Here is the core of the leader sg module:

Nothing depends on `cminion-inbound-sg` or `cminions-a`. When trying to apply the plan (with Terraform built from `master` this past Sunday), the plan fails:

If you have 0.4.2 available, `terraform plan` should fail in the same way.