Chef provisioner hangs when provisioning more than one resource #22722
Comments
I've been testing this some more. I split my resources into the vra7 deployments on one side, and the chef provisioning of them via null_resource blocks on the other (see the sketch below). What I've found is that it seems to not like chef-provisioning more than one machine at a time. So far I've found cases where 1 out of 4 machines will provision perfectly, and the others just hang after they all print the same output. To reiterate: when it gets stuck, the chef provisioning always hangs at the same point.

Here is a gist of a chef run using null_provisioner on two resources, both of which hang: https://gist.github.com/mcascone/0b71948f50d52648389e661d00c8e31c

And this is one of a successful, 1-resource run: https://gist.github.com/mcascone/858855b5bd9d5d1cf655d5e10df67801
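A minimal sketch of that split, assuming a vRA7 deployment resource feeding a separate null_resource that runs chef; the resource attributes, credentials, and chef arguments are illustrative assumptions, not the original configuration:

```hcl
# Hypothetical sketch: the deployment and the chef run are split into
# two resources. All names and values are illustrative placeholders.

variable "vm_ip" {}
variable "ssh_user" {}
variable "ssh_password" {}

resource "vra7_deployment" "vm" {
  catalog_item_name = "Example-CentOS" # assumed attribute and value
  # ... remaining deployment configuration omitted ...
}

resource "null_resource" "chef_run" {
  # Re-run the provisioner if the deployment is replaced.
  triggers = {
    deployment_id = vra7_deployment.vm.id
  }

  connection {
    type     = "ssh"
    host     = var.vm_ip # illustrative; would come from the deployment
    user     = var.ssh_user
    password = var.ssh_password
  }

  provisioner "chef" {
    server_url = "https://chef.example.com/organizations/example"
    node_name  = "example-node"
    run_list   = ["recipe[example]"]
    user_name  = "provisioner"
    user_key   = file("~/.chef/provisioner.pem")
  }
}
```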
I keep thinking this is an issue with the same provisioner being called multiple times in the same apply run.
Some more testing. This time I tried using a single chef provisioner block with a count = 2 iterator:
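For reference, a hedged reconstruction of what that looks like; the hosts, credentials, and chef arguments here are placeholders, not the original configuration:

```hcl
# One null_resource with count = 2, each instance running the
# built-in chef provisioner over SSH.

variable "target_ips" {
  type = list(string)
}
variable "ssh_user" {}
variable "ssh_password" {}

resource "null_resource" "chef_provision" {
  count = 2

  connection {
    type     = "ssh"
    host     = var.target_ips[count.index]
    user     = var.ssh_user
    password = var.ssh_password
  }

  provisioner "chef" {
    server_url      = "https://chef.example.com/organizations/example"
    node_name       = "node-${count.index}"
    run_list        = ["recipe[example]"]
    user_name       = "provisioner"
    user_key        = file("~/.chef/provisioner.pem")
    recreate_client = true
  }
}
```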
Same result: it hangs at the same point. I did notice one new thing: the configuration files are indeed getting created on both target VMs, EXCEPT that one of them does NOT create one of the configuration files.
This appears to affect only v0.12. I reverted all the syntax changes and downgraded to v0.11.14, and it provisions in parallel like a charm. Something got broken in the provisioners in v0.12.
Any updates on this issue?
We were experiencing this issue as well when trying to build Windows VMs in VMware. We use Terragrunt along with Terraform, mostly against VMware with some Azure here and there. I noticed a possible workaround and I'm curious if it works for others.

Environment: vSphere 6.7.0.42000

Summary: Basically, I just stopped using Terraform's count functionality and split my various servers into separate modules. I realize this is not ideal, but I was curious to see if I would encounter the same issue. I was surprised to find that it might be a workaround, and possibly a helpful indicator of where the issue might lie.

Snippet of Original Configuration (main.tf and module source): this would result in the issue everyone's describing.
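Roughly, that layout might have looked like the following; the module path, variable names, and VM settings are assumptions for illustration:

```hcl
# main.tf (hypothetical): one module creating several VMs by passing
# a count through a variable.
module "windows_vms" {
  source         = "./modules/windows-vm"
  instance_count = 2
}

# modules/windows-vm/main.tf (hypothetical module source)
variable "instance_count" {
  type = number
}

resource "vsphere_virtual_machine" "vm" {
  count = var.instance_count
  # ... clone, CPU/memory, and network settings omitted ...

  provisioner "chef" {
    server_url = "https://chef.example.com/organizations/example"
    node_name  = "win-app-${count.index}"
    run_list   = ["recipe[example]"]
    user_name  = "provisioner"
    user_key   = file("~/.chef/provisioner.pem")
  }
}
```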
Snippet of Alteration (main.tf and module source): this seems to actually succeed as expected.
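And the altered layout, again as a hedged sketch with illustrative names:

```hcl
# main.tf (hypothetical alteration): one module block per server,
# with no count anywhere.
module "windows_vm_01" {
  source  = "./modules/windows-vm"
  vm_name = "win-app-01"
}

module "windows_vm_02" {
  source  = "./modules/windows-vm"
  vm_name = "win-app-02"
}

# The module source stays the same as before, except that it creates
# exactly one VM (no count) named var.vm_name.
```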
@ndunn990, I agree with your hunch. I believe something gets lost when there is a count > 1; somewhere an iterator is not iterating correctly, or at all, or something like that.
FYI: using Terraform 0.12.26 fixes it for me (with multiple file provisioners), as per #22006 (comment). Hope it helps you.
I'm closing this issue because it looks like it's already fixed, and we also announced the deprecation of tool-specific (vendor or third-party) provisioners in mid-September 2020. Additionally, we added a deprecation notice for tool-specific provisioners in 0.13.4. On a practical level, this means we will no longer be reviewing or merging PRs for built-in plugins like the chef provisioner.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Terraform Version
Terraform Configuration Files
null_resource block (see the sketch below):
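A minimal sketch of the shape of such a block, assuming SSH connections and the built-in chef provisioner; the hostnames, credentials, node names, and run lists are illustrative, not the reporter's actual configuration:

```hcl
# Hedged sketch: two null_resource blocks, one per machine, each
# running the built-in chef provisioner. All values are placeholders.

variable "master_ip" {}
variable "remote_ip" {}
variable "ssh_user" {}
variable "ssh_password" {}

resource "null_resource" "chef_master" {
  connection {
    type     = "ssh"
    host     = var.master_ip
    user     = var.ssh_user
    password = var.ssh_password
  }

  provisioner "chef" {
    server_url = "https://chef.example.com/organizations/example"
    node_name  = "master"
    run_list   = ["recipe[example::master]"]
    user_name  = "provisioner"
    user_key   = file("~/.chef/provisioner.pem")
  }
}

# The REMOTE machine's block is identical apart from host, node_name,
# and run_list.
resource "null_resource" "chef_remote" {
  connection {
    type     = "ssh"
    host     = var.remote_ip
    user     = var.ssh_user
    password = var.ssh_password
  }

  provisioner "chef" {
    server_url = "https://chef.example.com/organizations/example"
    node_name  = "remote"
    run_list   = ["recipe[example::remote]"]
    user_name  = "provisioner"
    user_key   = file("~/.chef/provisioner.pem")
  }
}
```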
Debug Output
https://gist.github.com/mcascone/41d1514b05ebd675a700f00ce5948995
NOTE: The REMOTE machine provisions just fine. The chef provisioner dies on the MASTER machine at line 9701 of the debug output.
Crash Output
Expected Behavior
Chef provisions all machines in the apply run.
Actual Behavior
It doesn't even actually "fail"; it just stops responding:
Steps to Reproduce
terraform init
terraform plan
terraform apply
Additional Context
#21194