Issue with Consul Auto-Join with Cloud Metadata tutorial #16956
I was able to apply this workaround: #3449 (comment)
Hi @pavlo! Sorry this didn't work as expected. It seems like the problem here stems from the dynamic setting of `user_data`. This is, unfortunately, a situation where Terraform's model of the world doesn't quite fit reality: Consul only cares about `user_data` on first boot, while Terraform treats any change to it as requiring the instance to be replaced. One way to work around this is to tell Terraform to ignore subsequent changes to `user_data`:

```hcl
resource "aws_instance" "consul_server" {
  # ...
  user_data = "${element(data.template_file.manager.*.rendered, count.index)}"

  lifecycle {
    ignore_changes = ["user_data"]
  }
}
```

A common pattern for safely deploying Consul with Terraform in production is to emulate a blue/green deployment model using Terraform modules. To do this, you can put the necessary resources for a set of Consul servers in a module and instantiate it from your root module. When making any changes to the cluster, a new instance of the same module is created alongside, unbootstrapped, and then joined to the existing servers so that there are temporarily two times the number of servers present. Once cluster replication is complete, you can then remove the original module to destroy the old servers. This does, of course, add some additional complexity compared to simply scaling a single set of servers with `count`.
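The blue/green module pattern described above can be sketched as follows. This is a minimal illustration, not code from the tutorial; the module path, variable names, and output name are all assumed:

```hcl
# modules/consul-cluster encapsulates the servers for one "color".
# During a rollout both blocks exist at once; once replication to the
# new (green) servers completes, the blue block is removed and a
# final apply destroys the old servers.

module "consul_blue" {
  source        = "./modules/consul-cluster"
  servers_count = 3
  bootstrap     = true # the original, already-bootstrapped cluster
}

module "consul_green" {
  source        = "./modules/consul-cluster"
  servers_count = 5
  bootstrap     = false # joins the existing cluster instead of bootstrapping
  join_addrs    = "${module.consul_blue.server_addrs}"
}
```

Because each module instance owns its own resources, scaling never mutates the `user_data` of servers that are already running, sidestepping the forced replacement entirely.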
Hello again! We didn't hear back from you, so I'm going to close this in the hope that a previous response gave you the information you needed. If not, please do feel free to re-open this and leave another comment with the information my human friends requested above. Thanks!
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Hey!

I was following the tutorial on the Auto-Join feature of Consul: https://www.hashicorp.com/blog/consul-auto-join-with-cloud-metadata. It worked just fine; I managed to get 3 servers running and was going to try to scale to 5. So, I changed the `managers_count` variable in `terraform.tfvars` from 3 to 5 and ran `terraform plan`.

In the tutorial, the `plan` command reported only the two additional servers to add. But in my environment, instead of just adding two more servers, it planned to remove the three existing ones and create five new ones. I would not expect that, as in real life it would purge the cluster instead of just scaling it.
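For reference, the shape of the configuration in question (resource and variable names taken from this issue; the template contents and file path are assumed, following the blog post) is roughly:

```hcl
variable "managers_count" {
  default = 3
}

# Each server renders its own copy of the bootstrap script.
data "template_file" "manager" {
  count    = "${var.managers_count}"
  template = "${file("templates/manager.sh.tpl")}"

  vars {
    # The count is interpolated into every rendered template, so
    # changing managers_count changes the rendered user_data of
    # every existing instance, not just the new ones.
    bootstrap_expect = "${var.managers_count}"
  }
}

resource "aws_instance" "consul_server" {
  count = "${var.managers_count}"
  # ami, instance_type, etc. omitted for brevity

  user_data = "${element(data.template_file.manager.*.rendered, count.index)}"
}
```

Since `user_data` forces replacement on `aws_instance`, any change that re-renders the templates makes Terraform plan to destroy and recreate all the servers.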
So, changing the `managers_count` variable essentially changes the `template_file`'s `count`, which causes the `aws_instance`'s `user_data` to change. Thus Terraform deems it necessary to recreate the instances.

Is there anything wrong with what I was doing? I assume so, given that the `plan` from the tutorial looks just perfect! Here's an excerpt of the `plan` output for an instance it scheduled to replace, so it is the `id` and `user_data` that are guilty.

Terraform Configuration Files
Consul Version

Note: contrary to the tutorial, I configured it to leverage version 1.0.2 of Consul.

Terraform Version
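For context, the cloud auto-join mechanism the tutorial relies on is driven by Consul's `retry_join` configuration, which accepts a go-discover provider string. A minimal AWS example for a Consul 1.0.2 server follows; the file path and the tag key/value are illustrative, not taken from the tutorial:

```hcl
# /etc/consul.d/server.hcl (hypothetical path)
server           = true
bootstrap_expect = 3

# Discover other servers by EC2 instance tag via cloud auto-join.
retry_join = ["provider=aws tag_key=consul_server tag_value=true"]
```

This is why the servers only need `user_data` at first boot: once the agent is running, membership is maintained by Consul itself, not by Terraform.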