Terraform configuration to provision the Nomad/Consul/Vault servers based on the Packer VM templates. This code depends on the community-provided `esxi` Terraform provider; please note the requirements for using this provider, including `ovftool`. If you would like to provision this using Terraform Cloud, you'll need a Business tier entitlement to enable Terraform Cloud Agents (assuming your ESXi host is not reachable from the internet, which it probably shouldn't be).
As described in the repository README, I use statically assigned IP addresses in this environment. I have configured static assignments on my router based on MAC address, as listed below, and specify the MAC address for each machine in the Terraform code.
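As an illustration, a MAC address can be pinned on the `esxi_guest` resource roughly like this (a sketch only; the resource name, datastore, and network names here are hypothetical, so check the provider documentation for your version):

```hcl
# Sketch: pin the guest's MAC so the router's static DHCP assignment applies.
# "castle", "datastore1", and "VM Network" are placeholder names.
resource "esxi_guest" "castle" {
  guest_name = "Castle-1"
  disk_store = "datastore1"

  network_interfaces {
    virtual_network = "VM Network"
    mac_address     = "00:0C:29:00:00:0A" # matches the router's static assignment
  }
}
```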
A script runs on each machine once it's provisioned. This is a one-time bootstrap that uses the provided `secret_id` to authenticate to Vault and fetch any certificates and other secrets as needed, as well as update the `cluster_addr` value in the Vault config. The primary need for this provisioner is that some configuration can only be done after the system's IP address is known.
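A minimal sketch of what such a bootstrap might look like, assuming an AppRole login and a standard Vault config path (the file path, AppRole mount, and helper name are my assumptions, not the repo's actual script):

```shell
#!/bin/sh
# Hypothetical one-time bootstrap sketch; paths and mounts are assumptions.
set -eu

# Rewrite the cluster_addr line in a Vault config once the node's IP is known.
update_cluster_addr() {
  config="$1"
  ip="$2"
  sed -i "s|^cluster_addr.*|cluster_addr = \"https://${ip}:8201\"|" "$config"
}

# On the real machine this would run roughly as:
#   MY_IP="$(hostname -I | awk '{print $1}')"
#   VAULT_TOKEN="$(vault write -field=token auth/approle/login \
#     role_id="$ROLE_ID" secret_id="$SECRET_ID")"
#   ...use the token to fetch certificates and other secrets...
#   update_cluster_addr /etc/vault.d/vault.hcl "$MY_IP"
```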
Since I don't have an auto scaling group type of structure, I am using a blue/green methodology to perform upgrades:

1. Deploy blue nodes with the latest machine image
2. To perform an upgrade, deploy green nodes with the latest image, validate, then drain and destroy the blue nodes
3. To upgrade again, deploy blue nodes with the latest image, validate, then drain and destroy the green nodes
4. Repeat steps 2-3 ad infinitum
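The blue/green scheme implies variables shaped roughly like this (the names come from this repo's tfvars; the type declarations are my assumption):

```hcl
# Hypothetical variable shapes for the blue/green scheme.
variable "template_blue" {
  type = string
}

variable "template_green" {
  type = string
}

variable "nodes_blue" {
  type    = map(string) # node name => MAC address
  default = {}
}

variable "nodes_green" {
  type    = map(string)
  default = {}
}
```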
You will need an ESXi host with the recommended resources.
Review the Terraform code carefully and provide values for the Terraform variables.
If you do not wish to mirror the ZFS volume on the NAS across two datastores, remove the `nas_disk2` resource and change the first line of the `nas` remote-exec provisioner to `sudo zpool create data /dev/sdb`.
Deploy it with `terraform apply`. It may take 10-15 minutes to complete.
Once provisioning is complete, initialize Vault:

```shell
export VAULT_ADDR='https://192.168.0.101:8200'
vault operator init -recovery-shares=1 -recovery-threshold=1
```
Optionally follow the Vault Disaster Recovery Replication Setup Learn guide to configure Vault Enterprise Disaster Recovery.
You should have fully operational Consul, Nomad, and Vault clusters. Now go run some jobs!
- Run Packer to generate a new version of the template
- Update tfvars to add new nodes:
  - Update `template_blue` OR `template_green` with the new template name
  - Edit `nodes_blue` or `nodes_green` as necessary to add 3 new nodes. The format is:

    ```
    {
      Castle-1 = "00:0C:29:00:00:0A"
      Castle-2 = "00:0C:29:00:00:0B"
      Castle-3 = "00:0C:29:00:00:0C"
    }
    ```

    OR

    ```
    {
      Castle-4 = "00:0C:29:00:00:0D"
      Castle-5 = "00:0C:29:00:00:0E"
      Castle-6 = "00:0C:29:00:00:0F"
    }
    ```

- Terraform plan and apply
- Validate:
  - Consul cluster health (Autopilot should do its thing)
  - Nomad cluster health (Autopilot should do its thing)
  - Vault cluster unseal and health
- Update tfvars to remove old nodes:
  - Edit `nodes_blue` OR `nodes_green` as necessary to remove 3 old nodes. The format is:

    ```
    { }
    ```

- Terraform plan and apply
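The alternation in the steps above can be sketched as a tiny helper (purely illustrative; this script is not part of the repo):

```shell
#!/bin/sh
# Illustrative only: given the color just deployed, name the color whose
# nodes get drained and destroyed next.
other_color() {
  case "$1" in
    blue)  echo "green" ;;
    green) echo "blue" ;;
    *)     echo "unknown color: $1" >&2; return 1 ;;
  esac
}
```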