This recipe shows an actual usage scenario for the cross-region internal application load balancer by implementing the example provided in the GCP documentation.
To run this recipe, you need:

- an existing GCP project with the `compute` API enabled
- the `roles/compute.admin` role or equivalent (e.g. `roles/editor`) assigned on the project
- an existing VPC in the same project
- one regular subnet in the same VPC for each region where you want to deploy the load balancer
- an organization policy configuration that allows creation of internal application load balancers (the default configuration is fine)
- access to the Docker Registry from the instances (e.g. via Cloud NAT)
The load balancer needs one global proxy-only subnet in each of its regions. If the subnets already exist, the load balancer will consume them. If you need to create them, either do that manually or configure the module to do it for you as explained in the Variable configuration section below.
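If you choose to create the proxy-only subnets manually with Terraform, a sketch using the `google_compute_subnetwork` resource could look like the following; the project, network, name, and CIDR range are illustrative values, and one such subnet is needed per region:

```hcl
# Hypothetical manual creation of a proxy-only subnet; project,
# network, name, and CIDR range are example values. Repeat per region.
resource "google_compute_subnetwork" "proxy_ew1" {
  project       = "my-project"
  name          = "proxy-ew1"
  region        = "europe-west1"
  network       = "projects/my-project/global/networks/test"
  ip_cidr_range = "172.16.193.0/24"
  # cross-region internal application load balancers use
  # GLOBAL_MANAGED_PROXY subnets (regional ones use REGIONAL_MANAGED_PROXY)
  purpose = "GLOBAL_MANAGED_PROXY"
  role    = "ACTIVE"
}
```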
For the load balancer to work, you need to allow ingress to the instances from the health check ranges and from the load balancer proxy ranges. You can create the firewall rules manually, or configure the module to do it for you as explained in the Variable configuration section below.
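If you prefer to manage the firewall rules manually, a minimal sketch with the `google_compute_firewall` resource might look like this; the names, network, port, and proxy subnet ranges are illustrative, while `35.191.0.0/16` and `130.211.0.0/22` are Google's documented health check source ranges:

```hcl
# Hypothetical manual firewall rules; names, network, port, and
# proxy subnet ranges are example values.
resource "google_compute_firewall" "allow_health_checks" {
  project = "my-project"
  name    = "allow-health-checks"
  network = "projects/my-project/global/networks/test"
  # Google Cloud health check source ranges
  source_ranges = ["35.191.0.0/16", "130.211.0.0/22"]
  allow {
    protocol = "tcp"
    ports    = ["80"]
  }
}

resource "google_compute_firewall" "allow_proxies" {
  project = "my-project"
  name    = "allow-proxies"
  network = "projects/my-project/global/networks/test"
  # the proxy-only subnet ranges used by the load balancer
  source_ranges = ["172.16.193.0/24", "172.16.192.0/24"]
  allow {
    protocol = "tcp"
    ports    = ["80"]
  }
}
```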
With all the requirements in place, the only variables needed are those that configure the project and VPC details. Note that you need to use IDs or self links in the VPC configuration, not names (Shared VPC configurations are also supported).
This is a minimal configuration:

```tfvars
project_id = "my-project"
vpc_config = {
  network = "projects/my-project/global/networks/test"
  subnets = {
    europe-west1 = "projects/my-project/regions/europe-west1/subnetworks/default"
    europe-west8 = "projects/my-project/regions/europe-west8/subnetworks/default"
  }
}
# tftest modules=5 resources=15
```
The VPC configuration also allows creating instances in different subnets, and auto-creation of proxy subnets and firewall rules. This is a complete configuration with all options:

```tfvars
project_id = "my-project"
vpc_config = {
  network = "projects/my-project/global/networks/test"
  subnets = {
    europe-west1 = "projects/my-project/regions/europe-west1/subnetworks/default"
    europe-west8 = "projects/my-project/regions/europe-west8/subnetworks/default"
  }
  # only specify this to use different subnets for instances
  subnets_instances = {
    europe-west1 = "projects/my-project/regions/europe-west1/subnetworks/vms"
    europe-west8 = "projects/my-project/regions/europe-west8/subnetworks/vms"
  }
  # create proxy subnets
  proxy_subnets_config = {
    europe-west1 = "172.16.193.0/24"
    europe-west8 = "172.16.192.0/24"
  }
  # create firewall rules
  firewall_config = {
    proxy_subnet_ranges = [
      "172.16.193.0/24",
      "172.16.192.0/24"
    ]
    enable_health_check = true
    enable_iap_ssh      = true
  }
}
# tftest skip
```
The instance type and the number of zones can be configured via the `instances_config` variable:

```tfvars
project_id = "my-project"
vpc_config = {
  network = "projects/my-project/global/networks/test"
  subnets = {
    europe-west1 = "projects/my-project/regions/europe-west1/subnetworks/default"
    europe-west8 = "projects/my-project/regions/europe-west8/subnetworks/default"
  }
}
instances_config = {
  # both attributes are optional
  machine_type = "e2-small"
  zones        = ["b", "c"]
}
# tftest modules=5 resources=15
```
The DNS zone used for the load balancer record can be configured via the `dns_config` variable:

```tfvars
project_id = "my-project"
vpc_config = {
  network = "projects/my-project/global/networks/test"
  subnets = {
    europe-west1 = "projects/my-project/regions/europe-west1/subnetworks/default"
    europe-west8 = "projects/my-project/regions/europe-west8/subnetworks/default"
  }
}
dns_config = {
  # all attributes are optional
  client_networks = [
    "projects/my-project/global/networks/test",
    "projects/my-other-project/global/networks/test"
  ]
  domain   = "foo.example."
  hostname = "lb-test"
}
# tftest modules=5 resources=15
```
To test the load balancer behaviour, you can simply disable the service on the backend instances by connecting via SSH and issuing the `sudo systemctl stop nginx` command.

If the backends are unhealthy and the necessary firewall rules are in place, check that the Docker containers have started successfully on the instances by connecting via SSH and issuing the `sudo systemctl status nginx` command.
## Files

| name | description | modules |
|---|---|---|
| instances.tf | Instance-related locals and resources. | compute-vm · iam-service-account |
| main.tf | Load balancer and VPC resources. | dns · net-lb-app-int-cross-region · net-vpc · net-vpc-firewall |
| outputs.tf | Module outputs. | |
| region_shortnames.tf | Region shortnames via locals. | |
| variables.tf | Module variables. | |
## Variables

| name | description | type | required | default |
|---|---|---|---|---|
| project_id | Project used to create resources. | string | ✓ | |
| vpc_config | VPC configuration for load balancer and instances. Subnets are keyed by region. | object({…}) | ✓ | |
| dns_config | DNS configuration. | object({…}) | | {} |
| instances_config | Configuration for instances. | object({…}) | | {} |
| prefix | Prefix used for resource names. | string | | "lb-xr-00" |
## Outputs

| name | description | sensitive |
|---|---|---|
| instances | Instances details. | |
| lb | Load balancer details. | |