Blueprint: GLB hybrid NEG internal #1150

Merged 6 commits on Mar 2, 2023
2 changes: 1 addition & 1 deletion blueprints/README.md
@@ -9,7 +9,7 @@ Currently available blueprints:
- **data solutions** - [GCE and GCS CMEK via centralized Cloud KMS](./data-solutions/cmek-via-centralized-kms), [Cloud Composer version 2 private instance, supporting Shared VPC and external CMEK key](./data-solutions/composer-2), [Cloud SQL instance with multi-region read replicas](./data-solutions/cloudsql-multiregion), [Data Platform](./data-solutions/data-platform-foundations), [Spinning up a foundation data pipeline on Google Cloud using Cloud Storage, Dataflow and BigQuery](./data-solutions/gcs-to-bq-with-least-privileges), [SQL Server Always On Groups blueprint](./data-solutions/sqlserver-alwayson), [Data Playground](./data-solutions/data-playground), [MLOps with Vertex AI](./data-solutions/vertex-mlops), [Shielded Folder](./data-solutions/shielded-folder)
- **factories** - [The why and the how of Resource Factories](./factories), [Google Cloud Identity Group Factory](./factories/cloud-identity-group-factory), [Google Cloud BQ Factory](./factories/bigquery-factory), [Google Cloud VPC Firewall Factory](./factories/net-vpc-firewall-yaml), [Minimal Project Factory](./factories/project-factory)
- **GKE** - [Binary Authorization Pipeline Blueprint](./gke/binauthz), [Storage API](./gke/binauthz/image), [Multi-cluster mesh on GKE (fleet API)](./gke/multi-cluster-mesh-gke-fleet-api), [GKE Multitenant Blueprint](./gke/multitenant-fleet), [Shared VPC with GKE support](./networking/shared-vpc-gke/)
- **networking** - [Decentralized firewall management](./networking/decentralized-firewall), [Decentralized firewall validator](./networking/decentralized-firewall/validator), [Network filtering with Squid](./networking/filtering-proxy), [Network filtering with Squid with isolated VPCs using Private Service Connect](./networking/filtering-proxy-psc), [HTTP Load Balancer with Cloud Armor](./networking/glb-and-armor), [Hub and Spoke via VPN](./networking/hub-and-spoke-vpn), [Hub and Spoke via VPC Peering](./networking/hub-and-spoke-peering), [Internal Load Balancer as Next Hop](./networking/ilb-next-hop), On-prem DNS and Google Private Access, [Calling a private Cloud Function from On-premises](./networking/private-cloud-function-from-onprem), [Hybrid connectivity to on-premise services through PSC](./networking/psc-hybrid), [PSC Producer](./networking/psc-hybrid/psc-producer), [PSC Consumer](./networking/psc-hybrid/psc-consumer), [Shared VPC with optional GKE cluster](./networking/shared-vpc-gke)
- **networking** - [Calling a private Cloud Function from On-premises](./networking/private-cloud-function-from-onprem), [Decentralized firewall management](./networking/decentralized-firewall), [Decentralized firewall validator](./networking/decentralized-firewall/validator), [Network filtering with Squid](./networking/filtering-proxy), [GLB and multi-regional daisy-chaining through hybrid NEGs](./networking/glb-hybrid-neg-internal), [Hybrid connectivity to on-premise services through PSC](./networking/psc-hybrid), [HTTP Load Balancer with Cloud Armor](./networking/glb-and-armor), [Hub and Spoke via VPN](./networking/hub-and-spoke-vpn), [Hub and Spoke via VPC Peering](./networking/hub-and-spoke-peering), [Internal Load Balancer as Next Hop](./networking/ilb-next-hop), [Network filtering with Squid with isolated VPCs using Private Service Connect](./networking/filtering-proxy-psc), On-prem DNS and Google Private Access, [PSC Producer](./networking/psc-hybrid/psc-producer), [PSC Consumer](./networking/psc-hybrid/psc-consumer), [Shared VPC with optional GKE cluster](./networking/shared-vpc-gke)
- **serverless** - [Creating multi-region deployments for API Gateway](./serverless/api-gateway), [Cloud Run series](./serverless/cloud-run-explore)
- **third party solutions** - [OpenShift on GCP user-provisioned infrastructure](./third-party-solutions/openshift), [Wordpress deployment on Cloud Run](./third-party-solutions/wordpress/cloudrun)

38 changes: 22 additions & 16 deletions blueprints/networking/README.md
@@ -6,15 +6,27 @@ They are meant to be used as minimal but complete starting points to create actu

## Blueprints

### Calling a private Cloud Function from on-premises

<a href="./private-cloud-function-from-onprem/" title="Private Cloud Function from On-premises"><img src="./private-cloud-function-from-onprem/diagram.png" align="left" width="280px"></a> This [blueprint](./private-cloud-function-from-onprem/) shows how to invoke a [private Google Cloud Function](https://cloud.google.com/functions/docs/networking/network-settings) from the on-prem environment via a [Private Service Connect endpoint](https://cloud.google.com/vpc/docs/private-service-connect#benefits-apis).

<br clear="left">

### Calling on-premise services through PSC and hybrid NEGs

<a href="./psc-hybrid/" title="Hybrid connectivity to on-premise services thrugh PSC"><img src="./psc-hybrid/diagram.png" align="left" width="280px"></a> This [blueprint](./psc-hybrid/) shows how to privately connect to on-premise services (IP + port) from GCP, leveraging [Private Service Connect (PSC)](https://cloud.google.com/vpc/docs/private-service-connect) and [Hybrid Network Endpoint Groups](https://cloud.google.com/load-balancing/docs/negs/hybrid-neg-concepts).

<br clear="left">

### Decentralized firewall management

<a href="./decentralized-firewall/" title="Decentralized firewall management"><img src="./decentralized-firewall/diagram.png" align="left" width="280px"></a> This [blueprint](./decentralized-firewall/) shows how a decentralized firewall management can be organized using the [firewall factory](../factories/net-vpc-firewall-yaml/).

<br clear="left">

### Network filtering with Squid
### GLB and multi-regional daisy-chaining through hybrid NEGs

<a href="./filtering-proxy/" title="Network filtering with Squid"><img src="./filtering-proxy/squid.png" align="left" width="280px"></a> This [blueprint](./filtering-proxy/) how to deploy a filtering HTTP proxy to restrict Internet access, in a simplified setup using a VPC with two subnets and a Cloud DNS zone, and an optional MIG for scaling.
<a href="./glb-hybrid-neg-internal/" title="XGLB and multi-regional daisy-chaining through hybrid NEGs"><img src="./glb-hybrid-neg-internal/diagram.png" align="left" width="280px"></a> This [blueprint](./glb-hybrid-neg-internal/) shows the experimental use of hybrid NEGs behind external Global Load Balancers (GLBs) to connect to GCP instances living in spoke VPCs and behind Network Virtual Appliances (NVAs).

<br clear="left">

@@ -24,19 +24,19 @@

<br clear="left">

### Hub and Spoke via Peering
### Hub and Spoke via Dynamic VPN

<a href="./hub-and-spoke-peering/" title="Hub and spoke via peering blueprint"><img src="./hub-and-spoke-peering/diagram.png" align="left" width="280px"></a> This [blueprint](./hub-and-spoke-peering/) implements a hub and spoke topology via VPC peering, a common design where a landing zone VPC (hub) is connected to on-premises, and then peered with satellite VPCs (spokes) to further partition the infrastructure.
<a href="./hub-and-spoke-vpn/" title="Hub and spoke via dynamic VPN"><img src="./hub-and-spoke-vpn/diagram.png" align="left" width="280px"></a> This [blueprint](./hub-and-spoke-vpn/) implements a hub and spoke topology via dynamic VPN tunnels, a common design where peering cannot be used due to limitations on the number of spokes or connectivity to managed services.

The sample highlights the lack of transitivity in peering: the absence of connectivity between spokes, and the need to create workarounds for private service access to managed services. One such workaround is shown for private GKE, allowing access from the hub and all spokes to GKE masters via a dedicated VPN.
The blueprint shows how to implement spoke transitivity via BGP advertisements, how to expose hub DNS zones to spokes via DNS peering, and allows easy testing of different VPN and BGP configurations.

<br clear="left">

### Hub and Spoke via Dynamic VPN
### Hub and Spoke via Peering

<a href="./hub-and-spoke-vpn/" title="Hub and spoke via dynamic VPN"><img src="./hub-and-spoke-vpn/diagram.png" align="left" width="280px"></a> This [blueprint](./hub-and-spoke-vpn/) implements a hub and spoke topology via dynamic VPN tunnels, a common design where peering cannot be used due to limitations on the number of spokes or connectivity to managed services.
<a href="./hub-and-spoke-peering/" title="Hub and spoke via peering blueprint"><img src="./hub-and-spoke-peering/diagram.png" align="left" width="280px"></a> This [blueprint](./hub-and-spoke-peering/) implements a hub and spoke topology via VPC peering, a common design where a landing zone VPC (hub) is connected to on-premises, and then peered with satellite VPCs (spokes) to further partition the infrastructure.

The blueprint shows how to implement spoke transitivity via BGP advertisements, how to expose hub DNS zones to spokes via DNS peering, and allows easy testing of different VPN and BGP configurations.
The sample highlights the lack of transitivity in peering: the absence of connectivity between spokes, and the need to create workarounds for private service access to managed services. One such workaround is shown for private GKE, allowing access from the hub and all spokes to GKE masters via a dedicated VPN.

<br clear="left">

@@ -63,15 +75,9 @@ The emulated on-premises environment can be used to test access to different ser

-->

### Calling a private Cloud Function from on-premises

<a href="./private-cloud-function-from-onprem/" title="Private Cloud Function from On-premises"><img src="./private-cloud-function-from-onprem/diagram.png" align="left" width="280px"></a> This [blueprint](./private-cloud-function-from-onprem/) shows how to invoke a [private Google Cloud Function](https://cloud.google.com/functions/docs/networking/network-settings) from the on-prem environment via a [Private Service Connect endpoint](https://cloud.google.com/vpc/docs/private-service-connect#benefits-apis).

<br clear="left">

### Calling on-premise services through PSC and hybrid NEGs
### Network filtering with Squid

<a href="./psc-hybrid/" title="Hybrid connectivity to on-premise services thrugh PSC"><img src="./psc-hybrid/diagram.png" align="left" width="280px"></a> This [blueprint](./psc-hybrid/) shows how to privately connect to on-premise services (IP + port) from GCP, leveraging [Private Service Connect (PSC)](https://cloud.google.com/vpc/docs/private-service-connect) and [Hybrid Network Endpoint Groups](https://cloud.google.com/load-balancing/docs/negs/hybrid-neg-concepts).
<a href="./filtering-proxy/" title="Network filtering with Squid"><img src="./filtering-proxy/squid.png" align="left" width="280px"></a> This [blueprint](./filtering-proxy/) how to deploy a filtering HTTP proxy to restrict Internet access, in a simplified setup using a VPC with two subnets and a Cloud DNS zone, and an optional MIG for scaling.

<br clear="left">

100 changes: 100 additions & 0 deletions blueprints/networking/glb-hybrid-neg-internal/README.md
@@ -0,0 +1,100 @@
# GLB and multi-regional daisy-chaining through hybrid NEGs

The blueprint shows the experimental use of hybrid NEGs behind external Global Load Balancers (GLBs) to connect to GCP instances living in spoke VPCs and behind Network Virtual Appliances (NVAs).

<p align="center"> <img src="diagram.png" width="700"> </p>

This removes the need to configure per-destination-VM NAT rules on the NVAs.

User traffic enters the GLB, traverses the NVAs, and is routed to the destination VMs (or to the ILBs in front of the VMs) in the spokes.
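
Once the blueprint is deployed, a quick way to verify the path end to end is to curl the GLB address exposed as an output. A minimal smoke test, assuming the test VMs have finished installing nginx:

```bash
# The output name matches outputs.tf; the nginx welcome page served by
# the test VMs in the spoke confirms the GLB -> NVA -> spoke path works.
curl -v "http://$(terraform output -raw glb_ip_address)"
```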

## What the blueprint creates

This is what the blueprint brings up, using the default module values.
The ids `primary` and `secondary` identify the two regions, by default `europe-west1` and `europe-west4`. The sketch after the list below shows how to override these defaults.

- Projects: landing, spoke-01

- VPCs and subnets
+ landing-untrusted: primary - 192.168.1.0/24 and secondary - 192.168.2.0/24
+ landing-trusted: primary - 192.168.11.0/24 and secondary - 192.168.22.0/24
+ spoke-01: primary - 192.168.101.0/24 and secondary - 192.168.102.0/24

- Cloud NAT
+ landing-untrusted (both for primary and secondary)
+ spoke-01 (both for primary and secondary) - for test purposes only, so that the VMs can automatically install nginx even if the NVAs are not ready yet

- VMs
+ NVAs in MIGs in the landing project, both in primary and secondary, with NICs in the untrusted and in the trusted VPCs
+ Test VMs, in spoke-01, both in primary and secondary. Optionally, deployed in MIGs

- Hybrid NEGs in the untrusted VPC, both in primary and secondary, either pointing to the test VMs in the spoke or (optionally) to ILBs in the spokes (if test VMs are deployed as MIGs)

- Internal Load Balancers (L4 ILBs)
+ in the untrusted VPC, pointing to NVA MIGs, both in primary and secondary. Their VIPs are used by custom routes in the untrusted VPC, so that all traffic that arrives in the untrusted VPC destined for the test VMs in the spoke is sent through the NVAs
+ optionally, in the spokes. They are created if the user decides to deploy the test VMs as MIGs

- External Global Load Balancer (GLB) in the untrusted VPC, using the hybrid NEGs as its backends
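
As a reference, here is a minimal sketch showing how the defaults listed above can be overridden; the variable names match `variables.tf`, while the values are purely illustrative.

```hcl
# Hypothetical overrides of the default regions and spoke CIDRs.
module "glb_hybrid" {
  source = "./fabric/blueprints/networking/glb-hybrid-neg-internal"
  prefix = "test"
  projects_create = {
    billing_account_id = "123456-123456-123456"
    parent             = "folders/123456789"
  }
  regions = {
    primary   = "us-east1"
    secondary = "us-west1"
  }
  ip_config = {
    spoke_primary   = "192.168.201.0/24"
    spoke_secondary = "192.168.202.0/24"
  }
}
```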

## Health Checks

Google Cloud sends [health checks](https://cloud.google.com/load-balancing/docs/health-checks) using [specific IP ranges](https://cloud.google.com/load-balancing/docs/health-checks#fw-netlb). Each VPC uses [implicit routes](https://cloud.google.com/vpc/docs/routes#special_return_paths) to send the health check replies back to Google.
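
For these probes to be admitted at all, the VPCs hosting backends need ingress firewall rules for the ranges above. The blueprint defines its own rules; the following is only a hedged sketch, with placeholder names:

```hcl
# Illustrative only: allow Google health check probes to reach backends
# on port 80 (the port nginx listens on in this blueprint).
resource "google_compute_firewall" "allow_health_checks" {
  name          = "allow-google-health-checks"
  network       = "landing-untrusted"
  source_ranges = ["130.211.0.0/22", "35.191.0.0/16"]

  allow {
    protocol = "tcp"
    ports    = ["80"]
  }
}
```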

At the time of writing, when Google Cloud sends out [health checks](https://cloud.google.com/load-balancing/docs/health-checks) against backend services, it expects the replies to come back from the same VPC they were sent to.

Given that the GLB lives in the untrusted VPC, its backend service health checks are sent out to that VPC, and the replies are expected from it. However, the destinations of the health checks are the test VMs in the spokes.

The blueprint configures some custom routes in the untrusted VPC and routing/NAT rules in the NVAs, so that health checks reach the test VMs through the NVAs, and replies come back through the NVAs in the untrusted VPC. Without these configurations health checks will fail and backend services won't be reachable.

Specifically:

- we create two custom routes in the untrusted VPC (one per region) so that traffic for the spoke subnets is sent to the VIP of the L4 ILBs in front of the NVAs (a minimal sketch follows this list)

- we configure the NVAs so they know how to route traffic to the spokes via the trusted VPC gateway

- we configure the NVAs to source-NAT (specifically, masquerade) health check traffic destined for the test VMs
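
As a sketch of the first point, a custom route can use the ILB forwarding rule as its next hop. All names and references below are illustrative, not the blueprint's actual resources:

```hcl
# Illustrative: send spoke-bound traffic in the untrusted VPC through
# the L4 ILB fronting the primary-region NVAs.
resource "google_compute_route" "spoke_primary_via_nva" {
  name         = "spoke-primary-via-nva"
  network      = "landing-untrusted"
  dest_range   = "192.168.101.0/24" # spoke-01 primary subnet
  next_hop_ilb = google_compute_forwarding_rule.nva_primary.self_link
}
```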

## Changing the ilb_create variable

Through the `ilb_create` variable you can decide whether the test VMs in the spoke are deployed as MIGs with ILBs in front. This also configures the NEGs so that they point to the ILB VIPs instead of the VM IPs.

At the moment, every time the configuration of a NEG changes, the NEG is recreated. When this happens, the provider doesn't check whether the NEG is used by other resources, such as GLB backend services. Until this is fixed, every time you need to change the NEG configuration (i.e. when changing the `ilb_create` variable) you'll have to work around it. Here is how:

- Destroy the existing backend service: `terraform destroy -target 'module.hybrid-glb.google_compute_backend_service.default["default"]'`

- Change the variable `ilb_create`

- run `terraform apply`
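
Put together, the workaround looks like the following; the `-target` address comes from the steps above, while the tfvars edit stands for whatever NEG-affecting change you intend to make:

```bash
# 1. remove the backend service that references the NEGs
terraform destroy -target 'module.hybrid-glb.google_compute_backend_service.default["default"]'

# 2. change the variable, e.g. in terraform.tfvars:
#    ilb_create = "true"

# 3. re-apply: the NEGs and the backend service are recreated together
terraform apply
```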
<!-- BEGIN TFDOC -->

## Variables

| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [prefix](variables.tf#L36) | Prefix used for resource names. | <code>string</code> | ✓ | |
| [ilb_create](variables.tf#L17) | Whether we should create an ILB L4 in front of the test VMs in the spoke. | <code>string</code> | | <code>&#34;false&#34;</code> |
| [ip_config](variables.tf#L23) | The subnet IP configurations. | <code title="object&#40;&#123;&#10; spoke_primary &#61; optional&#40;string, &#34;192.168.101.0&#47;24&#34;&#41;&#10; spoke_secondary &#61; optional&#40;string, &#34;192.168.102.0&#47;24&#34;&#41;&#10; trusted_primary &#61; optional&#40;string, &#34;192.168.11.0&#47;24&#34;&#41;&#10; trusted_secondary &#61; optional&#40;string, &#34;192.168.22.0&#47;24&#34;&#41;&#10; untrusted_primary &#61; optional&#40;string, &#34;192.168.1.0&#47;24&#34;&#41;&#10; untrusted_secondary &#61; optional&#40;string, &#34;192.168.2.0&#47;24&#34;&#41;&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code>&#123;&#125;</code> |
| [project_names](variables.tf#L45) | The project names. | <code title="object&#40;&#123;&#10; landing &#61; string&#10; spoke_01 &#61; string&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code title="&#123;&#10; landing &#61; &#34;landing&#34;&#10; spoke_01 &#61; &#34;spoke-01&#34;&#10;&#125;">&#123;&#8230;&#125;</code> |
| [projects_create](variables.tf#L57) | Parameters for the creation of the new project. | <code title="object&#40;&#123;&#10; billing_account_id &#61; string&#10; parent &#61; string&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code>null</code> |
| [regions](variables.tf#L66) | Region definitions. | <code title="object&#40;&#123;&#10; primary &#61; string&#10; secondary &#61; string&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code title="&#123;&#10; primary &#61; &#34;europe-west1&#34;&#10; secondary &#61; &#34;europe-west4&#34;&#10;&#125;">&#123;&#8230;&#125;</code> |

## Outputs

| name | description | sensitive |
|---|---|:---:|
| [glb_ip_address](outputs.tf#L17) | Load balancer IP address. | |

<!-- END TFDOC -->
## Test
```hcl
module "test" {
source = "./fabric/blueprints/networking/glb-hybrid-neg-internal"
prefix = "prefix"
projects_create = {
billing_account_id = "123456-123456-123456"
parent = "folders/123456789"
}
}

# tftest modules=21 resources=64
```
@@ -0,0 +1,42 @@
#!/bin/bash

# Copyright 2023 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

echo 'Enabling IP forwarding'
sed '/net.ipv4.ip_forward=1/s/^#//g' -i /etc/sysctl.conf &&
sysctl -p /etc/sysctl.conf &&
/etc/init.d/procps restart

echo 'Setting routes'
ip route add ${spoke-primary} via ${gateway-trusted} dev ens5
ip route add ${spoke-secondary} via ${gateway-trusted} dev ens5

echo 'Setting NAT masquerade, so that HCs can reach the spoke through the NVA using the trusted intf source IP'
iptables \
-t nat \
-A POSTROUTING \
-s 130.211.0.0/22,35.191.0.0/16 \
-d ${spoke-primary} \
-p tcp \
--dport 80 \
-j MASQUERADE
iptables \
-t nat \
-A POSTROUTING \
-s 130.211.0.0/22,35.191.0.0/16 \
-d ${spoke-secondary} \
-p tcp \
--dport 80 \
-j MASQUERADE
(binary file `diagram.png` not rendered)
1 change: 1 addition & 0 deletions blueprints/networking/glb-hybrid-neg-internal/diagram.svg