Hub and spoke peering changes (#48)
* rename hub-and-spoke-vpn

* add ssh tag to shared-vpc-gke instance

* rename and rework hub and spoke peering

* fix test requirements

* align hub and spoke peering with module contents

* diagram

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* minimal fixes to onprem examples variable files

* onprem example stub, missing DNS zones and private.googleapis records onprem
ludoo authored Mar 9, 2020
1 parent 4e4ef0a commit eb6fbe5
Showing 26 changed files with 608 additions and 630 deletions.
75 changes: 75 additions & 0 deletions infrastructure/hub-and-spoke-peering/README.md
@@ -0,0 +1,75 @@
# Hub and Spoke via VPC Peering

This example creates a simple **Hub and Spoke** setup, where satellite VPC networks (spokes) are connected to a central VPC network (hub) via [VPC Peering](https://cloud.google.com/vpc/docs/vpc-peering).

The example shows some of the limitations that need to be taken into account when using VPC Peering, mostly due to the lack of transitivity between peerings:

- no mesh networking between the spokes
- complex support for managed services hosted in tenant VPCs connected via peering (Cloud SQL, GKE, etc.)

One possible solution to the managed service limitation above is presented here, using a static VPN to establish connectivity to the GKE masters in the tenant project ([courtesy of @drebes](https://github.com/drebes/tf-samples/blob/master/gke-master-from-hub/main.tf#L10)). Other solutions typically involve the use of proxies, as [described in this GKE article](https://cloud.google.com/solutions/creating-kubernetes-engine-private-clusters-with-net-proxies).

One other topic to consider when using peering is the limit of 25 peerings in each peering group, which constrains the scalability of designs like the one presented here.

The example has been purposefully kept simple to show how to use and wire the VPC modules together, and so that it can be used as a basis for more complex scenarios. This is the high-level diagram:

![High-level diagram](diagram.png "High-level diagram")

## Managed resources and services

This sample creates several distinct groups of resources:

- one VPC for the hub and one for each spoke
- one set of firewall rules for each VPC
- one Cloud NAT configuration for each spoke
- one test instance for each spoke
- one GKE cluster with a single nodepool in spoke 2
- one service account for the GCE instances
- one service account for the GKE nodes
- one static VPN gateway in the hub and in spoke 2, with a single tunnel each

## Testing GKE access from spoke 1

As mentioned above, a VPN tunnel is used as a workaround for the peering transitivity issue that would prevent any VPC other than spoke 2 from connecting to the GKE master.

To test cluster access, first log on to the spoke 2 instance and confirm cluster and IAM roles are set up correctly:

```bash
gcloud container clusters get-credentials cluster-1 --zone europe-west1-b
kubectl get all
```

The next step is to edit the peering towards the GKE master tenant VPC and enable custom route export. The peering has a name like `gke-xxxxxxxxxxxxxxxxxxxx-xxxx-xxxx-peer`; you can edit it in the Cloud Console from the *VPC network peering* page or using `gcloud`:

```bash
gcloud compute networks peerings list
# find the gke-xxxxxxxxxxxxxxxxxxxx-xxxx-xxxx-peer in the spoke-2 network
gcloud compute networks peerings update [peering name from above] \
--network spoke-2 --export-custom-routes
```

Then connect via SSH to the spoke 1 instance and run the same commands you ran on the spoke 2 instance above; you should now be able to run `kubectl` commands against the cluster. To test the default situation with no supporting VPN, comment out the two VPN modules in `main.tf` and run `terraform apply` to bring down the VPN gateways and tunnels. GKE should then only be accessible from spoke 2.

## Operational considerations

A single pre-existing project is used in this example to keep variables and complexity to a minimum; in a real-world scenario, each spoke would probably use a separate project.
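As a sketch of that direction (hypothetical variable names, not part of this example), the single `project_id` variable could be replaced by a per-network map, with each module then referencing its own project:

```hcl
# hypothetical sketch: one project per network instead of a single project_id
variable "project_ids" {
  description = "Per-network project ids."
  type        = map(string)
  default = {
    hub     = "my-hub-project"
    spoke-1 = "my-spoke-1-project"
    spoke-2 = "my-spoke-2-project"
  }
}

# each module would then reference its own project, e.g.
# module "vpc-hub" { project_id = var.project_ids.hub ... }
```

Shared VPC or per-spoke service accounts would add further variables, which is why the example keeps everything in one project.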

The VPN used to connect the GKE masters VPC does not account for HA; upgrading to HA VPN is reasonably simple using the relevant [module](../../modules/net-vpn-ha).
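A minimal sketch of that swap is below; it is not verified against the HA module's actual interface, and the router ASN, BGP addresses, and tunnel attribute names are assumptions to be checked against the module documentation:

```hcl
# hypothetical sketch, assuming an interface similar to the static VPN module
module "vpn-hub-ha" {
  source     = "../../modules/net-vpn-ha"
  project_id = var.project_id
  region     = var.region
  network    = module.vpc-hub.name
  name       = "hub-ha"
  # HA VPN exchanges routes via BGP rather than static remote ranges, so a
  # Cloud Router ASN and per-tunnel BGP sessions replace the remote_ranges
  # used by the static gateways in main.tf
  router_asn = 64514
  tunnels = {
    spoke-2-0 = {
      bgp_peer              = { address = "169.254.1.2", asn = 64515 }
      bgp_session_range     = "169.254.1.1/30"
      vpn_gateway_interface = 0
      shared_secret         = ""
    }
  }
}
```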

<!-- BEGIN TFDOC -->
## Variables

| name | description | type | required | default |
|---|---|:---: |:---:|:---:|
| project_id | Project id for all resources. | <code title="">string</code> | ✓ | |
| *ip_ranges* | IP CIDR ranges. | <code title="map&#40;string&#41;">map(string)</code> | | <code title="&#123;&#10;hub &#61; &#34;10.0.0.0&#47;24&#34;&#10;spoke-1 &#61; &#34;10.0.16.0&#47;24&#34;&#10;spoke-2 &#61; &#34;10.0.32.0&#47;24&#34;&#10;&#125;">...</code> |
| *ip_secondary_ranges* | Secondary IP CIDR ranges. | <code title="map&#40;string&#41;">map(string)</code> | | <code title="&#123;&#10;spoke-2-pods &#61; &#34;10.128.0.0&#47;18&#34;&#10;spoke-2-services &#61; &#34;172.16.0.0&#47;24&#34;&#10;&#125;">...</code> |
| *private_service_ranges* | Private service IP CIDR ranges. | <code title="map&#40;string&#41;">map(string)</code> | | <code title="&#123;&#10;spoke-2-cluster-1 &#61; &#34;192.168.0.0&#47;28&#34;&#10;&#125;">...</code> |
| *region* | VPC regions. | <code title="">string</code> | | <code title="">europe-west1</code> |

## Outputs

| name | description | sensitive |
|---|---|:---:|
| vms | GCE VMs. | |
<!-- END TFDOC -->
Binary file added infrastructure/hub-and-spoke-peering/diagram.png
280 changes: 280 additions & 0 deletions infrastructure/hub-and-spoke-peering/main.tf
@@ -0,0 +1,280 @@
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

locals {
vm-instances = concat(
module.vm-spoke-1.instances,
module.vm-spoke-2.instances
)
vm-startup-script = join("\n", [
"#! /bin/bash",
"apt-get update && apt-get install -y bash-completion dnsutils kubectl"
])
}

################################################################################
# Hub networking #
################################################################################

module "vpc-hub" {
source = "../../modules/net-vpc"
project_id = var.project_id
name = "hub"
subnets = {
default = {
ip_cidr_range = var.ip_ranges.hub
region = var.region
secondary_ip_range = {}
}
}
}

module "vpc-hub-firewall" {
source = "../../modules/net-vpc-firewall"
project_id = var.project_id
network = module.vpc-hub.name
admin_ranges_enabled = true
admin_ranges = values(var.ip_ranges)
}

################################################################################
# Spoke 1 networking #
################################################################################

module "vpc-spoke-1" {
source = "../../modules/net-vpc"
project_id = var.project_id
name = "spoke-1"
subnets = {
default = {
ip_cidr_range = var.ip_ranges.spoke-1
region = var.region
secondary_ip_range = {}
}
}
}

module "vpc-spoke-1-firewall" {
source = "../../modules/net-vpc-firewall"
project_id = var.project_id
network = module.vpc-spoke-1.name
admin_ranges_enabled = true
admin_ranges = values(var.ip_ranges)
}

module "nat-spoke-1" {
source = "../../modules/net-cloudnat"
project_id = var.project_id
region = module.vpc-spoke-1.subnet_regions.default
name = "spoke-1"
router_name = "spoke-1"
router_network = module.vpc-spoke-1.self_link
}

module "hub-to-spoke-1-peering" {
source = "../../modules/net-vpc-peering"
local_network = module.vpc-hub.self_link
peer_network = module.vpc-spoke-1.self_link
export_local_custom_routes = true
export_peer_custom_routes = false
}

################################################################################
# Spoke 2 networking #
################################################################################

module "vpc-spoke-2" {
source = "../../modules/net-vpc"
project_id = var.project_id
name = "spoke-2"
subnets = {
default = {
ip_cidr_range = var.ip_ranges.spoke-2
region = var.region
secondary_ip_range = {
pods = var.ip_secondary_ranges.spoke-2-pods
services = var.ip_secondary_ranges.spoke-2-services
}
}
}
}

module "vpc-spoke-2-firewall" {
source = "../../modules/net-vpc-firewall"
project_id = var.project_id
network = module.vpc-spoke-2.name
admin_ranges_enabled = true
admin_ranges = values(var.ip_ranges)
}

module "nat-spoke-2" {
source = "../../modules/net-cloudnat"
project_id = var.project_id
region = module.vpc-spoke-2.subnet_regions.default
name = "spoke-2"
router_name = "spoke-2"
router_network = module.vpc-spoke-2.self_link
}

module "hub-to-spoke-2-peering" {
source = "../../modules/net-vpc-peering"
local_network = module.vpc-hub.self_link
peer_network = module.vpc-spoke-2.self_link
export_local_custom_routes = true
export_peer_custom_routes = false
module_depends_on = [module.hub-to-spoke-1-peering.complete]
}

################################################################################
# Test VMs #
################################################################################

module "vm-spoke-1" {
source = "../../modules/compute-vm"
project_id = var.project_id
region = module.vpc-spoke-1.subnet_regions.default
zone = "${module.vpc-spoke-1.subnet_regions.default}-b"
name = "spoke-1-test"
network_interfaces = [{
network = module.vpc-spoke-1.self_link,
subnetwork = module.vpc-spoke-1.subnet_self_links.default,
nat = false,
addresses = null
}]
metadata = { startup-script = local.vm-startup-script }
service_account = module.service-account-gce.email
service_account_scopes = ["https://www.googleapis.com/auth/cloud-platform"]
tags = ["ssh"]
}

module "vm-spoke-2" {
source = "../../modules/compute-vm"
project_id = var.project_id
region = module.vpc-spoke-2.subnet_regions.default
zone = "${module.vpc-spoke-2.subnet_regions.default}-b"
name = "spoke-2-test"
network_interfaces = [{
network = module.vpc-spoke-2.self_link,
subnetwork = module.vpc-spoke-2.subnet_self_links.default,
nat = false,
addresses = null
}]
metadata = { startup-script = local.vm-startup-script }
service_account = module.service-account-gce.email
service_account_scopes = ["https://www.googleapis.com/auth/cloud-platform"]
tags = ["ssh"]
}

module "service-account-gce" {
source = "../../modules/iam-service-accounts"
project_id = var.project_id
names = ["gce-test"]
iam_project_roles = {
(var.project_id) = [
"roles/container.developer",
"roles/logging.logWriter",
"roles/monitoring.metricWriter",
]
}
}

################################################################################
# GKE #
################################################################################

module "cluster-1" {
source = "../../modules/gke-cluster"
name = "cluster-1"
project_id = var.project_id
location = "${module.vpc-spoke-2.subnet_regions.default}-b"
network = module.vpc-spoke-2.self_link
subnetwork = module.vpc-spoke-2.subnet_self_links.default
secondary_range_pods = "pods"
secondary_range_services = "services"
default_max_pods_per_node = 32
labels = {
environment = "test"
}
master_authorized_ranges = {
for name, range in var.ip_ranges : name => range
}
private_cluster_config = {
enable_private_nodes = true
enable_private_endpoint = true
master_ipv4_cidr_block = var.private_service_ranges.spoke-2-cluster-1
}
}

module "cluster-1-nodepool-1" {
source = "../../modules/gke-nodepool"
name = "nodepool-1"
project_id = var.project_id
location = module.cluster-1.location
cluster_name = module.cluster-1.name
node_config_service_account = module.service-account-gke-node.email
}

# roles assigned via this module use non-authoritative IAM bindings at the
# project level, with no risk of conflicts with pre-existing roles

module "service-account-gke-node" {
source = "../../modules/iam-service-accounts"
project_id = var.project_id
names = ["gke-node"]
iam_project_roles = {
(var.project_id) = [
"roles/logging.logWriter", "roles/monitoring.metricWriter",
]
}
}

################################################################################
# GKE peering VPN #
################################################################################

module "vpn-hub" {
source = "../../modules/net-vpn-static"
project_id = var.project_id
region = var.region
network = module.vpc-hub.name
name = "hub"
remote_ranges = values(var.private_service_ranges)
tunnels = {
spoke-2 = {
ike_version = 2
peer_ip = module.vpn-spoke-2.address
shared_secret = ""
traffic_selectors = { local = ["0.0.0.0/0"], remote = null }
}
}
}

module "vpn-spoke-2" {
source = "../../modules/net-vpn-static"
project_id = var.project_id
region = var.region
network = module.vpc-spoke-2.name
name = "spoke-2"
# use an aggregate of the remote ranges, so as to be less specific than the
# routes exchanged via peering
remote_ranges = ["10.0.0.0/8"]
tunnels = {
spoke-2 = {
ike_version = 2
peer_ip = module.vpn-hub.address
shared_secret = module.vpn-hub.random_secret
traffic_selectors = { local = ["0.0.0.0/0"], remote = null }
}
}
}
21 changes: 21 additions & 0 deletions infrastructure/hub-and-spoke-peering/outputs.tf
@@ -0,0 +1,21 @@
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

output "vms" {
description = "GCE VMs."
value = {
for instance in local.vm-instances :
instance.name => instance.network_interface.0.network_ip
}
}