Migrate Shared VPC example to local modules #41

Merged · 19 commits · Feb 29, 2020
73 changes: 73 additions & 0 deletions infrastructure/shared-vpc-gke/README.md
@@ -0,0 +1,73 @@
# Shared VPC sample

This sample creates a basic [Shared VPC](https://cloud.google.com/vpc/docs/shared-vpc) setup using one host project and two service projects, each with a specific subnet in the shared VPC. The setup also includes the specific IAM-level configurations needed for [GKE on Shared VPC](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-shared-vpc) to enable cluster creation in one of the two service projects.

The sample has been purposefully kept simple so that it can be used as a basis for different Shared VPC configurations. This is the high-level diagram:

![High-level diagram](diagram.png "High-level diagram")

## Managed resources and services

This sample creates several distinct groups of resources:

- projects
  - host project
  - service project configured for GKE clusters
  - service project configured for GCE instances
- networking
  - the VPC network
  - one subnet with secondary ranges for GKE clusters
  - one subnet for GCE instances
  - firewall rules for SSH access via IAP and open communication within the VPC
  - Cloud NAT service
- IAM
  - one service account for the bastion GCE instance
  - one service account for the GKE nodes
  - optional owner role bindings on each project
  - optional [OS Login](https://cloud.google.com/compute/docs/oslogin/) role bindings on the GCE service project
  - role bindings to allow the GCE instance and GKE nodes logging and monitoring write access
  - role binding to allow the GCE instance cluster access
- DNS
  - one private zone
- GCE
  - one instance used to access the internal GKE cluster
- GKE
  - one private cluster with one nodepool

## Accessing the bastion instance and GKE cluster

The bastion VM has no public address, so access is mediated via [IAP](https://cloud.google.com/iap/docs), which is supported transparently by the `gcloud compute ssh` command. Authentication uses OS Login, set as a project default.

Cluster access from the bastion can leverage the instance service account's `container.developer` role; the only configuration needed is fetching cluster credentials via `gcloud container clusters get-credentials`, passing the correct cluster name, location and project via command options.
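Putting the two steps together, access might look like the following sketch; the project ids shown are hypothetical, since the actual ids depend on the `prefix` variable:

```bash
# SSH to the bastion VM through an IAP tunnel (no public IP needed)
gcloud compute ssh bastion \
  --project my-prefix-gce --zone europe-west1-b --tunnel-through-iap

# then, from the bastion, fetch credentials for the private cluster
gcloud container clusters get-credentials cluster-1 \
  --project my-prefix-gke --zone europe-west1-b

# verify cluster access
kubectl get nodes
```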

## Destroying

There's a minor glitch that can surface when running `terraform destroy`: the service project attachments to the Shared VPC sometimes do not get destroyed, even though the relevant API call succeeds. We are investigating the issue; in the meantime, if `terraform destroy` fails, manually remove the attachment in the Cloud Console or via the `gcloud` command, then relaunch `terraform destroy`.
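As a sketch, removing an attachment manually with `gcloud` might look like this, again with hypothetical project ids derived from the `prefix` variable:

```bash
# detach the GKE service project from the Shared VPC host project
gcloud compute shared-vpc associated-projects remove my-prefix-gke \
  --host-project my-prefix-net
```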

<!-- BEGIN TFDOC -->
## Variables

| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| billing_account_id | Billing account id used as default for new projects. | <code title="">string</code> | ✓ | |
| prefix | Prefix used for resources that need unique names. | <code title="">string</code> | ✓ | |
| root_node | Hierarchy node where projects will be created, 'organizations/org_id' or 'folders/folder_id'. | <code title="">string</code> | ✓ | |
| *ip_ranges* | Subnet IP CIDR ranges. | <code title="map&#40;string&#41;">map(string)</code> | | <code title="&#123;&#10;gce &#61; &#34;10.0.16.0&#47;24&#34;&#10;gke &#61; &#34;10.0.32.0&#47;24&#34;&#10;&#125;">...</code> |
| *ip_secondary_ranges* | Secondary IP CIDR ranges. | <code title="map&#40;string&#41;">map(string)</code> | | <code title="&#123;&#10;gke-pods &#61; &#34;10.128.0.0&#47;18&#34;&#10;gke-services &#61; &#34;172.16.0.0&#47;24&#34;&#10;&#125;">...</code> |
| *owners_gce* | GCE project owners, in IAM format. | <code title="list&#40;string&#41;">list(string)</code> | | <code title="">[]</code> |
| *owners_gke* | GKE project owners, in IAM format. | <code title="list&#40;string&#41;">list(string)</code> | | <code title="">[]</code> |
| *owners_host* | Host project owners, in IAM format. | <code title="list&#40;string&#41;">list(string)</code> | | <code title="">[]</code> |
| *private_service_ranges* | Private service IP CIDR ranges. | <code title="map&#40;string&#41;">map(string)</code> | | <code title="&#123;&#10;cluster-1 &#61; &#34;192.168.0.0&#47;28&#34;&#10;&#125;">...</code> |
| *project_services* | Service APIs enabled by default in new projects. | <code title="list&#40;string&#41;">list(string)</code> | | <code title="&#91;&#10;&#34;resourceviews.googleapis.com&#34;,&#10;&#34;stackdriver.googleapis.com&#34;,&#10;&#93;">...</code> |
| *region* | Region used. | <code title="">string</code> | | <code title="">europe-west1</code> |
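
For reference, a minimal `terraform.tfvars` that sets the three required variables might look like this (all values are placeholders):

```hcl
billing_account_id = "0123AB-4567CD-89EF01"
root_node          = "folders/1234567890"
prefix             = "test"
```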

## Outputs

| name | description | sensitive |
|---|---|:---:|
| gke_clusters | GKE clusters information. | |
| projects | Project ids. | |
| service_accounts | GCE and GKE service accounts. | |
| vms | GCE VMs. | |
| vpc | Shared VPC. | |
<!-- END TFDOC -->
@@ -7,16 +7,17 @@ elements {
group host_services {
name "Shared Services"
background_color "#f6f6f6"
- card dns
- card kms
+ card dns {
+   name "Private zone"
+ }
+ card nat {
+   name "NAT"
+ }
}

group vpc_host {
name "Shared VPC"
background_color "#fff3e0"
- card vpc as net_subnet {
-   name "Networking subnet"
- }
card vpc as gce_subnet {
name "GCE subnet"
}
@@ -28,15 +29,15 @@

group project_gce {
name "GCE service project"
- stacked_card gce as gce_instances {
-   name "VM instances"
+ card gce as gce_instances {
+   name "Bastion VM"
}
}

group project_gke {
name "GKE service project"
- stacked_card gke as gke_clusters {
-   name "GKE clusters"
+ card gke as gke_clusters {
+   name "GKE cluster"
}
}

@@ -46,4 +47,4 @@
paths {
gce_subnet ..> gce_instances
gke_subnet ..> gke_clusters
}
}
Binary file added infrastructure/shared-vpc-gke/diagram.png
249 changes: 249 additions & 0 deletions infrastructure/shared-vpc-gke/main.tf
@@ -0,0 +1,249 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

###############################################################################
# Host and service projects #
###############################################################################

# the container.hostServiceAgentUser role is needed for GKE on shared VPC

module "project-host" {
source = "../../modules/project"
parent = var.root_node
billing_account = var.billing_account_id
prefix = var.prefix
name = "net"
services = concat(var.project_services, ["dns.googleapis.com"])
iam_roles = [
"roles/container.hostServiceAgentUser", "roles/owner"
]
iam_members = {
"roles/container.hostServiceAgentUser" = [
"serviceAccount:${module.project-svc-gke.gke_service_account}"
]
"roles/owner" = var.owners_host
}
}

module "project-svc-gce" {
source = "../../modules/project"
parent = var.root_node
billing_account = var.billing_account_id
prefix = var.prefix
name = "gce"
services = var.project_services
oslogin = true
oslogin_admins = var.owners_gce
iam_roles = [
"roles/logging.logWriter",
"roles/monitoring.metricWriter",
"roles/owner"
]
iam_members = {
"roles/logging.logWriter" = [module.vm-bastion.service_account_iam_email],
"roles/monitoring.metricWriter" = [module.vm-bastion.service_account_iam_email],
"roles/owner" = var.owners_gce
}
}

# the container.developer role assigned to the bastion instance service account
# allows fetching GKE credentials from the bastion for clusters in this project

module "project-svc-gke" {
source = "../../modules/project"
parent = var.root_node
billing_account = var.billing_account_id
prefix = var.prefix
name = "gke"
services = var.project_services
iam_roles = [
"roles/container.developer",
"roles/owner",
]
iam_members = {
"roles/owner" = var.owners_gke
"roles/container.developer" = [module.vm-bastion.service_account_iam_email]
}
}

################################################################################
# Networking #
################################################################################

# the service project GKE robot needs the `hostServiceAgent` role throughout
# the entire life of its clusters; the `iam_project_id` project output is used
# here to set the project id so that the VPC depends on that binding, and any
# cluster using it then also depends on it indirectly; you can of course use
# the `project_id` output instead if you don't care about destroying

# subnet IAM bindings control which identities can use the individual subnets

module "vpc-shared" {
source = "../../modules/net-vpc"
project_id = module.project-host.iam_project_id
name = "shared-vpc"
shared_vpc_host = true
shared_vpc_service_projects = [
module.project-svc-gce.project_id,
module.project-svc-gke.project_id
]
subnets = {
gce = {
ip_cidr_range = var.ip_ranges.gce
region = var.region
secondary_ip_range = {}
}
gke = {
ip_cidr_range = var.ip_ranges.gke
region = var.region
secondary_ip_range = {
pods = var.ip_secondary_ranges.gke-pods
services = var.ip_secondary_ranges.gke-services
}
}
}
iam_roles = {
gke = ["roles/compute.networkUser", "roles/compute.securityAdmin"]
gce = ["roles/compute.networkUser"]
}
iam_members = {
gce = {
"roles/compute.networkUser" = concat(var.owners_gce, [
"serviceAccount:${module.project-svc-gce.cloudsvc_service_account}",
])
}
gke = {
"roles/compute.networkUser" = concat(var.owners_gke, [
"serviceAccount:${module.project-svc-gke.cloudsvc_service_account}",
"serviceAccount:${module.project-svc-gke.gke_service_account}",
])
"roles/compute.securityAdmin" = [
"serviceAccount:${module.project-svc-gke.gke_service_account}",
]
}
}
}

module "vpc-shared-firewall" {
source = "../../modules/net-vpc-firewall"
project_id = module.project-host.project_id
network = module.vpc-shared.name
admin_ranges_enabled = true
admin_ranges = values(var.ip_ranges)
}

module "nat" {
source = "../../modules/net-cloudnat"
project_id = module.project-host.project_id
region = var.region
name = "vpc-shared"
router_create = true
router_network = module.vpc-shared.name
}

################################################################################
# DNS #
################################################################################

module "host-dns" {
source = "../../modules/dns"
project_id = module.project-host.project_id
type = "private"
name = "example"
domain = "example.com."
client_networks = [module.vpc-shared.self_link]
recordsets = [
{ name = "localhost", type = "A", ttl = 300, records = ["127.0.0.1"] },
{ name = "bastion", type = "A", ttl = 300, records = module.vm-bastion.internal_ips },
]
}

################################################################################
# VM #
################################################################################

module "vm-bastion" {
source = "../../modules/compute-vm"
project_id = module.project-svc-gce.project_id
region = module.vpc-shared.subnet_regions.gce
zone = "${module.vpc-shared.subnet_regions.gce}-b"
name = "bastion"
network_interfaces = [{
network = module.vpc-shared.self_link,
subnetwork = lookup(module.vpc-shared.subnet_self_links, "gce", null),
nat = false,
addresses = null
}]
instance_count = 1
metadata = {
startup-script = join("\n", [
"#! /bin/bash",
"apt-get update",
"apt-get install -y bash-completion kubectl dnsutils"
])
}
service_account_create = true
}

################################################################################
# GKE #
################################################################################

module "cluster-1" {
source = "../../modules/gke-cluster"
name = "cluster-1"
project_id = module.project-svc-gke.project_id
location = "${module.vpc-shared.subnet_regions.gke}-b"
network = module.vpc-shared.self_link
subnetwork = module.vpc-shared.subnet_self_links.gke
secondary_range_pods = "pods"
secondary_range_services = "services"
default_max_pods_per_node = 32
labels = {
environment = "test"
}
master_authorized_ranges = {
internal-vms = var.ip_ranges.gce
}
private_cluster_config = {
enable_private_nodes = true
enable_private_endpoint = true
master_ipv4_cidr_block = var.private_service_ranges.cluster-1
}
}

module "cluster-1-nodepool-1" {
source = "../../modules/gke-nodepool"
name = "nodepool-1"
project_id = module.project-svc-gke.project_id
location = module.cluster-1.location
cluster_name = module.cluster-1.name
node_config_service_account = module.service-account-gke-node.email
}

# roles assigned via this module use non-authoritative IAM bindings at the
# project level, with no risk of conflicts with pre-existing roles

module "service-account-gke-node" {
source = "../../modules/iam-service-accounts"
project_id = module.project-svc-gke.project_id
names = ["gke-node"]
iam_project_roles = {
(module.project-svc-gke.project_id) = [
"roles/logging.logWriter",
"roles/monitoring.metricWriter",
]
}
}