The purpose of this step is to:

- Set up the global DNS Hub.
- Set up base and restricted shared VPCs with default DNS, NAT (optional), Private Service networking, VPC Service Controls, on-prem Dedicated Interconnect, and baseline firewall rules for each environment.

Prerequisites:

- 0-bootstrap executed successfully.
- 1-org executed successfully.
- 2-environments executed successfully.
- Obtain the value for the `access_context_manager_policy_id` variable. It can be obtained by running `gcloud access-context-manager policies list --organization YOUR_ORGANIZATION_ID --format="value(name)"`.
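The policy ID you obtain here is used later in this step as a line in `access_context.auto.tfvars`. A minimal sketch of the expected format, with `1234567890` standing in for the real output of the gcloud command above:

```shell
# Placeholder for the value returned by
# `gcloud access-context-manager policies list ... --format="value(name)"`.
POLICY_ID="1234567890"

# Format it as the tfvars assignment this step expects.
printf 'access_context_manager_policy_id = "%s"\n' "${POLICY_ID}"
```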
If you have the prerequisites listed in the Dedicated Interconnect README, follow these steps to enable Dedicated Interconnect to access on-prem:

- Rename `interconnect.tf.example` to `interconnect.tf` in each environment folder in `3-networks/envs/<ENV>`.
- Update the file `interconnect.tf` with values that are valid for your environment for the interconnects, locations, candidate subnetworks, vlan_tag8021q, and peer info.
- The candidate subnetworks and vlan_tag8021q variables can be set to `null` to allow the interconnect module to auto-generate these values.
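The rename across all three environments can be scripted. A sketch, using a scratch tree as a stand-in for the real checkout (in a real repo the folders and `interconnect.tf.example` files already exist, so only the `mv` line applies):

```shell
# Scratch tree standing in for the repo layout.
work="$(mktemp -d)"
for env in development non-production production; do
  mkdir -p "${work}/3-networks/envs/${env}"
  touch "${work}/3-networks/envs/${env}/interconnect.tf.example"
  # The documented step: rename the example file to activate it.
  mv "${work}/3-networks/envs/${env}/interconnect.tf.example" \
     "${work}/3-networks/envs/${env}/interconnect.tf"
done
ls "${work}/3-networks/envs/production"
```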
If you are not able to use Dedicated Interconnect, you can also use an HA VPN to access on-prem:

- Rename `vpn.tf.example` to `vpn.tf` in each environment folder in `3-networks/envs/<ENV>`.
- Create a secret for the VPN private preshared key: `echo '<YOUR-PRESHARED-KEY-SECRET>' | gcloud secrets create <VPN_PRIVATE_PSK_SECRET_NAME> --project <ENV_SECRETS_PROJECT> --replication-policy=automatic --data-file=-`
- Create a secret for the VPN restricted preshared key: `echo '<YOUR-PRESHARED-KEY-SECRET>' | gcloud secrets create <VPN_RESTRICTED_PSK_SECRET_NAME> --project <ENV_SECRETS_PROJECT> --replication-policy=automatic --data-file=-`
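Rather than typing a preshared key by hand, you can generate one to pipe into the `gcloud secrets create` commands above. A minimal sketch using `/dev/urandom` (gcloud itself is not invoked here):

```shell
# 24 random bytes encode to exactly 32 base64 characters.
PSK="$(head -c 24 /dev/urandom | base64 | tr -d '\n')"
echo "${#PSK}"   # prints 32
```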
- Update the values for `environment`, `vpn_psk_secret_name`, `on_prem_router_ip_address1`, `on_prem_router_ip_address2`, and `bgp_peer_asn` in the file `vpn.tf`.
- Verify that the other default values are valid for your environment.
- Clone the repo: `gcloud source repos clone gcp-networks --project=YOUR_CLOUD_BUILD_PROJECT_ID`
- Change into the freshly cloned repo and change to a non-master branch: `git checkout -b plan`
- Copy the contents of the foundation to the new repo: `cp -RT ../terraform-example-foundation/3-networks/ .` (modify accordingly based on your current directory).
- Copy the Cloud Build configuration files for Terraform: `cp ../terraform-example-foundation/build/cloudbuild-tf-* .` (modify accordingly based on your current directory).
- Copy the Terraform wrapper script to the root of your new repository: `cp ../terraform-example-foundation/build/tf-wrapper.sh .` (modify accordingly based on your current directory).
- Ensure the wrapper script can be executed: `chmod 755 ./tf-wrapper.sh`
- Rename `common.auto.example.tfvars` to `common.auto.tfvars` and update the file with values from your environment and bootstrap.
- Rename `shared.auto.example.tfvars` to `shared.auto.tfvars` and update the file with the `target_name_server_addresses` (the list of target name servers for the DNS forwarding zone in the DNS Hub).
- Rename `access_context.auto.example.tfvars` to `access_context.auto.tfvars` and update the file with the `access_context_manager_policy_id`.
- Commit the changes with `git add .` and `git commit -m 'Your message'`
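The three tfvars renames above follow one pattern and can be sketched as a loop. The scratch directory and empty files below only stand in for your repo root and the real example tfvars:

```shell
# Scratch directory standing in for the repo root.
work="$(mktemp -d)"; cd "${work}"
touch common.auto.example.tfvars shared.auto.example.tfvars \
      access_context.auto.example.tfvars

# The documented step: drop the ".example" part of each filename.
for f in common shared access_context; do
  mv "${f}.auto.example.tfvars" "${f}.auto.tfvars"
done
ls
```

After the loop, remember to actually edit each renamed file with your environment's values.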
- You will need to manually plan and apply the `shared` environment (only once), since `development`, `non-production`, and `production` depend on it:
  - Change into the `./envs/shared/` directory.
  - Update `backend.tf` with your bucket name from the bootstrap step.
  - Run `terraform init`.
  - Run `terraform plan` and review the output.
  - Run `terraform apply`.
  - If you would like the bucket to be replaced by Cloud Build at run time, change the bucket name back to `UPDATE_ME`.
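The last step (restoring the `UPDATE_ME` placeholder) can be done with `sed`. The `backend.tf` below is a fabricated minimal example, and `my-tfstate-bucket` stands in for your real bucket name:

```shell
# Fabricate a minimal backend.tf for illustration; in a real run you
# edit the one in ./envs/shared/.
work="$(mktemp -d)"
cat > "${work}/backend.tf" <<'EOF'
terraform {
  backend "gcs" {
    bucket = "my-tfstate-bucket"
  }
}
EOF

# Put the placeholder back so Cloud Build can substitute it at run time.
sed -i 's/my-tfstate-bucket/UPDATE_ME/' "${work}/backend.tf"
grep bucket "${work}/backend.tf"   # shows the restored UPDATE_ME placeholder
```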
- Push your plan branch to trigger a plan: `git push --set-upstream origin plan` (the branch `plan` is not a special one; any branch whose name differs from `development`, `non-production`, or `production` will trigger a terraform plan).
- Review the plan output in your Cloud Build project: https://console.cloud.google.com/cloud-build/builds?project=YOUR_CLOUD_BUILD_PROJECT_ID
- Merge the changes to production with `git checkout -b production` and `git push origin production`.
- Review the apply output in your Cloud Build project: https://console.cloud.google.com/cloud-build/builds?project=YOUR_CLOUD_BUILD_PROJECT_ID
- After production has been applied, apply development and non-production:
  - Merge the changes to development with `git checkout -b development` and `git push origin development`, then review the apply output in your Cloud Build project: https://console.cloud.google.com/cloud-build/builds?project=YOUR_CLOUD_BUILD_PROJECT_ID
  - Merge the changes to non-production with `git checkout -b non-production` and `git push origin non-production`, then review the apply output in your Cloud Build project: https://console.cloud.google.com/cloud-build/builds?project=YOUR_CLOUD_BUILD_PROJECT_ID
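The branch-promotion flow above can be rehearsed locally before touching the real remote. The sketch below uses a local bare repository as a stand-in for the Cloud Source Repositories remote; `ci@example.com` and the file contents are placeholders, and only pushes to the real `gcp-networks` repo actually trigger builds:

```shell
# Local bare repo standing in for the Cloud Source Repositories remote.
work="$(mktemp -d)"
git init --bare --quiet "${work}/origin.git"
git clone --quiet "${work}/origin.git" "${work}/gcp-networks"
cd "${work}/gcp-networks"
git config user.email ci@example.com && git config user.name CI

# Push a non-special branch first: this is what triggers a plan build.
git checkout --quiet -b plan
echo '# 3-networks' > README.md
git add . && git commit --quiet -m 'Add network configs'
git push --quiet --set-upstream origin plan

# Promote to the environment branches; each push triggers an apply build.
for env in production development non-production; do
  git checkout --quiet -b "${env}"
  git push --quiet origin "${env}"
done
git ls-remote --heads origin | wc -l   # 4: plan + the three environments
```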
- Clone the repo you created manually in bootstrap: `git clone <YOUR_NEW_REPO-3-networks>`
- Navigate into the repo with `cd YOUR_NEW_REPO_CLONE-3-networks` and change to a non-production branch with `git checkout -b plan` (the branch `plan` is not a special one; any branch whose name differs from `development`, `non-production`, or `production` will trigger a terraform plan).
- Copy the contents of the foundation to the new repo: `cp -RT ../terraform-example-foundation/3-networks/ .` (modify accordingly based on your current directory).
- Copy the Jenkinsfile script to the root of your new repository: `cp ../terraform-example-foundation/build/Jenkinsfile .` (modify accordingly based on your current directory).
- Update the variables located in the `environment {}` section of the `Jenkinsfile` with values from your environment: `_POLICY_REPO` (optional), `_TF_SA_EMAIL`, `_STATE_BUCKET_NAME`.
- Copy the Terraform wrapper script to the root of your new repository: `cp ../terraform-example-foundation/build/tf-wrapper.sh .` (modify accordingly based on your current directory).
- Ensure the wrapper script can be executed: `chmod 755 ./tf-wrapper.sh`
- Rename `common.auto.example.tfvars` to `common.auto.tfvars` and update the file with values from your environment and bootstrap.
- Rename `shared.auto.example.tfvars` to `shared.auto.tfvars` and update the file with the `target_name_server_addresses`.
- Rename `access_context.auto.example.tfvars` to `access_context.auto.tfvars` and update the file with the `access_context_manager_policy_id`.
- Commit the changes with `git add .` and `git commit -m 'Your message'`
- You will need to manually plan and apply the `shared` environment (only once), since development, non-production, and production depend on it:
  - Change into the `./envs/shared/` directory.
  - Update `backend.tf` with your bucket name from the bootstrap step.
  - Run `terraform init`.
  - Run `terraform plan` and review the output.
  - Run `terraform apply`.
  - If you would like the bucket to be replaced at run time, change the bucket name back to `UPDATE_ME`.
- Push your plan branch: `git push --set-upstream origin plan`. The branch `plan` is not a special one; any branch whose name differs from `development`, `non-production`, or `production` will trigger a terraform plan.
  - Assuming you configured an automatic trigger in your Jenkins Master (see the Jenkins sub-module README), this will trigger a plan. You can also trigger a Jenkins job manually; given the many options to do this in Jenkins, that is out of the scope of this document, so see the Jenkins website for more details.
  - Review the plan output in your Master's web UI.
- Merge the changes to the production branch with `git checkout -b production` and `git push origin production`.
  - Review the apply output in your Master's web UI (you might want to use the "Scan Multibranch Pipeline Now" option in your Jenkins Master UI).
- After production has been applied, apply development and non-production:
  - Merge the changes to development with `git checkout -b development` and `git push origin development`, then review the apply output in your Master's web UI (you might want to use the "Scan Multibranch Pipeline Now" option).
  - Merge the changes to non-production with `git checkout -b non-production` and `git push origin non-production`, then review the apply output in your Master's web UI (you might want to use the "Scan Multibranch Pipeline Now" option).
- You can now move to the instructions in step 4-projects.
- Change into the 3-networks folder.
- Run `cp ../build/tf-wrapper.sh .`
- Run `chmod 755 ./tf-wrapper.sh`
- Rename `common.auto.example.tfvars` to `common.auto.tfvars` and update the file with values from your environment and bootstrap.
- Rename `shared.auto.example.tfvars` to `shared.auto.tfvars` and update the file with the `target_name_server_addresses`.
- Rename `access_context.auto.example.tfvars` to `access_context.auto.tfvars` and update the file with the `access_context_manager_policy_id`.
- Update `backend.tf` with your bucket from bootstrap. You can run `for i in $(find . -name 'backend.tf'); do sed -i 's/UPDATE_ME/<YOUR-BUCKET-NAME>/' $i; done`. You can run `terraform output gcs_bucket_tfstate` in the 0-bootstrap folder to obtain the bucket name.
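The find/sed one-liner above can be verified on a scratch tree before running it against the real repo; `my-tfstate-bucket` below is a placeholder for your actual bucket name:

```shell
# Scratch tree with two backend.tf files standing in for the real
# environment folders.
work="$(mktemp -d)"
mkdir -p "${work}/envs/shared" "${work}/envs/production"
for d in shared production; do
  echo 'bucket = "UPDATE_ME"' > "${work}/envs/${d}/backend.tf"
done

# The documented one-liner, with the placeholder bucket substituted in.
cd "${work}"
for i in $(find . -name 'backend.tf'); do
  sed -i 's/UPDATE_ME/my-tfstate-bucket/' "$i"
done
grep -r my-tfstate-bucket . | wc -l   # 2: both files were updated
```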
We will now deploy each of our environments (development, production, non-production) using this script. When using Cloud Build or Jenkins as your CI/CD tool, each environment corresponds to a branch in the repository for the 3-networks step, and only the corresponding environment is applied.
- Run `./tf-wrapper.sh init shared`.
- Run `./tf-wrapper.sh plan shared` and review the output.
- Run `./tf-wrapper.sh apply shared`.
- Run `./tf-wrapper.sh init production`.
- Run `./tf-wrapper.sh plan production` and review the output.
- Run `./tf-wrapper.sh apply production`.
- Run `./tf-wrapper.sh init non-production`.
- Run `./tf-wrapper.sh plan non-production` and review the output.
- Run `./tf-wrapper.sh apply non-production`.
- Run `./tf-wrapper.sh init development`.
- Run `./tf-wrapper.sh plan development` and review the output.
- Run `./tf-wrapper.sh apply development`.
If you received any errors or made any changes to the Terraform config or any `.tfvars` files, you must re-run `./tf-wrapper.sh plan <env>` before running `./tf-wrapper.sh apply <env>`.
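The twelve per-environment commands above amount to one loop. The sketch below is self-contained only for illustration: it substitutes a stub `tf-wrapper.sh` that echoes its arguments, since the real wrapper needs Terraform and GCP credentials; in the actual repo you copied the real script in an earlier step, and you should review each plan before applying.

```shell
# Scratch directory with a stub wrapper standing in for the real one.
work="$(mktemp -d)"; cd "${work}"
printf '#!/bin/sh\necho "$1 $2"\n' > tf-wrapper.sh
chmod 755 tf-wrapper.sh

# The documented sequence: shared first, then the environment branches.
for env in shared production non-production development; do
  ./tf-wrapper.sh init  "${env}"
  ./tf-wrapper.sh plan  "${env}"   # review the plan output here
  ./tf-wrapper.sh apply "${env}"
done
```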