3-networks-dual-svpc

This repo is part of a multi-part guide that shows how to configure and deploy the example.com reference architecture described in Google Cloud security foundations guide. The following table lists the parts of the guide.

0-bootstrap Bootstraps a Google Cloud organization, creating all the required resources and permissions to start using the Cloud Foundation Toolkit (CFT). This step also configures a CI/CD Pipeline for foundations code in subsequent stages.
1-org Sets up top level shared folders, monitoring and networking projects, and organization-level logging, and sets baseline security settings through organizational policy.
2-environments Sets up development, non-production, and production environments within the Google Cloud organization that you've created.
3-networks-dual-svpc (this file) Sets up base and restricted shared VPCs with default DNS, NAT (optional), Private Service networking, VPC service controls, on-premises Dedicated Interconnect, and baseline firewall rules for each environment. It also sets up the global DNS hub.
3-networks-hub-and-spoke Sets up base and restricted shared VPCs with all the default configuration found in step 3-networks-dual-svpc, but with an architecture based on the hub-and-spoke network model. It also sets up the global DNS hub.
4-projects Sets up a folder structure, projects, and application infrastructure pipeline for applications, which are connected as service projects to the shared VPC created in the previous stage.
5-app-infra Deploys a simple Compute Engine instance in one of the business unit projects, using the infra pipeline set up in 4-projects.

For an overview of the architecture and the parts, see the terraform-example-foundation README.

Purpose

The purpose of this step is to:

  • Set up the global DNS Hub.
  • Set up base and restricted shared VPCs with default DNS, NAT (optional), Private Service networking, VPC service controls, on-premises Dedicated or Partner Interconnect, and baseline firewall rules for each environment.

Prerequisites

  1. 0-bootstrap executed successfully.

  2. 1-org executed successfully.

  3. 2-environments executed successfully.

  4. Obtain the value for the access_context_manager_policy_id variable. It can be obtained by running

    gcloud access-context-manager policies list --organization YOUR_ORGANIZATION_ID --format="value(name)"
  5. For the manual step described in this document, you must have Terraform version 1.0.0 or later installed.

Note: Make sure that you use version 1.0.0 or later of Terraform throughout this series. Otherwise, you might experience Terraform state snapshot lock errors.
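A quick way to confirm the installed Terraform meets the minimum is a version comparison in the shell. This is an illustrative sketch: `version_ok` is a helper defined here, not part of the foundation, and the example line stands in for real `terraform version` output.

```shell
# Check that the installed Terraform meets the 1.0.0 minimum.
# version_ok is an illustrative helper: returns 0 if $1 >= $2 (dotted versions),
# using sort -V for version-aware ordering.
version_ok() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# In a real check, take this line from: terraform version | head -n1
tf_line='Terraform v1.3.7'            # example output; format is an assumption
tf_version="${tf_line#Terraform v}"
if version_ok "$tf_version" "1.0.0"; then
  echo "Terraform ${tf_version} is new enough"
else
  echo "Terraform ${tf_version} is too old; upgrade to 1.0.0 or later" >&2
fi
```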

Troubleshooting

Please refer to troubleshooting if you run into issues during this step.

Usage

Note: If you are using macOS, replace cp -RT with cp -R in the relevant commands. The -T flag is needed for Linux but causes problems for macOS.
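The difference comes from how GNU cp treats the destination. This throwaway sketch (temporary directories only) shows the Linux behavior:

```shell
# GNU cp (Linux): -T treats the destination as the target directory itself,
# so the *contents* of src land directly in dst instead of in dst/src.
src=$(mktemp -d)
dst=$(mktemp -d)
touch "$src/main.tf"
cp -RT "$src" "$dst"
ls "$dst"    # prints: main.tf
# macOS cp has no -T; there, `cp -R "$src/" "$dst"` (note the trailing slash
# on the source) produces the same layout.
```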

Networking Architecture

This step makes use of the Dual Shared VPC architecture, which is described in more detail in the Networking section of the Google Cloud security foundations guide. To see the version that uses the hub-and-spoke model, check step 3-networks-hub-and-spoke.

Using Dedicated Interconnect

If you provisioned the prerequisites listed in the Dedicated Interconnect README, follow these steps to enable Dedicated Interconnect to access on-premises resources.

  1. Rename interconnect.tf.example to interconnect.tf in the 3-networks-dual-svpc/modules/base_env folder.
  2. Update the file interconnect.tf with values that are valid for your environment for the interconnects, locations, candidate subnetworks, vlan_tag8021q and peer info.
  3. The candidate subnetworks and vlan_tag8021q variables can be set to null to allow the interconnect module to auto-generate these values.

Using Partner Interconnect

If you provisioned the prerequisites listed in the Partner Interconnect README, follow these steps to enable Partner Interconnect to access on-premises resources.

  1. Rename partner_interconnect.tf.example to partner_interconnect.tf in the 3-networks-dual-svpc/modules/base_env folder.
  2. Update the file partner_interconnect.tf with values that are valid for your environment for the VLAN attachments, locations, and candidate subnetworks.
  3. The candidate subnetworks variable can be set to null to allow the interconnect module to auto-generate this value.

OPTIONAL - Using High Availability VPN

If you are not able to use Dedicated or Partner Interconnect, you can also use an HA Cloud VPN to access on-premises resources.

  1. Rename vpn.tf.example to vpn.tf in the 3-networks-dual-svpc/modules/base_env folder.
  2. Create a secret for the VPN private pre-shared key.
    echo '<YOUR-PRESHARED-KEY-SECRET>' | gcloud secrets create <VPN_PRIVATE_PSK_SECRET_NAME> --project <ENV_SECRETS_PROJECT> --replication-policy=automatic --data-file=-
    
  3. Create a secret for the VPN restricted pre-shared key.
    echo '<YOUR-PRESHARED-KEY-SECRET>' | gcloud secrets create <VPN_RESTRICTED_PSK_SECRET_NAME> --project <ENV_SECRETS_PROJECT> --replication-policy=automatic --data-file=-
    
  4. In the file vpn.tf, update the values for environment, vpn_psk_secret_name, on_prem_router_ip_address1, on_prem_router_ip_address2 and bgp_peer_asn.
  5. Verify other default values are valid for your environment.
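For the pre-shared key value itself, any sufficiently random string your security policy allows will do. One common approach (a sketch, not a requirement of this guide) is to generate it with openssl and pipe it straight into the secret so it never lands in your shell history:

```shell
# Generate a random 32-byte pre-shared key, base64-encoded (44 characters).
PSK="$(openssl rand -base64 32)"
echo "generated a ${#PSK}-character pre-shared key"

# Then feed it to the secret as in the steps above; the names in angle
# brackets are placeholders from this guide:
# printf '%s' "$PSK" | gcloud secrets create <VPN_PRIVATE_PSK_SECRET_NAME> \
#   --project <ENV_SECRETS_PROJECT> --replication-policy=automatic --data-file=-
```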

Deploying with Cloud Build

  1. Clone repo.
    gcloud source repos clone gcp-networks --project=YOUR_CLOUD_BUILD_PROJECT_ID
    
  2. Change to the freshly cloned repo and check out a non-main branch.
    cd gcp-networks/
    git checkout -b plan
    
  3. Copy contents of foundation to new repo.
    cp -RT ../terraform-example-foundation/3-networks-dual-svpc/ .
    
  4. Copy Cloud Build configuration files for Terraform.
    cp ../terraform-example-foundation/build/cloudbuild-tf-* .
    
  5. Copy Terraform wrapper script to the root of your new repository.
    cp ../terraform-example-foundation/build/tf-wrapper.sh .
    
  6. Ensure wrapper script can be executed.
    chmod 755 ./tf-wrapper.sh
    
  7. Rename common.auto.example.tfvars to common.auto.tfvars and update the file with values from your environment and bootstrap. See any of the envs folder README.md files for additional information on the values in the common.auto.tfvars file.
  8. Rename shared.auto.example.tfvars to shared.auto.tfvars and update the file with the target_name_server_addresses (the list of target name servers for the DNS forwarding zone in the DNS Hub).
  9. Rename access_context.auto.example.tfvars to access_context.auto.tfvars and update the file with the access_context_manager_policy_id.
  10. Commit changes.
    git add .
    git commit -m 'Your message'
    
  11. You must manually plan and apply the shared environment (only once) since the development, non-production and production environments depend on it.
    1. Run cd ./envs/shared/.
    2. Update backend.tf with your bucket name from the bootstrap step.
    3. Export the network (terraform-net-sa) service account for impersonation: export GOOGLE_IMPERSONATE_SERVICE_ACCOUNT="<IMPERSONATE_SERVICE_ACCOUNT>"
    4. Run terraform init.
    5. Run terraform plan and review output.
    6. Run terraform apply.
    7. If you would like the bucket to be replaced by Cloud Build at run time, change the bucket name back to UPDATE_ME.
  12. Push your plan branch to trigger a plan for all environments. Because the plan branch is not a named environment branch, pushing your plan branch triggers terraform plan but not terraform apply.
    cd ../../
    git push --set-upstream origin plan
    
  13. Review the plan output in your Cloud Build project https://console.cloud.google.com/cloud-build/builds?project=YOUR_CLOUD_BUILD_PROJECT_ID
  14. Merge changes to production. Because this is a named environment branch, pushing to this branch triggers both terraform plan and terraform apply.
    git checkout -b production
    git push origin production
    
  15. Review the apply output in your Cloud Build project https://console.cloud.google.com/cloud-build/builds?project=YOUR_CLOUD_BUILD_PROJECT_ID
  16. After production has been applied, apply development.
  17. Merge changes to development. Because this is a named environment branch, pushing to this branch triggers both terraform plan and terraform apply.
    git checkout -b development
    git push origin development
    
  18. Review the apply output in your Cloud Build project https://console.cloud.google.com/cloud-build/builds?project=YOUR_CLOUD_BUILD_PROJECT_ID
  19. After development has been applied, apply non-production.
  20. Merge changes to non-production. Because this is a named environment branch, pushing to this branch triggers both terraform plan and terraform apply.
    git checkout -b non-production
    git push origin non-production
    
  21. Review the apply output in your Cloud Build project https://console.cloud.google.com/cloud-build/builds?project=YOUR_CLOUD_BUILD_PROJECT_ID
  22. You can now move to the instructions in step 4-projects.
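The branch flow used above can be seen in miniature on a throwaway local repository. Everything here is illustrative (the commit, file, and author identity are placeholders); in the real flow, each push of a named environment branch triggers both terraform plan and terraform apply in Cloud Build, while the plan branch triggers plan only.

```shell
# plan is a non-environment branch (plan only); production, development and
# non-production are named environment branches (plan + apply on push).
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -q -b plan
touch common.auto.tfvars
git add .
git -c user.email=ci@example.com -c user.name=ci commit -qm 'networks config'
# Promote the same commit through the environments in the documented order.
for env in production development non-production; do
  git checkout -q -b "$env"
done
git branch --list
```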

Deploying with Jenkins

  1. Clone the repo you created manually in 0-bootstrap.
    git clone <YOUR_NEW_REPO-3-networks>
    
  2. Navigate into the repo and check out a non-main branch. All subsequent steps assume you are running them from the <YOUR_NEW_REPO-3-networks> directory. If you run them from another directory, adjust your copy paths accordingly.
    cd YOUR_NEW_REPO_CLONE-3-networks
    git checkout -b plan
    
  3. Copy contents of foundation to new repo.
    cp -RT ../terraform-example-foundation/3-networks-dual-svpc/ .
    
  4. Copy the Jenkinsfile script to the root of your new repository.
    cp ../terraform-example-foundation/build/Jenkinsfile .
    
  5. Update the variables located in the environment {} section of the Jenkinsfile with values from your environment:
    _TF_SA_EMAIL
    _STATE_BUCKET_NAME
    _PROJECT_ID (the CI/CD project ID)
    
  6. Copy Terraform wrapper script to the root of your new repository.
    cp ../terraform-example-foundation/build/tf-wrapper.sh .
    
  7. Ensure wrapper script can be executed.
    chmod 755 ./tf-wrapper.sh
    
  8. Rename common.auto.example.tfvars to common.auto.tfvars and update the file with values from your environment and bootstrap. See any of the envs folder README.md files for additional information on the values in the common.auto.tfvars file.
  9. Rename shared.auto.example.tfvars to shared.auto.tfvars and update the file with the target_name_server_addresses.
  10. Rename access_context.auto.example.tfvars to access_context.auto.tfvars and update the file with the access_context_manager_policy_id.
  11. Commit changes.
    git add .
    git commit -m 'Your message'
    
  12. You must manually plan and apply the shared environment (only once) since the development, non-production and production environments depend on it.
    1. Run cd ./envs/shared/.
    2. Update backend.tf with your bucket name from the bootstrap step.
    3. Export the network (terraform-net-sa) service account for impersonation: export GOOGLE_IMPERSONATE_SERVICE_ACCOUNT="<IMPERSONATE_SERVICE_ACCOUNT>"
    4. Run terraform init.
    5. Run terraform plan and review output.
    6. Run terraform apply.
    7. If you would like the bucket to be replaced by Cloud Build at run time, change the bucket name back to UPDATE_ME.
  13. Push your plan branch.
     git push --set-upstream origin plan
    
    • Assuming you configured an automatic trigger in your Jenkins Controller (see the Jenkins sub-module README), this will trigger a plan. You can also trigger a Jenkins job manually; given the many ways to do this in Jenkins, that is out of the scope of this document. See the Jenkins website for more details.
  14. Review the plan output in your Controller's web UI.
  15. Merge changes to production branch.
    git checkout -b production
    git push origin production
    
  16. Review the apply output in your Controller's web UI (you might want to use the option to "Scan Multibranch Pipeline Now" in your Jenkins Controller UI).
  17. After production has been applied, apply development and non-production.
  18. Merge changes to development.
    git checkout -b development
    git push origin development
    
  19. Review the apply output in your Controller's web UI (you might want to use the option to "Scan Multibranch Pipeline Now" in your Jenkins Controller UI).
  20. Merge changes to non-production.
    git checkout -b non-production
    git push origin non-production
    
  21. Review the apply output in your Controller's web UI (you might want to use the option to "Scan Multibranch Pipeline Now" in your Jenkins Controller UI).

Run Terraform locally

  1. Change into the 3-networks-dual-svpc folder, copy the Terraform wrapper script, and ensure it can be executed.
    cd 3-networks-dual-svpc
    cp ../build/tf-wrapper.sh .
    chmod 755 ./tf-wrapper.sh
    
  2. Rename common.auto.example.tfvars to common.auto.tfvars, shared.auto.example.tfvars to shared.auto.tfvars, and access_context.auto.example.tfvars to access_context.auto.tfvars.
    mv common.auto.example.tfvars common.auto.tfvars
    mv shared.auto.example.tfvars shared.auto.tfvars
    mv access_context.auto.example.tfvars access_context.auto.tfvars
    
  3. Update common.auto.tfvars file with values from your environment and bootstrap. See any of the envs folder README.md files for additional information on the values in the common.auto.tfvars file.
  4. Update shared.auto.tfvars file with the target_name_server_addresses.
  5. Update access_context.auto.tfvars file with the access_context_manager_policy_id.
  6. Use terraform output to get the backend bucket and the networks step Terraform service account values from the 0-bootstrap output.
    export ORGANIZATION_ID=$(terraform -chdir="../0-bootstrap/" output -json common_config | jq '.org_id' | tr -d '"')
    export ACCESS_CONTEXT_MANAGER_ID=$(gcloud access-context-manager policies list --organization ${ORGANIZATION_ID} --format="value(name)")
    echo "access_context_manager_policy_id = ${ACCESS_CONTEXT_MANAGER_ID}"
    sed -i "s/ACCESS_CONTEXT_MANAGER_ID/${ACCESS_CONTEXT_MANAGER_ID}/" ./access_context.auto.tfvars
    
    export backend_bucket=$(terraform -chdir="../0-bootstrap/" output -raw gcs_bucket_tfstate)
    echo "backend_bucket = ${backend_bucket}"
    sed -i "s/TERRAFORM_STATE_BUCKET/${backend_bucket}/" ./common.auto.tfvars
    
    export NETWORKS_STEP_TERRAFORM_SERVICE_ACCOUNT_EMAIL=$(terraform -chdir="../0-bootstrap/" output -raw networks_step_terraform_service_account_email)
    echo "terraform_service_account = ${NETWORKS_STEP_TERRAFORM_SERVICE_ACCOUNT_EMAIL}"
    sed -i "s/NETWORKS_STEP_TERRAFORM_SERVICE_ACCOUNT_EMAIL/${NETWORKS_STEP_TERRAFORM_SERVICE_ACCOUNT_EMAIL}/" ./common.auto.tfvars
    
  7. Also update backend.tf with your backend bucket from 0-bootstrap output.
    for i in `find -name 'backend.tf'`; do sed -i "s/UPDATE_ME/${backend_bucket}/" $i; done
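What that loop does to each backend.tf can be illustrated safely on a temporary copy. The bucket name and directory layout below are placeholders, not values from your environment:

```shell
# Replace the UPDATE_ME placeholder in every backend.tf under a directory.
backend_bucket="my-tfstate-bucket"            # placeholder value
tmp=$(mktemp -d)
mkdir -p "$tmp/envs/shared"
cat > "$tmp/envs/shared/backend.tf" <<'EOF'
terraform {
  backend "gcs" {
    bucket = "UPDATE_ME"
  }
}
EOF
for i in $(find "$tmp" -name 'backend.tf'); do
  sed -i "s/UPDATE_ME/${backend_bucket}/" "$i"
done
grep bucket "$tmp/envs/shared/backend.tf"   # bucket = "my-tfstate-bucket"
```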
    

We will now deploy each of our environments (development, production, non-production) using this script. When using Cloud Build or Jenkins as your CI/CD tool, each environment corresponds to a branch in the repository for the 3-networks-dual-svpc step, and only the corresponding environment is applied.

To use the validate option of the tf-wrapper.sh script, please follow the instructions to install the terraform-tools component.

  1. Use terraform output to get the Cloud Build project ID and the environment step Terraform service account from the 0-bootstrap output. The environment variable GOOGLE_IMPERSONATE_SERVICE_ACCOUNT will be set using the Terraform service account to enable impersonation.
    export CLOUD_BUILD_PROJECT_ID=$(terraform -chdir="../0-bootstrap/" output -raw cloudbuild_project_id)
    echo ${CLOUD_BUILD_PROJECT_ID}
    
    export GOOGLE_IMPERSONATE_SERVICE_ACCOUNT=$(terraform -chdir="../0-bootstrap/" output -raw networks_step_terraform_service_account_email)
    echo ${GOOGLE_IMPERSONATE_SERVICE_ACCOUNT}
    
  2. Run init and plan and review output for environment shared.
    ./tf-wrapper.sh init shared
    ./tf-wrapper.sh plan shared
    
  3. Run validate and check for violations.
    ./tf-wrapper.sh validate shared $(pwd)/../policy-library ${CLOUD_BUILD_PROJECT_ID}
    
  4. Run apply shared.
    ./tf-wrapper.sh apply shared
    
  5. Run init and plan and review output for environment production.
    ./tf-wrapper.sh init production
    ./tf-wrapper.sh plan production
    
  6. Run validate and check for violations.
    ./tf-wrapper.sh validate production $(pwd)/../policy-library ${CLOUD_BUILD_PROJECT_ID}
    
  7. Run apply production.
    ./tf-wrapper.sh apply production
    
  8. Run init and plan and review output for environment non-production.
    ./tf-wrapper.sh init non-production
    ./tf-wrapper.sh plan non-production
    
  9. Run validate and check for violations.
    ./tf-wrapper.sh validate non-production $(pwd)/../policy-library ${CLOUD_BUILD_PROJECT_ID}
    
  10. Run apply non-production.
    ./tf-wrapper.sh apply non-production
    
  11. Run init and plan and review output for environment development.
    ./tf-wrapper.sh init development
    ./tf-wrapper.sh plan development
    
  12. Run validate and check for violations.
    ./tf-wrapper.sh validate development $(pwd)/../policy-library ${CLOUD_BUILD_PROJECT_ID}
    
  13. Run apply development.
    ./tf-wrapper.sh apply development
    

If you received any errors or made any changes to the Terraform config or any .tfvars files, you must re-run ./tf-wrapper.sh plan <env> before running ./tf-wrapper.sh apply <env>.

Before executing the next stages, unset the GOOGLE_IMPERSONATE_SERVICE_ACCOUNT environment variable.

unset GOOGLE_IMPERSONATE_SERVICE_ACCOUNT