
OSS Memorystore Cluster Autoscaler


Set up the Autoscaler in Cloud Run functions in a per-project deployment using Terraform
Home · Scaler component · Poller component · Forwarder component · Terraform configuration · Monitoring
Cloud Run functions · Google Kubernetes Engine
Per-Project · Centralized · Distributed


Overview

This directory contains Terraform configuration files to quickly set up the infrastructure for your Autoscaler with a per-project deployment.

In this deployment option, all the components of the Autoscaler reside in the same project as your Memorystore Cluster instances.

This deployment is ideal for independent teams who want to self-manage the infrastructure and configuration of their own Autoscalers. It is also a good entry point for testing the Autoscaler capabilities.

Architecture

(Architecture diagram: per-project deployment)

For an explanation of the components of the Autoscaler and the interaction flow, please read the main Architecture section.

The per-project deployment has the following pros and cons:

Pros

  • Design: This option has the simplest design.
  • Configuration: Control over scheduler parameters belongs to the team that owns the Memorystore clusters, so that team has the greatest freedom to adapt the Autoscaler to its needs.
  • Infrastructure: This design establishes a clear boundary of responsibility and security, because the team that owns the Memorystore clusters also owns the Autoscaler infrastructure.

Cons

  • Maintenance: With each team responsible for its own Autoscaler configuration and infrastructure, it may become difficult to ensure that all Autoscalers across the organization follow the same update guidelines.
  • Audit: Because each team has a high level of control, centralized auditing may become more complex.

Before you begin

In this section you prepare your environment for the deployment.

  1. Open the Cloud Console

  2. Activate Cloud Shell
    At the bottom of the Cloud Console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Cloud SDK already installed, including the gcloud command-line tool, and with values already set for your current project. It can take a few seconds for the session to initialize.

  3. In Cloud Shell, clone this repository:

    gcloud source repos clone memorystore-cluster-autoscaler --project=memorystore-oss-preview
  4. Change into the directory of the cloned repository, and check out the main branch:

    cd memorystore-cluster-autoscaler && git checkout main
  5. Export a variable for the working directory:

    export AUTOSCALER_DIR="$(pwd)/terraform/cloud-functions/per-project"
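
    Optionally, you can confirm that the variable points at the per-project Terraform configuration; the directory listed below should contain the Terraform files used in the remainder of this guide:

    ls "${AUTOSCALER_DIR}"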

Preparing the Autoscaler Project

In this section you prepare your project for deployment.

  1. Go to the project selector page in the Cloud Console. Select or create a Cloud project.

  2. Make sure that billing is enabled for your Google Cloud project. Learn how to confirm billing is enabled for your project.

  3. In Cloud Shell, set environment variables with the ID of your autoscaler project:

    export PROJECT_ID=<INSERT_YOUR_PROJECT_ID>
    gcloud config set project "${PROJECT_ID}"
  4. Choose the region and App Engine location where the Autoscaler infrastructure will be located:

    export REGION=us-central1
  5. Enable the required Cloud APIs:

    gcloud services enable \
      appengine.googleapis.com \
      cloudbuild.googleapis.com \
      cloudfunctions.googleapis.com \
      cloudresourcemanager.googleapis.com \
      cloudscheduler.googleapis.com \
      compute.googleapis.com \
      eventarc.googleapis.com \
      iam.googleapis.com \
      networkconnectivity.googleapis.com \
      pubsub.googleapis.com \
      logging.googleapis.com \
      monitoring.googleapis.com \
      redis.googleapis.com \
      run.googleapis.com \
      serviceconsumermanagement.googleapis.com
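
    Optionally, you can verify that the APIs were enabled. A minimal check, assuming the gcloud project set earlier (the exact list of services you see may vary):

    gcloud services list --enabled --format="value(config.name)" \
      | grep -E "redis|run|cloudfunctions"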
  6. There are two options for deploying the state store for the Autoscaler:

    1. Store the state in Firestore
    2. Store the state in Spanner

    For Firestore, follow the steps in Using Firestore for Autoscaler state. For Spanner, follow the steps in Using Spanner for Autoscaler state.

Using Firestore for Autoscaler state

  1. To use Firestore for the Autoscaler state, enable the additional APIs:

    gcloud services enable firestore.googleapis.com
  2. Create a Google App Engine app to enable the API for Firestore:

    gcloud app create --region="${REGION}"
  3. To store the state of the Autoscaler, update the database created with the Google App Engine app to use Firestore native mode:

    gcloud firestore databases update --type=firestore-native
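
    Optionally, you can confirm that the database is now in native mode; the type of the default database should read FIRESTORE_NATIVE:

    gcloud firestore databases describe --format="value(type)"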
  4. Next, continue to Deploying the Autoscaler.

Using Spanner for Autoscaler state

  1. To use Spanner for the Autoscaler state, enable the additional API:

    gcloud services enable spanner.googleapis.com
  2. If you want Terraform to create a Spanner instance (named memorystore-autoscaler-state by default) to store the state, set the following variable:

    export TF_VAR_terraform_spanner_state=true

     If you already have a Spanner instance where state must be stored, set the name of your instance:

    export TF_VAR_spanner_state_name=<INSERT_YOUR_STATE_SPANNER_INSTANCE_NAME>

    If you want to manage the state of the Autoscaler in your own Cloud Spanner instance, please create the following table in advance:

    CREATE TABLE memorystoreClusterAutoscaler (
      id STRING(MAX),
      lastScalingTimestamp TIMESTAMP,
      createdOn TIMESTAMP,
      updatedOn TIMESTAMP,
      lastScalingCompleteTimestamp TIMESTAMP,
      scalingOperationId STRING(MAX),
      scalingRequestedSize INT64,
      scalingPreviousSize INT64,
      scalingMethod STRING(MAX),
    ) PRIMARY KEY (id)
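
    If you are creating this table yourself, the following sketch shows one way to apply the DDL with gcloud. It assumes the instance name you set in TF_VAR_spanner_state_name and a database named memorystore-autoscaler-state; the database name is a placeholder, so substitute your own:

    # Placeholder database name; use the database where the state should live.
    export SPANNER_STATE_DATABASE=memorystore-autoscaler-state

    # Apply the state-table DDL shown above to the existing database.
    gcloud spanner databases ddl update "${SPANNER_STATE_DATABASE}" \
      --instance="${TF_VAR_spanner_state_name}" \
      --ddl="CREATE TABLE memorystoreClusterAutoscaler (
        id STRING(MAX),
        lastScalingTimestamp TIMESTAMP,
        createdOn TIMESTAMP,
        updatedOn TIMESTAMP,
        lastScalingCompleteTimestamp TIMESTAMP,
        scalingOperationId STRING(MAX),
        scalingRequestedSize INT64,
        scalingPreviousSize INT64,
        scalingMethod STRING(MAX),
      ) PRIMARY KEY (id)"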
  3. Next, continue to Deploying the Autoscaler.

Deploying the Autoscaler

  1. Set the project ID and region in the corresponding Terraform environment variables:

    export TF_VAR_project_id="${PROJECT_ID}"
    export TF_VAR_region="${REGION}"
  2. By default, a new Memorystore Cluster instance will be created for testing. If you want to scale an existing Memorystore Cluster instance, set the following variable:

    export TF_VAR_terraform_memorystore_cluster=false

    Set the following variable to choose the name of a new or existing cluster to scale:

    export TF_VAR_memorystore_cluster_name=<memorystore-cluster-name>

    If you do not set this variable, autoscaler-target-memorystore-cluster will be used.
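
    For example, to have the Autoscaler manage an existing cluster, you could set both variables together (the cluster name here is hypothetical):

    export TF_VAR_terraform_memorystore_cluster=false
    export TF_VAR_memorystore_cluster_name=my-existing-cluster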

    For more information on how to configure your cluster to be managed by Terraform, see Importing your Memorystore Cluster instances.

  3. To create a testbench VM with utilities for testing Memorystore, including generating load, set the following variable:

    export TF_VAR_terraform_test_vm=true

    Note that this option can only be selected when you have chosen to create a new Memorystore cluster.

  4. Change directory into the Terraform per-project directory and initialize it.

    cd "${AUTOSCALER_DIR}"
    terraform init
  5. Import the existing App Engine application into Terraform state:

    terraform import module.autoscaler-scheduler.google_app_engine_application.app "${PROJECT_ID}"
  6. Create the Autoscaler infrastructure. Answer yes when prompted, after reviewing the resources that Terraform intends to create.

    terraform apply -parallelism=2
    • If you are running this command in Cloud Shell and encounter errors of the form "Error: cannot assign requested address", this is a known issue in the Terraform Google provider. Retry the command above with the flag -parallelism=1.
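
    That is, the retried command becomes:

    terraform apply -parallelism=1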

Connecting to the Test VM

To connect to the optionally created test VM, define the following shell function:

function memorystore-testbench-ssh {
  export TEST_VM_NAME=$(terraform output -raw test_vm_name)
  export TEST_VM_ZONE=$(terraform output -raw test_vm_zone)
  export PROJECT_ID=$(gcloud config get-value project)
  gcloud compute ssh --zone "${TEST_VM_ZONE}" "${TEST_VM_NAME}" --tunnel-through-iap --project "${PROJECT_ID}"
}

You can then use memorystore-testbench-ssh to SSH to the testbench VM via IAP.
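
For example, assuming the function above has been defined in your current shell (it relies on terraform output, so run it from the directory containing the Terraform state):

cd "${AUTOSCALER_DIR}"
memorystore-testbench-ssh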

Importing your Memorystore Cluster instances

If you have existing Memorystore Cluster instances that you want to import to be managed by Terraform, follow the instructions in this section.

  1. List your Memorystore clusters:

    gcloud redis clusters list --format="table(name)"
  2. From the output of the above command, set the following variable to the name of the instance that you want to import:

    MEMORYSTORE_CLUSTER_NAME=<YOUR_MEMORYSTORE_CLUSTER_NAME>
  3. Create a Terraform config file with an empty google_redis_cluster resource:

    echo "resource \"google_redis_cluster\" \"${MEMORYSTORE_CLUSTER_NAME}\" {}" > "${MEMORYSTORE_CLUSTER_NAME}.tf"
  4. Import the Memorystore Cluster instance into the Terraform state:

    terraform import "google_redis_cluster.${MEMORYSTORE_CLUSTER_NAME}" "${MEMORYSTORE_CLUSTER_NAME}"
  5. After the import succeeds, update the Terraform config file for your instance with the actual instance attributes:

    # Filter out output-only attributes, such as id and state:
    terraform state show -no-color "google_redis_cluster.${MEMORYSTORE_CLUSTER_NAME}" \
      | grep -vE "(id|state).*(=|\{)" \
      > "${MEMORYSTORE_CLUSTER_NAME}.tf"

If you have additional Memorystore clusters to import, repeat this process.

Next steps

Your Autoscaler infrastructure is ready. Follow the instructions in the main page to configure your Autoscaler.