feat: deploy networks via github actions #10381

Merged: 1 commit, Dec 3, 2024
1 change: 1 addition & 0 deletions .github/.gitignore
@@ -0,0 +1 @@
.secrets
94 changes: 63 additions & 31 deletions .github/workflows/network-deploy.yml
@@ -1,18 +1,17 @@
name: Aztec Network EKS Deployment

# Manual triggering of this workflow is intentionally disabled
# Helm deployments do not support lock files
# Without a lockfile, manual triggering can lead to corrupted or partial deployments
name: Aztec Network Deployment

on:
push:
branches:
- staging
- production
pull_request:
branches:
- staging
- production
workflow_dispatch:
inputs:
namespace:
description: The namespace to deploy to, e.g. smoke
required: true
values_file:
description: The values file to use, e.g. 1-validators.yaml
required: true
aztec_docker_image:
description: The Aztec Docker image to use, e.g. aztecprotocol/aztec:da809c58290f9590836f45ec59376cbf04d3c4ce-x86_64
required: true

jobs:
network_deployment:
@@ -24,34 +23,67 @@ jobs:

# Set up a variable based on the branch name
env:
NAMESPACE: ${{ github.ref == 'refs/heads/production' && 'production' || 'staging' }}
AZTEC_DOCKER_IMAGE: ${{ inputs.aztec_docker_image }}
NAMESPACE: ${{ inputs.namespace }}
VALUES_FILE: ${{ inputs.values_file }}
CHART_PATH: ./spartan/aztec-network
CLUSTER_NAME: aztec-gke
REGION: us-west1-a
TF_STATE_BUCKET: aztec-terraform
GKE_CLUSTER_CONTEXT: gke_testnet-440309_us-west1-a_aztec-gke

steps:
# Step 1: Check out the repository's code
- name: Checkout code
uses: actions/checkout@v3

# Step 2: Configure AWS credentials using GitHub Secrets
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@v2
- name: Authenticate to Google Cloud
uses: google-github-actions/auth@v2
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: us-east-1
credentials_json: ${{ secrets.GCP_SA_KEY }}

- name: Set up Cloud SDK
uses: google-github-actions/setup-gcloud@v2

# Step 3: Set up Kubernetes context for AWS EKS
- name: Configure kubectl with EKS cluster
- name: Install GKE Auth Plugin
run: |
aws eks update-kubeconfig --region us-east-1 --name spartan
gcloud components install gke-gcloud-auth-plugin --quiet

# Step 4: Install Helm
- name: Install Helm
- name: Configure kubectl with GKE cluster
run: |
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
gcloud container clusters get-credentials ${{ env.CLUSTER_NAME }} --region ${{ env.REGION }}

# Step 5: Apply Helm Chart
- name: Deploy Helm chart
- name: Ensure Terraform state bucket exists
run: |
helm dependency update ${{ env.CHART_PATH }}
helm upgrade --install ${{ env.NAMESPACE }} ${{ env.CHART_PATH }} --namespace ${{ env.NAMESPACE }} --set network.public=true --atomic --create-namespace --timeout 20m
if ! gsutil ls gs://${{ env.TF_STATE_BUCKET }} >/dev/null 2>&1; then
echo "Creating GCS bucket for Terraform state..."
gsutil mb -l us-east4 gs://${{ env.TF_STATE_BUCKET }}
gsutil versioning set on gs://${{ env.TF_STATE_BUCKET }}
else
echo "Terraform state bucket already exists"
fi

@spypsy (Member) commented on Dec 3, 2024:
isn't the bucket handled by terraform? (I believe it was in AWS, just curious)

Author (Collaborator) replied:
It didn't seem to be: I ran it without explicit creation and it crashed.
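The check-then-create step above can be exercised locally by stubbing out `gsutil`; this sketch mirrors the step's control flow, with a plain shell variable standing in for the workflow's `${{ env.TF_STATE_BUCKET }}` expression.

```shell
# Local sketch of the "Ensure Terraform state bucket exists" step above,
# with gsutil stubbed out so the branch logic runs without GCP access.
gsutil() { [ "$1" = ls ] && return 1 || echo "stub: gsutil $*"; }

TF_STATE_BUCKET=aztec-terraform   # stands in for ${{ env.TF_STATE_BUCKET }}
if ! gsutil ls "gs://$TF_STATE_BUCKET" >/dev/null 2>&1; then
  echo "Creating GCS bucket for Terraform state..."
  gsutil mb -l us-east4 "gs://$TF_STATE_BUCKET"
  gsutil versioning set on "gs://$TF_STATE_BUCKET"
else
  echo "Terraform state bucket already exists"
fi
```

Because the stubbed `gsutil ls` fails, the create branch runs; flipping the stub to return 0 exercises the "already exists" branch.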

- name: Setup Terraform
uses: hashicorp/setup-terraform@v2
with:
terraform_version: "1.5.0" # Specify your desired version

- name: Terraform Init
working-directory: ./spartan/terraform/deploy-release
run: |
terraform init \
-backend-config="bucket=${{ env.TF_STATE_BUCKET }}" \
-backend-config="prefix=network-deploy/${{ env.REGION }}/${{ env.CLUSTER_NAME }}/${{ env.NAMESPACE }}/terraform.tfstate"

- name: Terraform Plan
working-directory: ./spartan/terraform/deploy-release
run: |
terraform plan \
-var="release_name=${{ env.NAMESPACE }}" \
-var="values_file=${{ env.VALUES_FILE }}" \
-var="gke_cluster_context=${{ env.GKE_CLUSTER_CONTEXT }}" \
-var="aztec_docker_image=${{ env.AZTEC_DOCKER_IMAGE }}" \
-out=tfplan

- name: Terraform Apply
working-directory: ./spartan/terraform/deploy-release
run: terraform apply -auto-approve tfplan
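A point worth noting in the init step: remote state is keyed by region, cluster, and namespace, so each deployed namespace gets isolated Terraform state. A small sketch of how that backend prefix is composed; the values mirror the workflow env above, and `smoke` is the example namespace from the input description.

```shell
# Compose the Terraform state prefix the same way the Terraform Init
# step does. REGION and CLUSTER_NAME mirror the workflow env; NAMESPACE
# uses the example value from the workflow_dispatch input description.
REGION=us-west1-a
CLUSTER_NAME=aztec-gke
NAMESPACE=smoke
PREFIX="network-deploy/${REGION}/${CLUSTER_NAME}/${NAMESPACE}/terraform.tfstate"
echo "$PREFIX"
```

With the push triggers gone, a deploy is started by hand, e.g. `gh workflow run network-deploy.yml -f namespace=smoke -f values_file=1-validators.yaml -f aztec_docker_image=<image>` using the GitHub CLI.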
4 changes: 2 additions & 2 deletions spartan/terraform/deploy-release/main.tf
@@ -1,7 +1,7 @@
terraform {
backend "s3" {
backend "gcs" {
bucket = "aztec-terraform"
region = "eu-west-2"
prefix = "terraform/state"
}
required_providers {
helm = {
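The backend swap is the crux of this file: the `s3` backend's `bucket`/`region`/`key` arguments give way to the `gcs` backend's `bucket` and `prefix` (an object path under the bucket). A minimal sketch of the resulting block, with the caveat that the `-backend-config` flags the workflow passes at `terraform init` can fill in or override these values (Terraform's partial backend configuration):

```hcl
terraform {
  backend "gcs" {
    # Defaults only; the workflow supplies per-namespace values
    # via -backend-config at init time.
    bucket = "aztec-terraform"
    prefix = "terraform/state"
  }
}
```

If memory serves, the `gcs` backend stores state at `<prefix>/<workspace>.tfstate`, so a prefix ending in `terraform.tfstate` produces an object like `.../terraform.tfstate/default.tfstate`; harmless, but worth knowing when browsing the bucket.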
18 changes: 18 additions & 0 deletions spartan/terraform/gke-cluster/main.tf
@@ -38,6 +38,24 @@ resource "google_project_iam_member" "gke_sa_roles" {
member = "serviceAccount:${google_service_account.gke_sa.email}"
}

# Create a new service account for Helm
resource "google_service_account" "helm_sa" {
account_id = "helm-sa"
display_name = "Helm Service Account"
description = "Service account for Helm operations"
}

# Add IAM roles to the Helm service account
resource "google_project_iam_member" "helm_sa_roles" {
for_each = toset([
"roles/container.admin",
"roles/storage.admin"
])
project = var.project
role = each.key
member = "serviceAccount:${google_service_account.helm_sa.email}"
}

# Create a GKE cluster
resource "google_container_cluster" "primary" {
name = "spartan-gke"
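The workflow authenticates with a `GCP_SA_KEY` secret, presumably a key for this new `helm-sa` account; the PR does not show how that key is produced. One hypothetical way to mint it in the same module (the `helm_sa_key` resource and output names are illustrative, not part of this PR):

```hcl
# Hypothetical addition: a JSON key for helm-sa. The base64-decoded
# private_key is what would be stored as the GCP_SA_KEY repo secret.
resource "google_service_account_key" "helm_sa_key" {
  service_account_id = google_service_account.helm_sa.name
}

output "helm_sa_private_key" {
  value     = google_service_account_key.helm_sa_key.private_key
  sensitive = true
}
```

`terraform output -raw helm_sa_private_key | base64 -d` would then yield the JSON credentials; whether long-lived keys should live in Terraform state at all is a separate judgment call.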