Terraform module for provisioning Nebuly Platform resources on GCP.
Available on Terraform Registry.
Before using this Terraform module, ensure that you have your Nebuly credentials ready.
These credentials are required to activate your installation and must be provided through the nebuly_credentials input variable.
Before using this Terraform module, ensure that the following GCP APIs are enabled in your Google Cloud project:
- sqladmin.googleapis.com
- servicenetworking.googleapis.com
- cloudresourcemanager.googleapis.com
- container.googleapis.com
- secretmanager.googleapis.com
You can enable the APIs using either the GCP Console or the gcloud CLI, as explained in the GCP Documentation.
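As a sketch, with the gcloud CLI all five APIs can be enabled in a single command (this assumes the target project is the active gcloud project; otherwise add --project with your project ID):

```shell
# Enable the GCP APIs required by this module.
# Assumes the active gcloud project is the one you will deploy to.
gcloud services enable \
  sqladmin.googleapis.com \
  servicenetworking.googleapis.com \
  cloudresourcemanager.googleapis.com \
  container.googleapis.com \
  secretmanager.googleapis.com
```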
Ensure that your GCP project has the necessary quotas for the following resources in the regions where you plan to deploy Nebuly:

Name | Min Value |
---|---|
GPUs (all regions) | 2 |
NVIDIA L4 GPUs | 1 |
NVIDIA T4 GPUs | 1 |
For more information on how to check and increase quotas, refer to the GCP Documentation.
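As a quick check, the current quota limits and usage can be inspected with the gcloud CLI (the region below is an example; replace it with your target region). Project-wide GPU quotas appear in the project description, and per-region GPU quotas in the region description:

```shell
# Project-wide quotas, including "GPUs (all regions)" (GPUS_ALL_REGIONS).
gcloud compute project-info describe

# Per-region quotas, including NVIDIA L4 / T4 GPU quotas.
gcloud compute regions describe europe-west4
```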
To get started with Nebuly installation on GCP, you can follow the steps below.
These instructions will guide you through the installation using Nebuly's default standard configuration with the Nebuly Helm Chart.
For specific configurations or assistance, reach out to the Nebuly Slack channel or email [email protected].
Import Nebuly into your Terraform root module, provide the necessary variables, and apply the changes.
For configuration examples, you can refer to the Examples.
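A minimal root-module sketch might look like the following. The module source, region, domain, and the fields of nebuly_credentials shown here are illustrative placeholders, not definitive values; refer to the Examples for complete, verified configurations:

```hcl
module "nebuly_platform" {
  # Illustrative source; check the Terraform Registry for the exact address.
  source = "nebuly-ai/nebuly-platform/gcp"

  region          = "us-central1"
  resource_prefix = "nebuly"
  platform_domain = "nebuly.example.com"

  gke_cluster_admin_users = ["[email protected]"]

  # Field names are assumptions based on the provisioned secrets;
  # use the structure documented in the module's inputs.
  nebuly_credentials = {
    client_id     = var.nebuly_client_id
    client_secret = var.nebuly_client_secret
  }

  openai_api_key              = var.openai_api_key
  openai_endpoint             = "https://api.openai.com"
  openai_gpt4_deployment_name = "gpt-4o"
}
```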
Once the Terraform changes are applied, proceed with the next steps to deploy Nebuly on the provisioned Google Kubernetes Engine (GKE) cluster.
To connect to the created GKE cluster, follow the steps below. For more information, refer to the GKE Documentation.

- Install the gcloud CLI.
- Install kubectl:
gcloud components install kubectl
- Install the gke-gcloud-auth-plugin:
gcloud components install gke-gcloud-auth-plugin
- Fetch the command for retrieving the credentials from the module outputs:
terraform output gke_cluster_get_credentials
- Run the command returned by the previous step.
Create a Kubernetes Image Pull Secret for authenticating with your Docker registry and pulling the Nebuly Docker images. The auto-generated Helm values use the name defined in the k8s_image_pull_secret_name input variable for the Image Pull Secret. If you prefer a custom name, update either the Terraform variable or your Helm values accordingly.
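As a sketch, a secret matching the default k8s_image_pull_secret_name can be created as follows. The registry server shown is an assumption, and the username and token are placeholders for the credentials you receive from Nebuly:

```shell
# Create the namespace if it does not exist yet.
kubectl create namespace nebuly

# Registry server, username, and token are placeholders;
# use the values provided with your Nebuly credentials.
kubectl create secret docker-registry nebuly-docker-pull \
  --namespace nebuly \
  --docker-server=ghcr.io \
  --docker-username=<your-username> \
  --docker-password=<your-token>
```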
Install the bootstrap Helm chart to set up all the dependencies required for installing the Nebuly Platform Helm chart on GKE.
Refer to the chart documentation for all the configuration details.
helm install oci://ghcr.io/nebuly-ai/helm-charts/bootstrap-gcp \
--namespace nebuly-bootstrap \
--generate-name \
--create-namespace
Create a Secret Provider Class to allow GKE to fetch credentials from the provisioned Key Vault.
-
Get the Secret Provider Class YAML definition from the Terraform module outputs:
terraform output secret_provider_class
-
Copy the output of the command into a file named secret-provider-class.yaml.
-
Run the following commands to install Nebuly in the Kubernetes namespace nebuly:
kubectl create ns nebuly
kubectl apply --server-side -f secret-provider-class.yaml
Retrieve the auto-generated values from the Terraform outputs and save them to a file named values.yaml:
terraform output helm_values
Install the Nebuly Platform Helm chart. Refer to the chart documentation for detailed configuration options.
helm install <your-release-name> oci://ghcr.io/nebuly-ai/helm-charts/nebuly-platform \
--namespace nebuly \
-f values.yaml \
--timeout 30m
ℹ️ During the initial installation of the chart, all required Nebuly LLMs are uploaded to your model registry. This process can take approximately 5 minutes. If the helm install command appears to be stuck, don't worry: it's simply waiting for the upload to finish.
Retrieve the external Load Balancer IP address to access the Nebuly Platform:
kubectl get svc -n nebuly-bootstrap -o jsonpath='{range .items[?(@.status.loadBalancer.ingress)]}{.status.loadBalancer.ingress[0].ip}{"\n"}{end}'
You can then register a DNS A record pointing to the Load Balancer IP address to access Nebuly via the custom domain you provided in the platform_domain input variable.
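If you manage your domain with Cloud DNS, the A record can be created with the gcloud CLI, for example as below. The record name, managed zone, and IP address are placeholders:

```shell
gcloud dns record-sets create nebuly.example.com. \
  --zone=<your-managed-zone> \
  --type=A \
  --ttl=300 \
  --rrdatas=<load-balancer-ip>
```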
You can find examples of code that uses this Terraform module in the examples directory.
Name | Version |
---|---|
 | ~>6.3.0 |
random | ~>3.6 |
tls | ~>4.0 |
Name | Description |
---|---|
gke_cluster_get_credentials | The command for connecting with the provisioned GKE cluster. |
helm_values | The values.yaml file for installing Nebuly with Helm. The default standard configuration is used, which uses Nginx as ingress controller and exposes the application to the Internet. This configuration can be customized according to specific needs. |
secret_provider_class | The secret-provider-class.yaml file to make Kubernetes reference the secrets stored in the Key Vault. |
Name | Description | Type | Default | Required |
---|---|---|---|---|
gke_cluster_admin_users | The list of email addresses of the users who will have admin access to the GKE cluster. | set(string) | n/a | yes |
gke_delete_protection | Whether the GKE Cluster should have delete protection enabled. | bool | true | no |
gke_kubernetes_version | The Kubernetes version used for the GKE cluster. | string | "1.30.3" | no |
gke_nebuly_namespaces | The namespaces used by the Nebuly installation. Update this if you use custom namespaces in the Helm chart installation. | set(string) | [ | no |
gke_node_pools | The node pools used by the GKE cluster. | map(object({ | { | no |
gke_service_account_name | The name of the Kubernetes Service Account used by the Nebuly installation. | string | "nebuly" | no |
k8s_image_pull_secret_name | The name of the Kubernetes Image Pull Secret to use. This value will be used to auto-generate the values.yaml file for installing the Nebuly Platform Helm chart. | string | "nebuly-docker-pull" | no |
labels | Common labels that will be applied to all resources. | map(string) | {} | no |
nebuly_credentials | The credentials provided by Nebuly are required for activating your platform installation. If you haven't received your credentials or have lost them, please contact [email protected]. | object({ | n/a | yes |
network_cidr_blocks | The CIDR blocks of the VPC network used by Nebuly. primary: the primary CIDR block of the VPC network. secondary_gke_pods: the secondary CIDR block used by GKE for pods. secondary_gke_services: the secondary CIDR block used by GKE for services. | object({ | { | no |
openai_api_key | The API Key used for authenticating with OpenAI. | string | n/a | yes |
openai_endpoint | The endpoint of the OpenAI API. | string | n/a | yes |
openai_gpt4_deployment_name | The name of the deployment to use for the GPT-4 model. | string | n/a | yes |
platform_domain | The domain on which the deployed Nebuly platform is made accessible. | string | n/a | yes |
postgres_server_backup_configuration | The backup settings of the PostgreSQL server. | object({ | { | no |
postgres_server_delete_protection | Whether the PostgreSQL server should have delete protection enabled. | bool | true | no |
postgres_server_disk_size | The size of the disk in GB for the PostgreSQL server. | object({ | { | no |
postgres_server_edition | The edition of the PostgreSQL server. Possible values are ENTERPRISE, ENTERPRISE_PLUS. | string | "ENTERPRISE" | no |
postgres_server_high_availability | The high availability configuration for the PostgreSQL server. | object({ | { | no |
postgres_server_maintenance_window | The time window when the PostgreSQL server can automatically restart to apply updates, specified in UTC time. | object({ | { | no |
postgres_server_tier | The tier of the PostgreSQL server. Default value: 4 vCPU, 16GB memory. | string | "db-standard-4" | no |
region | The region where the resources will be created. | string | n/a | yes |
resource_prefix | The prefix used for generating resource names. | string | n/a | yes |
- resource.google_compute_global_address.main (/terraform-docs/main.tf#43)
- resource.google_compute_network.main (/terraform-docs/main.tf#38)
- resource.google_compute_network_peering_routes_config.main (/terraform-docs/main.tf#73)
- resource.google_compute_subnetwork.main (/terraform-docs/main.tf#50)
- resource.google_container_cluster.main (/terraform-docs/main.tf#206)
- resource.google_container_node_pool.main (/terraform-docs/main.tf#253)
- resource.google_project_iam_binding.gke_cluster_admin (/terraform-docs/main.tf#337)
- resource.google_project_iam_member.gke_secret_accessors (/terraform-docs/main.tf#314)
- resource.google_secret_manager_secret.jwt_signing_key (/terraform-docs/main.tf#354)
- resource.google_secret_manager_secret.nebuly_client_id (/terraform-docs/main.tf#381)
- resource.google_secret_manager_secret.nebuly_client_secret (/terraform-docs/main.tf#393)
- resource.google_secret_manager_secret.openai_api_key (/terraform-docs/main.tf#369)
- resource.google_secret_manager_secret.postgres_analytics_password (/terraform-docs/main.tf#150)
- resource.google_secret_manager_secret.postgres_analytics_username (/terraform-docs/main.tf#138)
- resource.google_secret_manager_secret.postgres_auth_password (/terraform-docs/main.tf#191)
- resource.google_secret_manager_secret.postgres_auth_username (/terraform-docs/main.tf#179)
- resource.google_secret_manager_secret_version.jwt_signing_key (/terraform-docs/main.tf#362)
- resource.google_secret_manager_secret_version.nebuly_client_id (/terraform-docs/main.tf#389)
- resource.google_secret_manager_secret_version.nebuly_client_secret (/terraform-docs/main.tf#401)
- resource.google_secret_manager_secret_version.openai_api_key (/terraform-docs/main.tf#377)
- resource.google_secret_manager_secret_version.postgres_analytics_password (/terraform-docs/main.tf#158)
- resource.google_secret_manager_secret_version.postgres_analytics_username (/terraform-docs/main.tf#146)
- resource.google_secret_manager_secret_version.postgres_auth_password (/terraform-docs/main.tf#199)
- resource.google_secret_manager_secret_version.postgres_auth_username (/terraform-docs/main.tf#187)
- resource.google_service_account.gke_node_pool (/terraform-docs/main.tf#249)
- resource.google_service_networking_connection.main (/terraform-docs/main.tf#68)
- resource.google_sql_database.analytics (/terraform-docs/main.tf#122)
- resource.google_sql_database.auth (/terraform-docs/main.tf#163)
- resource.google_sql_database_instance.main (/terraform-docs/main.tf#82)
- resource.google_sql_user.analytics (/terraform-docs/main.tf#133)
- resource.google_sql_user.auth (/terraform-docs/main.tf#174)
- resource.google_storage_bucket.main (/terraform-docs/main.tf#407)
- resource.google_storage_bucket_iam_binding.gke_storage_object_user (/terraform-docs/main.tf#325)
- resource.random_password.analytics (/terraform-docs/main.tf#128)
- resource.random_password.auth (/terraform-docs/main.tf#169)
- resource.tls_private_key.jwt_signing_key (/terraform-docs/main.tf#350)
- data source.google_compute_zones.available (/terraform-docs/main.tf#23)
- data source.google_container_engine_versions.main (/terraform-docs/main.tf#24)
- data source.google_project.current (/terraform-docs/main.tf#22)