docs: Update readme to include additional information for Kubefuffle #83

Open
wants to merge 1 commit into base: master
23 changes: 16 additions & 7 deletions infrastructure/README.md
@@ -32,17 +32,22 @@ Before running the infra scripts, you need to:
```

1. You need to set the following **REQUIRED** variables (as environment variables prefixing with `TF_VAR_` or variables in `terraform.tfvars`)
* `prefix` : Make sure it's unique else, it will clash with other envs
* `rds_vpc_cidr` : Pick an unused CIDR in the range `172.16.. - 172.29..` (defaults to `172.31.0.0/16`)
* `prefix` : Make sure it's unique, otherwise it will clash with other envs (primary and secondary should use different prefixes).
* `quay_vpc_cidr` : Pick an unused CIDR in the range `172.16.. - 172.29..` (defaults to `172.31.0.0/16`)
* `db_password` : The password that will be set on the quay and clair RDS DBs
* `deploy_type`: `primary` or `secondary`; this is useful for a multi-region setup (default `primary`)
* `region` : AWS region to use for deployment (default `us-east-1`)
* `openshift_vpc_id` : VPC ID where openshift is deployed (used for creating peering)
* `region` : AWS region to use for deployment (default `us-east-1`, used for primary); the secondary should use `us-east-2`
* `openshift_vpc_id` : VPC ID where OpenShift is deployed (used for creating peering). The primary and secondary regions use different VPCs; each can be found by looking at the VPC where the corresponding OCP cluster is deployed
* `openshift_cidrs`: CIDRs for openshift access to RDS (check the Openshift VPC to get this value)
* For builders, set: `builder_access_key`, `builder_secret_key` and `builder_ssh_keypair` (points to the ssh keypair set on the AWS UI console)
* If we want to use clair, set variables: `enable_clair=true`, `clair_image` and `clair_db_version=<postgres db version>`
* `kube_context`: the cluster's currently active context in your kubeconfig file (can be obtained via `oc config current-context`)
* `redis_azs`: defines the availability zones for Redis; primary uses `["us-east-1a", "us-east-1b"]` and secondary uses `["us-east-2a", "us-east-2b"]`
* `dns_domain`: DNS name for the hosted Quay, in the `<prefix>.<custom-dns-name>` format

1. If you are deploying a **secondary** region you'll also have to add the following **REQUIRED** variables
1. If you are deploying a **secondary** region, you'll also have to add the following **REQUIRED** variables in addition to the ones above (an illustrative `terraform.tfvars` sketch follows this list).
* `primary_s3_bucket_arn`: ARN of the S3 bucket created in primary region. This will be used for setting up replication
* `primary_db_arn`: ARM of the DB created in the primary region. This will be used for setting up replication
* `primary_db_arn`: ARN of the DB (global instance) created in the primary region. This will be used for setting up replication
* `primary_db_hostname`: Hostname of the primary DB, used for setting up the service key when using the secondary deployment
* `primary_db_password`: Password of the primary DB, used for setting up the service key when using the secondary deployment
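
For illustration only, a minimal `terraform.tfvars` covering these variables might look like the sketch below; every value is a placeholder, the exact types should be checked against `variables.tf`, and the `primary_*` entries are only needed for a secondary deployment.

```
# Illustrative terraform.tfvars (all values are placeholders, not real resources)
prefix           = "<unique-prefix>"             # must differ between primary and secondary
quay_vpc_cidr    = "172.20.0.0/16"               # any unused CIDR in the 172.16.x to 172.29.x range
db_password      = "<db-password>"
deploy_type      = "primary"                     # or "secondary"
region           = "us-east-1"                   # secondary should use "us-east-2"
openshift_vpc_id = "<openshift-vpc-id>"
openshift_cidrs  = ["<openshift-cidr>"]
kube_context     = "<output of oc config current-context>"
redis_azs        = ["us-east-1a", "us-east-1b"]  # ["us-east-2a", "us-east-2b"] for secondary
dns_domain       = "<prefix>.<custom-dns-name>"

# For a secondary deployment, additionally set (placeholders):
# primary_s3_bucket_arn = "<primary-s3-bucket-arn>"
# primary_db_arn        = "<primary-db-arn>"
# primary_db_hostname   = "<primary-rds-hostname>"
# primary_db_password   = "<primary-db-password>"
```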

@@ -54,6 +59,8 @@ Before running the infra scripts, you need to:
* `builder_ssh_keypair`: SSH Keypair created to access the build VMs (should be created prior to deploy)
* `builder_access_key`: AWS access key for builder. Used to create EC2 VMs for building
* `builder_secret_key`: AWS secret key for builder. Used to create EC2 VMs for building
* `enable_monitoring=true`: set this if we want to set up Prometheus and Grafana
> **NOTE** The Grafana dashboards can be made available by importing Grafana JSON files. These files can be obtained via the app-interface repo. Make sure to **exclude** quay.io production-related configurations.

1. If using an env file like the examples given, set the environment variables in the current shell with `source envs/example-primary.env`.
> **NOTE** The example env file sets the `kube_context` with the command `oc config current-context` so the OCP cluster needs to be logged into first before sourcing the environment file.
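
As a hedged illustration of that ordering (the token and cluster URL below are placeholders):

```
$ oc login --token="<token>" --server="https://api.<primary-ocp-cluster>:6443"   # log in to the OCP cluster first
$ source envs/example-primary.env                                                # then source the env file
$ env | grep '^TF_VAR_' | sort                                                   # sanity-check that the variables are exported
```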
@@ -78,6 +85,8 @@ Before running the infra scripts, you need to:
$ oc project <prefix>-quay
$ oc get route
```

12. Before setting up the secondary cluster, run `oc login <secondary-ocp-cluster>` and `source envs/example-secondary.env` to ensure the correct variables are set.
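
A sketch of that sequence, assuming one Terraform workspace per region as implied by the cleanup steps below (the token and cluster URL are placeholders):

```
$ oc login --token="<token>" --server="https://api.<secondary-ocp-cluster>:6443"
$ source envs/example-secondary.env
$ terraform workspace select secondary   # `terraform workspace new secondary` on the first run
$ terraform apply
```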

## Cleaning up

@@ -94,7 +103,7 @@ $ terraform workspace select secondary
$ terraform destroy
$ terraform workspace select primary
$ oc login --token="" --server="" # login back in to OCP in primary region
$ source envs/examply-primary.env # Set the correct variables for primary region
$ source envs/example-primary.env # Set the correct variables for primary region
$ terraform destroy
```
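
Optionally, standard Terraform commands can be used to double-check that nothing was left behind in either workspace; this is a suggestion, not part of the documented procedure.

```
$ terraform workspace list   # both workspaces should still be listed
$ terraform state list       # should print nothing for a workspace that has been destroyed
```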

1 change: 1 addition & 0 deletions infrastructure/envs/example-primary.env
@@ -10,3 +10,4 @@ export TF_VAR_openshift_vpc_id="vpc-0708b20c341a77eb3d0"
export TF_VAR_kube_context=`oc config current-context`
export TF_VAR_redis_azs='["us-east-1a", "us-east-1b"]'
export TF_VAR_dns_domain=quay.quaydev.org
export TF_VAR_quay_image=<hosted quay image>
1 change: 1 addition & 0 deletions infrastructure/envs/example-secondary.env
@@ -12,3 +12,4 @@ export TF_VAR_kube_context=`oc config current-context`
export TF_VAR_redis_azs='["us-east-2a", "us-east-2b"]'
export TF_VAR_primary_s3_bucket_arn="arn:aws:s3:::kubefuffle-ue1-quay-storage"
export TF_VAR_primary_db_arn="arn:aws:rds:us-east-1:9424382383:db:quay-ue1-quay-db"
export TF_VAR_primary_db_hostname=<primary-rds-hostname>
2 changes: 1 addition & 1 deletion infrastructure/quay_db.tf
@@ -3,7 +3,7 @@ resource "aws_rds_global_cluster" "quay_global_db" {
force_destroy = true
global_cluster_identifier = "${var.prefix}-global-quay-db"
source_db_cluster_identifier = "${var.primary_db_arn}"
database_name = "quay"
database_name = local.is_primary ? "quay" : null
}

resource "aws_rds_cluster" "quay_db" {
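
The conditional above references a `local.is_primary` value that is not shown in this diff; presumably it is derived from the `deploy_type` variable documented in the README. A minimal sketch under that assumption:

```
# Hypothetical definition of the local used above (not part of this diff)
locals {
  is_primary = var.deploy_type == "primary"
}
```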
2 changes: 1 addition & 1 deletion infrastructure/quay_deployment.yaml.tpl
@@ -582,7 +582,7 @@ spec:
- name: prometheus
image: ${prometheus_image}
args:
- '--storage.tsdb.retention=6h'
- '--storage.tsdb.retention.time=6h'
- '--storage.tsdb.path=/prometheus'
- '--config.file=/etc/prometheus/prometheus.yml'
ports: