This repository contains all OpenCraft playbooks and roles used to deploy many different types of servers.

We have several submodules defined in `.gitmodules` -- they are all either third-party roles, or roles with CircleCI builds that run separately on their own repositories. If you need to update a submodule in this repository to point to a new commit hash, `cd` into it and `git checkout` that reference. You can then stage and commit the change. Our own roles that live in this repository as submodules should be on the latest `master`; you can update them with `git submodule update --remote`.
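For example, pinning a submodule to a new commit might look like this (the path and commit are placeholders):

```shell
cd roles/some-role          # path of the submodule (example)
git fetch origin
git checkout <commit-hash>  # the new reference to pin
cd -
git add roles/some-role
git commit -m "Pin some-role to a new commit"
```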
- Create a 40GB root volume based on an Ubuntu 14.04 image. We suggest you use a vanilla image from the Ubuntu cloud images page. First you need to upload the image to your project; follow the documentation to do this. Once the image is uploaded: `Volumes` -> `Create Volume` -> `Volume Source`, choose `Image` -> `Select image`.
- Create a `vps-ssd-3` instance from this volume (`Boot Source` -> `Volume` -> your root volume).
- Create an additional blank volume for storing log and database dumps; we recommend 100GB. Attach it to the instance you created, and make a note of the device identifier it gets assigned (for example, `/dev/vdb` or `/dev/vdc`).
- Make a note of the instance's SSH private key; either save it to your SSH config for that host's IP address, or save it in an `id_rsa` file with permissions set to `600`.
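Saving the key to a separate file can be sketched as follows (the filename and address are examples):

```shell
# Create an empty file with the right permissions, then paste the key into it
install -m 600 /dev/null ~/.ssh/dalite_id_rsa
$EDITOR ~/.ssh/dalite_id_rsa   # paste the private key material here
ssh -i ~/.ssh/dalite_id_rsa ubuntu@xxx.xxx.xxx.xxx
```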
- SSH to the new instance using its private key; verify the public key fingerprint against the OpenStack log page before proceeding.
- Create an Ansible `hosts` file containing (for example):

  ```
  [dalite]
  dalite.harvardx.harvard.edu ansible_host=xxx.xxx.xxx.xxx
  ```

  Note that while the host name is largely irrelevant, this host must be in the `dalite` group.
- Create a `private-extra-vars.yml` file and store it somewhere safe; all configuration should be stored in that file.
- Obtain database credentials:
  - Save the host to `MYSQL_DALITE_HOST`.
  - Save the database name to `MYSQL_DALITE_DATABASE`.
  - Save the credentials to `MYSQL_DALITE_USER` and `MYSQL_DALITE_PASSWORD`.
- Save the device ID of the blank volume you attached for log and database dumps to `DALITE_LOG_DOWNLOAD_VOLUME_DEVICE_ID`.
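A minimal sketch of the database section of `private-extra-vars.yml` (all values are placeholders):

```yaml
MYSQL_DALITE_HOST: 'mysql.example.com'
MYSQL_DALITE_DATABASE: 'dalite'
MYSQL_DALITE_USER: 'dalite'
MYSQL_DALITE_PASSWORD: 'change-me'
DALITE_LOG_DOWNLOAD_VOLUME_DEVICE_ID: '/dev/vdc'
```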
- Generate tarsnap keys; these keys shouldn't end up on the dalite-ng server, so instead store them somewhere safe:
  - Generate a key for the swift container backup.
  - Generate a key for the dalite logrotate backup.
- Generate tarsnap read-write keys from the master key; see `tarsnap-keymgmt`.
- Go to Dead Man's Snitch at https://deadmanssnitch.com/ and generate three snitches; save them under:
  - `DALITE_LOG_TARSNAP_SNITCH` --- this will monitor saving logs; this snitch should have a daily interval.
  - `SANITY_CHECK_SNITCH` --- this will monitor the sanity check; this snitch should have a 15 minute interval.
  - `BACKUP_SWIFT_SNITCH` --- this will monitor the backup of the swift container; this snitch should have an hourly interval.
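In `private-extra-vars.yml` these would look like the following (URLs are placeholders, in the style used elsewhere in this document):

```yaml
DALITE_LOG_TARSNAP_SNITCH: 'https://nosnch.in/<log-snitch>'
SANITY_CHECK_SNITCH: 'https://nosnch.in/<sanity-check-snitch>'
BACKUP_SWIFT_SNITCH: 'https://nosnch.in/<swift-backup-snitch>'
```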
- Generate various dalite secrets:
  - `DALITE_SECRET_KEY` --- a random string.
  - `DALITE_LTI_CLIENT_SECRET` --- a random string.
  - `DALITE_PASSWORD_GENERATOR_NONCE` --- a random string.
  - `DALITE_LOG_DOWNLOAD_PASSWORD` --- a crypt-compatible encrypted password.
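The random strings can be generated with Python's `secrets` module (a sketch; the token length is our choice, not mandated by dalite):

```python
import secrets

# Random strings for DALITE_SECRET_KEY, DALITE_LTI_CLIENT_SECRET
# and DALITE_PASSWORD_GENERATOR_NONCE.
secret_key = secrets.token_urlsafe(50)
lti_client_secret = secrets.token_urlsafe(50)
password_generator_nonce = secrets.token_urlsafe(50)

print(secret_key)
print(lti_client_secret)
print(password_generator_nonce)
```

For `DALITE_LOG_DOWNLOAD_PASSWORD`, which must be a crypt-compatible hash rather than a plain random string, a command such as `openssl passwd -6` can produce one.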
- Obtain an HTTPS certificate for the dalite domain, and store it like this:

  ```yaml
  DALITE_SSL_KEY: |
    -----BEGIN PRIVATE KEY-----
    data
    -----END PRIVATE KEY-----
  DALITE_SSL_CERT: |
    -----BEGIN CERTIFICATE-----
    data
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    Parent certificate
    -----END CERTIFICATE-----
  ```
- Obtain OpenStack credentials for the project where media uploads will be stored; these should go into `DALITE_ENV`, like this:

  ```yaml
  DALITE_ENV:
    DJANGO_SETTINGS_MODULE: 'dalite.settings'
    OS_AUTH_URL: '...'
    OS_TENANT_ID: '...'
    OS_TENANT_NAME: '...'
    OS_USERNAME: '...'
    OS_PASSWORD: '...'
    OS_REGION_NAME: '...'
  ```

  and into `BACKUP_SWIFT_RC` (yes, you need to copy the same credentials in two forms, but ansible has no sensible "copy variables" facility):

  ```yaml
  BACKUP_SWIFT_RC: |
    export OS_AUTH_URL='...'
    export OS_TENANT_ID='...'
    export OS_TENANT_NAME='...'
    export OS_USERNAME='...'
    export OS_PASSWORD='...'
    export OS_REGION_NAME='...'
  ```
If the device identifier your external log volume was assigned is not `/dev/vdc` (the default we look for), you'll need to pass it into the command.
```shell
ansible-galaxy install -r requirements.yml -f
ansible-playbook deploy-all.yml -u ubuntu --extra-vars @private-extra-vars.yml --limit dalite
```
If you saved the instance's private SSH key to a separate file, rather than into your SSH configuration, you'll need to pass the `--private-key` argument to `ansible-playbook`, specifying the file where the private key can be found.
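For instance, if the log volume came up as `/dev/vdb`, the device variable from above can be overridden on the command line (a sketch):

```shell
ansible-playbook deploy-all.yml -u ubuntu \
    --extra-vars @private-extra-vars.yml \
    --extra-vars DALITE_LOG_DOWNLOAD_VOLUME_DEVICE_ID=/dev/vdb \
    --limit dalite
```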
- Create an OpenStack "vps-ssd-1" instance from a vanilla Ubuntu 16.04 (xenial) image.
- Add the IP address and host name of the new instance to the Ansible inventory in the file `hosts` (create it if necessary):

  ```
  [load-balancer]
  load-balancer.host.name
  ```
- Go to https://deadmanssnitch.com/ and create two snitches, one for the backups and one for the sanity checks. Add them to a file `host_vars/<hostname>/vars.yml`:

  ```yaml
  TARSNAP_BACKUP_SNITCH: https://nosnch.in/<backup-snitch>
  SANITY_CHECK_SNITCH: https://nosnch.in/<sanity-check-snitch>
  ```
- Generate a tarsnap master key and a subkey with only read and write permissions, and add it to the variables file:

  ```yaml
  TARSNAP_KEY: |
    # START OF TARSNAP KEY FILE
    [...]
    # END OF TARSNAP KEY FILE
  ```
Run these commands:

```shell
mkvirtualenv ansible
pip install -r requirements.txt
ansible-galaxy install -r requirements.yml -f
ansible-playbook deploy-all.yml -u ubuntu -l load-balancer
```
- Create an OpenStack "vps-ssd-2" instance from a vanilla Ubuntu 16.04 (xenial) image.
- Add the IP address and host name of the new instance to the Ansible inventory in the file `hosts` (create it if necessary):

  ```
  [elasticsearch]
  elasticsearch.host.name
  ```
- Go to https://deadmanssnitch.com/ and create one snitch for the sanity checks. Add it to a file `host_vars/<hostname>/vars.yml`:

  ```yaml
  SANITY_CHECK_SNITCH: https://nosnch.in/<sanity-check-snitch>
  ```
Run these commands:

```shell
mkvirtualenv ansible
pip install -r requirements.txt
ansible-galaxy install -r requirements.yml -f
ansible-playbook deploy-all.yml -u ubuntu -l elasticsearch
```
Deploys a high-security server that has full access to most of our tarsnapper backups. This server is used only to delete old backups.
```shell
ansible-playbook deploy/deploy-all.yml -u ubuntu --extra-vars @private-extra-vars.yml -i hosts --private-key /path/to/backup_pruner.key
```
- Get the master key.
- Save the master key to `private.yml`.
- Save the cache directory and file for the tarsnap key in the private yaml.
- Add a new entry to `TARSNAPPER_JOBS`.
- Log in to the instance.
- Pruning scripts are named `tarsnap-{{ job.name }}.sh`.
- The following operations are supported:
  a. List archives: `sudo tarsnap-{{ job.name }}.sh list`
  b. Expire archives: `sudo tarsnap-{{ job.name }}.sh expire`
  c. Expire archives (dry run): `tarsnap-{{ job.name }}.sh expire --dry-run`
This ansible repository deploys a MySQL server on an OpenStack provider.
- Create a volume for data, as big as you need. It should be empty.
- Create a security group for database servers. It should have the following rules:
  - Incoming traffic is allowed only on port 22.
  - Outgoing traffic should be allowed.
  - When you provision servers that use the database, allow them by their IP addresses.
- Create an instance using an appropriate image, such as Ubuntu 16.04. We suggest you use a vanilla image from the Ubuntu cloud images page. The instance can be small; MySQL data will be stored elsewhere.
- Attach the data volume that you created earlier to the VM (go to `Volumes` -> data volume -> expand the dropdown next to "Edit Volume" -> `Edit Attachments`).
- Manual step: SSH into the database instance and format `/dev/vdb` using `zfs`:
  - Create the zfs pool: `zpool create -m /var/lib/mysql mysql /dev/vdb`
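After creating the pool, you can verify it is healthy and mounted where MySQL expects its data (assumes the zfs tools are installed on the instance):

```shell
zpool status mysql     # pool state should be ONLINE
zfs list mysql         # shows the dataset and its mountpoint
df -h /var/lib/mysql   # confirm the pool is mounted at the MySQL data directory
```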
- Go to the instance's info page and note its public key; SSH into the instance and check whether the public key matches, then save the public key to your SSH config.
- Create `private-extra-vars.yml`; you'll put all the generated variables there.
- Generate a root password and put it in the `private.yaml` for your host, under `mysql_root_password`.
- Create a master key, but store it somewhere safe (for example, a keepassx database); this key shouldn't end up on the mysql server.
- Generate a tarsnap read-write key from the master key (see `tarsnap-keymgmt`) and save it in `MYSQL_TARSNAP_KEY`. Note: this key won't be able to delete the backups.
- Go to Dead Man's Snitch at https://deadmanssnitch.com/ and create a snitch. Save it under `TARSNAP_BACKUP_SNITCH` in the `private.yml`.
```shell
ansible-galaxy install -r requirements.yml -f && ansible-playbook deploy-all.yml -u ubuntu --extra-vars @private-extra-vars.yml
```
- Check that backups are saved to tarsnap.
- Check the contents of these backups.
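Both checks can be done with the tarsnap CLI (the key file path is an example):

```shell
tarsnap --keyfile /root/tarsnap.key --list-archives        # list existing archive names
tarsnap --keyfile /root/tarsnap.key -tvf <archive-name>    # list an archive's contents
```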