This repository contains all OpenCraft playbooks and roles used to deploy many different types of servers.
We have several submodules defined in `.gitmodules`; they are all either third-party roles or roles with CircleCI builds that run separately on their own repositories.
If you need to update a submodule in this repository to point to a new commit hash, `cd` into it and `git checkout` that reference. You can then stage and commit those changes.
WARNING: `ansible-playbook` will silently skip tasks in the roles defined as submodules if the submodules haven't been checked out! Be sure to run `git submodule update --init --recursive` to initialize the submodules.
Our own roles that are included in this repository as submodules should be on the latest `master`. You can update them with `git submodule update --remote`.
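As a minimal sketch of that workflow (the submodule path and commit hash below are placeholders, not real ones from this repository):

```sh
# Make sure all submodules are checked out first.
git submodule update --init --recursive

# Pin one submodule to a specific commit (path and hash are examples).
cd roles/some-third-party-role
git checkout abc1234
cd ../..

# Stage and commit the updated submodule pointer.
git add roles/some-third-party-role
git commit -m "Update some-third-party-role to abc1234"
```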
- Create a SoYouStart dedicated server (for production) or an OVH VM (for staging -- use the infrastructure map to find the correct region and account). This should be a vanilla Ubuntu image, 16.04 or greater.
- Add the host name of the new instance to the Ansible inventory in the file `hosts`:

  ```
  [load-balancer-v2-prod]
  load-balancer.host.name
  ```
- Update the infrastructure map.
- Bootstrap the server. This means running the `bootstrap-dedicated.yml` playbook:

  ```
  ansible-playbook bootstrap-dedicated.yml -u ubuntu -l load-balancer.host.name
  ```
- Go to https://deadmanssnitch.com/ and create two snitches, one for the backups and one for the sanity checks. Add them to a file `host_vars/<hostname>/vars.yml`:

  ```
  TARSNAP_BACKUP_SNITCH: https://nosnch.in/<backup-snitch>
  SANITY_CHECK_SNITCH: https://nosnch.in/<sanity-check-snitch>
  ```
- Generate a tarsnap master key and a subkey with only read and write permissions, and add it to the variables file:

  ```
  TARSNAP_KEY: |
    # START OF TARSNAP KEY FILE
    [...]
    # END OF TARSNAP KEY FILE
  ```
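A rough sketch of the key generation using the standard `tarsnap-keygen` and `tarsnap-keymgmt` tools (the file names, account email and machine name below are placeholders):

```sh
# Register the machine and create a master key (prompts for your tarsnap account password).
tarsnap-keygen --keyfile tarsnap-master.key --user you@example.com --machine load-balancer

# Derive a subkey limited to read (-r) and write (-w) permissions, with no delete permission.
tarsnap-keymgmt --outkeyfile tarsnap-rw.key -r -w tarsnap-master.key
```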
For a new server, run these commands:

```
mkvirtualenv ansible
pip install -r requirements.txt
ansible-playbook deploy-all.yml -u ubuntu -l load-balancer.host.name
```
Then you will need to instruct the new server to join the Consul cluster and add the new load balancer to the DNS pool in Gandi. See the load balancer documentation for details.
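The join procedure is covered by the load balancer documentation; as a rough sketch, assuming the playbook has already installed and started the Consul agent, and using a placeholder IP for an existing cluster member:

```sh
# On the new load balancer, join the existing Consul cluster.
consul join 10.0.0.10
```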
For an existing server, run these commands:

```
mkvirtualenv ansible
pip install -r requirements.txt
ansible-playbook -v deploy/playbooks/load-balancer-v2.yml -l load-balancer.host.name
```
ALWAYS use `--limit` and test each server after deployment, even if you have to make the same updates to all three. That's the whole point of having 3 highly available load balancers!
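For example (a sketch only; the hostname and the URL used for testing are placeholders, not the real production ones):

```sh
# Deploy to a single load balancer...
ansible-playbook -v deploy/playbooks/load-balancer-v2.yml --limit load-balancer-1.host.name

# ...and verify it still serves traffic before moving on to the next one.
curl -I https://load-balancer-1.host.name/
```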
- Create an OpenStack "vps-ssd-2" instance from a vanilla Ubuntu 16.04 (xenial) image.
- Add the IP address and host name of the new instance to the Ansible inventory in the file `hosts` (create it if necessary):

  ```
  [elasticsearch]
  elasticsearch.host.name
  ```
- Go to https://deadmanssnitch.com/ and create one snitch for the sanity checks. Add it to a file `host_vars/<hostname>/vars.yml`:

  ```
  SANITY_CHECK_SNITCH: https://nosnch.in/<sanity-check-snitch>
  ```
Run these commands:

```
mkvirtualenv ansible
pip install -r requirements.txt
ansible-playbook deploy-all.yml -u ubuntu -l elasticsearch
```
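To check the result (a sketch, assuming Elasticsearch listens on its default port 9200 on the new host):

```sh
# Query cluster health from the Elasticsearch host itself.
curl 'http://localhost:9200/_cluster/health?pretty'
```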
Deploys a high-security server that has full access to most of our tarsnapper backups. This server is used only to delete old backups.
```
ansible-playbook deploy/deploy-all.yml -u ubuntu --extra-vars @private-extra-vars.yml -i hosts --private-key /path/to/backup_pruner.key
```
- Get the master key
- Save the master key to `private.yml`
- Save the cache directory and file for the tarsnap key in the private YAML
- Add a new entry to `TARSNAPPER_JOBS` (an illustrative sketch of an entry follows the list below)
- Log in to the instance
- Pruning scripts are named `tarsnap-{{ job.name }}.sh`
- The following operations are supported:
  a. List archives: `sudo tarsnap-{{ job.name }}.sh list`
  b. Expire archives: `sudo tarsnap-{{ job.name }}.sh expire`
  c. Expire archives (dry run): `tarsnap-{{ job.name }}.sh expire --dry-run`
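The exact schema of `TARSNAPPER_JOBS` is defined by the tarsnapper role, so check the role's defaults before copying anything; purely as a hypothetical illustration, an entry might look something like this:

```yaml
TARSNAPPER_JOBS:
  # Hypothetical example entry -- the real field names come from the role's defaults.
  - name: mysql
    target: "mysql-backup-$date"
    deltas: "1d 7d 30d"
```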
This Ansible repository deploys a MySQL server on an OpenStack provider.
- Create a volume for the data, as big as you need. It should be empty.
- Create a security group for database servers (an optional CLI sketch for these two steps follows this list). It should have the following rules:
  - Incoming traffic is allowed only on port 22.
  - Outgoing traffic should be allowed.
  - When you provision servers that use the database, allow them by adding rules for their IPs.
- Create an instance using an appropriate image, such as Ubuntu 16.04. We suggest you use a vanilla image from the Ubuntu site. The image can be small; MySQL data will be stored on the separate data volume.
- Attach the data volume that you created earlier to the VM (go to `Volumes` -> data volume -> expand the dropdown next to "Edit Volume" -> `Edit Attachments`).
- Go to the instance's info page and note its public key, ssh to the instance and check whether the public key matches, then save the public key to your ssh config.
- Create `private-extra-vars.yml`; you'll put all the generated variables there.
- Generate a root password and put it in the `private.yaml` for your host, under `mysql_root_password`.
- Create a key, but store it somewhere safe (for example, in a KeePassX database); this key shouldn't end up on the MySQL server.
- Generate a tarsnap read-write key from the master key (see `tarsnap-keymgmt`), and save this key in `MYSQL_TARSNAP_KEY`. Note: this key won't be able to delete the backups.
- Go to Dead Man's Snitch at https://deadmanssnitch.com/ and create a snitch. Save it under `TARSNAP_BACKUP_SNITCH` in the `private.yml`.
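As an optional alternative to the dashboard steps above, the volume and security group can also be created with the `openstack` CLI. A rough sketch (the names, size and server name are placeholders):

```sh
# Create an empty data volume (size in GB).
openstack volume create --size 100 mysql-data

# Create a security group that only allows incoming SSH.
openstack security group create mysql-db
openstack security group rule create --ingress --protocol tcp --dst-port 22 mysql-db

# Attach the data volume to the instance.
openstack server add volume mysql-server mysql-data
```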
Then run the playbook:

```
ansible-playbook deploy-all.yml -u ubuntu --extra-vars @private-extra-vars.yml
```
- Check that backups are saved to tarsnap.
- Check the contents of these backups.
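A sketch of how to verify this on the MySQL server (assuming the role has installed tarsnap and pointed it at its key and cache directory; the archive name is a placeholder):

```sh
# List the archives stored under this machine's key.
tarsnap --list-archives | sort

# List the contents of one archive without extracting it.
tarsnap -tv -f mysql-backup-20240101
```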
- `pip install -r test-requirements.txt`
- `molecule test -s vault`