This repository contains a set of Ansible scripts to deploy Kubernetes and the Labs Workbench onto an OpenStack cluster.
If you don't have access to an OpenStack cluster, there are plenty of other ways to run Kubernetes!
```
docker build -t ndslabs/deploy-tools .
docker run -it -v /home/core/private:/root/SAVED_AND_SENSITIVE_VOLUME ndslabs/deploy-tools bash
```
NOTE: Remember to map a volume to /root/SAVED_AND_SENSITIVE_VOLUME containing your *-openrc.sh file. This directory is where all Ansible output is stored, including SSH private keys, generated TLS certificates, and Ansible's own fact cache. If you forget to map this directory, its contents WILL BE LOST FOREVER.
The first thing you need to do is source the openrc file of the OpenStack project you wish to deploy to.

NOTE: this file can be retrieved for any OpenStack project you can access by following the instructions here.

Assuming you've mounted your openrc.sh file with -v, as recommended above:

```
source /root/SAVED_AND_SENSITIVE_VOLUME/OpenStackProjectName-openrc.sh
```
Some parameters, such as the available flavors (sizes) and images for the deployed OpenStack instances, are properties of the particular installation of OpenStack or of the projects to which you are allowed to deploy. We refer to each installation of OpenStack as a "site", and store site-specific variables under /root/inventory/site_vars, where each file is named after the site it represents.
To set up a new site, you can simply copy an existing site and change the names of the images and flavors accordingly.
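For example, assuming a hypothetical existing site file named nebula.yml (your repository's actual file names may differ):

```
# Copy an existing site's variables as a starting point for the new site
cp /root/inventory/site_vars/nebula.yml /root/inventory/site_vars/my-site.yml

# Update the image names and flavors to match the new site
vi /root/inventory/site_vars/my-site.yml
```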
Download the newest stable cloud image of CoreOS for OpenStack and import it into your project.
Currently supported CoreOS version: 1235.6
NOTE: While newer versions of CoreOS should work, later versions may not be supported immediately because CoreOS and Docker versions are tied together.
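If you prefer the command line over the Horizon dashboard, the image can be imported with the standard OpenStack client. The image file name and the image name below are examples only; substitute the file you actually downloaded:

```
# Import the downloaded CoreOS image into your OpenStack project
openstack image create \
  --disk-format qcow2 \
  --container-format bare \
  --file coreos_production_openstack_image.img \
  "CoreOS 1235.6"
```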
Set the site_vars named flavor_small / flavor_medium / flavor_large to flavors that already exist in your OpenStack project, or create new flavors that match them.
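If you need to create new flavors (this typically requires administrative rights on the OpenStack site), the standard client can do so. The names and sizes below are only an illustration:

```
# Create example flavors; adjust vCPUs / RAM (MB) / disk (GB) to your needs
openstack flavor create --vcpus 1 --ram 2048 --disk 20 my-flavor-small
openstack flavor create --vcpus 2 --ram 4096 --disk 40 my-flavor-medium
openstack flavor create --vcpus 4 --ram 8192 --disk 80 my-flavor-large
```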
Make a copy of the existing example or minimal inventory located in /root/inventory and edit it to your liking:
```
cp inventory/minimal-ncsa inventory/my-cluster
vi inventory/my-cluster
```
- The top section pertains to Cluster Variables - here you can override any group_vars (NOTE: site_vars cannot yet be overridden)
- The middle section defines Servers, where we choose the names and quantities for each type of node
- The last section defines Groups, which collects the node types declared above into several larger groups (a rough sketch of a full inventory follows below)
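As a rough sketch only — the group names, host names, and counts here are hypothetical, and the bundled example inventory is the authoritative reference — an inventory might look something like this:

```
# Cluster Variables: override group_vars here
[all:vars]
cluster_name=my-cluster

# Servers: names and quantities for each type of node
[master]
my-cluster-master

[glfs]
my-cluster-glfs[1:2]

[compute]
my-cluster-node[1:4]

[loadbal]
my-cluster-loadbal

# Groups: collect the node types above into larger groups
[nodes:children]
glfs
compute
loadbal
```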
Some parameters differ based on the type of node being provisioned; Ansible calls these "groups". The group-specific values can be found under /root/inventory/group_vars, where each file is named after the group it represents.

NOTE: these groups can be nested / hierarchical.

NOTE: Raw images should be preferred at OpenStack sites where Ceph is used for the backing volumes, as this will significantly decrease the time needed to provision and start your cluster.
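For illustration, the directory contains one file per group; these file names are hypothetical and should match the groups defined in your inventory:

```
ls /root/inventory/group_vars
# all.yml  compute.yml  glfs.yml  loadbal.yml  master.yml
```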
After adjusting the inventory/site parameters to your liking, run the three Ansible playbooks to bring up a Labs Workbench cluster:
```
ansible-playbook -i inventory/my-cluster playbooks/openstack-provision.yml && \
ansible-playbook -i inventory/my-cluster playbooks/k8s-install.yml && \
ansible-playbook -i inventory/my-cluster playbooks/ndslabs-k8s-install.yml
```
The playbooks can be run one at a time, as above, or chained together to provision everything in a single command:
```
ansible-playbook -i inventory/my-cluster playbooks/openstack-provision.yml playbooks/k8s-install.yml playbooks/ndslabs-k8s-install.yml
```
Each playbook takes care of a small portion of the installation process:
- playbooks/openstack-provision.yml: Provision OpenStack volumes and instances with the chosen flavor / image
- playbooks/k8s-install.yml: Download and install Kubernetes binaries onto each node
- playbooks/ndslabs-k8s-install.yml: Deploy our Kubernetes YAML files to start up the services necessary to run Labs Workbench
After running all three playbooks, you should be left with a working cluster.
Labels recognized by the cluster are as follows:
- glfs server nodes must be labelled with ndslabs-role-glfs=true for the GLFS servers to run there
- compute nodes must be labelled with ndslabs-role-compute=true for the Workbench API server to schedule services there
- loadbal nodes must be labelled with ndslabs-role-loadbal=true so the cluster knows where a public IP is available and can run the ingress load balancer there
- lma nodes should be labelled with ndslabs-role-lma=true so the cluster knows where dedicated resources are set aside to run logging / monitoring / alerting
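Should any of these labels need to be applied or corrected by hand, kubectl can set them directly; the node name below is a placeholder:

```
# Apply a role label to a node (replace <node-name> with an actual node)
kubectl label nodes <node-name> ndslabs-role-compute=true

# Verify the labels on all nodes
kubectl get nodes --show-labels
```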