This produces a very simple demonstration shard of two VirtualBox VMs, one master and one worker, plus another VM for the proxy. The shard has a Kubernetes cluster. The networking is done by Flannel using its host-gw "backend" (which uses ordinary IP routing and connectivity).
Install git, Vagrant and VirtualBox. You will need at least Vagrant 1.8.4 and VirtualBox 5.0.24.
Check out this project:
git clone --recursive https://github.com/containercafe/containercafe.git
cd containercafe
This example proceeds through four steps, as follows.
- Provision target machines
- Prepare the installer machine
- Use the installer machine to deploy an OpenRadiant environment on target machines
- Exercise the OpenRadiant shard
For OpenRadiant in general, the first two steps can be done in either order.
This example shows just one way to provision machines for use with OpenRadiant. In general, you can use OpenRadiant with any provisioning technology you like. See the inventory contract for the key idea.
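For a flavor of what that contract expects, the sketch below shows a minimal Ansible inventory in INI format. The group and host names here are hypothetical; the actual required group names are defined by the inventory contract.
# hypothetical sketch of an inventory file -- see the inventory contract for the real group names
[shard_masters]
radiant2

[shard_workers]
radiant3

[proxies]
proxy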
Create target machines with Vagrant. The following creates three, named radiant2, radiant3 and proxy.
( cd examples/vagrant; vagrant up )
If you run into any issues, please follow the vagrant troubleshooting guide.
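A quick way to check the state of all the VMs defined in this example is Vagrant's standard status command:
( cd examples/vagrant; vagrant status )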
There are two easy ways to open a connection to a shell in a Vagrant/VirtualBox VM. One is provided by Vagrant:
( cd examples/vagrant; vagrant ssh radiant2 )
There's not much magic under that hat. You can do the equivalent directly using ssh as follows.
ssh -i ~/.vagrant.d/insecure_private_key [email protected]
In this example's VMs, the vagrant user is not authorized to use docker. If you ssh to one of these VMs and try to use raw Docker engine commands (i.e., without the settings explained below) then you will need to sudo them.
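For example, listing containers as the vagrant user on one of these VMs requires sudo:
sudo docker ps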
To create an installer machine, you can either (1) use another Vagrant VM we have defined to serve this purpose for you or (2) follow instructions to make your laptop or machine of choice into an installer machine. The installer machine has to be able to run Ansible, which runs only on Linux and MacOS.
( cd examples/vagrant; vagrant up installer-tiny )
That creates an installer VM that is specialized to this example. Connect to it using SSH as described above. In that VM you will find most of the contents of the OpenRadiant repository in ~/openradiant.
If you have created the installer VM using Create Installer VM, you can skip this section. To create the installer manually, see the general documentation of the installer machine. Following is one concrete realization of that story for this example.
If you are running Ubuntu on your installer, you may need to install the following python packages:
sudo apt-get install python-pip python-dev
Your installer machine must have the gtar command. On MacOS 10 this command can be added as follows.
brew install gnu-tar
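You can verify that the command is now available:
gtar --version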
In any case, install ansible and its netaddr module:
pip install -r requirements.txt
Ansible version 2.1.1 or later is recommended. See our Ansible documentation for more details. If you have a different version and run into issues, try the following:
pip install --upgrade ansible
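To see which Ansible version you currently have:
ansible --version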
Use Ansible on the installer machine to begin the process of deploying an environment. This will create the certificates and keys that are common throughout the environment, and deploy the API proxy.
If you use the installer VM created from Create Installer VM as the installer machine, use vagrant ssh to SSH into the installer VM and execute the Ansible scripts as the vagrant user, as described in FYI on SSH to Vagrant/VirtualBox VMs.
( cd ansible; \
ansible-playbook -v -i ../examples/envs/dev-vbox/radiant01.hosts env-basics.yml \
-e "envs=../examples/envs env_name=dev-vbox" )
Use Ansible on the installer machine to deploy an OpenRadiant shard on the target machines.
( cd ansible; \
ansible-playbook -v -i ../examples/envs/dev-vbox/radiant01.hosts shard.yml \
-e "envs=../examples/envs cluster_name=dev-vbox-radiant01 network_kind=flannel" )
The cd makes Ansible 2 find the ansible.cfg supplied by OpenRadiant.
The envs variable tells the playbook where to find the files that define the environment and shard.
The env_name variable in the first command tells the playbook which environment to deploy, and the cluster_name variable in the second command tells the playbook which shard to deploy.
The network_kind variable tells the playbook which networking plugin to deploy (Ansible technicalities make it impossible for the playbook to use a definition for this variable placed in the environment or shard variables file --- do not put one there, it will just cause confusion).
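For orientation, here is how those pieces fit together on disk. This layout is inferred from the paths in the commands above; file names other than radiant01.hosts are illustrative.
examples/envs/              # the envs variable points here
  dev-vbox/                 # one environment (env_name=dev-vbox)
    radiant01.hosts         # inventory for the shard dev-vbox-radiant01
    ...                     # environment and shard variables files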
See the general doc on deployment for the general story about deploying OpenRadiant.
An alternative to creating and then using the installer machine is another Vagrant/VirtualBox VM that we have prepared for you: an installer that deploys the shard as its last startup step.
( cd examples/vagrant; vagrant up active-installer-tiny )
Now you can exercise the shard through the API proxy. The proxy enables multi-tenancy, multi-sharding and other features.
For details on proxy setup and use, please see Proxy documentation.
OpenRadiant is designed for application devops personnel to go through the API proxy. Developers of OpenRadiant or its extensions who are interested in gaining a deeper understanding can bypass the proxy.
The following describes how to exercise the shard without using the API proxy and the features it provides.
Open an SSH connection to the master node:
ssh -i ~/.vagrant.d/insecure_private_key [email protected]
On the master you will find both the kubectl and docker (currently 1.11) commands on your $PATH.
You can create a Kubernetes "deployment" with a command like this:
kubectl run k1 --image=busybox sleep 864000
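You can then watch the deployment and its pod come up with standard kubectl commands:
kubectl get deployments
kubectl get pods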
NB: IGNORE THIS SECTION. It is obsolete, retained here only until we find a better place to save it until it becomes relevant again.
The Swarm master is configured for multi-tenant use. To prepare to use it as a tenant, do this on the master:
cd; mkdir -p radiant/configs/demo; cat > radiant/configs/demo/config.json <<EOF
{
    "HttpHeaders": {
        "X-Auth-TenantId": "demo"
    }
}
EOF
Then you will want these commands on the master:
export DOCKER_TLS_VERIFY=""
export DOCKER_CONFIG=~/radiant/configs/demo
export DOCKER_HOST=localhost:2375
To get a listing of this tenant's containers, issue the following command on the master:
docker ps
At first, there will be none. So create one, like this:
docker run --name s1 -d -m 128m busybox sleep 864000
Then, get a list of containers with docker ps. You can inspect the container's network configuration from inside, like this:
docker exec s1 ifconfig
NB: IGNORE THIS SECTION. It is obsolete, retained here only until we find a better place to save it until it becomes relevant again.
The containers in Kubernetes will be invisible to Swarm because they lack the label identifying your tenant. To make containers visible to Swarm, create a Kubernetes pod as follows. First, create a YAML file prescribing the pod:
cat > sleepy-pod.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: sleepy-pod
  annotations:
    containers-annotations.alpha.kubernetes.io: "{ \"com.ibm.radiant.tenant.0\": \"demo\", \"OriginalName\": \"sleeper\" }"
spec:
  containers:
  - name: sleeper
    image: busybox
    args:
    - sleep
    - "864000"
EOF
Then create the pod:
kubectl create -f sleepy-pod.yaml
Then you can watch for it to come up, with
kubectl get pod
Once the pod is created, you can see it with docker ps. You can inspect its network configuration from inside, like this:
kubectl exec sleepy-pod ifconfig
In a similar vein, you can ping one of these containers from the other. For example (in which the Swarm container's IP address is 172.17.0.5):
kubectl exec sleepy-pod ping -- -c 2 172.17.0.5
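One way to look up that IP address is to inspect the Swarm container (s1 from above) with Docker's standard format template:
docker inspect -f '{{ .NetworkSettings.IPAddress }}' s1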
In your local browser, enter the following URL:
master_ip:haproxy_GUI_port/haproxy_stats
Example: http://192.168.10.2:9000/haproxy_stats (port 9000 is statically assigned). When prompted for the username:password, use vagrant:radiantHA.
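The same stats page can be fetched from the command line with curl, using the credentials above:
curl -u vagrant:radiantHA http://192.168.10.2:9000/haproxy_stats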
In your local browser, visit http://192.168.10.2:5050/