installation: add local minikube installation
* Adds local installation documentation and amends OpenStack
  installation. (closes reanahub#4)

* Adds `minikube` specific installation option to REANA configuration
  script. (addresses reanahub#13)

Signed-off-by: Diego Rodriguez <[email protected]>
Diego Rodriguez committed Mar 24, 2017
1 parent 245662e commit acd7f69
Showing 7 changed files with 306 additions and 80 deletions.
3 changes: 3 additions & 0 deletions MANIFEST.in
@@ -21,9 +21,12 @@
include COPYING
include *.rst
include *.sh
include *.txt
include pytest.ini
prune docs/_build
recursive-include docs *.py
recursive-include docs *.png
recursive-include docs *.rst
recursive-include docs *.txt
recursive-include scripts reana
recursive-include tests *.py
80 changes: 0 additions & 80 deletions docs/gettingstarted.rst
@@ -1,86 +1,6 @@
Getting started
===============

Create a Kubernetes cluster
---------------------------

In order to create a Kubernetes cluster on the OpenStack infrastructure,
we need the ``magnum`` command, which we can get by running the Magnum
Docker client:

.. code-block:: console

   $ docker login gitlab-registry.cern.ch
   $ sudo docker run -it gitlab-registry.cern.ch/cloud/ciadm /bin/bash

Alternatively, log into `lxplus-cloud`:

.. code-block:: console

   $ ssh lxplus-cloud.cern.ch

Once the command is available, we are ready to create the cluster:

.. code-block:: console

   $ magnum cluster-create --name reana-cluster --keypair-id reanakey \
       --cluster-template kubernetes --node-count 2

Lastly, we must load the cluster configuration into the Kubernetes
client:

.. code-block:: console

   $ $(magnum cluster-config reana-cluster)

At this point the `kubectl` command becomes available to manage our
brand new cluster.

For more information on Kubernetes/OpenStack, please see
`CERN Cloud Docs <http://clouddocs.web.cern.ch/clouddocs/containers/quickstart.html#create-a-cluster>`__.

Create the Kubernetes resources using manifest files
----------------------------------------------------

- Clone `reana-resources-k8s <https://github.com/reanahub/reana-resources-k8s>`__:

``git clone https://github.com/reanahub/reana-resources-k8s.git``

- Change Kubernetes service account token in `reana-job-controller` node configuration file:

  .. code-block:: console

     $ kubectl get secrets
     NAME                  TYPE                                 DATA   AGE
     default-token-XXXXX   kubernetes.io/service-account-token  3      13d
     $ sed 's/default-token-02p0z/default-token-XXXXX/' -i reana-resources-k8s/deployments/reana-system/job-controller.yaml

- Create REANA system instances:

``kubectl create -f reana-resources-k8s/deployments/reana-system``

- Yadage workers:

``kubectl create -f reana-resources-k8s/deployments/yadage-workers``

- Services:

``kubectl create -f reana-resources-k8s/services/``

- Secrets (you should provide your own CephFS secret):

``kubectl create -f reana-resources-k8s/secrets/``

Get the Workflow Controller IP address and port
-----------------------------------------------

.. code-block:: console

   $ kubectl describe services workflow-controller | grep NodePort
   NodePort:        http   32313/TCP
   $ kubectl get pods | grep workflow-controller | cut -d" " -f 1 | xargs kubectl describe pod | grep 'Node:'
   Node:            192.168.99.100,192.168.99.100

So the Workflow Controller component can be reached at ``192.168.99.100:32313``.

Launch workflows
----------------

1 change: 1 addition & 0 deletions docs/index.rst
@@ -5,6 +5,7 @@
   :maxdepth: 2

   introduction
   installation
   gettingstarted
   architecture
   components
188 changes: 188 additions & 0 deletions docs/installation.rst
@@ -0,0 +1,188 @@
Installation
============

Local Kubernetes installation
-----------------------------

Requirements
````````````
In order to create a local REANA cluster with Kubernetes as a backend,
the following requirements must be met:

- `VirtualBox`
- `VT-x/AMD-v` virtualization must be enabled in BIOS
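
On Linux hosts, whether the CPU exposes the needed virtualization
extensions can be checked from ``/proc/cpuinfo`` (a sketch; the path and
flag names assume a Linux host):

```shell
# Check for Intel VT-x (vmx) or AMD-V (svm) CPU flags (Linux only).
if grep -E -q 'vmx|svm' /proc/cpuinfo 2>/dev/null; then
  msg="hardware virtualization flags present"
else
  msg="no vmx/svm flag found: enable VT-x/AMD-V in the BIOS"
fi
echo "$msg"
```

Note that the flags may be present while virtualization is still
disabled in the BIOS, so this is only a first-pass check.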

Quickstart
``````````

We start by creating a fresh Python virtual environment that will handle our `REANA` cluster instantiation.

.. code-block:: console

   $ mkvirtualenv reana

We update ``pip`` and ``setuptools``:

.. code-block:: console

   $ pip install --upgrade pip setuptools

Next, we install the ``reana`` package:

.. code-block:: console

   $ git clone https://github.com/reanahub/reana
   $ cd reana
   $ pip install -e .[all]

Now, to instantiate a local `minikube`-based REANA cluster, we run:

.. code-block:: console

   $ reana install-minikube

which will do the following:

.. literalinclude:: ../scripts/reana
:language: sh
:lines: 23-89

Once the script finishes, we should check ``kubectl get pods`` until the output looks as follows:

.. code-block:: console

   $ kubectl get pods
   job-controller-1390584237-0lt03        1/1   Running   0   1m
   message-broker-1410199975-7c5v7        1/1   Running   0   1m
   storage-admin                          1/1   Running   0   1m
   workflow-controller-2689978795-5kzgt   1/1   Running   0   1m
   workflow-monitor-1639319062-q7v8z      1/1   Running   0   1m
   yadage-alice-worker-1624764635-x8z71   1/1   Running   0   1m
   yadage-atlas-worker-2909073811-t9qv3   1/1   Running   0   1m
   yadage-cms-worker-209120003-js5cv      1/1   Running   0   1m
   yadage-lhcb-worker-4061719987-6gpbn    1/1   Running   0   1m
   zeromq-msg-proxy-1617754619-68p7v      1/1   Running   0   1m
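
Waiting for every pod to reach ``Running`` can also be scripted; a
minimal polling sketch, where ``fake_pods`` is a hypothetical stub
standing in for ``kubectl get pods --no-headers``:

```shell
#!/bin/sh
# Poll a command until every reported pod is Running, or give up
# after a fixed number of tries.
wait_until_ready () {
  tries=$1; shift
  i=0
  while [ "$i" -lt "$tries" ]; do
    if "$@" | grep -qv 'Running'; then
      sleep 1            # at least one pod is not Running yet; retry
    else
      echo ready; return 0
    fi
    i=$((i + 1))
  done
  echo timeout; return 1
}

# Hypothetical stub standing in for `kubectl get pods --no-headers`:
fake_pods () { printf 'job-controller Running\nstorage-admin Running\n'; }

wait_until_ready 5 fake_pods   # prints: ready
```

Against a real cluster one would pass the actual ``kubectl`` invocation
instead of the stub.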
Now we can check whether the services are actually working. First, we get the VM's IP:

.. code-block:: console

   $ export MINIKUBE_IP=$(minikube ip)

Second, we get the ports of the running services:

.. code-block:: console

   $ export WORKFLOW_CONTROLLER_PORT=$(kubectl describe service workflow-controller | grep 'NodePort:' | cut -f 4 | sed -e "s@/TCP@@")
   $ export WORKFLOW_MONITOR_PORT=$(kubectl describe service workflow-monitor | grep 'NodePort:' | cut -f 4 | sed -e "s@/TCP@@")
   $ export JOB_CONTROLLER_PORT=$(kubectl describe service job-controller | grep 'NodePort:' | cut -f 4 | sed -e "s@/TCP@@")
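
The ``cut``/``sed`` pipeline above assumes the ``NodePort:`` line is
tab-separated; its behaviour can be checked in isolation on a
hypothetical sample line (real ``kubectl describe`` output may use
different spacing):

```shell
# A hypothetical, tab-separated `kubectl describe service` line:
sample=$(printf 'NodePort:\thttp\t\t32313/TCP')
# Same extraction as in the export commands above:
port=$(printf '%s\n' "$sample" | grep 'NodePort:' | cut -f 4 | sed -e 's@/TCP@@')
echo "$port"   # prints: 32313
```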
And finally, we open the test URLs:

.. code-block:: console

   $ curl "http://$MINIKUBE_IP:$WORKFLOW_CONTROLLER_PORT/workflows"
   {
     "workflows": {}
   }
   $ open "http://$MINIKUBE_IP:$WORKFLOW_MONITOR_PORT/helloworld"
   $ curl "http://$MINIKUBE_IP:$JOB_CONTROLLER_PORT/jobs"
   {
     "jobs": {}
   }

CERN OpenStack installation
---------------------------

Kubernetes cluster creation
```````````````````````````
In order to create a Kubernetes cluster on the OpenStack infrastructure,
we need the ``magnum`` command, which we can get by running the Magnum
Docker client:

.. code-block:: console

   $ docker login gitlab-registry.cern.ch
   $ sudo docker run -it gitlab-registry.cern.ch/cloud/ciadm /bin/bash

Alternatively, log into `lxplus-cloud`:

.. code-block:: console

   $ ssh lxplus-cloud.cern.ch

Once the command is available, we are ready to create the cluster:

.. code-block:: console

   $ magnum cluster-create --name reana-cluster --keypair-id reanakey \
       --cluster-template kubernetes --node-count 2

Lastly, we must load the cluster configuration into the Kubernetes
client:

.. code-block:: console

   $ $(magnum cluster-config reana-cluster)

At this point the `kubectl` command becomes available to manage our
brand new cluster.

For more information on Kubernetes/OpenStack, please see
`CERN Cloud Docs <http://clouddocs.web.cern.ch/clouddocs/containers/quickstart.html#create-a-cluster>`__.

Create the Kubernetes resources using manifest files
````````````````````````````````````````````````````

First, we create a new Python virtual environment:

.. code-block:: console

   $ mkvirtualenv reana-resources-k8s

Then we install the ``reana-resources-k8s`` package:

.. code-block:: console

   $ git clone https://github.com/reanahub/reana-resources-k8s
   $ cd reana-resources-k8s
   $ pip install -e .[all]

Now we configure ``reana-resources-k8s/templates/config.yaml`` as best fits our deployment; once done, we generate the manifests and create the resources:

.. code-block:: console

   $ reana-resources-k8s build-manifests
   $ kubectl create -Rf configuration-manifests

The generated manifests can be cleaned up once they are no longer needed:

.. code-block:: console

   $ rm -Rf configuration-manifests

To see whether the cluster is working, we first gather all the nodes' IPs and ports:

.. code-block:: console

   $ export WORKFLOW_CONTROLLER_IP=$(kubectl describe pod --selector=app=workflow-controller | grep IP | cut -f 3)
   $ export WORKFLOW_CONTROLLER_PORT=$(kubectl describe service workflow-controller | grep 'NodePort:' | cut -f 4 | sed -e "s@/TCP@@")
   $ export WORKFLOW_MONITOR_IP=$(kubectl describe pod --selector=app=workflow-monitor | grep IP | cut -f 3)
   $ export WORKFLOW_MONITOR_PORT=$(kubectl describe service workflow-monitor | grep 'NodePort:' | cut -f 4 | sed -e "s@/TCP@@")
   $ export JOB_CONTROLLER_IP=$(kubectl describe pod --selector=app=job-controller | grep IP | cut -f 3)
   $ export JOB_CONTROLLER_PORT=$(kubectl describe service job-controller | grep 'NodePort:' | cut -f 4 | sed -e "s@/TCP@@")
And finally, we open the test URLs:

.. code-block:: console

   $ curl "http://$WORKFLOW_CONTROLLER_IP:$WORKFLOW_CONTROLLER_PORT/workflows"
   {
     "workflows": {}
   }
   $ open "http://$WORKFLOW_MONITOR_IP:$WORKFLOW_MONITOR_PORT/helloworld"
   $ curl "http://$JOB_CONTROLLER_IP:$JOB_CONTROLLER_PORT/jobs"
   {
     "jobs": {}
   }
1 change: 1 addition & 0 deletions requirements-k8s.txt
@@ -0,0 +1 @@
-e 'git+https://github.com/reanahub/reana-resources-k8s.git@master#egg=reana-resources-k8s'