Add libcloud backend
Initially this supports OpenStack but will grow to support other methods
of cloud-like deployment. Some assumptions are made regarding supporting
infrastructure (FIXME document these)

Signed-off-by: Zack Cerza <[email protected]>
zmc committed Feb 24, 2017
1 parent 45a3f4e commit 02681fd
Showing 13 changed files with 1,364 additions and 0 deletions.
1 change: 1 addition & 0 deletions docs/index.rst
@@ -9,6 +9,7 @@ Content Index
siteconfig.rst
detailed_test_config.rst
openstack_backend.rst
libcloud_backend.rst
downburst_vms.rst
INSTALL.rst
LAB_SETUP.rst
43 changes: 43 additions & 0 deletions docs/libcloud_backend.rst
@@ -0,0 +1,43 @@
.. _libcloud-backend:

LibCloud backend
================
This is an *experimental* provisioning backend that is intended to eventually support several `libcloud <https://libcloud.apache.org/>`_ drivers. At this time, only the OpenStack driver is supported.

Prerequisites
-------------
* An account with an OpenStack provider that supports Nova and Cinder
* A DNS server supporting `RFC 2136 <https://tools.ietf.org/html/rfc2136>`_. We use `bind <https://www.isc.org/downloads/bind/>`_ and `this ansible role <https://github.com/ceph/ceph-cm-ansible/blob/master/roles/nameserver/README.rst>`_ to help configure ours.
* An `nsupdate-web <https://github.com/zmc/nsupdate-web>`_ instance configured to update DNS records. We use `an ansible role <https://github.com/ceph/ceph-cm-ansible/blob/master/roles/nsupdate_web/README.rst>`_ for this as well.
* Configuration in `teuthology.yaml` for both this backend (see :ref:`libcloud_config`) and `nsupdate-web`
* You will also need to choose a maximum number of nodes to run at once, and create a record in your paddles database for each one, making sure to set `is_vm` to `True` (see the sketch below)
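
For the last item, here is a minimal sketch of pre-creating those paddles records over its REST API. The `/nodes/` endpoint, the field names other than `is_vm`, and the node-naming scheme are assumptions about a typical paddles deployment; adjust them to match yours::

    import requests

    paddles = 'http://paddles.front.sepia.ceph.com'  # your lock_server
    for i in range(1, 11):  # up to whatever maximum node count you chose
        name = 'target%03d.front.sepia.ceph.com' % i  # hypothetical naming scheme
        resp = requests.post(
            '%s/nodes/' % paddles,
            json=dict(
                name=name,
                machine_type='ovh',   # the provider key from teuthology.yaml
                is_vm=True,
                up=True,
                locked=False,
            ),
        )
        resp.raise_for_status()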

.. _libcloud_config:

Configuration
-------------
An example configuration using OVH as an OpenStack provider::

libcloud:
providers:
ovh: # This string is the 'machine type' value you will use when locking these nodes
driver: openstack
driver_args: # driver args are passed directly to the libcloud driver
username: 'my_ovh_username'
password: 'my_ovh_password'
ex_force_auth_url: 'https://auth.cloud.ovh.net/v2.0/tokens'
ex_force_auth_version: '2.0_password'
ex_tenant_name: 'my_tenant_name'
ex_force_service_region: 'my_region'
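
The `driver_args` are handed straight to the libcloud driver, so the configuration above is roughly equivalent to instantiating the driver by hand, as in this sketch (same placeholder credentials)::

    from libcloud.compute.providers import get_driver
    from libcloud.compute.types import Provider

    cls = get_driver(Provider.OPENSTACK)
    driver = cls(
        'my_ovh_username', 'my_ovh_password',
        ex_force_auth_url='https://auth.cloud.ovh.net/v2.0/tokens',
        ex_force_auth_version='2.0_password',
        ex_tenant_name='my_tenant_name',
        ex_force_service_region='my_region',
    )
    print(driver.list_nodes())  # quick check that authentication works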

Why nsupdate-web?
-----------------
While we could have supported directly calling `nsupdate <https://en.wikipedia.org/wiki/Nsupdate>`_, we chose not to. There are a few reasons for this:

* To avoid piling yet another feature onto teuthology when it can be left to a separate service
* To avoid teuthology users having to request, obtain and safeguard the private key that nsupdate requires to function
* Because we use one subdomain for all of Sepia's test nodes, we had to enable dynamic DNS for that whole zone (this is a limitation of bind). However, we do not want users to be able to push DNS updates for the entire zone. Instead, we gave nsupdate-web the ability to accept or reject requests based on whether the hostname matches a configurable regular expression. The private key itself is not shared with non-admin users.

Bugs
----
At this time, only OVH has been tested as a provider. PRs are welcome to support more!
4 changes: 4 additions & 0 deletions docs/siteconfig.rst
@@ -225,3 +225,7 @@ Here is a sample configuration with many of the options set and documented::
use_conserver: true
conserver_master: conserver.front.sepia.ceph.com
conserver_port: 3109

# Settings for [nsupdate-web](https://github.com/zmc/nsupdate-web)
# Used by the [libcloud](https://libcloud.apache.org/) backend
nsupdate_url: http://nsupdate.front.sepia.ceph.com/update
3 changes: 3 additions & 0 deletions setup.py
@@ -92,6 +92,9 @@
'libvirt-python',
'python-dateutil',
'manhole',
'apache-libcloud',
# For apache-libcloud when using python < 2.7.9
'backports.ssl_match_hostname',
],


1 change: 1 addition & 0 deletions teuthology/config.py
@@ -146,6 +146,7 @@ class TeuthologyConfig(YamlConfig):
'lab_domain': 'front.sepia.ceph.com',
'lock_server': 'http://paddles.front.sepia.ceph.com/',
'max_job_time': 259200, # 3 days
'nsupdate_url': 'http://nsupdate.front.sepia.ceph.com/update',
'results_server': 'http://paddles.front.sepia.ceph.com/',
'results_ui_server': 'http://pulpito.ceph.com/',
'results_sending_email': 'teuthology',
49 changes: 49 additions & 0 deletions teuthology/provision/cloud/__init__.py
@@ -0,0 +1,49 @@
import logging

from teuthology.config import config

import openstack

log = logging.getLogger(__name__)


supported_drivers = dict(
openstack=dict(
provider=openstack.OpenStackProvider,
provisioner=openstack.OpenStackProvisioner,
),
)


def get_types():
types = list()
if 'libcloud' in config and 'providers' in config.libcloud:
types = config.libcloud['providers'].keys()
return types


def get_provider_conf(node_type):
all_providers = config.libcloud['providers']
provider_conf = all_providers[node_type]
return provider_conf


def get_provider(node_type):
provider_conf = get_provider_conf(node_type)
driver = provider_conf['driver']
provider_cls = supported_drivers[driver]['provider']
return provider_cls(name=node_type, conf=provider_conf)


def get_provisioner(node_type, name, os_type, os_version, conf=None):
provider = get_provider(node_type)
provider_conf = get_provider_conf(node_type)
driver = provider_conf['driver']
provisioner_cls = supported_drivers[driver]['provisioner']
return provisioner_cls(
provider=provider,
name=name,
os_type=os_type,
os_version=os_version,
conf=conf,
)
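
For illustration, a typical use of this module would look like the sketch below; the node name and OS parameters are hypothetical, and 'ovh' must match a provider key configured under libcloud.providers in teuthology.yaml:

from teuthology.provision import cloud

provisioner = cloud.get_provisioner(
    node_type='ovh',
    name='target001.front.sepia.ceph.com',  # hypothetical node name
    os_type='ubuntu',
    os_version='16.04',
)
provisioner.create()   # logs the traceback and returns False on failure
provisioner.destroy()
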
87 changes: 87 additions & 0 deletions teuthology/provision/cloud/base.py
@@ -0,0 +1,87 @@
import logging
from copy import deepcopy

from libcloud.compute.providers import get_driver
from libcloud.compute.types import Provider as lc_Provider

import teuthology.orchestra.remote
import teuthology.provision.cloud
from teuthology.misc import canonicalize_hostname, decanonicalize_hostname


log = logging.getLogger(__name__)


class Provider(object):
_driver_posargs = list()

def __init__(self, name, conf):
self.name = name
self.conf = conf
self.driver_name = self.conf['driver']

@property
def driver(self):
driver_type = get_driver(
getattr(lc_Provider, self.driver_name.upper())
)
driver_args = deepcopy(self.conf['driver_args'])
driver = driver_type(
*[driver_args.pop(arg_name) for arg_name in self._driver_posargs],
**driver_args
)
return driver


class Provisioner(object):
def __init__(
self, provider, name, os_type=None, os_version=None,
conf=None, user='ubuntu',
):
if isinstance(provider, basestring):
provider = teuthology.provision.cloud.get_provider(provider)
self.provider = provider
self.name = decanonicalize_hostname(name)
self.hostname = canonicalize_hostname(name, user=None)
self.os_type = os_type
self.os_version = os_version
self.user = user

def create(self):
try:
return self._create()
except Exception:
log.exception("Failed to create %s", self.name)
return False

def _create(self):
pass

def destroy(self):
try:
return self._destroy()
except Exception:
log.exception("Failed to destroy %s", self.name)
return False

def _destroy(self):
pass

@property
def remote(self):
if not hasattr(self, '_remote'):
self._remote = teuthology.orchestra.remote.Remote(
"%s@%s" % (self.user, self.name),
)
return self._remote

def __repr__(self):
template = "%s(provider='%s', name='%s', os_type='%s', " \
"os_version='%s')"
return template % (
self.__class__.__name__,
self.provider.name,
self.name,
self.os_type,
self.os_version,
)
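
To show how these bases fit together, here is a hypothetical minimal backend built on them; this is only a sketch, not the bundled OpenStack implementation:

from teuthology.provision.cloud import base


class MyProvider(base.Provider):
    # Keys listed here are popped from driver_args and passed positionally
    # to the libcloud driver constructor; everything else is passed as
    # keyword arguments (see Provider.driver above).
    _driver_posargs = ['username', 'password']


class MyProvisioner(base.Provisioner):
    def _create(self):
        # Boot a node via self.provider.driver (a libcloud NodeDriver),
        # then publish its address in DNS so self.remote can reach it.
        return True

    def _destroy(self):
        # Look up the node named self.name via the libcloud driver and
        # destroy it.
        return True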
(The remaining changed files in this commit are not shown here.)