rpc-ceph is no longer being developed or tested. Please use the upstream ceph-ansible playbooks for any future deployments.

`rpc-ceph` deploys Ceph as an RPC stand-alone platform in a uniform, managed, and tested way to ensure version consistency. By adding automated tests, `rpc-ceph` provides a way to manage tested versions of `ceph-ansible` used in RPC deployments.

`rpc-ceph` is a thin wrapper around the `ceph-ansible` project. It manages the versions of ansible and `ceph-ansible` by providing:

- RPC integration testing (MaaS/Logging and WIP-OpenStack).
- Tested and versioned `ceph-ansible` and Ceph releases.
- Default variables (still WIP) for base installs.
- Standardized deployments.
- Default playbooks for integration.
- Benchmarking tools using `fio`.
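For orientation, a minimal `fio` job file might look like the following. This is an illustrative sketch only, not the job definitions shipped with `rpc-ceph`; all option values are examples:

```ini
; Illustrative fio job only -- not the job file bundled with rpc-ceph.
[global]
ioengine=libaio      ; asynchronous I/O on Linux
direct=1             ; bypass the page cache
runtime=60
time_based=1

[randwrite-4k]
rw=randwrite         ; random writes
bs=4k                ; 4 KiB block size
size=1G
numjobs=4
```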
Deploying `rpc-ceph` uses `bootstrap-ansible.sh`, `ceph-ansible`, default `group_vars`, and a pre-created playbook.

NOTE: Anything that can be configured with `ceph-ansible` is configurable with `rpc-ceph`.
We do not recommend or use containers for `rpc-ceph` production deployments. Containers are set up and used only as part of the `run_tests.sh` (AIO) testing strategy. The default playbooks are not set up to build containers or configure any of the required container-specific roles.
The inventory should consist of the following:

- 1-3+ `mons` hosts (preferably 3 or more, and an uneven number of them).
- 1-3+ `mgrs` hosts (preferably 3 or more), ideally on the mon hosts (required since the Luminous release).
- 3+ `osds` hosts with storage drives.
- 1+ `repo_servers` hosts to serve as apt repo servers for Ceph version pinning.
- OPTIONAL: 1-3+ `rgws` hosts; these will be load balanced.
- OPTIONAL: an `rsyslog_all` host, pointing to an existing or new rsyslog logging server.
- OPTIONAL: `benchmark_hosts`, the host on which to run benchmarking (read `benchmark/README.md` for more).
- Configure the following in the inventory:
  - An `ansible_host` var for each host.
  - `devices` and `dedicated_devices` for osd hosts.
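A sketch of such an inventory follows; all host names, IP addresses, and group sizes are placeholders, not values shipped with `rpc-ceph`:

```ini
# Hypothetical inventory sketch -- adjust groups, names, and IPs to your environment.
[mons]
mon01 ansible_host=10.0.0.11
mon02 ansible_host=10.0.0.12
mon03 ansible_host=10.0.0.13

# mgrs placed on the mon hosts, as recommended above.
[mgrs:children]
mons

[osds]
osd01 ansible_host=10.0.0.21
osd02 ansible_host=10.0.0.22
osd03 ansible_host=10.0.0.23

[repo_servers]
repo01 ansible_host=10.0.0.31

[rgws]
rgw01 ansible_host=10.0.0.41
rgw02 ansible_host=10.0.0.42
```

The `devices` and `dedicated_devices` lists for the osd hosts can then live alongside the inventory, for example in per-host vars files such as `host_vars/osd01.yml`.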
- Configure a variables file including the following `ceph-ansible` vars:
  - `monitor_interface`
  - `public_network`
  - `cluster_network`
  - `osd_scenario`
  - `repo_server_interface`
  - Any other `ceph-ansible` settings you want to configure.
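A minimal vars file covering those settings might look like this; the interface names, networks, and scenario choice are illustrative examples, not defaults shipped by `rpc-ceph`:

```yaml
# Illustrative values only -- match these to your own environment.
monitor_interface: eth1
public_network: 172.29.236.0/22
cluster_network: 172.29.240.0/22
osd_scenario: non-collocated   # ceph-ansible also supports collocated and lvm
repo_server_interface: eth1
```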
- Set any override vars in `playbooks/group_vars/host_group/overrides.yml`. This allows:
  - Defaults to remain, but be overridden if required (`overrides.yml` takes precedence).
  - Git to ignore the `overrides.yml` file, so the repo can be updated without clearing out all deploy-specific vars.
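As an example, an override for the osds group could live at `playbooks/group_vars/osds/overrides.yml`. The variable shown is a standard `ceph-ansible` var chosen for illustration:

```yaml
# playbooks/group_vars/osds/overrides.yml -- hypothetical override example.
# Anything set here takes precedence over the shipped defaults and is git-ignored.
osd_objectstore: bluestore
```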
- Override any variables from `ceph.conf` using `ceph_conf_overrides_extra` or `ceph_conf_overrides_<group>_extra`:
  - This allows the default `group_vars` to remain in place, and means you do not have to respecify any vars you aren't setting.
  - The `ceph_conf_overrides_<group>_extra` var will override vars only for the hosts in that group. Currently supported groups:
    - `ceph_conf_overrides_rgw_extra`
    - `ceph_conf_overrides_mon_extra`
    - `ceph_conf_overrides_mgr_extra`
    - `ceph_conf_overrides_osd_extra`
  - The overrides will merge with the existing settings and take precedence, but will not squash them.
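A sketch of how the two override styles combine; the keys and values are illustrative `ceph.conf` settings, not recommendations:

```yaml
# Applied to ceph.conf on every host:
ceph_conf_overrides_extra:
  global:
    mon_osd_down_out_interval: 600

# Merged on top, but only for hosts in the mons group:
ceph_conf_overrides_mon_extra:
  mon:
    mon_allow_pool_delete: false
```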
- Run `bootstrap-ansible.sh` inside the scripts directory:

  ```shell
  ./scripts/bootstrap-ansible.sh
  ```

  This configures ansible at a pre-tested version, creates a `ceph-ansible-playbook` wrapper that points to the appropriate `ansible-playbook` binary, and clones the required role repositories:

  - `ceph-ansible`
  - `rsyslog_client`
  - `openstack-ansible-plugins` (`ceph-ansible` uses the config template plugin from here)
  - `haproxy_server`
  - `rsyslog_server`
- Run the `ceph-ansible` playbooks from the playbooks directory:

  ```shell
  ceph-ansible-playbook -i <link to your inventory file> playbooks/add-repo.yml -e @<link to your vars file>
  ceph-ansible-playbook -i <link to your inventory file> playbooks/deploy-ceph.yml -e @<link to your vars file>
  ```
- Run any additional playbooks from the playbooks directory:
  - `ceph-setup-logging.yml` will set up the rsyslog client. Ensure you have an appropriate rsyslog server, or another log shipping location, set up; refer to https://docs.openstack.org/openstack-ansible-rsyslog_client/latest/ for more details.
  - `ceph-keystone-rgw.yml` will set up the required keystone users and endpoints for Ceph.
  - `ceph-rgw-haproxy.yml` will set up the HAProxy VIP for the Ceph RADOS Gateway. Ensure you specify a `haproxy_all` group in your inventory with the HAProxy hosts.
  - `ceph-rsyslog-server.yml` will set up an rsyslog server on the `rsyslog_all` hosts specified. NB: if there is already an existing rsyslog server that you are connecting into, you should not run this.
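These optional playbooks are invoked with the same wrapper and arguments as the main deploy, for example:

```
ceph-ansible-playbook -i <link to your inventory file> playbooks/ceph-setup-logging.yml -e @<link to your vars file>
ceph-ansible-playbook -i <link to your inventory file> playbooks/ceph-keystone-rgw.yml -e @<link to your vars file>
```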
Your deployment should be successful.

NOTE: If there are any errors, troubleshoot as you would a standard `ceph-ansible` deployment.
- RAX Public Cloud general-8 (or equivalent) using:
  - Ubuntu 16.04 (xenial)
  - CentOS 7
For MaaS integration, perform the following export commands; otherwise just use `./run_tests.sh` to build the AIO.

```shell
export PUBCLOUD_USERNAME=<username>
export PUBCLOUD_API_KEY=<api_key>
```

To run an AIO scenario for Ceph, run the following export on a general1-8 or perf2-15 flavor instance, unless otherwise noted:

```shell
export RE_JOB_SCENARIO="name of scenario from below"
```
build_releasenotes: This will build the project release notes using sphinx and place them in the directory `rpc-ceph/release/build/`.
functional: This is a base AIO for Ceph that includes MaaS testing and runs on each commit, with the following components:

- 2 x rgw hosts
- 3 x osd hosts
- 3 x mon hosts
- 3 x mgr hosts
- 1 x rsyslog server
- HAProxy configured on localhost

This job does not run the benchmarking playbooks.
bluestore: This is the same as the functional job but runs using bluestore, and 3 collocated OSD devices per osd host.
rpco_newton: An RPC-O newton integration test that deploys an RPC-O AIO, integrates it with Ceph, and then runs Tempest tests. This runs daily, as it takes a long time to build.
- RPC-O AIO @ newton
- Keystone
- Glance
- Cinder
- Nova
- Neutron
- Tempest
- 2 x rgw hosts
- 3 x osd hosts
- 3 x mon hosts
- 3 x mgr hosts
NB: This requires a perf2-15 instance.
rpco_pike: This is the same as the rpco_newton job but built against the pike branch of RPC-O.

rpco_queens: This is the same as the rpco_newton and rpco_pike jobs but built against the queens branch of RPC-O.

rpco_rocky: This is the same as the rpco_newton and rpco_pike jobs but built against the rocky branch of RPC-O.
keystone_rgw: A basic keystone integration test that runs on each commit, using the swift client to ensure Keystone integration is working.
- Keystone deployed from OpenStack-Ansible role
- 2 x rgw hosts
- 3 x osd hosts
- 3 x mon hosts
- 3 x mgr hosts
Additionally this test runs the FIO and RGW benchmarking playbooks to ensure they work, but does not run the MaaS playbooks.
- Trusty deployments: due to changes in losetup, Trusty will not work with the current method.
- Different Ceph versions.
- Upgrade testing.