QuickStart
This guide provides the directory structure and files for configuring the pillar data manually. A manual configuration replaces the discovery and configuration stages (that is, Stage 1 and Stage 2).
Note that this directory structure and these file contents are not absolute. Salt only needs the relevant key/value data.
You must have the following:
- one master node
- four nodes with additional unpartitioned disks
- any additional nodes
- Salt installed (one master; all nodes, including the master, are minions with accepted keys)
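A quick way to confirm the Salt prerequisite, assuming a standard salt-master/salt-minion installation, is to run the following from the master:
salt-key -L        # all minion keys should appear under Accepted Keys
salt '*' test.ping # every minion should respond with True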
Installation: clone the repository from GitHub or install the RPM (details to be added).
Both Salt and Ceph use the term cluster to represent a collection of servers. With this configuration, only one Salt cluster exists and only one Ceph cluster is supported. The intent is to support multiple Ceph clusters soon.
Practically speaking, the term cluster here refers to the Ceph cluster. The Ceph cluster may be a subset of the servers in a Salt cluster.
The pillar and salt files are located in the ceph subdirectory of their respective trees, as shown below:
/srv/pillar
├── ceph
│   ├── init.sls
│   └── master_minion.sls
└── top.sls
/srv/salt
├── ceph
│   └── init.sls
└── top.sls
The top.sls files allow integration with other Salt automation. Each top.sls contains
base:
'*':
- ceph
The master_minion.sls defines the minion name of the salt-master. For example,
master_minion: master.ceph
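If the minion name of the master is not known, one way to find it, given that the master node also runs a minion as listed in the prerequisites, is:
salt-call grains.get id   # run on the master node; prints its minion ID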
The strategy with this Salt configuration is to keep all configurable data in the pillar, including the selection of optional or custom state files.
The pillar structure is divided into two directories: /srv/pillar/ceph/cluster and /srv/pillar/ceph/stack.
The files in /srv/pillar/ceph/cluster are named after the FQDN of each minion with an sls extension. For example
/srv/pillar/ceph/cluster
├── master.ceph.sls
├── data1.ceph.sls
├── data2.ceph.sls
├── data3.ceph.sls
└── data4.ceph.sls
The contents of each file are the cluster assignment and role assignment(s). For example, data1.ceph.sls might contain
cluster: ceph
roles:
- admin
- mon
- storage
Other roles are igw for iSCSI gateways, rgw for RADOS gateways, and mds for CephFS.
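As a hypothetical illustration, an additional node meant to serve only as a RADOS gateway would have a cluster file (for example, rgw1.ceph.sls, a made-up name) containing
cluster: ceph
roles:
- rgw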
The stack directory uses the external pillar functionality provided by the stack.py module. The configuration file is /srv/pillar/ceph/stack/stack.cfg. This file includes other YAML files. These files may be specified explicitly or rely on pillar data via Jinja to determine their filenames. All files are relative to the directory where stack.cfg resides.
For example, these two lines are from stack.cfg
global.yml
{{ pillar.get('cluster') }}/cluster.yml
The first is a file in the same directory named global.yml. The second evaluates to ceph/cluster.yml for each minion whose cluster value is ceph in /srv/pillar/ceph/cluster.
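Putting this together, a stack.cfg that pulls in all of the files shown in the tree below might look like the following sketch; the exact Jinja used for the per-role and per-minion includes is an assumption and may differ in a real setup:
global.yml
{{ pillar.get('cluster') }}/cluster.yml
{{ pillar.get('cluster') }}/ceph_conf.yml
{% for role in pillar.get('roles', []) %}
{{ pillar.get('cluster') }}/roles/{{ role }}.yml
{% endfor %}
{{ pillar.get('cluster') }}/minions/{{ minion_id }}.yml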
The remaining files are shown here.
/srv/pillar/ceph/stack
├── ceph
│   ├── cluster.yml
│   ├── ceph_conf.yml
│   ├── minions
│   │   ├── master.ceph.yml
│   │   ├── data1.ceph.yml
│   │   ├── data2.ceph.yml
│   │   ├── data3.ceph.yml
│   │   └── data4.ceph.yml
│   └── roles
│       ├── admin.yml
│       ├── mon.yml
│       └── storage.yml
├── global.yml
└── stack.cfg
The purpose and content of each file are explained below. Note that the ceph subdirectory above refers to the name of the cluster; that is, the Ceph cluster is named ceph by default.
global.yml
This file defines values for all clusters.
time_server: salt
time_service: ntp
ceph/cluster.yml
This file defines any global values for the entire cluster.
fsid: d719f027-9dbf-30f1-b1ba-a836e36ad0d7
osd_creation: default
pool_creation: default
public_network: 172.16.21.0/24
cluster_network: 172.16.22.0/24
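The fsid is an arbitrary UUID that identifies the Ceph cluster. One way to generate a fresh value, assuming uuidgen is installed, is:
uuidgen   # prints a random UUID suitable for the fsid value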
ceph/ceph_conf.yml
This file defines the mon_host and mon_initial_members necessary to generate the Ceph configuration file.
mon_host:
- 172.16.21.11
- 172.16.21.12
- 172.16.21.13
mon_initial_members:
- data1
- data2
- data3
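For reference, these values typically end up in a generated ceph.conf along the lines of the following fragment (an illustration only, not necessarily the exact output of this tooling):
[global]
fsid = d719f027-9dbf-30f1-b1ba-a836e36ad0d7
mon_initial_members = data1, data2, data3
mon_host = 172.16.21.11, 172.16.21.12, 172.16.21.13
public network = 172.16.21.0/24
cluster network = 172.16.22.0/24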
minions/*.ceph.yml
These files define specific values depending on the assigned roles. For a node serving as both a storage node and a monitor, the contents are
public_address: 172.16.21.11
storage:
data+journals: []
osds:
- /dev/vdb
- /dev/vdc
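To identify which unpartitioned disks to list under osds, one option is to inspect the block devices on each node from the master, for example:
salt 'data1.ceph' cmd.run lsblk   # devices without child partitions are candidates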
roles/*.yml
These files define the keyrings for each role. The admin.yml would contain
keyring:
- admin: AQCuhptXffTYHRAAt2tKEIaZdfYc99iVaCGdyA==
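The keyring secret is a base64 string. One way to generate a new one, assuming ceph-authtool (from the ceph-common package) is available, is:
ceph-authtool --gen-print-key   # prints a new base64 secret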
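Once all of the files are in place, the pillar can be refreshed and inspected from the master to confirm that the data renders as expected:
salt '*' saltutil.refresh_pillar
salt 'data1.ceph' pillar.items   # should show the cluster, roles and values merged from the stack files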