Storage pool type: rbd
Ceph is a distributed object store and file system designed to provide excellent performance, reliability and scalability. RADOS block devices (RBD) implement feature-rich, block-level storage, and you get the following advantages:
- thin provisioning
- resizable volumes
- distributed and redundant (striped over multiple OSDs)
- full snapshot and clone capabilities
- self healing
- no single point of failure
- scalable to the exabyte level
- kernel and user space implementation available
Note: For smaller deployments, it is also possible to run Ceph services directly on your Proxmox VE nodes. Recent hardware has plenty of CPU power and RAM, so running storage services and VMs on the same node is possible.
This backend supports the common storage properties nodes, disable, content, and the following rbd specific properties:
- monhost: List of monitor daemon IPs. Optional, only needed if Ceph is not running on the PVE cluster.
- pool: Ceph pool name.
- username: RBD user ID. Optional, only needed if Ceph is not running on the PVE cluster.
- krbd: Enforce access to rados block devices through the krbd kernel module. Optional.
Note: Containers will use krbd independent of the option value.
Configuration example in /etc/pve/storage.cfg:

    rbd: ceph-external
            monhost 10.1.1.20 10.1.1.21 10.1.1.22
            pool ceph-external
            content images
            username admin
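
The same storage can also be registered from the command line with pvesm. A minimal sketch, assuming the example storage ID, pool name and monitor addresses used above:

    # Register the external Ceph pool as storage "ceph-external"
    # (example values taken from the configuration above).
    pvesm add rbd ceph-external \
        --monhost "10.1.1.20 10.1.1.21 10.1.1.22" \
        --pool ceph-external \
        --content images \
        --username admin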
Tip: You can use the rbd utility to do low-level management tasks.
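
For example, to list and inspect the images of the external pool defined above, something like the following sketch can be used; the storage ID, monitor addresses and keyring path are the example values from this section and follow the keyring naming convention described below:

    # List images in the external pool (adjust values to your setup)
    rbd ls -p ceph-external \
        -m 10.1.1.20,10.1.1.21,10.1.1.22 \
        --id admin \
        --keyring /etc/pve/priv/ceph/ceph-external.keyring

    # Show details of a single image; <image> is a placeholder
    rbd info ceph-external/<image> \
        -m 10.1.1.20,10.1.1.21,10.1.1.22 \
        --id admin \
        --keyring /etc/pve/priv/ceph/ceph-external.keyring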
If you use cephx authentication, you need to copy the keyfile from your external Ceph cluster to a Proxmox VE host.

Create the directory /etc/pve/priv/ceph with

    mkdir /etc/pve/priv/ceph

Then copy the keyring

    scp <cephserver>:/etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/<STORAGE_ID>.keyring

The keyring must be named to match your <STORAGE_ID>. Copying the keyring generally requires root privileges.

If Ceph is installed locally on the PVE cluster, this is done automatically by pveceph or in the GUI.
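
Once the keyring is in place, a quick sanity check with pvesm can confirm that the storage is reachable. A sketch, assuming the example storage ID ceph-external from above:

    # Show status (usage, availability) of the RBD storage
    pvesm status --storage ceph-external

    # List the images currently stored on it
    pvesm list ceph-external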