Logical Storages Configuration
Logical storages are managed by a user with the ADMIN role through the storage-administration-api.
Archival Storage currently supports three types of logical storages: common file system (FS), ZFS and Ceph. Configuration entries which are used the same way for all logical storages are:
- name
- priority - a higher number means a higher priority; an object is retrieved from one of the logical storages with the highest priority
- storageType - FS/ZFS/CEPH
- note
- writeOnly - When a logical storage is attached to the Archival Storage, it is set to writeOnly until all existing objects have been copied to it.
- reachable - Set automatically by the Archival Storage during every request (all logical storages are tested for reachability before the request starts). Logical storages which are not reachable are not used for object retrieval. If any attached storage is not reachable, the system acts as if it were in read-only mode.
- id
Local FS
Example configuration:
{
  "name": "local storage",
  "host": "localhost",
  "port": 0,
  "priority": 10,
  "storageType": "FS",
  "note": null,
  "config": "{\"rootDirPath\":\"d:\\\\testdata\"}",
  "writeOnly": false,
  "reachable": true,
  "id": "4fddaf00-43a9-485f-b81a-d3a4bcd6dd83"
}
- If the FS is local, its host should be set to localhost.
- The value of port is not taken into account in this case.
- config contains an entry with the path to the root folder used by the logical storage to store objects. The application must have read/write access rights to it. Backslashes have to be escaped on Windows.
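For illustration, a new logical storage could be attached by an ADMIN user with a request like the one below. The endpoint path, port and authentication here are assumptions made for the sketch; consult the storage-administration-api documentation of your deployment for the exact call:
# hypothetical sketch: attach a logical storage via the storage-administration-api
# (the URL and credentials below are assumptions, not the documented API)
curl -X POST "http://localhost:8080/api/administration/storage" \
  -H "Content-Type: application/json" \
  -u admin:password \
  -d '{"name":"local storage","host":"localhost","port":0,"priority":10,"storageType":"FS","config":"{\"rootDirPath\":\"/opt/data/test\"}"}'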
Remote FS
- on the remote machine, create the arcstorage user and add its public key located at src/main/resources/arcstorage.pub
- create the test folder on the remote machine and make the arcstorage user the owner of the test folder (a command sketch of these steps follows the sudoers example below)
- In order to retrieve the ZFS storage state, the arcstorage user on the remote server must have passwordless sudo permissions, so that the zfs list and zpool list commands may be executed over SSH by the Archival Storage.
- By default only root can execute ZFS commands. To allow other sudo users (the arcstorage user) to run ZFS commands, create a zfs file inside the /etc/sudoers.d directory with content like:
Cmnd_Alias C_ZFS = \
/sbin/zfs "", /sbin/zfs help *, \
/sbin/zfs get, /sbin/zfs get *, \
/sbin/zfs list, /sbin/zfs list *, \
/sbin/zpool "", /sbin/zpool help *, \
/sbin/zpool iostat, /sbin/zpool iostat *, \
/sbin/zpool list, /sbin/zpool list *, \
/sbin/zpool status, /sbin/zpool status *, \
/sbin/zpool upgrade, /sbin/zpool upgrade -v
ALL ALL = (root) NOPASSWD: C_ZFS
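A minimal sketch of the remote machine setup described above. The user name, key and test folder follow the example configuration; exact commands may differ per distribution:
# run on the remote server as root
useradd -m arcstorage
mkdir -p /home/arcstorage/.ssh
# append the key from src/main/resources/arcstorage.pub
cat arcstorage.pub >> /home/arcstorage/.ssh/authorized_keys
chmod 700 /home/arcstorage/.ssh
chmod 600 /home/arcstorage/.ssh/authorized_keys
chown -R arcstorage:arcstorage /home/arcstorage/.ssh
# test folder owned by the arcstorage user
mkdir -p /opt/data/test
chown arcstorage:arcstorage /opt/data/test
# verify that the sudoers rule works without a password prompt
su - arcstorage -c "sudo zfs list && sudo zpool list"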
Example configuration:
{
  "name": "remote storage",
  "host": "192.168.10.60",
  "port": 22,
  "priority": 1,
  "storageType": "FS",
  "note": null,
  "config": "{\"rootDirPath\":\"/opt/data/test\"}",
  "writeOnly": false,
  "reachable": true,
  "id": "01abac74-82f7-4afc-acfc-251f912c5af1"
}
- host is set to the IP address of the remote server.
- port is set to the SSH port of the logical storage.
- config is the same as for the local FS.
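To check that the remote storage will be reported as reachable, the SSH access can be verified manually with the values from the example above (assuming the private counterpart of arcstorage.pub is at hand):
# connect as the arcstorage user and list the root folder
ssh -i arcstorage -p 22 arcstorage@192.168.10.60 "ls -ld /opt/data/test"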
ZFS
- ZFS (local/remote) has the ZFS storage type and the pool name specified in the JSON config; the rest of the config is the same as for FS:
{
  "storageType": "ZFS",
  "config": "{\"rootDirPath\":\"/opt/data/test\",\"poolName\":\"arcpool\"}",
  ...
}
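If the pool and dataset do not exist yet, a minimal sketch matching the configuration above (the device name is a placeholder, adapt it to your hardware):
# create the pool referenced by poolName and mount a dataset at rootDirPath
zpool create arcpool /dev/sdb
zfs create -o mountpoint=/opt/data/test arcpool/test
chown arcstorage:arcstorage /opt/data/test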
Ceph
- install Ceph and RGW: http://docs.ceph.com/docs/master/start/
  - the installation is not trivial and may take a few hours
  - the installation requires an infrastructure of nodes and an administrator account to manage them
- create a user (see the Ceph RGW manual and the sketch after this list)
- In order to retrieve the Ceph storage state, the RGW server must run SSH and must allow the Archival Storage to connect (with the same key used in the case of remote FS). Moreover, the arcstorage user on the RGW server must have passwordless sudo permissions, so that the ceph df and ceph -s commands may be executed over SSH by the Archival Storage.
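The S3 user can be created with radosgw-admin; the uid and display name below are examples. The generated access_key and secret_key are the values for userKey and userSecret in the configuration:
# run on a node with Ceph admin credentials
radosgw-admin user create --uid=arcstorage --display-name="Archival Storage"
# the command prints the generated keys, e.g.:
#   "access_key": "SKGKKYQ50UU04XS4TA4O",
#   "secret_key": "TrLjA3jdlzKcvyN1vWnGqiLGDwCB90bNF71rwA5D"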
Example configuration:
{
  "name": "ceph",
  "host": "192.168.10.61",
  "port": 7480,
  "priority": 1,
  "storageType": "CEPH",
  "note": null,
  "config": "{\"adapterType\":\"S3\", \"userKey\":\"SKGKKYQ50UU04XS4TA4O\",\"userSecret\":\"TrLjA3jdlzKcvyN1vWnGqiLGDwCB90bNF71rwA5D\",\"sshPort\":2226}",
  "writeOnly": false,
  "reachable": true,
  "id": "8c3f62c0-398c-4605-8090-15eb4712a0e3"
}
- host is set to the IP address of the remote server.
- port is set to the Ceph RADOS gateway port.
- config contains:
  - adapterType - currently only S3 is supported
  - userKey and userSecret - credentials used to access the Ceph S3 cluster
  - sshPort - the SSH port on which the RGW server listens
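The credentials can be verified against the RADOS gateway with any S3 client, for example the AWS CLI (endpoint and keys taken from the example above):
# quick S3 connectivity check against the RGW endpoint
export AWS_ACCESS_KEY_ID=SKGKKYQ50UU04XS4TA4O
export AWS_SECRET_ACCESS_KEY=TrLjA3jdlzKcvyN1vWnGqiLGDwCB90bNF71rwA5D
aws --endpoint-url http://192.168.10.61:7480 s3 ls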