Rio CSI is a standard K8s CSI plugin that provides scalable, distributed persistent storage. Rio-csi uses LVM as its persistent storage backend and iSCSI to operate remote disks.
- Access Modes
- ReadWriteOnce
- ReadWriteMany
- ReadOnlyMany
- Volume modes
  - Filesystem mode
  - Block mode
- Supports fsTypes: ext4, btrfs, xfs
- Volume metrics
- Topology
- Snapshot
- Clone
  - from snapshot
  - from pvc
- Set IOLimit
- Volume Resize
- Thin Provision
- Backup/Restore
- Ephemeral inline volume
- All nodes must have the lvm2 utilities installed and the dm-snapshot kernel module loaded to use the snapshot feature
yum install lvm2 -y # centos
sudo apt-get -y install lvm2 # ubuntu
lsmod # list the modules currently loaded by the kernel
cat /proc/modules # an alternative way to list them
# use modinfo to show detailed information about a module
modinfo ext4
# modprobe adds or removes the specified module from the kernel
modprobe btrfs # load the btrfs module
modprobe -r btrfs # unload the btrfs module
$ modprobe dm-snapshot # or: modprobe dm_snapshot
$ lsmod | grep dm
dm_snapshot 40960 0
dm_bufio 28672 1 dm_snapshot
rdma_cm 61440 1 ib_iser
iw_cm 45056 1 rdma_cm
ib_cm 53248 1 rdma_cm
ib_core 225280 4 rdma_cm,iw_cm,ib_iser,ib_cm
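To keep dm-snapshot loaded across reboots, one common approach on systemd-based systems is a modules-load.d entry (the exact mechanism is distribution-dependent):
# load dm_snapshot automatically at boot
echo dm_snapshot | sudo tee /etc/modules-load.d/dm_snapshot.conf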
- Create an LVM volume group on each node; rio-csi will use this volume group to manage storage
# find available devices and partitions
lsblk
# find available physical volumes
lvmdiskscan
# create pv
pvcreate /dev/test
# create volume group
vgcreate riovg /dev/test
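Verify that the physical volume and volume group were created:
# list physical volumes and volume groups
pvs
vgs
# show details of the new volume group
vgdisplay riovg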
- Install open-iscsi on every node and make sure no two nodes share the same InitiatorName
# install iscsi
apt -y install open-iscsi
# set a unique InitiatorName (IQN) for this node
vi /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2018-05.world.srv:www.initiator01
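After editing the InitiatorName, restart the iSCSI services so the new IQN takes effect (service names below assume Ubuntu/Debian):
sudo systemctl restart iscsid open-iscsi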
- Install CSI Custom Resources and CSI Operator
kubectl apply -f operator.yaml
The default driver namespace is riocsi; all of the RBAC resources and the Operator live in that namespace.
Verify that the rio-csi driver and operator are installed and running with the command below:
$ kubectl get pods -n riocsi
NAME READY STATUS RESTARTS AGE
csi-driver-node-ffgvv 2/2 Running 0 19s
csi-driver-node-hztjq 2/2 Running 0 19s
csi-driver-node-wsrtf 2/2 Running 0 19s
csi-provisioner-65b68bbcc8-b46rj 3/3 Running 0 19s
You should see the csi-driver-node DaemonSet running a pod on every node in the cluster, plus a single csi-provisioner pod.
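Optionally, check that the driver registered a CSIDriver object with the cluster (assuming operator.yaml creates one under the provisioner name rio-csi):
kubectl get csidriver rio-csi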
- Enable the K8s snapshot feature
As snapshot is a beta feature of K8s, the VolumeSnapshotDataSource feature gate must be enabled to use it, e.g. on the API server:
kube-apiserver --feature-gates=VolumeSnapshotDataSource=true
- Install Snapshot Controller
A K8s cluster has no snapshot controller by default. Apply snapshotoperator.yaml to install the snapshot controller and the snapshot CRDs.
kubectl apply -f snapshotoperator.yaml
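Confirm the snapshot CRDs were registered:
kubectl get crd | grep snapshot.storage.k8s.io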
- Create a Storage Class to use this driver
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rio-sc
parameters:
  storage: "lvm"
  volgroup: "riovg"
provisioner: rio-csi
The volgroup parameter selects which LVM volume group on the nodes (here riovg, created earlier) backs the provisioned volumes.
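To use the volume-resize feature, expansion usually has to be enabled on the StorageClass as well; a minimal sketch, assuming rio-csi implements the standard CSI expansion flow:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rio-sc-expand
parameters:
  storage: "lvm"
  volgroup: "riovg"
provisioner: rio-csi
allowVolumeExpansion: true
A bound PVC can then be grown by editing spec.resources.requests.storage to a larger value.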
- Create a PVC that uses the above StorageClass
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
spec:
  storageClassName: rio-sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2G
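Once the claim is bound, a minimal sketch of a pod consuming it (pod, container, and mount path names are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data # the PVC is mounted here
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-pvc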
- Check that the Volume resource was created
kubectl get volume -n riocsi
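With the snapshot controller installed, the snapshot and clone features can be exercised roughly as follows; a sketch assuming the VolumeSnapshotClass driver matches the rio-csi provisioner name (on older clusters the API version may be snapshot.storage.k8s.io/v1beta1):
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: rio-snapclass
driver: rio-csi
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: test-snapshot
spec:
  volumeSnapshotClassName: rio-snapclass
  source:
    persistentVolumeClaimName: test-pvc
---
# clone a new PVC from the snapshot
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc-from-snap
spec:
  storageClassName: rio-sc
  dataSource:
    name: test-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2G
Cloning from an existing PVC works the same way, with a dataSource of kind PersistentVolumeClaim pointing at the source claim.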
To delete the CRDs from the cluster: // TODO
Undeploy the controller from the cluster: // TODO
// TODO(user): Add detailed information on how you would like others to contribute to this project
This project aims to follow the Kubernetes Operator pattern.
It uses Controllers, which provide a reconcile function responsible for synchronizing resources until the desired state is reached on the cluster.
Copyright 2022.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.