Please note: We take security and users' trust seriously. If you believe you have found a security issue in Discoblocks, please responsibly disclose by following the security policy.
This is the home of Discoblocks, an open-source declarative disk configuration system for Kubernetes that helps automate CRUD (Create, Read, Update, Delete) operations for cloud disk device resources attached to Kubernetes cluster nodes.
- Website: https://discoblocks.io
- Announcement & Forum: GitHub Discussions
- Documentation: GitHub Wiki
- Recording of a demo: Demo
Some call storage snorage because they believe it is boring... but we could have fun and dance with the block devices!
Discoblocks can be leveraged by a cloud-native data management platform (like Ondat.io) to manage the backend disks in the cloud.
When using such a data management platform to overcome the block disk device limitations imposed by hyperscalers, a new set of manual operational tasks needs to be considered, such as:
- provisioning block devices on the Kubernetes worker nodes
- partitioning, formatting, and mounting the block devices within a specific path (like /var/lib/vendor)
- capacity management and monitoring
- resizing and optimizing layouts related to capacity management
- decommissioning the devices in a secure way
Discoblocks also protects managed resources from accidental deletion:
- by default, every resource created by Discoblocks has a finalizer, so deletion is blocked until the corresponding DiskConfig has been deleted
- by default, every additional disk has an owner reference to the first disk ever created for the pod, so deletion of the first PVC terminates all the others
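For instance, you can inspect both safeguards on a managed PersistentVolumeClaim ([PVC_NAME] is a placeholder for one of your claims):
kubectl get pvc [PVC_NAME] -o jsonpath='{.metadata.finalizers}'
kubectl get pvc [PVC_NAME] -o jsonpath='{.metadata.ownerReferences}'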
At the current stage, Discoblocks is leveraging the available hyperscaler CSI (Container Storage Interface) within the Kubernetes cluster to:
- introduce a CRD (Custom Resource Definition) per workload with
  - StorageClass name
  - capacity
  - mount path within the Pod
  - nodeSelector
  - podSelector
  - access modes: access mode of the PersistentVolume
  - availability mode:
    - ReadWriteOnce: a new disk for each pod, including on pod restart
    - ReadWriteSame: all pods get the same volume on the same node
    - ReadWriteDaemon: DaemonSet pods always re-use the existing volume on the same node
  - upscale policy:
    - upscale trigger percentage
    - maximum capacity of disk
    - maximum number of disks per pod
    - extend capacity
    - cool down period after upscale
    - pause autoscaling
- provision the relevant disk device using the CSI driver (like EBS on AWS) when the workload deployment happens
- monitor the volume(s)
- automatically resize the volume based on the upscale policy (see the watch example below)
- create and mount new volumes in the running pod based on the maximum number of disks
Note: an application could use Discoblocks to get persistent storage, but this option would not be safe for production, as there would be no data management platform to address high availability, replication, fencing, encryption, ...
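To see the monitoring and resizing loop in action, you can watch the PersistentVolumeClaims that Discoblocks manages for a config while the autoscaler does its work (the discoblocks label is the one used in the FAQ below; [DISK_CONFIG_NAME] is a placeholder):
kubectl get pvc -l discoblocks=[DISK_CONFIG_NAME] -w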
Demo video: Discoblocks_pre_alpha.mp4
- Kubernetes cluster
- Kubernetes CLI
- Cert Manager
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml
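Before moving on, it is worth checking that cert-manager is up (a quick sanity check; the cert-manager namespace is the default one created by the manifest above):
kubectl get pods -n cert-manager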
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
allowVolumeExpansion: true
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
EOF
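You can verify the StorageClass was created; note that allowVolumeExpansion: true is what later allows Discoblocks to resize volumes automatically:
kubectl get storageclass ebs-sc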
kubectl apply -f https://github.com/ondat/discoblocks/releases/download/v[VERSION]/discoblocks-bundle.yaml
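Then wait for the operator to become ready (the deployment name and namespace match the log command in the FAQ below):
kubectl -n kube-system rollout status deploy/discoblocks-controller-manager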
cat <<EOF | kubectl apply -f -
apiVersion: discoblocks.ondat.io/v1
kind: DiskConfig
metadata:
  name: nginx
spec:
  storageClassName: ebs-sc
  capacity: 1Gi
  mountPointPattern: /usr/share/nginx/html/data
  nodeSelector:
    matchLabels:
      kubernetes.io/os: linux
  podSelector:
    app: nginx
  policy:
    upscaleTriggerPercentage: 80
    maximumCapacityOfDisk: 2Gi
    maximumNumberOfDisks: 3
    coolDown: 10m
EOF
kubectl create deployment --image=nginx nginx
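After the deployment is created, Discoblocks should provision and mount a volume for the pod; you can list the PersistentVolumeClaims created for this DiskConfig (nginx is the DiskConfig name from the manifest above):
kubectl get pvc -l discoblocks=nginx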
- How to find logs?
kubectl logs -n kube-system deploy/discoblocks-controller-manager
- Which PersistentVolumeClaims are created by Discoblocks?
kubectl get diskconfig [DISK_CONFIG_NAME] -o yaml | grep " message: "
kubectl get pvc -l discoblocks=[DISK_CONFIG_NAME]
- How to find the first volume of a PersistentVolumeClaim group?
kubectl get pvc -l 'discoblocks=[DISK_CONFIG_NAME],!discoblocks-parent'
- How to find additional volumes of a PersistentVolumeClaim group?
kubectl get pvc -l 'discoblocks=[DISK_CONFIG_NAME],discoblocks-parent=[PVC_NAME]'
- How to delete a group of PersistentVolumeClaims?
- You only have to delete the first volume; all other members of the group will be terminated by Kubernetes.
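For example, combining the label selectors from this FAQ, the following finds and deletes the first volume of a group in one step ([DISK_CONFIG_NAME] is a placeholder):
kubectl delete $(kubectl get pvc -l 'discoblocks=[DISK_CONFIG_NAME],!discoblocks-parent' -o name)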
- What Discoblocks related events happened on my Pod?
kubectl get event --field-selector involvedObject.name=[POD_NAME] -o wide | grep discoblocks.ondat.io
- Why are my deleted objects hanging in Terminating state?
- Discoblocks prevents accidental deletion with finalizers on almost every object it touches. DiskConfig object deletion removes all finalizers. To remove a finalizer by hand:
kubectl patch pvc [PVC_NAME] --type=json -p='[{"op": "remove", "path": "/metadata/finalizers/0"}]'
- How to ensure volume monitoring works in my Pod?
kubectl debug [POD_NAME] -q -c debug --image=nixery.dev/shell/curl -- sleep infinity && kubectl exec [POD_NAME] -c debug -- curl -s telnet://localhost:9100
- How to enable Prometheus integration?
kubectl apply -f https://raw.githubusercontent.com/ondat/discoblocks/v[VERSION]/config/prometheus/monitor.yaml
Prometheus integration is disabled by default; to enable it, apply the ServiceMonitor manifest above on your cluster.
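If the Prometheus Operator CRDs are installed, the new ServiceMonitor should show up (a quick check; kube-system is an assumption based on where the bundle installs the controller):
kubectl -n kube-system get servicemonitor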
Metrics provided by Discoblocks:
- Golang related metrics
- PersistentVolumeClaim operations by type: discoblocks_pvc_operation_counter, labeled with
  - resourceName
  - resourceNamespace
  - operation
  - size
- Errors by type: discoblocks_error_counter, labeled with
  - resourceType
  - resourceName
  - resourceNamespace
  - errorType
  - operation
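If you want to check the counters by hand, a minimal sketch is to query the Prometheus HTTP API directly (the counter name comes from the list above; the prometheus-operated service and the monitoring namespace are assumptions that depend on your Prometheus Operator setup):
kubectl -n monitoring port-forward svc/prometheus-operated 9090:9090 &
curl -s 'http://localhost:9090/api/v1/query?query=discoblocks_pvc_operation_counter'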
We love your input! We want to make contributing to this project as easy and transparent as possible. You can find the full guidelines here.
Please reach out with any questions or issues via our GitHub Discussions.
Discoblocks is under the Apache 2.0 license. See the LICENSE file for details.