Project Status: Alpha
The M3DB Operator is a project dedicated to setting up M3DB on Kubernetes. It aims to automate everyday tasks around managing M3DB, specifically:
- Creating M3DB clusters
- Destroying M3DB clusters
- Expanding clusters (adding instances)
- Shrinking clusters (removing instances)
- Replacing failed instances
The following instructions serve as a quickstart to get an M3DB cluster up and running in your Kubernetes cluster. This setup is not for production use, as there's no persistent storage. More information on production-grade clusters can be found in our docs.
The M3DB operator targets Kubernetes 1.10 and 1.11. We generally aim to target the latest two minor versions supported by GKE but welcome community contributions to support more versions!
The M3DB operator is intended for creating highly available clusters across distinct failure domains. For this reason we currently only support Kubernetes clusters with nodes in at least 3 zones, but support for zonal clusters is coming soon.
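To check which zones your nodes are in, you can print the standard zone label as a column; this is the same label the example manifest below uses for node affinity:

```
kubectl get nodes -L failure-domain.beta.kubernetes.io/zone
```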
When running on GKE, the user applying the manifests will need permission to create a cluster-admin `ClusterRoleBinding` during the installation. Create the following ClusterRoleBinding with the user name provided by gcloud:
```
kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=<[email protected]>
```
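If you're not sure which user name to pass, one option (assuming the `gcloud` CLI is configured for the account you're using) is to substitute it inline:

```
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user="$(gcloud config get-value account)"
```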
With Helm:
```
helm repo add m3db https://s3.amazonaws.com/m3-helm-charts-repository/stable
helm install m3db/m3db-operator --namespace m3db-operator
```
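You can verify the operator came up before proceeding (the exact pod name is generated and will differ):

```
kubectl get pods -n m3db-operator
```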
With kubectl (installs into the `default` namespace):
```
kubectl apply -f https://raw.githubusercontent.com/m3db/m3db-operator/v0.2.0/bundle.yaml
```
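As with the Helm install, it's worth confirming the operator pod reaches `Running`, this time in the `default` namespace:

```
kubectl get pods -n default
```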
Create a simple etcd cluster to store M3DB's topology:
```
kubectl apply -f https://raw.githubusercontent.com/m3db/m3db-operator/v0.2.0/example/etcd/etcd-basic.yaml
```
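M3DB cannot bootstrap until etcd is up, so wait for all three pods (named `etcd-0` through `etcd-2`, matching the endpoints in the manifest below) to become ready:

```
kubectl get pods etcd-0 etcd-1 etcd-2
```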
Apply a cluster manifest with your zones specified for the isolation groups:
```yaml
apiVersion: operator.m3db.io/v1alpha1
kind: M3DBCluster
metadata:
  name: simple-cluster
spec:
  image: quay.io/m3db/m3dbnode:latest
  replicationFactor: 3
  numberOfShards: 256
  # Default endpoints if using provided etcd manifests.
  etcdEndpoints:
  - http://etcd-0.etcd:2379
  - http://etcd-1.etcd:2379
  - http://etcd-2.etcd:2379
  isolationGroups:
  - name: group1
    numInstances: 1
    nodeAffinityTerms:
    - key: failure-domain.beta.kubernetes.io/zone
      values:
      - <zone-a>
  - name: group2
    numInstances: 1
    nodeAffinityTerms:
    - key: failure-domain.beta.kubernetes.io/zone
      values:
      - <zone-b>
  - name: group3
    numInstances: 1
    nodeAffinityTerms:
    - key: failure-domain.beta.kubernetes.io/zone
      values:
      - <zone-c>
  podIdentityConfig:
    sources: []
  namespaces:
  - name: metrics-10s:2d
    preset: 10s:2d
  dataDirVolumeClaimTemplate:
    metadata:
      name: m3db-data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 100Gi
```
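Once applied, you can watch the operator bring up the cluster's pods and inspect the cluster object itself. A quick sketch, assuming the CRD registers the plural resource name `m3dbclusters` (run `kubectl api-resources` to confirm):

```
kubectl get m3dbclusters
kubectl describe m3dbcluster simple-cluster
kubectl get pods -w
```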
To resize a cluster, specify the new number of instances you want in each zone, either by reapplying your manifest or by using `kubectl edit`. The operator will safely scale your cluster up or down.
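For example, to grow `group1` from one instance to two, you might change its `numInstances` and reapply; a sketch of the relevant excerpt (the rest of the manifest is unchanged):

```yaml
isolationGroups:
- name: group1
  numInstances: 2  # previously 1
```

Then reapply with `kubectl apply -f` on the file containing your manifest.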
Delete a cluster using `kubectl delete`. You will need to remove the etcd data as well, or wipe the data generated by the operator if you intend to reuse the etcd cluster for another M3DB cluster:
```
kubectl exec etcd-0 -- env ETCDCTL_API=3 etcdctl del --prefix ""
```
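To confirm the wipe, you can list any remaining keys; if the delete succeeded this prints nothing:

```
kubectl exec etcd-0 -- env ETCDCTL_API=3 etcdctl get "" --prefix --keys-only
```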
We welcome community contributions to the M3DB operator! Please see CONTRIBUTING.md for more information. Note that when creating a pull request, you will be asked to agree to the Uber CLA before we can accept your contribution.
This project is licensed under the Apache License; see the LICENSE file for details.