Support DM 2.0 in TiDB Operator #2868
Description
Integrate DM with TiDB Operator. Ideally, we would like TiDB Operator to manage DM as well.
Integration Method
Add a dm controller under the logic of the existing TiDB Operator.
Deployment
dm-master and dm-worker are deployed through statefulsets; dm-ctl runs locally through kubectl port-forward.
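As a rough illustration of the statefulset-based deployment, here is a minimal Go sketch (using client-go's apps/v1 types) of how the dm controller might build the dm-master statefulset. The object names, labels, and image tag are assumptions, not the final implementation; a dm-worker statefulset would look analogous.

```go
package member

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/utils/pointer"
)

// newDMMasterStatefulSet sketches the StatefulSet the dm controller could
// create for dm-master; names, labels, and the image are illustrative only.
func newDMMasterStatefulSet(clusterName, namespace string, replicas int32) *appsv1.StatefulSet {
	labels := map[string]string{
		"app.kubernetes.io/component": "dm-master",
		"app.kubernetes.io/instance":  clusterName,
	}
	return &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{
			Name:      clusterName + "-dm-master",
			Namespace: namespace,
			Labels:    labels,
		},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    pointer.Int32Ptr(replicas),
			ServiceName: clusterName + "-dm-master-peer",
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "dm-master",
						Image: "pingcap/dm:v2.0.0", // illustrative image tag
						Ports: []corev1.ContainerPort{
							{Name: "client", ContainerPort: 8261},
							{Name: "peer", ContainerPort: 8291},
						},
					}},
				},
			},
			UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
				Type: appsv1.RollingUpdateStatefulSetStrategyType,
			},
		},
	}
}
```

From a developer machine, dm-ctl can then reach the cluster with something like `kubectl port-forward svc/<cluster>-dm-master 8261:8261` (service name is a placeholder).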
Configuration
Deployment configurations should be saved through k8s configmaps and updated by the mechanism of statefulsets.
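A minimal sketch of storing the rendered dm-master configuration in a ConfigMap; the object name and data key are assumptions:

```go
package member

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newDMMasterConfigMap sketches how the rendered dm-master.toml could be
// stored; the statefulset mounts this ConfigMap as a volume, and a change in
// the config content rolls out through the statefulset update mechanism.
func newDMMasterConfigMap(clusterName, namespace, configContent string) *corev1.ConfigMap {
	return &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{
			Name:      clusterName + "-dm-master",
			Namespace: namespace,
		},
		Data: map[string]string{
			"config-file": configContent, // dm-master.toml rendered from the CR spec
		},
	}
}
```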
Service
dm-master should expose its service through the NodePort method.
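A minimal sketch of the NodePort service in front of the dm-master pods; the default client port 8261 is used here, and the name and selector labels are assumptions matching the statefulset sketch above:

```go
package member

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// newDMMasterService sketches a NodePort service so that clients outside the
// Kubernetes cluster can reach the dm-master API.
func newDMMasterService(clusterName, namespace string) *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:      clusterName + "-dm-master",
			Namespace: namespace,
		},
		Spec: corev1.ServiceSpec{
			Type: corev1.ServiceTypeNodePort,
			Selector: map[string]string{
				"app.kubernetes.io/component": "dm-master",
				"app.kubernetes.io/instance":  clusterName,
			},
			Ports: []corev1.ServicePort{{
				Name:       "dm-master",
				Port:       8261,
				TargetPort: intstr.FromInt(8261),
			}},
		},
	}
}
```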
Rolling Update
dm-master
With the help of the statefulset partition, roll the update in order from the largest pod ordinal to the smallest: check whether the pod is already in the latest state, and then check whether the pod is the leader. If it is the leader, trigger leader migration first, and then set the partition to the current pod's ordinal.
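A sketch of that partition-driven upgrade loop; the dm-master client interface (leader lookup and eviction) and the partition setter are placeholders for whatever the real controller exposes, not actual dm-master endpoints:

```go
package member

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
)

// dmMasterClient is an assumed interface for the dm-master API;
// the method names are illustrative only.
type dmMasterClient interface {
	GetLeaderName() (string, error)
	EvictLeader() error
}

// upgradeDMMaster walks the pods from the largest ordinal to the smallest.
// For each pod it checks whether the pod already runs the latest revision,
// then whether it currently holds the dm-master leadership; if so, the leader
// is migrated away first. Finally the statefulset partition is lowered to the
// pod's ordinal so that only this pod is recreated with the new spec.
// The pods slice is assumed to be indexed by ordinal.
func upgradeDMMaster(cli dmMasterClient, sts *appsv1.StatefulSet, pods []*corev1.Pod, setPartition func(int32) error) error {
	updateRevision := sts.Status.UpdateRevision
	for ordinal := int(*sts.Spec.Replicas) - 1; ordinal >= 0; ordinal-- {
		pod := pods[ordinal]
		if pod.Labels[appsv1.ControllerRevisionHashLabelKey] == updateRevision {
			continue // already upgraded, move on to the next (smaller) ordinal
		}
		leader, err := cli.GetLeaderName()
		if err != nil {
			return err
		}
		if leader == pod.Name {
			// Transfer leadership away before upgrading the leader pod and
			// requeue; the upgrade continues on the next reconcile.
			if err := cli.EvictLeader(); err != nil {
				return err
			}
			return fmt.Errorf("waiting for dm-master leader to move off %s", pod.Name)
		}
		// Lower the partition so the statefulset controller updates this pod.
		return setPartition(int32(ordinal))
	}
	return nil
}
```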
dm-worker
Update directly.
Scale
For a scale-in operation, delete the member info first, and then delete the pod from the cluster.
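A sketch of that order for dm-worker: deregister the member from dm-master first, then shrink the statefulset so the highest-ordinal pod is removed. The dm-master client interface here is an assumption:

```go
package member

import (
	appsv1 "k8s.io/api/apps/v1"
)

// dmMasterAPI is an assumed client for the dm-master API; the method name is
// illustrative, not the real interface.
type dmMasterAPI interface {
	DeleteWorkerMember(name string) error
}

// scaleInDMWorker removes one dm-worker: first delete its member info from
// the DM cluster, then delete the pod by decreasing the statefulset replicas.
func scaleInDMWorker(cli dmMasterAPI, sts *appsv1.StatefulSet, podName string) error {
	if err := cli.DeleteWorkerMember(podName); err != nil {
		return err
	}
	*sts.Spec.Replicas-- // the statefulset controller then deletes the pod
	return nil
}
```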
High Availability
Achieved by DM's architecture.
Failover
dm-master
Assuming that the cluster has 3 master pods: if a dm-master pod stays down for more than 5 minutes (configurable), the operator will add a new dm-master pod, so 4 pods will exist at the same time. After the failed dm-master node is restored, the operator will delete the newly started node, and the cluster will again have 3 master pods.
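A sketch of the replica calculation implied by this policy; the member record and field names are assumptions, not the real TiDB Operator types:

```go
package member

import "time"

// dmMasterMember is an assumed health record kept in the cluster status.
type dmMasterMember struct {
	Name           string
	Health         bool
	LastTransition time.Time // when the member was last seen healthy
}

// desiredDMMasterReplicas adds one extra replica for every member that has
// been unhealthy longer than the configurable failover deadline; once the
// failed member recovers, the extra pod is deleted and the count returns to
// baseReplicas (3 in the example above).
func desiredDMMasterReplicas(baseReplicas int32, members []dmMasterMember, deadline time.Duration, now time.Time) int32 {
	failed := int32(0)
	for _, m := range members {
		if !m.Health && now.Sub(m.LastTransition) > deadline {
			failed++
		}
	}
	return baseReplicas + failed
}
```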
dm-worker
Almost the same as the failover handling above. The difference is that the newly started dm-worker may already have a DM task assigned to it; if that worker goes offline at this point, the task will be re-assigned, which interrupts it for a short while. Therefore, the operator will not delete the newly started node, but will keep the cluster at 4 nodes. If users enable advanced-statefulsets, we can delete the intermediate node.
Monitor
Add the container containing the new DM monitoring files to the existing TidbMonitor pod. When deploying the monitoring of the DM cluster, copy the Prometheus and Grafana configs to the target volumes.
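A sketch of the extra container added to the TidbMonitor pod that copies the DM monitoring files into the shared volumes; the image name, file paths, and volume names are all assumptions:

```go
package monitor

import corev1 "k8s.io/api/core/v1"

// dmMonitorInitializer sketches the container that copies the DM Prometheus
// rules and Grafana dashboards into the volumes already used by TidbMonitor.
func dmMonitorInitializer() corev1.Container {
	return corev1.Container{
		Name:  "dm-initializer",
		Image: "pingcap/dm-monitor-initializer:v2.0.0", // illustrative image
		Command: []string{
			"sh", "-c",
			"cp /dm-monitor/rules/* /prometheus-rules/ && cp /dm-monitor/dashboards/* /grafana-dashboards/",
		},
		VolumeMounts: []corev1.VolumeMount{
			{Name: "prometheus-rules", MountPath: "/prometheus-rules"},
			{Name: "grafana-dashboards", MountPath: "/grafana-dashboards"},
		},
	}
}
```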
Log
Reuse the current EFK system.
Category
Feature
Value
Support dm-controller in TiDB Operator so that users can easily deploy, scale, and upgrade a DM cluster through TiDB Operator.
TODO lists
tidb-scheduler special schedule strategy for dm-master
Workload Estimation
45
Time
GanttProgress: 90%
GanttStart: 2020-08-03
GanttDue: 2020-10-30