Data decrease in shard mode when changing auto-increment in downstream #1895
Comments
This is caused by the user changing the table structure in the downstream without telling DM; DM then generates DELETE DML in safe mode
that matches many rows in the downstream, since it's a shard merging task.
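To make the failure mode concrete, here is a hypothetical sketch of the kind of DML involved (the database, table, and column names are invented, not taken from the issue). In safe mode, DM replays an upstream UPDATE as a DELETE followed by a REPLACE; if DM's tracked schema no longer matches the downstream's changed auto-increment key, the DELETE's WHERE clause can end up keyed on non-unique columns and match rows merged in from other shards:

```shell
# Hypothetical safe-mode rewrite of one upstream UPDATE.
# Illustrative names only; without the real unique key in the WHERE clause,
# the DELETE below can hit multiple merged rows in the downstream table.
cat <<'SQL'
DELETE FROM merged_db.merged_tbl WHERE uid = 42 AND name = 'a';
REPLACE INTO merged_db.merged_tbl (uid, name) VALUES (42, 'b');
SQL
```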
In this case, should we support `operate-schema` before the sync starts?
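A sketch of that workflow with dmctl's `operate-schema` command — the master address, source ID, task name, and schema file path below are all hypothetical, and this assumes the task already exists:

```shell
# Overwrite the schema DM tracks for the sharded table so it matches
# the structure actually present in the downstream.
# All names and paths are illustrative, not taken from the issue.
dmctl --master-addr 127.0.0.1:8261 pause-task task-name
dmctl --master-addr 127.0.0.1:8261 operate-schema set \
    -s mysql-replica-01 task-name -d shard_db -t shard_tbl schema.sql
dmctl --master-addr 127.0.0.1:8261 resume-task task-name
```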
Currently, we have almost no practical action to
I think if there is no error when the task starts, users usually don't know they need to set the schema manually.
Yes, that proposal can only work with guidance from the documentation.
Closed by #1915
Bug Report
Please answer these questions before submitting your issue. Thanks!
What did you do? If possible, provide a recipe for reproducing the error.
200 upstream MySQL instances, 200 DM-workers, 3 DM-masters, 200K QPS+TPS in the upstream.
Upgraded the DM cluster from 2.0.1 to nightly.
What did you expect to see?
After the upgrade completed, data would continue to migrate to the downstream TiDB.
What did you see instead?
The data in the specified table decreased.
Versions of the cluster
- DM version (run `dmctl -V` or `dm-worker -V` or `dm-master -V`):
- Upstream MySQL/MariaDB server version:
- Downstream TiDB cluster version (execute `SELECT tidb_version();` in a MySQL client):
- How did you deploy DM: DM-Ansible or manually?
- Other interesting information (system version, hardware config, etc.):
- Current status of DM cluster (execute `query-status` in dmctl)
- Operation logs
  - `dm-worker.log` for every DM-worker instance if possible
  - `dm-master.log` if possible
- Configuration of the cluster and the task
  - `dm-worker.toml` for every DM-worker instance if possible
  - `dm-master.toml` for DM-master if possible
  - `task.yaml` if possible
  - `inventory.ini` if deployed by DM-Ansible
- Screenshot/exported-PDF of Grafana dashboard or metrics' graph in Prometheus for DM if possible