
TiDB treats TiFlash and TiKV engines differently when using br restore #23665

Closed
lilinghai opened this issue Mar 30, 2021 · 6 comments
Labels
severity/critical sig/planner SIG: Planner type/bug The issue is confirmed as a bug.

Comments

@lilinghai
Contributor

Bug Report

Please answer these questions before submitting your issue. Thanks!

1. Minimal reproduce step (Required)

Deploy a TiDB v5.0.0-nightly cluster with 4 TiKV, 2 TiFlash, 3 PD, and 2 TiDB nodes.
Use br to restore the 10000-warehouse TPCC data (with no TiFlash data) into the cluster.
While the restore is in progress, alter the tpcc tables to TiFlash replica 2, then execute queries such as "select count(*) from district;" (see the sketch after these steps).
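A minimal sketch of the SQL side of these steps (the br restore itself is run from the shell). The ALTER TABLE ... SET TIFLASH REPLICA syntax is standard TiDB; showing only the district table is an illustrative assumption, the report alters all tpcc tables:

-- While br restore is still running, add TiFlash replicas (only one table shown for brevity).
ALTER TABLE tpcc.district SET TIFLASH REPLICA 2;

-- Then run a query that the optimizer may route to TiFlash.
SELECT COUNT(*) FROM tpcc.district;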

2. What did you expect to see? (Required)

The results from TiFlash and TiKV are the same.

3. What did you see instead (Required)

When the TiFlash replica of the table is not yet ready:
Using the TiKV engine, the result is 0 and the query returns quickly.

MySQL [tpcc]> select count(*) from district;
+----------+
| count(*) |
+----------+
|        0 |
+----------+
1 row in set (0.00 sec)

While using the TiFlash engine, the query takes a very long time and then returns an error:

ERROR 1105 (HY000): Region epoch not match after retries: Region {80196,13,172} not in region cache.

4. What is your TiDB version? (Required)

Release Version: v5.0.0-nightly
Edition: Community
Git Commit Hash: 09a4c57
Git Branch: heads/refs/tags/v5.0.0-nightly
UTC Build Time: 2021-03-29 14:39:50
GoVersion: go1.13
Race Enabled: false
TiKV Min Version: v3.0.0-60965b006877ca7234adaced7890d7b029ed1306
Check Table Before Drop: false

@lilinghai lilinghai added the type/bug The issue is confirmed as a bug. label Mar 30, 2021
@lilinghai
Contributor Author

TiDB can't accurately check whether the TiFlash replica is available. Although information_schema.tiflash_replica shows that the TiFlash replica is available, the TiFlash ingest SST process is not yet complete.
If users keep the default value of tidb_isolation_read_engines, TiDB will select the TiFlash engine and then raise the error above.
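For reference, a sketch of how this state can be inspected from a SQL client. The information_schema.tiflash_replica table and the tidb_isolation_read_engines variable are standard TiDB; the comments about the observed values are an assumption based on this report:

-- Reported as available even though TiFlash has not finished ingesting SSTs.
SELECT table_name, replica_count, available, progress
FROM information_schema.tiflash_replica
WHERE table_schema = 'tpcc' AND table_name = 'district';

-- The default read engine list includes TiFlash, so the optimizer may choose it.
SHOW VARIABLES LIKE 'tidb_isolation_read_engines';
-- default: tikv,tiflash,tidb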

@lilinghai
Contributor Author

set @@tidb_allow_fallback_to_tikv='tiflash' also can't control this behavior.

@xuyifangreeneyes
Contributor

xuyifangreeneyes commented Mar 31, 2021

set @@tidb_allow_fallback_to_tikv='tiflash' also can't control this behavior.

Currently TiDB only falls back to TiKV when ErrTiFlashServerTimeout occurs. Maybe other kinds of errors, such as ErrRegionUnavailable, should also trigger the fallback behavior?
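Until the fallback covers more error types, a possible session-level workaround (an assumption based on the variables discussed in this thread, not a confirmed fix) is to pin reads to TiKV while the TiFlash replica is still catching up:

-- Restrict this session to TiKV so the optimizer cannot pick the not-yet-ready TiFlash replica.
SET @@session.tidb_isolation_read_engines = 'tikv';

-- The fallback variable discussed above; currently it only takes effect on ErrTiFlashServerTimeout.
SET @@session.tidb_allow_fallback_to_tikv = 'tiflash';

SELECT COUNT(*) FROM tpcc.district;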

@xuyifangreeneyes
Contributor

/assign

@winoros
Member

winoros commented Apr 26, 2021

@lilinghai Could you provide a detailed TiDB log?

@ti-srebot
Contributor

Please edit this comment or add a new comment to complete the following information

Not a bug

  1. Remove the 'type/bug' label
  2. Add notes to indicate why it is not a bug

Duplicate bug

  1. Add the 'type/duplicate' label
  2. Add the link to the original bug

Bug

Note: Make sure that the 'component' and 'severity' labels are added
Example for how to fill out the template: #20100

1. Root Cause Analysis (RCA) (optional)

2. Symptom (optional)

3. All Trigger Conditions (optional)

4. Workaround (optional)

5. Affected versions

6. Fixed versions
