
ddl: Fix potential data lost of alter_partition_by (#8337) #8355

Conversation

ti-chi-bot
Member

This is an automated cherry-pick of #8337

What problem does this PR solve?

Issue Number: close #8206

Problem Summary:

Introduced by #7822

When executing `alter table xxx partition by ...` to turn a non-partition table into a partition table, there is a chance that TiFlash sees a non-partition table turning into a partition table (reusing the same table_id). This case was skipped by the previous implementation:

template <typename Getter, typename NameMapper>
void SchemaBuilder<Getter, NameMapper>::applyPartitionDiff(
    const TiDB::DBInfoPtr & db_info,
    const TableInfoPtr & table_info,
    const ManageableStoragePtr & storage)
{
    const auto & orig_table_info = storage->getTableInfo();
    // The old table stored in TiFlash is expected to already be a partition table.
    // When `alter table ... partition by ...` turns a non-partition table into a
    // partition table, this check fails and the early return below skips the whole
    // diff (only an error is logged).
    if (!orig_table_info.isLogicalPartitionTable())
    {
        LOG_ERROR(
            log,
            "old table in TiFlash not partition table {} with database_id={}, table_id={}",
            name_mapper.debugCanonicalName(*db_info, orig_table_info),
            db_info->id,
            orig_table_info.id);
        return;
    }

And then TiFlash mistakenly drops the old table along with all its partitions, even though those partitions are now attached to a new logical table. This leads to data loss after `alter table xxx partition by ...`.
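
The user-visible symptom can be illustrated with a sequence like the following. The table name, schema, and partitioning scheme below are only an illustrative example of the pattern described above, not taken from the linked issue:

CREATE TABLE t (id INT PRIMARY KEY, v INT);
ALTER TABLE t SET TIFLASH REPLICA 1;
-- ... insert data and wait for the TiFlash replica to become available ...
ALTER TABLE t PARTITION BY HASH (id) PARTITIONS 4;
-- With the previous handling, TiFlash could drop the rows that now belong to
-- the new partitioned logical table, so reads through TiFlash lose data.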

What is changed and how it works?

  • Use tidb_isolation_read_engines instead of hints in the test cases
  • Allow turning a non-partition table into a partition table
  • When applying the SchemaDiff for altering partitions, first create the new table and override the partition id mapping, and only then drop the old table (see the sketch after this list)
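
A minimal sketch of the corrected ordering, in the style of the snippet above. The method name applyAlterTablePartitionBy and the helpers applyCreateLogicalTable, overridePartitionIDMapping, and applyDropPhysicalTable are illustrative assumptions, not the actual TiFlash API; only the create-then-remap-then-drop order reflects the change described here:

template <typename Getter, typename NameMapper>
void SchemaBuilder<Getter, NameMapper>::applyAlterTablePartitionBy( // hypothetical name
    const TiDB::DBInfoPtr & db_info,
    const TableInfoPtr & new_table_info,
    const ManageableStoragePtr & old_storage)
{
    // 1. Create the new logical (partitioned) table first, so the physical
    //    partitions have a valid logical table to be attached to.
    applyCreateLogicalTable(db_info, new_table_info); // hypothetical helper

    // 2. Override the partition_id -> logical table_id mapping so the existing
    //    physical data is owned by the new logical table from now on.
    for (const auto & part_def : new_table_info->partition.definitions)
        overridePartitionIDMapping(part_def.id, new_table_info->id); // hypothetical helper

    // 3. Only now drop the old non-partition table. Its partitions have already
    //    been re-attached to the new logical table, so no data is lost.
    applyDropPhysicalTable(db_info, old_storage->getTableInfo().id); // hypothetical helper
}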

Check List

Tests

  • Unit test
  • Integration test
  • Manual test (add detailed scripts or steps below)
  • No code

Side effects

  • Performance regression: Consumes more CPU
  • Performance regression: Consumes more Memory
  • Breaking backward compatibility

Documentation

  • Affects user behaviors
  • Contains syntax changes
  • Contains variable changes
  • Contains experimental features
  • Changes MySQL compatibility

Release note

None

@ti-chi-bot ti-chi-bot added release-note-none Denotes a PR that doesn't merit a release note. size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. type/cherry-pick-for-release-7.1 This PR is cherry-picked to release-7.1 from a source PR. labels Nov 10, 2023
@ti-chi-bot ti-chi-bot added the cherry-pick-approved Cherry pick PR approved by release team. label Nov 10, 2023
Contributor

ti-chi-bot bot commented Nov 10, 2023

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign schrodingerzhu for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@ti-chi-bot ti-chi-bot bot added size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. and removed do-not-merge/cherry-pick-not-approved size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. labels Nov 10, 2023
@JaySon-Huang
Contributor

This does not affect release-7.1.
