
In the cut-over phase, a timeout while waiting for events up to lock leaves the _del table lock unreleased, making the DB unwritable. Introduced in #755 #992

Open
brightgogo opened this issue Jun 21, 2021 · 8 comments · Fixed by bytebase/gh-ost#16

Comments

@brightgogo

While gh-ost is applying binlog events, it pauses for a period of time and then resumes the binlog apply and row copy.
At that point the row copy completes and the cut-over phase starts.
The binlog apply delay causes a timeout while waiting for events up to lock, and the lock is then not released during rollback, leaving the DB locked and unwritable.

gh-ost execution log:

2021-06-20 17:03:23 INFO Row copy complete
Copy: 594148/594148 100.0%; Applied: 1366; Backlog: 0/1000; Time: 11m38s(total), 11m38s(copy); streamer: mysql-bin.008173:15450343; Lag: 0.01s, State: migrating; ETA: due
2021-06-20 17:03:23 INFO Grabbing voluntary lock: gh-ost.8642188.lock
2021-06-20 17:03:23 INFO Setting LOCK timeout as 6 seconds
2021-06-20 17:03:23 INFO Looking for magic cut-over table
2021-06-20 17:03:23 INFO Creating magic cut-over table test._test_1_1624179105_del
Copy: 594148/594148 100.0%; Applied: 1366; Backlog: 0/1000; Time: 11m38s(total), 11m38s(copy); streamer: mysql-bin.008173:15508966; Lag: 0.01s, State: migrating; ETA: due
2021-06-20 17:03:23 INFO Magic cut-over table created
2021-06-20 17:03:23 INFO Locking test.test_1, test._test_1_1624179105_del
2021-06-20 17:03:23 INFO Tables locked
2021-06-20 17:03:23 INFO Session locking original & magic tables is 8642188
2021-06-20 17:03:23 INFO Writing changelog state: AllEventsUpToLockProcessed:1624179803921417462
2021-06-20 17:03:23 INFO Waiting for events up to lock
Copy: 594148/594148 100.0%; Applied: 1366; Backlog: 2/1000; Time: 11m38s(total), 11m38s(copy); streamer: mysql-bin.008173:15830960; Lag: 0.01s, State: migrating; ETA: due
Copy: 594148/594148 100.0%; Applied: 1479; Backlog: 38/1000; Time: 11m41s(total), 11m38s(copy); streamer: mysql-bin.008173:44351489; Lag: 0.01s, State: migrating;
2021-06-20 17:03:26 ERROR Timeout while waiting for events up to lock
2021-06-20 17:03:26 ERROR 2021-06-20 17:03:26 ERROR Timeout while waiting for events up to lock
2021-06-20 17:03:26 INFO Looking for magic cut-over table
2021-06-20 17:03:26 INFO Will now proceed to drop magic table and unlock tables
2021-06-20 17:03:26 INFO Dropping magic cut-over table
2021-06-20 17:03:26 INFO Dropping magic cut-over table
2021-06-20 17:03:26 INFO Dropping table test._test_1_1624179105_del

[image: screenshot of the Go standard library's sync.Once implementation]

In the sync.Once implementation shown above, the goroutine that gets there first holds m.Lock(); any other goroutine then waits until the first one finishes executing and flips o.done from 0 to 1, after which the waiting goroutine can return without executing the function.
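
For reference, here is a simplified sketch of the standard library's sync.Once, with the same done flag and internal mutex described above (the type name onceSketch is mine; the structure follows the Go source):

package oncesketch

import (
	"sync"
	"sync/atomic"
)

// onceSketch mirrors the structure of the standard library's sync.Once.
type onceSketch struct {
	done uint32
	m    sync.Mutex
}

func (o *onceSketch) Do(f func()) {
	// Fast path: done is only flipped to 1 after f has finished.
	if atomic.LoadUint32(&o.done) == 0 {
		o.doSlow(f)
	}
}

func (o *onceSketch) doSlow(f func()) {
	o.m.Lock()
	defer o.m.Unlock()
	if o.done == 0 {
		defer atomic.StoreUint32(&o.done, 1)
		// Every other goroutine calling Do blocks on o.m until f returns here.
		f()
	}
}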

In gh-ost, on the rollback path, if atomicCutOver() --> defer func() --> DropAtomicCutOverSentryTableIfExists() acquires the sync.Once's internal lock first, its DROP of the _del table waits on the table lock still held by the locking session, while the DROP inside AtomicCutOverMagicLock() stays blocked behind the same sync.Once. This leaves the DB unwritable for more than 300 s.
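
To make the ordering concrete, here is a minimal, self-contained sketch of that race. A local sync.Mutex stands in for the MySQL lock on the _del table and Println calls stand in for the DROP statements; the forced sleep is mine and only the channel/Once structure mirrors gh-ost. Running it ends with Go's "all goroutines are asleep - deadlock!" error, which corresponds to the DB staying locked until the MySQL-side timeout in the real system.

package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var (
		tableLock                  sync.Mutex // stands in for the MySQL lock on _del
		dropCutOverSentryTableOnce sync.Once
		tableLocked                = make(chan bool)
		okToUnlockTable            = make(chan bool)
	)

	// Goroutine A: simplified AtomicCutOverMagicLock.
	go func() {
		tableLock.Lock() // LOCK TABLES original, _del
		tableLocked <- true
		<-okToUnlockTable
		time.Sleep(100 * time.Millisecond) // force the losing order seen in the logs
		dropCutOverSentryTableOnce.Do(func() {
			fmt.Println("A: DROP _del from the locking session (would succeed)")
		})
		tableLock.Unlock() // UNLOCK TABLES
	}()

	// Main goroutine B: simplified atomicCutOver, taking the rollback path.
	<-tableLocked
	okToUnlockTable <- true
	dropCutOverSentryTableOnce.Do(func() {
		fmt.Println("B: DROP _del from another session, waiting on the table lock...")
		tableLock.Lock() // blocks: A holds the lock, and A is itself stuck behind this same Once
	})
	fmt.Println("not reached until the lock is released")
}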

Thank you!

@shlomi-noach
Contributor

link to #755

@shlomi-noach
Contributor

I suspect the sync.Once solution is incorrect and that this issue is true.

@brightgogo
Author

I suspect the sync.Once solution is incorrect and that this issue is true.

Thanks for your reply. Is there a way to avoid this problem?

One idea: modify sync.Once so that it flips and returns the status value before executing the function:

// if atomic.CompareAndSwapUint32(&o.done, 0, 1) {
// f()
// }
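
A hedged sketch of what that variant might look like (hypothetical onceWithStatus type, neither gh-ost code nor the standard library). Note the trade-off the standard library's documentation calls out: because done is flipped before f runs, other callers return immediately even while f is still executing.

package oncestatus

import "sync/atomic"

// onceWithStatus is a hypothetical Once variant whose Do reports whether
// this particular call executed f.
type onceWithStatus struct {
	done uint32
}

func (o *onceWithStatus) Do(f func()) (ran bool) {
	// done is set before f runs, unlike sync.Once.
	if atomic.CompareAndSwapUint32(&o.done, 0, 1) {
		f()
		return true
	}
	return false
}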

@shlomi-noach
Contributor

I'll be honest: I last looked at this quite a while back, and it will take me a while to get familiar with the logic again. Meanwhile, consider -cut-over=two-step.

@Takashi-kun

Takashi-kun commented Aug 25, 2021

I had the same lock conflict issue when I got "Timeout while waiting for events up to lock", and it blocked for about 30 seconds (equal to mysqld's wait_timeout).

...
2021-07-28 17:58:52 INFO Grabbing voluntary lock: gh-ost.2358732884.lock
2021-07-28 17:58:52 INFO Setting LOCK timeout as 2 seconds
2021-07-28 17:58:52 INFO Looking for magic cut-over table
2021-07-28 17:58:52 INFO Creating magic cut-over table `mydb`.`_my_table_del`
2021-07-28 17:58:52 INFO Magic cut-over table created
2021-07-28 17:58:52 INFO Locking `mydb`.`my_table`, `mydb`.`_my_table_del`
2021-07-28 17:58:52 INFO Tables locked
2021-07-28 17:58:52 INFO Session locking original & magic tables is 2358732884
2021-07-28 17:58:52 INFO Writing changelog state: AllEventsUpToLockProcessed:1627462732508940152
2021-07-28 17:58:52 INFO Waiting for events up to lock
2021-07-28 17:58:52 INFO Waiting for events up to lock: skipping AllEventsUpToLockProcessed:1627462700466978089
2021-07-28 17:58:52 INFO Intercepted changelog state AllEventsUpToLockProcessed
2021-07-28 17:58:52 INFO Handled changelog state AllEventsUpToLockProcessed
2021-07-28 17:58:53 ERROR Timeout while waiting for events up to lock
2021-07-28 17:58:53 ERROR 2021-07-28 17:58:53 ERROR Timeout while waiting for events up to lock
2021-07-28 17:58:53 INFO Looking for magic cut-over table
2021-07-28 17:58:53 INFO Will now proceed to drop magic table and unlock tables
2021-07-28 17:58:53 INFO Dropping magic cut-over table
2021-07-28 17:58:53 INFO Dropping magic cut-over table
2021-07-28 17:58:53 INFO Dropping table `mydb`.`_my_table_del`
Copy: 943917/943917 100.0%; Applied: 14; Backlog: 0/1000; Time: 8m45s(total), 4m40s(copy); streamer: mysql_bin.000453:133897973; Lag: 0.01s, HeartbeatLag: 0.01s, State: migrating; ETA: due
...
Copy: 943917/943917 100.0%; Applied: 14; Backlog: 0/1000; Time: 9m10s(total), 4m40s(copy); streamer: mysql_bin.000453:134218078; Lag: 0.01s, HeartbeatLag: 0.01s, State: migrating; ETA: due
2021-07-28 17:59:22 INFO Table dropped
2021-07-28 17:59:22 INFO Releasing lock from `mydb`.`my_table`, `mydb`.`_my_table_del`
[mysql] 2021/07/28 17:59:22 packets.go:36: unexpected EOF
2021-07-28 17:59:22 ERROR invalid connection
2021-07-28 17:59:22 ERROR invalid connection
Copy: 943917/943917 100.0%; Applied: 14; Backlog: 0/1000; Time: 9m15s(total), 4m40s(copy); streamer: mysql_bin.000453:134290613; Lag: 0.01s, HeartbeatLag: 0.01s, State: migrating; ETA: due
2021-07-28 17:59:26 INFO Grabbing voluntary lock: gh-ost.2358739822.lock
...

I ran set global wait_timeout=2 before executing the cut-over to shorten the time the "Dropping table" step keeps the lock held.

@wangzihuacool
Contributor

wangzihuacool commented Sep 9, 2021

I suspect the sync.Once solution is incorrect and that this issue is true.

Thanks for your reply, that there is way to avoid this problem?

modify sync.Once , return the status value before executing the function.

// if atomic.CompareAndSwapUint32(&o.done, 0, 1) {
// f()
// }

I met the same issue.
In fact, okToUnlockTable <- true has already been sent to AtomicCutOverMagicLock, so the goroutine holding the sentry table's lock will execute dropCutOverSentryTableOnce next, but the main goroutine executes the function first. How about sleeping 1 second between sending okToUnlockTable and executing the function?

okToUnlockTable <- true
time.Sleep(1 * time.Second)
dropCutOverSentryTableOnce.Do(func() {
	this.applier.DropAtomicCutOverSentryTableIfExists()
})

@vleejl

vleejl commented Nov 15, 2021

Can we use defer to drop the old table in exceptional circumstances? Like this:

func (this *Applier) AtomicCutOverMagicLock(sessionIdChan chan int64, tableLocked chan<- error, okToUnlockTable <-chan bool, tableUnlocked chan<- error, dropCutOverSentryTableOnce *sync.Once) error {
	tx, err := this.db.Begin()
	if err != nil {
		tableLocked <- err
		return err
	}
	defer func() {
		dropCutOverSentryTableOnce.Do(func() {
			this.DropAtomicCutOverSentryTableIfExists()
		})
		......
	}()
}

In my opinion, we should avoid managing a resource from different goroutines, so exceptional circumstances should be handled inside AtomicCutOverMagicLock itself.

@vleejl

vleejl commented Feb 25, 2022

Can we use defer to drop the old table in exceptional circumstances? Like this:

func (this *Applier) AtomicCutOverMagicLock(sessionIdChan chan int64, tableLocked chan<- error, okToUnlockTable <-chan bool, tableUnlocked chan<- error, dropCutOverSentryTableOnce *sync.Once) error {
	tx, err := this.db.Begin()
	if err != nil {
		tableLocked <- err
		return err
	}
	defer func() {
		dropCutOverSentryTableOnce.Do(func() {
			this.DropAtomicCutOverSentryTableIfExists()
		})
		......
	}()
}

In my opinion, we should avoid managing a resource from different goroutines, so exceptional circumstances should be handled inside AtomicCutOverMagicLock itself.

@shlomi-noach Hello, in my production environment this problem happens from time to time. However, due to constraints of the service, a service interruption cannot be accepted, so we can't use -cut-over=two-step, and we are eager to solve this problem. Do you have time to review this repair idea?

btruhand added a commit to Faire/gh-ost that referenced this issue Oct 20, 2023
…to lock

This PR contains a fix for the deadlock issue described in github#992 (which is also discussed in github#1171). The fix was introduced in the original repo here: https://github.com/github/gh-ost/pull/1180/files. This PR contains a cherry-pick of that fix.