drainer/: Reduce memory usage (#735) #737

What about replacing `chan *Txn` with `chan X`, where X only contains a single DML or DDL? Then we don't need to implement something similar to a buffered channel.

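A minimal sketch of what that channel element might look like; the type `X` and its fields are placeholders for illustration, not code from this PR:

```go
// Hypothetical stand-ins for the loader's event payload types.
type DML struct{ /* column values, old values, ... */ }
type DDL struct{ /* statement text, ... */ }

// X carries exactly one DML or DDL, so a plain `chan X` bounds how many
// events sit in memory without a hand-rolled buffered-channel lookalike.
type X struct {
	DML *DML // non-nil when the item is a single DML
	DDL *DDL // non-nil when the item is a DDL
}
```
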
Most of a DML's memory is held by its `OldValues` and `Values` maps. So maybe we can count the sizes of these two maps?

Also, we pass a whole `txn` to `batchManager`, not a single DML or DDL: a successfully executed single DML doesn't mean the whole `txn` is successful.

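A rough sketch of the map-size counting mentioned above, assuming the maps are `map[string]interface{}` as in the loader's DML model; the byte accounting is an illustrative approximation, not the PR's actual code:

```go
// dmlSize approximates the memory held by a DML's two value maps by
// summing key lengths and a rough per-value size.
func dmlSize(values, oldValues map[string]interface{}) int {
	size := 0
	count := func(m map[string]interface{}) {
		for col, v := range m {
			size += len(col)
			switch x := v.(type) {
			case string:
				size += len(x)
			case []byte:
				size += len(x)
			default:
				size += 8 // assume a word-sized scalar
			}
		}
	}
	count(values)
	count(oldValues)
	return size
}
```
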
We can't avoid OOM by checking everywhere we might use memory (and making the program more complex). The problem we need to solve is to lower the probability, which I think we can do by just setting smaller buffer sizes.

The problem is that setting smaller buffer sizes causes performance degradation. If we just set the buffer size of `loader.input` to 8, the Update Event rate drops from 7k+ to 3k+.

If we change the `default` case in loader to `case <-time.After(10 * time.Microsecond)`, the situation gets better, but we still have performance degradation: the Update Event rate drops from around 7.2k to around 6.7k. However, if there are very few binlogs, it takes more time for `load` to execute cached binlogs.

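For context, a minimal sketch of the batching loop being compared; the names (`runLoop`, `execute`, the stand-in `Txn`) are placeholders, and only the `default:` → `case <-time.After(...)` swap reflects the change under discussion:

```go
package loader

import "time"

// Txn is a stand-in for the loader's transaction type.
type Txn struct{}

// runLoop batches txns from input and flushes them with execute.
func runLoop(input <-chan *Txn, execute func([]*Txn)) {
	var batch []*Txn
	for {
		select {
		case txn, ok := <-input:
			if !ok {
				execute(batch) // flush the remainder on shutdown
				return
			}
			batch = append(batch, txn)
		// The original loop has `default:` here, flushing as soon as input
		// is momentarily empty; this variant waits briefly instead, which
		// keeps batches larger under load but delays sparse binlogs.
		case <-time.After(10 * time.Microsecond):
			if len(batch) > 0 {
				execute(batch)
				batch = batch[:0]
			}
		}
	}
}
```
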
Any better suggestions for @lichunzhu? @suzaku

Could you please change `output: make(chan MergeItem)` back to `output: make(chan MergeItem, 10)` and try again?

@suzaku It still shows performance degradation. This is with `default` changed to `case <-time.After(10 * time.Microsecond)`.

Will it be slow if we only change `s.input` to be unbuffered, without using `txnManager` here?

Yes. From 10:27 on, I set `loader.input` to a buffer size of 8 and didn't use `txnManager` here. The Update Event rate dropped from 7k+ to 3k+.

Any tests for `txnManager`?

If in the end we decide to use `txnManager`, I will add some tests.

Should we only close `ret` when `input` is closed? It's also an implied assumption that the caller must close both the `input` channel and the `txnManager` in its own closing procedure.

Function `Close()` can close `ret` now.

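A minimal sketch of that contract, assuming the loader's existing `*Txn` type and hypothetical field names (`input`, `ret`, `quit`): `ret` is closed both when `input` is drained and when `Close()` is called.

```go
type txnManager struct {
	input <-chan *Txn
	ret   chan *Txn
	quit  chan struct{}
}

// run forwards txns and guarantees ret is closed on either exit path.
func (t *txnManager) run() {
	defer close(t.ret)
	for {
		select {
		case txn, ok := <-t.input:
			if !ok {
				return // input closed by the caller
			}
			select {
			case t.ret <- txn:
			case <-t.quit:
				return
			}
		case <-t.quit:
			return // Close() was called
		}
	}
}

func (t *txnManager) Close() {
	close(t.quit)
}
```
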
Should we use an atomic operation here to avoid a data race?

There won't be concurrent operations changing `t.closed`, so I think we don't need an atomic operation.

Then you need to add a comment saying **it's not thread-safe**, and I really don't like to make any assumptions.

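The two options on the table, sketched with assumed internals (the struct names and `quit` channel are illustrative): option A pays for `sync/atomic` to make `Close` safe from any goroutine; option B keeps the plain flag but writes the invariant down.

```go
package loader

import "sync/atomic"

// Option A: Close is safe to call from any goroutine, any number of times.
type txnManagerA struct {
	closed int32
	quit   chan struct{}
}

func (t *txnManagerA) Close() {
	// Only the first caller wins the swap and closes quit.
	if atomic.CompareAndSwapInt32(&t.closed, 0, 1) {
		close(t.quit)
	}
}

// Option B: cheaper, but the constraint must be documented.
type txnManagerB struct {
	closed bool
	quit   chan struct{}
}

// Close is NOT thread-safe: it must only be called from the single
// goroutine that owns this txnManager.
func (t *txnManagerB) Close() {
	if t.closed {
		return
	}
	t.closed = true
	close(t.quit)
}
```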