meta: fix the allocator batch size compute logic #17271
Conversation
/run-all-tests
Codecov Report
@@            Coverage Diff            @@
##            master     #17271   +/- ##
========================================
  Coverage   79.8700%   79.8700%
  Files         520        520
  Lines      140005     140005
  Hits       111822     111822
  Misses      19228      19228
  Partials     8955       8955
LGTM
/run-all-tests
n1 = CalcNeededBatchSize(newBase, int64(n), increment, offset, alloc.isUnsigned)
// Although the step is customized by the user, we still need to make sure nextStep is big enough for the insert batch.
if nextStep < n1 {
	nextStep = n1
}
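For context on what `CalcNeededBatchSize` computes here, the needed batch size for `n` values under a custom increment/offset is the distance from the current base to the n-th valid AUTO_INCREMENT position. The sketch below is a simplified, signed-only illustration of that idea; `seekToFirstIDSigned` and `calcNeededBatchSize` are hypothetical stand-ins, not the actual TiDB implementation.

```go
package main

import "fmt"

// seekToFirstIDSigned returns the smallest ID strictly greater than base
// that lies on the increment/offset grid (hypothetical helper).
func seekToFirstIDSigned(base, increment, offset int64) int64 {
	nr := (base + increment - offset) / increment
	return nr*increment + offset
}

// calcNeededBatchSize is a simplified sketch: the batch must cover the gap
// from base to the n-th valid position, no more (this is why allocating
// exactly n1 rather than n1*2 is sufficient).
func calcNeededBatchSize(base, n, increment, offset int64) int64 {
	if increment == 1 {
		return n
	}
	first := seekToFirstIDSigned(base, increment, offset)
	last := first + (n-1)*increment
	return last - base
}

func main() {
	// With increment 1, exactly n IDs are needed.
	fmt.Println(calcNeededBatchSize(0, 3, 1, 1)) // 3
	// With increment 5 and offset 2, the valid IDs are 2, 7, 12,
	// so 3 values need a batch of size 12 starting from base 0.
	fmt.Println(calcNeededBatchSize(0, 3, 5, 2)) // 12
}
```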
It was n1*2 before, but it's n1 now. Is that right?
Yes, allocating exactly what was asked for is correct.
LGTM
c8a5200 to 1922f0e
LGTM
/run-all-tests
/run-integration-copr-test
/run-cherry-picker
Signed-off-by: sre-bot <[email protected]>
cherry pick to release-2.1 in PR #17547
cherry pick to release-3.0 in PR #17548
cherry pick to release-3.1 in PR #17549
cherry pick to release-4.0 in PR #17550
What problem does this PR solve?
Problem Summary: fix the logic of the allocator batch size computation
What is changed and how it works?
What's Changed:
When the local cache size is not enough for allocN, do the new batchSize computation based on the new global base inside the txn.
So we postpone the NextStep adjustment to the meta txn and store it after that.
Related changes
Check List
Tests
Release note