
[branch-2.1](memory) Fix Jemalloc Cache Memory Tracker #37905

Merged
1 commit merged into apache:branch-2.1 on Jul 16, 2024

Conversation

xinyiZzz
Contributor

pick #37464

## Proposed changes

Doris uses Jemalloc as its default allocator. The Jemalloc cache consists of two
parts:
- Thread Cache: caches a specified number of pages per thread.
- Dirty Pages: memory pages in all arenas that can be reused.

1. Metadata should not be counted as cache; counting it delays memory GC, which
can lead to BE OOM.
2. Fix the Jemalloc dirty page memory size. The previous code used the dirty page
count * page size (4K on x86), which is much smaller than the actual memory. The
fix sums the dirty page memory across all size classes of extents (see the sketch
below).
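
For context, a minimal C++ sketch of how such a cache figure could be assembled from jemalloc 5.x statistics is shown below. This is not the Doris implementation; the stat names (`stats.arenas.<i>.tcache_bytes`, `stats.arenas.<i>.extents.<j>.dirty_bytes`, `stats.metadata`) are standard jemalloc 5 mallctl keys, but the helper and function names are illustrative only.

```cpp
// Illustrative sketch only (not the Doris code): estimate the reclaimable
// Jemalloc cache as thread-cache bytes plus dirty-page bytes summed over all
// extent size classes, leaving metadata out of the cache figure.
// Assumes jemalloc 5.x built without a symbol prefix (otherwise use je_mallctl).
#include <jemalloc/jemalloc.h>

#include <cstdint>
#include <cstdio>

static size_t read_size_stat(const char* name) {
    size_t value = 0;
    size_t sz = sizeof(value);
    if (mallctl(name, &value, &sz, nullptr, 0) != 0) {
        return 0;
    }
    return value;
}

size_t jemalloc_cache_bytes() {
    // jemalloc caches its stats; bump the epoch so the reads below are fresh.
    uint64_t epoch = 1;
    size_t sz = sizeof(epoch);
    mallctl("epoch", &epoch, &sz, &epoch, sz);

    char name[128];

    // Part 1: thread cache bytes, merged over all arenas.
    snprintf(name, sizeof(name), "stats.arenas.%d.tcache_bytes", MALLCTL_ARENAS_ALL);
    size_t cache = read_size_stat(name);

    // Part 2: dirty page bytes, summed over every extent size class.
    // Per the PR, the old "dirty page count * page size" figure underestimates this.
    for (unsigned j = 0;; ++j) {
        snprintf(name, sizeof(name), "stats.arenas.%d.extents.%u.dirty_bytes",
                 MALLCTL_ARENAS_ALL, j);
        size_t dirty = 0;
        size_t dsz = sizeof(dirty);
        if (mallctl(name, &dirty, &dsz, nullptr, 0) != 0) {
            break;  // past the last size class
        }
        cache += dirty;
    }

    // "stats.metadata" is deliberately not added: metadata is not reclaimable,
    // and counting it as cache delays memory GC.
    return cache;
}
```

Iterating until mallctl reports an unknown name is a simple way to walk all extent size classes in this sketch without hard-coding their count.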
@doris-robot

Thank you for your contribution to Apache Doris.
Don't know what should be done next? See How to process your PR

Since 2024-03-18, the documentation has been moved to doris-website.
See Doris Document.

@xinyiZzz
Contributor Author

run buildall

Contributor

clang-tidy review says "All clean, LGTM! 👍"

@doris-robot

TeamCity be ut coverage result:
Function Coverage: 36.44% (9240/25354)
Line Coverage: 28.00% (75547/269826)
Region Coverage: 26.82% (38838/144784)
Branch Coverage: 23.56% (19717/83680)
Coverage Report: http://coverage.selectdb-in.cc/coverage/233447a7a9924f4b3888c0e88f31a3aacd368f2e_233447a7a9924f4b3888c0e88f31a3aacd368f2e/report/index.html

@yiguolei merged commit 9861f81 into apache:branch-2.1 on Jul 16, 2024
19 of 22 checks passed