[BUG] memory fragmentation #592
Comments
To solve the problem, I'm first running some tests with jemalloc installed:

JEMALLOC_VERSION=5.2.1
wget https://github.com/jemalloc/jemalloc/releases/download/$JEMALLOC_VERSION/jemalloc-$JEMALLOC_VERSION.tar.bz2
tar -xf ./jemalloc-$JEMALLOC_VERSION.tar.bz2
cd jemalloc-$JEMALLOC_VERSION
# for a node with a high query rate or large wasm cache size, the config below is recommended
# ./configure --with-malloc-conf=background_thread:true,dirty_decay_ms:5000,muzzy_decay_ms:5000
./configure --with-malloc-conf=background_thread:true,metadata_thp:auto,dirty_decay_ms:30000,muzzy_decay_ms:30000
make
sudo make install

Then start terrad with LD_PRELOAD=/usr/local/lib/libjemalloc.so terrad start
When the following config is applied to query nodes, it shows higher memory consumption than normal nodes:

./configure --with-malloc-conf=background_thread:true,metadata_thp:auto,dirty_decay_ms:30000,muzzy_decay_ms:30000

So the modified config below is applied instead for query nodes with a large wasm cache size:

./configure --with-malloc-conf=background_thread:true,dirty_decay_ms:5000,muzzy_decay_ms:5000
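The compile-time --with-malloc-conf defaults above can also be tried without a rebuild, since jemalloc reads the standard MALLOC_CONF environment variable at startup. A minimal sketch of the two configs discussed here (the node-role recommendations are taken from the comments above):

```shell
# Runtime equivalent of the compile-time defaults above; jemalloc reads
# MALLOC_CONF at startup, so no rebuild is needed to experiment.
# High-query / large-wasm-cache nodes: purge dirty pages after 5s.
export MALLOC_CONF="background_thread:true,dirty_decay_ms:5000,muzzy_decay_ms:5000"
# Other nodes: 30s decay plus transparent huge pages for allocator metadata.
# export MALLOC_CONF="background_thread:true,metadata_thp:auto,dirty_decay_ms:30000,muzzy_decay_ms:30000"
echo "MALLOC_CONF=$MALLOC_CONF"
```

This only takes effect for processes that actually load jemalloc (e.g. via the LD_PRELOAD invocation shown earlier).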
I found that version v0.5.9 not only has a memory leak, but also degrades syncing performance.
We also found an OOM issue when intensively querying the Tendermint RPC endpoint, but we should investigate it further and may have to wait until this ticket is resolved.
Yeah, clearing memory spends a lot of resources. According to this article: "What we found was that, as allocations went up, memory would also go up. However, as objects were deleted, memory would not go back down unless all objects created at the top of the address range were also removed, exposing the stack-like behavior of the glibc allocator. In order to avoid this, you would need to make sure that any allocations that you expected to stick around would not be assigned to a high order address space." I also suspect the wasm cache. In the wasm cache structure, each code cache can hold a bunch of child memories which are not released, because the cache memory is still accessible.
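The RSS growth described here can be watched from outside the process via /proc (Linux only). A minimal sketch, using the shell's own PID as a stand-in for the terrad process:

```shell
# Print resident set size and data segment size from /proc/<pid>/status.
# $$ (this shell) is a stand-in; substitute $(pidof terrad) to watch the node.
# If VmRSS stays high while allocations are freed, that is consistent with the
# stack-like glibc behavior quoted above (fragmentation, not a leak).
grep -E 'VmRSS|VmData' "/proc/$$/status"
```

Sampling these two lines periodically gives a cheap fragmentation signal without attaching a profiler.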
The most stable config is ./configure --with-malloc-conf=background_thread:true,dirty_decay_ms:0,muzzy_decay_ms:0 with jemalloc installed.
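One way to confirm a decay setting is actually purging pages is jemalloc's stats_print option, which dumps allocator statistics when the process exits. A dry-run sketch (the terrad invocation and library path are assumptions carried over from the install steps above; the command is printed rather than executed):

```shell
# Dry run: build the command that would dump jemalloc stats at exit,
# so dirty/muzzy page purging under dirty_decay_ms:0 can be verified.
cmd='LD_PRELOAD=/usr/local/lib/libjemalloc.so MALLOC_CONF=stats_print:true,dirty_decay_ms:0,muzzy_decay_ms:0 terrad version'
echo "$cmd"
```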
This issue now has a funding of 5000.0 UST (5000.0 USD @ $1.0/UST) attached to it.
pmlambert has applied to start work: "I can try to reproduce it and spend some time debugging it."
bradlet has applied to start work: "Just wanting to find out if this issue is still active. The linked PR was merged with comments showing memory usage was stable, but it was reopened shortly thereafter. Is the bounty still available, and if so, what is the remaining work request?"
I noticed that my validator node takes up nearly 2x the memory of a non-validator node on 0.5.11.

[memory usage graphs: sentry node vs. validator node]
Hi @YunSuk-Yeo, it supposedly works better than jemalloc. However, it is Linux & macOS only at the current time, AFAIK.
When we use memory smaller than 300MB, the memory usage is stable.
jadelaawar has applied to start work: "I have reviewed your bug and have already figured out a solution for it!"
Work for 5000.0 UST (5010.00 USD @ $1.0/UST) has been submitted by:
Describe the bug

[email protected] series are undergoing OOM problems. The memory usage and its growth rate slightly decreased with jemalloc adoption in the wasmvm part, but memory allocation is still increasing linearly (1GB per day). When I attach the bcc/memory-leak tool to the core process, the memory is quite stable, so I assume there is no actual leak; instead, it has memory fragmentation issues.

Reported memory usages:
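The bcc leak check mentioned above (bcc ships the tool as memleak) can be attached to a running process roughly as follows. A dry-run sketch: the tool path varies by distro (/usr/share/bcc/tools/memleak is common on Ubuntu), so the command is printed rather than executed:

```shell
# Dry run: build the bcc memleak attach command for the terrad process.
# memleak samples outstanding allocations; a stable total with growing RSS
# points at fragmentation rather than a true leak.
pid_expr='$(pidof terrad)'
cmd="sudo /usr/share/bcc/tools/memleak -p ${pid_expr} 10"
echo "$cmd"
```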