Memory usage stays at a high level and is not released by the storage process #3422
Comments
Is there any new progress on this issue?

It seems this issue was stuck for lack of a reproducing environment. Are you facing similar issues? If so, could you please provide some information on this?

This issue can be fixed by configuring the `enable_rocksdb_prefix_filtering` and `enable_partitioned_index_filter` options in the nebula-storaged.conf file (see the configuration sketch after this thread). Please close it.

Do we need to rebuild the index after modifying `enable_rocksdb_prefix_filtering`?

Rebuilding the index is not needed; I think those two configurations are cache related. @huzhi915, please correct me if I'm wrong. Thanks.

Exactly.

We talked it over offline; it works now.
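
As referenced in the thread above, a minimal sketch of the relevant nebula-storaged.conf section follows. The flag names come from this thread; the values shown are assumptions rather than tested recommendations, and defaults may differ across NebulaGraph versions.

```
########## RocksDB memory tuning (sketch) ##########
# From this thread: use prefix bloom filters instead of whole-key filters.
--enable_rocksdb_prefix_filtering=true
# From this thread: partition index and filter blocks so they are paged
# through the block cache instead of staying pinned in memory.
--enable_partitioned_index_filter=true
```

These flags are read at process startup, so nebula-storaged needs a restart after the file is edited.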
Describe the bug (required)
After importing data, memory usage stays at a high level because of the bloom filter, which often results in OOM in some scenarios, such as a huge, full data import every time.
In our internal tests on an 8-billion-record dataset, switching from a whole-key bloom filter to a prefix bloom filter did not noticeably reduce memory usage.
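
For context on this comparison: whole-key and prefix bloom filters map to RocksDB's BlockBasedTableOptions, which nebula-storaged exposes through its `rocksdb_block_based_table_options` flag. A hypothetical sketch of how such a comparison could be configured, assuming the flag accepts standard RocksDB option names (the values are illustrative, not the settings used in the reported tests):

```
# Illustrative only: turn off whole-key bloom filters and let index/filter
# blocks live in the block cache, bounding filter memory by the cache size.
--rocksdb_block_based_table_options={"whole_key_filtering":"false","cache_index_and_filter_blocks":"true"}
# Prefix bloom filters additionally require prefix filtering to be enabled:
--enable_rocksdb_prefix_filtering=true
```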
Your Environments (required)
- OS: `uname -a`
- Compiler: `g++ --version` or `clang++ --version`
- CPU: `lscpu`
- Commit id: a3ffc7d8
How To Reproduce (required)
Steps to reproduce the behavior:
Expected behavior
Additional context