Expected behavior
Better performance.
What strategies does your team use to optimize memory usage and transfer speed when building a graph from large datasets?
Actual behavior
Memory usage keeps rising.
Data transfer becomes very slow in the later stages of loading.
Steps to reproduce the problem
Total data size is 150 GB. The edge data for the two labels is 78 GB and 64 GB respectively, about 700 million edges in total; the vertex data is 3 GB, about 65 million vertices.
Loader command: bin/hugegraph-loader -g hugegraph -f ethereum/struct.json -s ethereum/schema.groovy -h 192.168.1.2 -p 7878
Edge data schema:
{"source_name":"xxxxxxxxxxxx","target_name":"xxxxxxxxx","name":"xxxxxxxxxxxxx","value":xxxx}
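One generic way to keep the loader's working set bounded (a sketch of a common workaround, not a hugegraph-loader feature) is to pre-split huge newline-delimited input files into fixed-size chunks and load them batch by batch, so memory does not grow with the total input size. A minimal Python sketch, assuming one JSON record per line; the function name and chunk naming scheme are illustrative:

```python
import os


def split_file(path, lines_per_chunk, out_dir):
    """Split a large newline-delimited file into fixed-size chunks.

    Loading smaller chunks sequentially keeps the loader's working set
    bounded instead of growing with the total input size.
    Returns the list of chunk file paths in order.
    """
    os.makedirs(out_dir, exist_ok=True)
    chunk_paths = []
    chunk, idx = [], 0

    def flush(lines, index):
        # Write one chunk file, e.g. part-00000.json, part-00001.json, ...
        out = os.path.join(out_dir, f"part-{index:05d}.json")
        with open(out, "w", encoding="utf-8") as dst:
            dst.writelines(lines)
        chunk_paths.append(out)

    with open(path, "r", encoding="utf-8") as src:
        for line in src:
            chunk.append(line)
            if len(chunk) >= lines_per_chunk:
                flush(chunk, idx)
                chunk, idx = [], idx + 1
        if chunk:  # write any trailing partial chunk
            flush(chunk, idx)
    return chunk_paths
```

Each chunk could then be pointed at by its own input entry and loaded in a separate loader run, so a failure or slowdown late in the job only affects the current chunk.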
Status of loaded data
Vertex/Edge summary
Specifications of environment