Task Summary - HugeGraph Serialization Optimization / Performance Optimization
Task Description:
This task focuses on optimizing HugeGraph's serialization process and improving the overall performance of the system. It covers a range of strategies and techniques to make serialization more efficient and effective.
Tasks to be Completed:
Updated on 2023-07-13
1. Support constant enumeration encoding for frequently occurring property values (a sketch follows this list).
2. Introduce lazy deserialization for BackendProps by wrapping the original bytes (a sketch follows this list).
3. Replace individual byte arrays with byte buffers to reduce memory fragmentation during serialization (a sketch follows this list).
4. Optimize off-heap memory usage for Gremlin dedup/emit steps.
5. Fine-tune the caching structure for adjacency edges, e.g. use the vertex Id as the cache key to improve the hit rate (a sketch follows this list).
6. Improve the efficiency of adjacency-edge queries that filter by target vertex Id. @msgui (done: feat: optimising adjacency edge queries #2242)
7. Optimize adjacency-edge queries: skip the source vertex (return an empty result directly) when the edge label does not link the source vertex's label. @Z-HUANT (done: feat(api): optimize adjacent-edges query #2408)
8. Replace the current serialization framework with the high-performance serialization framework Fury (a sketch follows this list).
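For task 1, a minimal sketch of the constant-enumeration idea: property values that occur very frequently are written as a short fixed code instead of their full string bytes. The marker byte, the code table, and the class name ConstantValueCodec are illustrative assumptions, not HugeGraph's actual wire format.

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Sketch only: known high-frequency values shrink to a 2-byte code,
// everything else stays raw UTF-8.
public class ConstantValueCodec {

    // 0xFE never appears in valid UTF-8, so encoded constants cannot be
    // confused with raw string bytes.
    private static final byte ENUM_MARKER = (byte) 0xFE;

    private static final Map<String, Byte> VALUE_TO_CODE = new HashMap<>();
    private static final Map<Byte, String> CODE_TO_VALUE = new HashMap<>();

    static {
        // Pre-registered high-frequency values (example entries only)
        register((byte) 0, "true");
        register((byte) 1, "false");
        register((byte) 2, "unknown");
    }

    private static void register(byte code, String value) {
        VALUE_TO_CODE.put(value, code);
        CODE_TO_VALUE.put(code, value);
    }

    public static byte[] encode(String value) {
        Byte code = VALUE_TO_CODE.get(value);
        if (code != null) {
            return new byte[]{ENUM_MARKER, code};
        }
        return value.getBytes(StandardCharsets.UTF_8);
    }

    public static String decode(byte[] bytes) {
        if (bytes.length == 2 && bytes[0] == ENUM_MARKER) {
            return CODE_TO_VALUE.get(bytes[1]);
        }
        return new String(bytes, StandardCharsets.UTF_8);
    }
}
```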
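For task 2, a minimal sketch of lazy deserialization: the wrapper keeps the original serialized bytes and only runs the property decoder on first access, so entries that are never inspected are never parsed. LazyProps and the decoder function are placeholder names; HugeGraph's real BackendProps/serializer types will differ.

```java
import java.util.Map;
import java.util.function.Function;

// Sketch only: wrap the raw bytes and decode on demand.
public class LazyProps {

    private final byte[] rawBytes;                          // original serialized form
    private final Function<byte[], Map<String, Object>> decoder;
    private volatile Map<String, Object> decoded;           // filled on first access

    public LazyProps(byte[] rawBytes,
                     Function<byte[], Map<String, Object>> decoder) {
        this.rawBytes = rawBytes;
        this.decoder = decoder;
    }

    /** Decode on first access; later calls reuse the cached result. */
    public Map<String, Object> properties() {
        Map<String, Object> props = this.decoded;
        if (props == null) {
            synchronized (this) {
                props = this.decoded;
                if (props == null) {
                    props = this.decoder.apply(this.rawBytes);
                    this.decoded = props;
                }
            }
        }
        return props;
    }

    /** The raw bytes can be written back without ever deserializing them. */
    public byte[] rawBytes() {
        return this.rawBytes;
    }
}
```

This is what makes the wrapping approach attractive for read-write paths that never actually touch the properties: the original bytes can be passed through unchanged.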
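For task 3, a minimal sketch of writing several fields into one reusable ByteBuffer instead of allocating a separate byte[] per field; the buffer size and the record layout here are arbitrary examples, not HugeGraph's actual entry format.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Sketch only: one reusable buffer per writer instead of many short-lived arrays.
public class BufferWriter {

    private final ByteBuffer buffer = ByteBuffer.allocate(64 * 1024);

    public ByteBuffer writeEntry(long id, String name, int weight) {
        this.buffer.clear();
        this.buffer.putLong(id);
        byte[] nameBytes = name.getBytes(StandardCharsets.UTF_8);
        this.buffer.putShort((short) nameBytes.length);   // length-prefixed string
        this.buffer.put(nameBytes);
        this.buffer.putInt(weight);
        this.buffer.flip();                                // ready to be read or written out
        return this.buffer;
    }
}
```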
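For task 5, a minimal sketch of an adjacency-edge cache keyed by the source vertex Id, so that all cached edges of a vertex are looked up and evicted together. The simple LRU policy built on LinkedHashMap is only for illustration; HugeGraph's actual cache layer is more elaborate.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch only: vertex Id -> list of adjacent edges, with LRU eviction.
public class EdgeCache<K, E> {

    private final int capacity;
    private final Map<K, List<E>> cache;

    public EdgeCache(int capacity) {
        this.capacity = capacity;
        // Access-ordered LinkedHashMap gives a simple LRU policy
        this.cache = new LinkedHashMap<K, List<E>>(capacity, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, List<E>> eldest) {
                return size() > EdgeCache.this.capacity;
            }
        };
    }

    public synchronized List<E> edgesOf(K vertexId) {
        return this.cache.get(vertexId);
    }

    public synchronized void update(K vertexId, List<E> edges) {
        this.cache.put(vertexId, edges);
    }
}
```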
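For task 8, a sketch of the typical Fury usage pattern (builder, class registration, serialize/deserialize). The package name differs across Fury versions (io.fury in the initial releases, org.apache.fury after the donation to the ASF), and HugeVertexSample is a stand-in class rather than a real HugeGraph type.

```java
import org.apache.fury.Fury;
import org.apache.fury.config.Language;

public class FurySerializationDemo {

    // Stand-in class for illustration only
    public static class HugeVertexSample implements java.io.Serializable {
        public long id;
        public String label;
        public String name;
    }

    public static void main(String[] args) {
        // Note: a single Fury instance is not thread-safe; production code
        // would use a thread-safe wrapper or per-thread instances.
        Fury fury = Fury.builder()
                        .withLanguage(Language.JAVA)
                        .requireClassRegistration(true)   // safer than deserializing arbitrary classes
                        .build();
        fury.register(HugeVertexSample.class);            // registration avoids writing class names

        HugeVertexSample vertex = new HugeVertexSample();
        vertex.id = 1L;
        vertex.label = "person";
        vertex.name = "marko";

        byte[] bytes = fury.serialize(vertex);
        HugeVertexSample restored = (HugeVertexSample) fury.deserialize(bytes);
        System.out.println(restored.name);                // prints "marko"
    }
}
```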
How to Participate:
Please reply to this issue with the task number you want to take and your PR number (the PR can stay in draft mode for now), and we will mark it as "work in progress 🏗" for you.
Suggestion:
Before submitting a pull request, it is recommended to open an issue first where you can share the difficulties you encountered or your proposed approach. This will help the task get completed smoothly.