

Where is the 800,000 limit on the number of records returned by a Gremlin query configured? #84

Closed
fisherinbox opened this issue Oct 9, 2018 · 2 comments

@fisherinbox

Expected behavior

{type something here...}

Actual behavior

{type something here...}

Steps to reproduce the problem

  1. {step 1}
  2. {step 2}
  3. {step 3}

Status of loaded data

Vertex/Edge summary

  • loaded vertices amount: {like 10 million}
  • loaded edges amount: {like 20 million}
  • loaded time: {like 200s}

Vertex/Edge example

{type something here...}

Schema (VertexLabel, EdgeLabel, IndexLabel)

{type something here...}

Specifications of environment

  • hugegraph version: {like v0.7.4}
  • operating system: {like centos 7.4, 32 CPUs, 64G RAM}
  • hugegraph backend: {like cassandra 3.10, cluster with 20 nodes, 3 x 1TB HDD disk each node}
javeme (Contributor) commented Oct 9, 2018

This is the 800,000-record limit on the results of a single query.

Background:
Returning too many records can cause memory problems and the like, so our recommendation is to add as many query conditions as possible to limit the result count; an overly large final result set is also very hard for a person to analyze.

Of course, if you do want to retrieve all the data, you can do so through the paging API. Please refer to the paging-query documentation.
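The paging pattern described above can be sketched generically: instead of one query that would hit the 800,000-record cap, results are drained in fixed-size pages until a short page signals the end. The class and method names below are illustrative stand-ins (this is not the actual hugegraph-client API; see the paging-query documentation for the real calls):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of page-based retrieval, assuming each request returns at
// most `limit` records and the position token advances by the batch size.
public class PagedQuery {

    // Simulated backend: returns one page of results starting at `offset`.
    // In a real setup this would be a paged query against the server.
    static List<Integer> fetchPage(List<Integer> all, int offset, int limit) {
        int end = Math.min(offset + limit, all.size());
        return new ArrayList<>(all.subList(offset, end));
    }

    // Drain all results page by page instead of issuing one huge query.
    static List<Integer> fetchAll(List<Integer> all, int limit) {
        List<Integer> results = new ArrayList<>();
        int offset = 0; // start position
        while (true) {
            List<Integer> batch = fetchPage(all, offset, limit);
            results.addAll(batch);
            if (batch.size() < limit) {
                break; // a short page means there is no more data
            }
            offset += batch.size(); // advance to the next page
        }
        return results;
    }

    public static void main(String[] args) {
        List<Integer> data = new ArrayList<>();
        for (int i = 0; i < 2500; i++) data.add(i);
        // 3 requests of up to 1000 records each: 1000 + 1000 + 500
        List<Integer> out = fetchAll(data, 1000);
        System.out.println(out.size()); // prints 2500
    }
}
```

Keeping each page well under the server-side cap bounds memory use on both ends; the same loop shape applies whether the "token" is a numeric offset or an opaque page string returned by the server.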

yuyang0 commented Oct 10, 2018

@javeme How do I do paged queries with the Java client?

VGalaxies pushed a commit that referenced this issue Aug 3, 2024
refact(rpc): merge rpc module into commons