Cluster (v3.2.0) becomes unstable when more data is ingested into an existing space #4668
Comments
cc @Sophie-Xie
Is the replica factor 2? We suggest an odd replica factor: the storage replicas use Raft consensus and need a strict majority to elect a leader and commit writes, so with 2 replicas, losing either one stalls the partition. Please try again with a replica factor of 3 or 1.
Thanks for the reply. I will try with replica factor 3 and report back.
INT64 and STRING are both OK, I think.
After setting the replica factor to 3, the cluster seems to be more stable.
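For reference, the replica factor is fixed when a space is created, so applying the suggestion above means recreating the space. A minimal nGQL sketch, assuming a hypothetical space name `test_space` and illustrative `partition_num` and `vid_type` values:

```ngql
# Minimal sketch; replica_factor must not exceed the number of storaged hosts.
CREATE SPACE IF NOT EXISTS test_space (
    partition_num = 100,           # spread partitions across the storaged hosts
    replica_factor = 3,            # odd, so Raft keeps a quorum if one replica fails
    vid_type = FIXED_STRING(32)    # or INT64, per the comment above
);
```

With 9 storaged hosts, `replica_factor = 3` lets each partition keep a Raft quorum (2 of 3) after losing a single replica, which `replica_factor = 2` cannot.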
Please check the FAQ documentation before raising an issue
Describe the bug (required)
The cluster becomes unstable once the ingested data grows beyond these figures:
-- Vertices: 50 million
-- Edges: 3.3 billion
We are seeing this in our TEST cluster.
Your Environments (required)
Using Docker images.
lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 23
Model: 1
Model name: AMD EPYC 7551 32-Core Processor
Stepping: 2
CPU MHz: 1996.300
BogoMIPS: 3992.60
Hypervisor vendor: *****
Virtualization type: full
L1d cache: 32K
L1i cache: 64K
L2 cache: 512K
L3 cache: 8192K
NUMA node0 CPU(s): 0-7
NUMA node1 CPU(s): 8-15
Commit id (e.g. a3ffc7d8): Not sure how to get this.
How To Reproduce (required)
Steps to reproduce the behavior:
-- graphd VM count: 3 (16 vCPU, 128 GB, 2 X 2 TB Premium SSD NVMe Disks)
-- metad VM count: 3 (16 vCPU, 128 GB, 2 X 2 TB Premium SSD NVMe Disks)
-- storaged VM count: 9 (16 vCPU, 128 GB, 2 X 2 TB Premium SSD NVMe Disks)
-- Vertices: 50 million
-- Edges: 3.3 billion
Expected behavior
The cluster should remain stable with at least 1 billion vertices and 50 billion edges.
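Not part of the original report, but one way to watch for this instability while loading: `SHOW HOSTS` reports each storaged host's status and leader distribution, and a healthy cluster keeps every host ONLINE with leaders roughly balanced. A hedged sketch:

```ngql
# Quick health check during ingestion; hosts should stay ONLINE and the
# leader count should stay roughly even across the 9 storaged hosts.
SHOW HOSTS;

# If the leader distribution becomes skewed, start a leader-balancing job.
SUBMIT JOB BALANCE LEADER;
```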
Additional context