
k6 can run only ~888 VU maximum with 4kb objects against one node of Object storage cluster from load node #42

Open
jingerbread opened this issue Nov 22, 2022 · 1 comment
Labels
bug (Something isn't working), I3 (Minimal impact), performance (More of something per second), S4 (Routine), U4 (Nothing urgent)

Comments


jingerbread commented Nov 22, 2022

k6 can run at most ~888 VUs against one node of the Object storage cluster (the exact number varies from run to run).
Above that it starts throwing timeout errors. This was discovered during performance testing, and it prevents us from driving a higher number of VUs per cluster node from a single load node.

/home/service/k6 run -e DURATION=200 -e WRITE_OBJ_SIZE=4 -e WRITERS=889 -e READERS=0 -e DELETERS=0 -e GRPC_ENDPOINTS=10.78.69.106:8080 -e PREGEN_JSON=/home/service/grpc_4kb_600.json /home/service/scenarios/grpc.js

Run     [======================================] setup()
write   [--------------------------------------]
ERRO[0019] GoError: dial endpoint: open gRPC connection: gRPC dial: context deadline exceeded
	at reflect.methodValueCall (native)
	at file:///home/service/scenarios/grpc.js:20:35(127)  hint="script exception"

Run     [======================================] setup()
write   [--------------------------------------]

https://github.com/grafana/k6/blob/46b4847179a8d9f942e92274c438e49ae289507a/core/local/local.go#L145
The 5-second timeout is hard-coded in xk6-neofs, native.go: prmDial.SetTimeout(5 * time.Second)
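For context, a minimal hypothetical sketch of how that hard-coded dial timeout could be made configurable; only the prmDial.SetTimeout call itself comes from xk6-neofs, while the DIAL_TIMEOUT environment variable and the helper below are assumptions for illustration:

package main

import (
	"fmt"
	"os"
	"time"
)

// dialTimeout returns the gRPC dial timeout, overridable via a hypothetical
// DIAL_TIMEOUT environment variable (e.g. "30s"). It falls back to the 5s
// value that native.go currently hard-codes.
func dialTimeout() time.Duration {
	if v := os.Getenv("DIAL_TIMEOUT"); v != "" {
		if d, err := time.ParseDuration(v); err == nil {
			return d
		}
	}
	return 5 * time.Second
}

func main() {
	fmt.Println("dial timeout:", dialTimeout())
	// In native.go this would replace the literal: prmDial.SetTimeout(dialTimeout())
}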

After increasing the timeout we get another error; it seems to hang after initializing VU #889:

time="2022-11-21T20:18:44Z" level=error msg="GoError: dial endpoint: open gRPC connection: gRPC dial: context deadline exceeded\n\tat reflect.methodValueCall (native)\n\tat file:///home/service/scenarios/grpc.js:20:35(127)\n" hint="error while initializing VU #889 (script exception)"
## Steps to Reproduce (for bugs)

We also need to clarify the requirements for how many threads (VUs) a single cluster node should be able to handle.

Contributor

fyrchik commented Dec 28, 2022

I was able to get it running on my laptop with WRITERS=2000:

dzeta@wpc ~/r/xk6-neofs (master) [105]> ./k6 run -e DURATION=259200 -e WRITE_OBJ_SIZE=10000 -e WRITERS=2000 -e READERS=0 -e DELETERS=0 -e GRPC_ENDPOINTS=s03.neofs.devenv:8080 -e PREGEN_JSON=../grpc.json scenarios/grpc.js
.....
  execution: local
     script: scenarios/grpc.js
     output: -

  scenarios: (100.00%) 1 scenario, 2000 max VUs, 72h0m5s max duration (incl. graceful stop):
           * write: 2000 looping VUs for 72h0m0s (exec: obj_write, gracefulStop: 5s)

INFO[0004] Pregenerated containers:       1              source=console
INFO[0004] Pregenerated read object size: 1024 Kb        source=console
INFO[0004] Pregenerated total objects:    1              source=console
INFO[0004] Reading VUs:                   0              source=console
INFO[0004] Writing VUs:                   2000           source=console
INFO[0004] Deleting VUs:                  0              source=console
INFO[0004] Total VUs:                     2000           source=console

running (0d00h00m11.7s), 2000/2000 VUs, 13 complete and 0 interrupted iterations
write   [--------------------------------------] 2000 VUs  0d00h00m11.7s/72h0m0s

So I think the issue is either related to the object size (10000 in my case, 4 in yours) or to some system limits on the load/storage node. @jingerbread, can you reproduce it with a bigger WRITE_OBJ_SIZE?
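For reference, reproducing with a larger object size would be the original command with only WRITE_OBJ_SIZE raised, e.g. (same endpoints and pregen file as in the report; the pregen JSON may need to be regenerated for the new size):

/home/service/k6 run -e DURATION=200 -e WRITE_OBJ_SIZE=10000 -e WRITERS=889 -e READERS=0 -e DELETERS=0 -e GRPC_ENDPOINTS=10.78.69.106:8080 -e PREGEN_JSON=/home/service/grpc_4kb_600.json /home/service/scenarios/grpc.js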
