Low throughput benchmark test compared to vert.x #392
Comments
I'm reproducing this only with Reactor Netty using [...]
I wrote a Spring 1.5 + Netty test project that seems to be much faster, but using [...] Do you know if there's a way to work around it in Spring Boot 2 with WebFlux?
@duke-cliff Can you try this: https://github.com/violetagg/webflux-vertx-benchmark/tree/master/reactornetty
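(For readers without that repo handy: a minimal Reactor Netty echo server along those lines, using the 0.8.x-style API, might look like the sketch below. The port and the /text route are assumptions taken from the wrk command later in this thread, not from the linked code.)

import reactor.netty.DisposableServer;
import reactor.netty.http.server.HttpServer;

public final class ReactorNettyEcho {
    public static void main(String[] args) {
        // Echo the raw request body back; retain() keeps the incoming
        // ByteBufs alive until they have been written out.
        DisposableServer server = HttpServer.create()
                .port(8080)
                .route(routes -> routes.post("/text",
                        (req, res) -> res.send(req.receive().retain())))
                .bindNow();
        server.onDispose().block();
    }
}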
@duke-cliff Did you try the Reactor Netty example? Thanks a lot.
We're currently looking into this in Spring WebFlux. Please follow https://jira.spring.io/browse/SPR-17250.
I will also try Reactor Netty by the end of this week and will come back to you.
I re-ran all the different implementations again with the same setup. Reactor Netty gets about 50% of native Netty's throughput, and roughly 10x WebFlux's.
@duke-cliff Did you run it several times or just once? On my side I do not see such a big difference between Reactor Netty, Netty, and Vert.x. [results for Reactor Netty, Netty, and Vert.x not captured]
The server information was wrong above: it's actually running on c5.2xlarge (8 CPU / 16 GB). Maybe you hit some other bottleneck (CPU/memory/network I/O). [results for Reactor Netty, Netty, and Vert.x not captured]
@violetagg Have you considered setting up a benchmark over at TechEmpower (or would you support an effort to do so)?
@duke-cliff What configuration are you using? Default? 1 thread? Epoll?
@smaldini On the Reactor project? I just ran the default. On WebFlux I tried both 1 thread and 16 threads.
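(As a side note on that configuration question: in Reactor Netty 0.8.x+ the number of event-loop threads can be pinned explicitly through LoopResources, which makes the setup comparable to a single-event-loop Vert.x server. A hedged sketch, with the pool name and thread count chosen purely for illustration:)

import reactor.netty.http.server.HttpServer;
import reactor.netty.resources.LoopResources;

public final class SingleLoopServer {
    public static void main(String[] args) {
        // Arguments: thread-name prefix, worker count, daemon threads.
        // Netty picks the epoll transport automatically on Linux when
        // netty-transport-native-epoll is on the classpath.
        LoopResources singleLoop = LoopResources.create("bench", 1, true);

        HttpServer.create()
                .port(8080)
                .runOn(singleLoop)
                .route(routes -> routes.post("/text",
                        (req, res) -> res.send(req.receive().retain())))
                .bindNow()
                .onDispose()
                .block();
    }
}

(In 0.8.x+ the worker count can also be set globally with the reactor.netty.ioWorkerCount system property.)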
I have the same problem. I want to use WebFlux, but I can't, because performance decreases as requests increase.
@violetagg Is this still being looked at? The low performance is really a barrier to our adoption. I think the comparison against Vert.x is a really good point of reference. Another way is to see WebFlux rank high in these benchmarks: https://www.techempower.com/benchmarks/#section=data-r17&hw=ph&test=plaintext&l=zik0vz-1
Hi all, can you test with the newest binaries for Reactor Netty and Spring Framework? Thanks.
@violetagg Spring Cloud Gateway (Greenwich SR1 and snapshot) stops working under heavy load with this release.
What does "stops working" mean?
What I see is TPS dropping from 1200 to 200, and then PCF rebooting the container because it runs out of memory. I do not see this behavior with SR6 (all else being the same).
@dave-fl Did you get a chance to test against SR8 or the Spring Boot 2.1.5 mentioned above? I haven't tested on my side; I'm wondering if there's any improvement in this version, as @violetagg suggested.
@duke-cliff @dave-fl I believe we fixed the issues from comment #392 (comment).
We made several improvements to the API and the internal implementation in 0.8.x, 0.9.x, and 1.0.x.
For better understanding, I put our test programs, both in Vert.x and WebFlux, here: https://github.com/uken/webflux-vertx-benchmark.
I am doing a load test comparison of WebFlux (Spring Boot 2) vs. Vert.x.
The test case is a very simple text/JSON echo service. So far, from the test results, WebFlux's throughput is much lower than Vert.x's (10x lower with annotated controllers, 8x lower with functional endpoints). One interesting thing I found is that under the same load, Vert.x uses essentially a single thread (its event loop) with very low CPU usage, while WebFlux by default uses all of the NIO threads and its throughput is still not even comparable.
I thought both WebFlux and Vert.x were based on Netty (epoll on Linux); I hope someone can show me what I can tweak to bring the results closer.
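(The repo above has the actual programs; purely as hedged sketches of what such an echo service reduces to on each side — ports, paths, and class names here are illustrative assumptions — the two handlers under test come down to roughly the following. First the WebFlux functional-endpoint variant:)

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.reactive.function.server.RouterFunction;
import org.springframework.web.reactive.function.server.ServerResponse;

import static org.springframework.web.reactive.function.server.RequestPredicates.POST;
import static org.springframework.web.reactive.function.server.RouterFunctions.route;

@Configuration
public class EchoRoutes {
    // Functional-endpoint style: echo the request body back as-is.
    // (The annotated style uses @RestController/@PostMapping instead.)
    @Bean
    public RouterFunction<ServerResponse> echo() {
        return route(POST("/text"),
                req -> ServerResponse.ok()
                        .body(req.bodyToMono(String.class), String.class));
    }
}

(And a Vert.x equivalent:)

import io.vertx.core.Vertx;

public class VertxEcho {
    public static void main(String[] args) {
        // A single event loop serves all connections by default, which
        // matches the one-busy-thread observation above.
        Vertx.vertx().createHttpServer()
                .requestHandler(req -> req.bodyHandler(body ->
                        req.response()
                           .putHeader("Content-Type", "application/json")
                           .end(body)))
                .listen(8080);
    }
}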
The client I use is wrk/wrk2. The options are:
wrk -d60s -t30 -c100 -R 500000 -s post.lua http://xxxx/text
(the -R rate argument requires wrk2; plain wrk does not take it)
$ more post.lua
wrk.method = "POST"
wrk.body = "{\"key\":\"entity_1\",\"point\":1}"
wrk.headers["Content-Type"] = "application/json"