Very low throughput - Spring Webflux Netty vs Gatling #650
Comments
Hi @sercasti,
Thanks for your quick reply!
To demonstrate the benefits you need to introduce some latency due to I/O (e.g. a remote call) in the handling of the request.
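As an illustration, such I/O latency could come from a non-blocking remote call with WebClient (a sketch; the base URL and path are placeholders, not anything from this report):

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

// Sketch: introduce real I/O latency with a non-blocking remote call; the
// event loop is freed up while the remote response is in flight.
@RestController
public class RemoteController {

    private final WebClient client = WebClient.create("http://example.org"); // placeholder URL

    @GetMapping("/remote")
    public Mono<String> remote() {
        return client.get()
                     .uri("/data") // placeholder path
                     .retrieve()
                     .bodyToMono(String.class);
    }
}
```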
@sercasti Can you report back with your findings? Back when I was evaluating this, the performance was really low compared to Vert.x, but I'm still hoping it will improve.
@gihad I keep trying, but I'm getting the same results: https://github.com/sercasti/demoReactive/blob/PI/src/main/java/com/example/demo/controller/Controller.java
@sercasti but you didn't move the computation to a separate thread pool.
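For reference, a minimal sketch of shifting work off the event loop with Reactor schedulers (Schedulers.elastic() here matches the Reactor generation in this report; class and method names are illustrative):

```java
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

class OffLoopExample {
    // Sketch: run a blocking/CPU-heavy computation on a worker pool so the
    // Netty event loop stays free to accept and dispatch new connections.
    Mono<String> compute() {
        return Mono.fromCallable(this::expensiveWork)   // deferred until subscription
                   .subscribeOn(Schedulers.elastic());  // runs off the event loop
    }

    // Placeholder for whatever computation the benchmarked endpoint performs.
    private String expensiveWork() {
        return "done";
    }
}
```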
@sercasti Let me explain in detail what you are observing. Now, can you execute the following scenarios? (I hope that when you measure, you use two different machines/VMs for the client and the server, in order to get realistic results.)
Regards,
Violeta
Your limit will always be how fast your thread pool can complete work, plus the time for the system to context-switch to join when the work is ready. You should be publishing on the worker thread pool and not interrupting the event loop. I would add some logic to allow some results to be cached, or perform some non-blocking work. You can use a fixed delay of, say, 50 ms.
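As an illustration, a fixed non-blocking delay of 50 ms could look like this (a sketch, not the commenter's exact suggestion; delayElement schedules the value on Reactor's timer instead of parking a thread):

```java
import java.time.Duration;
import reactor.core.publisher.Mono;

class DelayExample {
    // Sketch: emit a value after 50 ms without blocking any thread; the delay
    // is driven by Reactor's timer, not by Thread.sleep on the event loop.
    Mono<String> delayed() {
        return Mono.just("Hello World")
                   .delayElement(Duration.ofMillis(50));
    }
}
```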
Closing for now unless there are some additional details you want to share, @sercasti.
@gihad This issue was closed based on the scenario that @sercasti tried to measure.
@gihad Part of the problem is already addressed in WebFlux for 5.2 (content-negotiation optimizations, encoders/decoders, etc.), and we have a few more improvements coming, some in 0.8 and the rest in 0.9. One area where we are particularly exposed in benchmarks (like #654) is single-body responses: we are doing multiple flushes for what could be a single network flush (chunked encoding: flush headers, then flush the body, then flush the last HTTP chunk). I noticed locally that by detecting those single-body responses and shipping them with one flush plus a Content-Length header, we are pretty similar to Vert.x in latency and req/s (comparing Vert.x against us with the sendObject optimization, one flush, off the 0.8 branch). We'll probably include that change in our next versions while continuing with various other optimizations.
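For illustration, the single-flush idea looks roughly like this at the Netty level (a sketch of the concept only, not the actual reactor-netty change):

```java
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpUtil;
import io.netty.handler.codec.http.HttpVersion;
import java.nio.charset.StandardCharsets;

final class SingleFlushExample {
    // Sketch: a known single-body response can be sent as one FullHttpResponse
    // with an explicit Content-Length, so headers and body leave in a single
    // flush instead of the header/body/last-chunk flushes of chunked encoding.
    static void write(ChannelHandlerContext ctx, String body) {
        FullHttpResponse response = new DefaultFullHttpResponse(
                HttpVersion.HTTP_1_1, HttpResponseStatus.OK,
                Unpooled.copiedBuffer(body, StandardCharsets.UTF_8));
        HttpUtil.setContentLength(response, response.content().readableBytes());
        ctx.writeAndFlush(response); // one flush for the whole response
    }
}
```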
Thanks @smaldini, this is very encouraging. |
Expected behavior
A vanilla Hello World example of Spring WebFlux on Netty should easily outperform Spring WebFlux on Tomcat.
Actual behavior
After 900 active Gatling users, Netty starts throwing ConnectTimeoutException.
The same test against the same controller, but using WebFlux over Tomcat, succeeds 100% of the time. Maybe I'm missing some configuration to make Netty scale, but I couldn't find it.
Steps to reproduce
Use this project, or start a vanilla Netty Reactor service: https://github.com/sercasti/demoReactive
All it does is start a REST endpoint with a handler along the following lines.
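A representative sketch (the exact code is in the linked repository; the class name and mapping path here are illustrative):

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Mono;

// Sketch of a vanilla "Hello World" WebFlux endpoint of the kind under test:
// it returns immediately on the event loop, with no blocking work or I/O.
@RestController
public class Controller {

    @GetMapping("/hello")
    public Mono<String> hello() {
        return Mono.just("Hello World");
    }
}
```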
Then execute this Gatling test:
https://github.com/sercasti/gatlingStressTest/blob/master/src/test/scala/baeldung/RecordedSimulation.scala
You will see that after the first 1000 requests are successfully served, it starts to blow up.
Simulation results folder: https://github.com/sercasti/gatlingStressTest/tree/master/target/gatling/recordedsimulation-20190318004124114
Reactor Netty version
0.8.5
JVM version (e.g. java -version)
1.8.0_181
OS version (e.g. uname -a)
Mojave, Darwin Kernel Version 18.2.0