Benchmark accuracy #11
Let me comment on your points in order.
---
@ohler55 Great response, thanks for all the details and the positive feedback. I will need to come back to you with specific answers, but just generally:
---
I thought I'd add some notes while I'm looking through the code.
Agoo doesn't implement the same benchmark as the other Rack-compatible servers, because it serves from a static directory by default. Whether or not this reflects the real world (e.g. does Passenger do this by default too?) should probably be discussed, but at the very least, I think we should use the SAME rackup file for all servers.
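To make that concrete, a shared rackup file only needs to define a single Rack app that every server loads. The app below is illustrative, not the project's actual benchmark app; a `config.ru` would contain the same lambda followed by `run app`.

```ruby
# A minimal Rack-compatible app (illustrative only). If every server under
# test loads the same config.ru containing this app, they all serve
# identical responses and the comparison is apples-to-apples.
app = lambda do |env|
  [200, { "Content-Type" => "text/plain" }, ["Hello World"]]
end

# A Rack app is just an object responding to #call with the env hash and
# returning [status, headers, body]:
status, headers, body = app.call({})
```

Serving this through each server's native entry point (rather than a static directory, as Agoo does by default) keeps the measured code path the same everywhere.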
It's not clear to me why we are using `perfer` vs `wrk` and `ab` or a variety of other testing tools. `wrk` can definitely push a large number of requests. I'll be interested to see the results I get with `perfer`.
The `puma` benchmark uses the `rackup` command. At least in the case of `falcon`, the `rackup` command imposes severe performance limitations. It might not be the same for `puma`, but I don't know. The best way to test `puma` would be in cluster mode.

If we used `wrk` to perform the test, we could also report on latency, which is a useful metric. Throughput and latency are related, and both are useful numbers to report.

The benchmark page doesn't feel very impartial. I think we should make the benchmark results as objective as possible. There should be a caveats section so that people know the limitations of such benchmarks.