I tested the http_concurrent_conn_calls benchmark on a machine with 32 cores, and I do indeed see a huge regression with jsonrpsee, about 98%!
I ran it in this order:
In run 2 I observe a performance gain of 98% in sync/512, sync/1024, async/512 and async/1024, and similar performance in the other cases.
This means that jsonrpsee is twice as slow as jsonrpc in the sync/512, sync/1024, async/512 and async/1024 cases.
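For context, the sync/512 and async/512 cases presumably exercise 512 concurrent HTTP connections each issuing a call. The sketch below is not the actual criterion benchmark from the repository; it is a minimal std-only illustration of what such a concurrent-connection measurement does, and the address, method name and request body are assumptions.

```rust
use std::io::{Read, Write};
use std::net::TcpStream;
use std::thread;
use std::time::Instant;

fn main() {
    // Number of concurrent connections, mirroring the `sync/512` case above.
    let conns = 512;
    // Assumed server address and JSON-RPC body; adjust for a real server.
    let addr = "127.0.0.1:9933";
    let body = r#"{"jsonrpc":"2.0","id":1,"method":"say_hello","params":[]}"#;

    let start = Instant::now();
    let handles: Vec<_> = (0..conns)
        .map(|_| {
            thread::spawn(move || {
                let mut stream = TcpStream::connect(addr).expect("connect");
                let request = format!(
                    "POST / HTTP/1.1\r\nHost: {addr}\r\nContent-Type: application/json\r\n\
                     Content-Length: {}\r\nConnection: close\r\n\r\n{}",
                    body.len(),
                    body
                );
                stream.write_all(request.as_bytes()).expect("write");
                // Read the full response; `Connection: close` makes the server
                // end the stream once the reply is complete.
                let mut response = String::new();
                stream.read_to_string(&mut response).expect("read");
            })
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
    println!("{conns} concurrent calls took {:?}", start.elapsed());
}
```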
Closed by #718, feel free to try it out again @librelois
A follow-up PR is coming soon where it will be possible to configure the backlog in the HTTP server.
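For illustration only, this is not jsonrpsee's actual configuration API: the sketch below shows one common way to hand an HTTP server a listener with a custom accept backlog, using the socket2 crate. The backlog value and address are assumptions.

```rust
use socket2::{Domain, Protocol, Socket, Type};
use std::net::{SocketAddr, TcpListener};

fn listener_with_backlog(addr: SocketAddr, backlog: i32) -> std::io::Result<TcpListener> {
    let socket = Socket::new(Domain::IPV4, Type::STREAM, Some(Protocol::TCP))?;
    socket.set_reuse_address(true)?;
    socket.bind(&addr.into())?;
    // `listen` takes the backlog: the kernel queue of connections that have
    // completed the TCP handshake but have not yet been accepted. A small
    // backlog can throttle bursts like the 512/1024-connection cases above.
    socket.listen(backlog)?;
    Ok(socket.into())
}

fn main() -> std::io::Result<()> {
    let listener = listener_with_backlog("127.0.0.1:9933".parse().unwrap(), 1024)?;
    println!("listening on {}", listener.local_addr()?);
    Ok(())
}
```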