
Split up tight grouping at top of JSON serialization by testing with 2048 connections #4480

Closed
nathantippy opened this issue Feb 24, 2019 · 3 comments

Comments

@nathantippy
Contributor

OS (Please include kernel version)

Expected Behavior

Actual Behavior

Steps to reproduce behavior

Other details and logs

@nathantippy
Contributor Author

The JSON serialization test runs with 512 connections; increasing the connection count to 2048 would help show the differences between the solutions that are currently all grouped near each other at the top of the results.

@zloster
Contributor

zloster commented Mar 25, 2019

Some time ago I was also wondering about the cause of the saturation in the JSON results.
But I've ruled out the number of connections because of the information discussed in #3538. Specifically this one:

Locally I think I got 9.8 Gbit/s with iperf and measured a max of 1.5M packets per second. Any benchmark that is over 1.5M is obviously and correctly using pipelining.

At 256 and 512 concurrency, ULib is very close to the above limit: ~1.28M and ~1.36M RPS. I'm assuming the response fits in one Ethernet packet. The data is here: https://www.techempower.com/benchmarks/#section=test&runid=50068a69-f68c-44fc-b8f7-2d44567e8c78&hw=ph&test=json&l=ziimf3-7&f=0-0-9zldt-13ydj4-9zlds-jz6sg-0-4fti68-0-0

The above numbers align very well with the results from the Caching test: 1.37M RPS at 256 concurrency with 1 object extracted from the cache.
You'll have to dig into the results.json file; I don't know if it's possible to display this result in the web viewer.

So to wrap it up: IMO increasing the number of connections will not help here, because the frameworks are currently saturating the network infrastructure's capacity to transmit response packets.
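The saturation argument above can be sketched as a back-of-the-envelope calculation. This is a minimal illustration, assuming each JSON response fits in a single Ethernet frame; the response and header sizes below are illustrative guesses, not measured values from the benchmark:

```python
# Back-of-the-envelope ceiling on RPS when the NIC's packet rate is the
# bottleneck (~1.5M packets/s, as measured locally with iperf above).

NIC_PACKET_RATE = 1.5e6   # max packets per second the link can push
JSON_BODY_BYTES = 27      # e.g. b'{"message":"Hello, World!"}' (assumed)
HEADER_BYTES = 130        # rough size of typical HTTP response headers (assumed)
MTU = 1500                # standard Ethernet MTU
TCP_IP_OVERHEAD = 40      # typical TCP + IPv4 header bytes per packet

response_bytes = JSON_BODY_BYTES + HEADER_BYTES
payload_per_packet = MTU - TCP_IP_OVERHEAD

# Ceiling division: how many packets one response needs.
packets_per_response = -(-response_bytes // payload_per_packet)
max_rps = NIC_PACKET_RATE / packets_per_response

print(f"response ~{response_bytes} B -> {packets_per_response} packet(s) each")
print(f"theoretical ceiling: ~{max_rps / 1e6:.2f}M RPS")
```

With a ~157-byte response fitting in one packet, the ceiling is the packet rate itself, ~1.5M RPS, which is why the observed ~1.28M–1.36M RPS figures sit just under it regardless of concurrency.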

@nathantippy
Contributor Author

Good answer, thank you.
