Change flow control for writers #2698
You should run some benchmarks before the 3.0 release.
BTW, raw results for the round 15 TechEmpower benchmarks are ready: https://tfb-status.techempower.com/
aiohttp consistently outperforms sanic :) despite the fact that sanic is super fast.
@fafhrd91 is there a page with human-readable results?
A write buffer can help with performance. What I do in actix is write everything to a buffer and flush that buffer only once per event loop iteration. So, for example, if you have multiple requests in the processing queue, you can write all the responses at once before pushing data to the socket. Only with this optimization can actix process 1M requests a second.
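The trick described above can be sketched in asyncio terms. This is a minimal, hypothetical illustration (not the actix or aiohttp implementation): `write()` only appends to a buffer and schedules a single flush with `call_soon`, so every write issued during the same loop iteration shares one `transport.write`.

```python
import asyncio


class CoalescingWriter:
    """Sketch: buffer writes, flush once per event loop iteration."""

    def __init__(self, transport):
        self._transport = transport
        self._buf = bytearray()
        self._flush_scheduled = False

    def write(self, data: bytes) -> None:
        self._buf += data
        if not self._flush_scheduled:
            self._flush_scheduled = True
            # call_soon runs after the current callback batch, so every
            # write() issued during this iteration lands in one flush
            asyncio.get_running_loop().call_soon(self._flush)

    def _flush(self) -> None:
        self._flush_scheduled = False
        if self._buf:
            self._transport.write(bytes(self._buf))
            self._buf.clear()


class FakeTransport:
    """Stand-in transport that just records write() calls."""

    def __init__(self):
        self.chunks = []

    def write(self, data):
        self.chunks.append(data)


async def demo():
    t = FakeTransport()
    w = CoalescingWriter(t)
    # three responses queued in the same loop iteration...
    w.write(b"response-1")
    w.write(b"response-2")
    w.write(b"response-3")
    await asyncio.sleep(0)  # let the scheduled flush run
    return t.chunks


chunks = asyncio.run(demo())
print(chunks)  # a single merged transport.write
```

With a real transport the single merged write translates to one syscall instead of three.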
Human-readable results are not ready yet. You should open the JSON file in Firefox; it renders JSON nicely, at least in nightly Firefox.
@asvetlov here is the script:

```python
import json

# data = open("results.2018-01-27-17-41-53-405.json", 'r').read()
data = open("results.2018-01-18-09-55-11-468.json", 'r').read()
data = json.loads(data)
raw_data = data['rawData']

res = {}
for key in ['fortune', 'plaintext', 'db', 'update', 'json', 'query']:
    results = []
    d = raw_data[key]
    for name, val in d.items():
        # if len(val) < 5:
        #     continue
        info = val[0]
        s, e = info['startTime'], info['endTime']
        if 'totalRequests' not in info:
            continue
        reqs = info['totalRequests']
        persec = reqs / (e - s)
        results.append((persec, name, reqs))
    results.sort()
    res[key] = [(name, r1, r2) for (r1, name, r2) in reversed(results)]

print(json.dumps(res))
```
I would suggest sending the headers - if they haven't been sent explicitly - together with the first data written.
Another question: is there a reason? aiohttp cannot compete with the frameworks at the top no matter what you do.
BTW, just taking a look at your data.
I doubt apistar uses much Python inside :). Same as japronto: it is fast, but Python is only used for loading the C extension. I don't know about sanic; it should be fast in json and plaintext, but it still cannot process two sequential requests.
A new use case for Python: a shell for .so files :) If you need real speed, check my actix-web framework.
You got a new star. |
Returning to buffering: right now aiohttp does 2 syscalls if the kernel buffer is not full, switching to storing outgoing bytes in user-space memory on overflow, and pausing if the internal buffer overflows too.
I think dropping the buffer is fine. Merging headers with the first chunk could be a specific optimization; I think it could be applied to responses where the body is already set. Also, if we give the developer the ability to send the first chunk together with the headers, even for streaming responses, that would be enough optimization for the happy path. All other types of responses could be processed as separate transport calls.
Also, if the handler does some I/O operation, a separate syscall for the headers is fine.
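The happy path described above can be sketched roughly like this. All names here are hypothetical (this is not the aiohttp API): header serialization is delayed until the first body write, so the status line, headers, and first chunk share a single transport call.

```python
class BufferedResponse:
    """Sketch: headers go out merged with the first body chunk."""

    def __init__(self, transport, status=200, headers=None):
        self._transport = transport
        self._status = status
        self._headers = dict(headers or {})
        self._headers_sent = False

    def _serialize_headers(self) -> bytes:
        lines = ["HTTP/1.1 %d OK" % self._status]
        lines += ["%s: %s" % kv for kv in self._headers.items()]
        return ("\r\n".join(lines) + "\r\n\r\n").encode()

    def write(self, chunk: bytes) -> None:
        if not self._headers_sent:
            self._headers_sent = True
            # happy path: one write carries headers + first chunk
            self._transport.write(self._serialize_headers() + chunk)
        else:
            # later chunks go out as separate transport calls
            self._transport.write(chunk)


class _Recorder:
    """Stand-in transport recording write() calls."""

    def __init__(self):
        self.writes = []

    def write(self, data):
        self.writes.append(data)


t = _Recorder()
resp = BufferedResponse(t, headers={"Content-Length": "5"})
resp.write(b"hello")
print(t.writes)  # one merged write: headers + body
```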
It depends on the usage pattern. If the user is trying to write very small data pieces, it is usually better to merge them in a small buffer before sending them to the kernel. In this situation, I would personally prefer a small (4KB) internal buffer: merge the writes, send them at once, then drain the buffer before sending again.
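A minimal sketch of that 4KB merge buffer, assuming a `send` callable standing in for `transport.write` or `socket.send` (the names and the exact threshold are illustrative, not from any library):

```python
SMALL_BUF = 4096  # the 4 KB threshold suggested above


class MergingWriter:
    """Sketch: merge tiny writes in user space before hitting the kernel."""

    def __init__(self, send):
        self._send = send
        self._buf = bytearray()

    def write(self, data: bytes) -> None:
        self._buf += data
        if len(self._buf) >= SMALL_BUF:
            self.flush()  # buffer full: one merged send

    def flush(self) -> None:
        if self._buf:
            self._send(bytes(self._buf))
            self._buf.clear()


sent = []
w = MergingWriter(sent.append)
for _ in range(100):
    w.write(b"x" * 10)   # 1000 bytes of tiny writes: still buffered
before = len(sent)
w.write(b"y" * 4000)     # crosses 4096: triggers one merged send
w.flush()                # drain whatever is left
```

One hundred tiny writes plus one large one become a single merged send instead of 101 syscalls.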
Say it again: asyncio streams are relatively simple objects; they don't merge several writes.
Maybe some benchmarking is necessary.
Since `writer.write()` is a coroutine, there is no need for an internal writer buffer at all. On overflow of the socket's internal send buffer, the writer should be paused without trying to fill an application-level transport buffer -- that makes no sense and wastes extra memory and CPU cycles.
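With plain asyncio streams, this proposal roughly corresponds to awaiting `StreamWriter.drain()` after each write: under backpressure the sending coroutine is suspended instead of data piling up in a user-space buffer. A small self-contained sketch (the server/client setup exists only to make it runnable):

```python
import asyncio


async def send_body(writer: asyncio.StreamWriter, chunks) -> None:
    """Sketch: no application-level buffer; the sender awaits drain(),
    which suspends it while the socket send buffer is full."""
    for chunk in chunks:
        writer.write(chunk)
        await writer.drain()  # backpressure: pause here, don't buffer


async def demo() -> bytes:
    received = bytearray()

    async def handle(reader, writer):
        received.extend(await reader.read(-1))  # read until EOF
        writer.close()

    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    await send_body(writer, [b"a" * 1000, b"b" * 1000])
    writer.close()
    await writer.wait_closed()
    await asyncio.sleep(0.1)  # let the server task finish reading
    server.close()
    await server.wait_closed()
    return bytes(received)


data = asyncio.run(demo())
```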