Streams optimization #496

Merged
merged 11 commits into from
Sep 19, 2015

Conversation

@homm (Contributor) commented Sep 7, 2015

This is a proof of concept of a significant StreamReader speedup. The main idea is to avoid memory copying and reallocations. A collections.deque is used instead of the bytearray, but unlike in DataQueue, no reading limitations are applied.
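
The deque-of-chunks idea can be sketched roughly like this (hypothetical names, not aiohttp's actual StreamReader; it only illustrates appending chunks instead of extending a bytearray):

```python
from collections import deque


class ChunkBuffer:
    """Minimal sketch of a deque-of-chunks buffer."""

    def __init__(self):
        self._buffer = deque()  # each entry is one bytes chunk
        self._size = 0

    def feed(self, chunk: bytes) -> None:
        # O(1): the chunk is referenced as-is, never copied into a bytearray.
        self._buffer.append(chunk)
        self._size += len(chunk)

    def read_nowait(self, n: int = -1) -> bytes:
        if not self._buffer:
            return b''
        chunk = self._buffer.popleft()
        if 0 <= n < len(chunk):
            # Only the rare partial read pays for a copy.
            chunk, rest = chunk[:n], chunk[n:]
            self._buffer.appendleft(rest)
        self._size -= len(chunk)
        return chunk
```

With large chunks, feed() and full-chunk reads become constant-time, which matches the biggest speedups in the table below.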

Performance

My benchmarks: https://gist.github.com/homm/3b9c2909905f76668bf9

Test                        Old time    New time    Speedup

Large chunk
test_stream_feed              0.0298     0.00001      2450x
test_stream_read_all          0.2127     0.00005      3948x
test_stream_read_less         0.2433     0.0506        4.8x
test_stream_read_any          0.0608     0.00005      1128x
test_stream_read_exactly      0.2005     0.0173       11.5x

Small chunks
test_stream_feed              0.0674     0.0094        7.1x
test_stream_read_all          0.2633     0.0508        5.2x
test_stream_read_less         0.2165     0.0556        3.9x
test_stream_read_any          0.1022     0.0237        4.3x
test_stream_read_exactly      0.1825     0.0371        4.9x

Memory

As a result of removing reallocations, memory consumption and fragmentation are also greatly reduced. I got the following results for the benchmark:

  • Without test_stream_read_all test.
    • Old: mostly 350-450MB, up to 550MB
    • New: 92MB
  • With test_stream_read_all test.
    • Old: mostly 350-450MB, up to 550MB
    • New: 170MB

Differences

The main difference is how read(BIG_LIMIT) and readany() produce the result when the buffer is full: In the old implementation, the whole buffer is returned at once. In the new implementation, only the first chunk is returned.
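The behavioral change can be illustrated with plain Python (illustrative only; these are not aiohttp's real classes):

```python
from collections import deque

# Three chunks sitting in the buffer when read(BIG_LIMIT)/readany() is called.
chunks = [b'aaa', b'bbb', b'ccc']

# Old behavior: the whole buffer is joined and returned in one call.
old_result = b''.join(chunks)

# New behavior: only the first buffered chunk comes back per call;
# callers simply loop until the stream is exhausted.
buffer = deque(chunks)
new_first = buffer.popleft()

assert old_result == b'aaabbbccc'
assert new_first == b'aaa'
```

Callers that loop until EOF see the same bytes either way; only code that relied on getting the entire buffer in a single call can notice the difference.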

@fafhrd91 (Member) commented Sep 7, 2015

Sure! Please rewrite readline() and do not touch the tests; I'd like to test this on a real system.

@homm (Contributor, author) commented Sep 7, 2015

@fafhrd91 Done.

In my opinion readline() is totally broken: it checks limits after reading from _buffer, and it also corrupts the input stream when the line size exceeds the limit, because it reads an undefined amount of data and loses it. But I think this is outside the scope of the current PR, so I've restored the old behavior.

@ludovic-gasc (Contributor) commented:
Thanks a lot @homm, I'm pretty excited to test this with the FrameworkBenchmarks suite to see the difference from the previous version of aiohttp.

Can I already test with this branch, or must I wait until you fix all the test failures first?

@fafhrd91 (Member) commented Sep 7, 2015

@GMLudo should be fine without tests

@kxepal (Member) commented Sep 8, 2015

@homm Nice work! In addition to the aiohttp tests, it also breaks the aiocouchdb ones:

Traceback (most recent call last):
  File "/home/kxepal/projects/aio-libs/aiocouchdb/aiocouchdb/tests/utils.py", line 36, in wrapper
    return testcase.loop.run_until_complete(future)
  File "/usr/local/opt/virtualenv/aiocouchdb/lib/python3.3/site-packages/asyncio/base_events.py", line 316, in run_until_complete
    return future.result()
  File "/usr/local/opt/virtualenv/aiocouchdb/lib/python3.3/site-packages/asyncio/futures.py", line 275, in result
    raise self._exception
  File "/usr/local/opt/virtualenv/aiocouchdb/lib/python3.3/site-packages/asyncio/tasks.py", line 238, in _step
    result = next(coro)
  File "/usr/local/opt/virtualenv/aiocouchdb/lib/python3.3/site-packages/asyncio/tasks.py", line 377, in wait_for
    return fut.result()
  File "/usr/local/opt/virtualenv/aiocouchdb/lib/python3.3/site-packages/asyncio/futures.py", line 275, in result
    raise self._exception
  File "/usr/local/opt/virtualenv/aiocouchdb/lib/python3.3/site-packages/asyncio/tasks.py", line 236, in _step
    result = coro.send(value)
  File "/home/kxepal/projects/aio-libs/aiocouchdb/aiocouchdb/v1/tests/test_document.py", line 232, in test_get_with_atts
    yield from result.release()
  File "/home/kxepal/projects/aio-libs/aiohttp/aiohttp/multipart.py", line 187, in release
    yield from self.resp.release()
  File "/home/kxepal/projects/aio-libs/aiohttp/aiohttp/client_reqrep.py", line 638, in release
    chunk = yield from content.readany()
  File "/home/kxepal/projects/aio-libs/aiohttp/aiohttp/streams.py", line 391, in wrapper
    result = yield from func(self, *args, **kw)
  File "/home/kxepal/projects/aio-libs/aiohttp/aiohttp/streams.py", line 454, in readany
    return (yield from super().readany())
  File "/home/kxepal/projects/aio-libs/aiohttp/aiohttp/streams.py", line 199, in readany
    return self._read_nowait()
  File "/home/kxepal/projects/aio-libs/aiohttp/aiohttp/streams.py", line 253, in _read_nowait
    self._buffer_size -= len(data)
TypeError: object of type 'int' has no len()

I tried to reproduce the problem with a public instance, but there I found that the patch also breaks the multipart reader in another way:

import asyncio
import aiohttp


@asyncio.coroutine
def go():
    resp = yield from aiohttp.request(
        'get',
        'https://kxepal.cloudant.com/aiohttp-496-bug/docid',
        headers={'Accept': 'multipart/related'},
        params={'attachments': 'true'})
    reader = aiohttp.MultipartReader.from_response(resp)
    while True:
        part = yield from reader.next()
        if part is None:
            break
        print((yield from part.read()))


if __name__ == '__main__':
    asyncio.get_event_loop().run_until_complete(go())

Expected:

bytearray(b'{"_id":"docid","_rev":"6-783d465b150e111b56447cd1d4e533a8","_attachments":{"test":{"content_type":"application/octet-stream","revpos":1,"digest":"md5-JS3VMT0Dk7FXE0dUvc9Vag==","length":15,"follows":true}}}')
bytearray(b'Hello, aiohttp!')

Actual:

Traceback (most recent call last):
  File "/home/kxepal/projects/aio-libs/aiocouchdb/test.py", line 21, in <module>
    asyncio.get_event_loop().run_until_complete(go())
  File "/usr/local/opt/virtualenv/aiocouchdb/lib/python3.3/site-packages/asyncio/base_events.py", line 316, in run_until_complete
    return future.result()
  File "/usr/local/opt/virtualenv/aiocouchdb/lib/python3.3/site-packages/asyncio/futures.py", line 275, in result
    raise self._exception
  File "/usr/local/opt/virtualenv/aiocouchdb/lib/python3.3/site-packages/asyncio/tasks.py", line 236, in _step
    result = coro.send(value)
  File "/home/kxepal/projects/aio-libs/aiocouchdb/test.py", line 14, in go
    part = yield from reader.next()
  File "/home/kxepal/projects/aio-libs/aiohttp/aiohttp/multipart.py", line 178, in next
    item = yield from self.stream.next()
  File "/home/kxepal/projects/aio-libs/aiohttp/aiohttp/multipart.py", line 460, in next
    self._last_part = yield from self.fetch_next_part()
  File "/home/kxepal/projects/aio-libs/aiohttp/aiohttp/multipart.py", line 475, in fetch_next_part
    headers = yield from self._read_headers()
  File "/home/kxepal/projects/aio-libs/aiohttp/aiohttp/multipart.py", line 536, in _read_headers
    headers, *_ = parser.parse_headers(lines)
  File "/home/kxepal/projects/aio-libs/aiohttp/aiohttp/protocol.py", line 85, in parse_headers
    raise errors.InvalidHeader(name)
aiohttp.errors.InvalidHeader: 400, message='Invalid HTTP Header: 50E111B56447CD1D4E533A8","_ATTACHMENTS"'

parser.parse_headers receives the following lines:

['', 'content-type: application/json\r\n\r\n{"_id":"docid","_rev":"6-783d465b1', '50e111b56447cd1d4e533a8","_attachments":{"test":{"content_type":"application/octet-stream","revpos":1,"digest":"md5-JS3VMT0Dk7FXE0dUvc9Vag==","length":15,"follows":true}}}\r\n--c1782ac8c1a68da8b0bd4f8983b86fa3', '']

while it should stop right after '\r\n\r\n'.
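For reference, the expected behavior can be sketched like this (a hypothetical helper, not aiohttp's actual _read_headers): stop at the first blank line and leave the body in the stream.

```python
def read_header_lines(lines):
    """Collect header lines until the blank line that ends the header block."""
    headers = []
    for line in lines:
        if line in (b'', b'\r\n'):
            break  # '\r\n\r\n' separator reached; the rest is body
        headers.append(line.rstrip(b'\r\n'))
    return headers


raw = [b'content-type: application/json\r\n',
       b'\r\n',
       b'{"_id":"docid"}\r\n']
assert read_header_lines(raw) == [b'content-type: application/json']
```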

Thoughts? If you need any additional information, feel free to ask.

@homm (Contributor, author) commented Sep 8, 2015

@kxepal Thanks, I've fixed the error in readline(), so MultipartReader works again.

But I can't reproduce the error with aiocouchdb. I ran the tests from master and got a lot of exceptions in the __del__ method of client_reqrep.ClientResponse (I'm on Python 3.4):

Traceback (most recent call last):
  File "/Users/Master/Code/async/aiohttp/aiohttp/client_reqrep.py", line 549, in __del__
    self._loop.call_exception_handler(context)
AttributeError: 'NoneType' object has no attribute 'call_exception_handler'
Exception ignored in: <bound method HttpResponse.__del__ of <ClientResponse() [200 None]>
{'CONTENT-TYPE': 'application/json'}

But apart from these errors, all tests pass.

Ran 302 tests in 2.083s
OK (skipped=9)

@kxepal (Member) commented Sep 8, 2015

@homm Oh, my. It seems I didn't push the fixes. Thanks for noticing!

The problem happens with make check-couchdb - a simple make check runs the tests against mocks and cannot trigger this issue. However, I found the error on my side. Ironically, it's within the danger zone where I touch the _buffer that you changed (: s/extend/append/ solves the problem.

Sorry for false alarm (:
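
The one-character fix matters because extend() and append() behave very differently on a deque of chunks; this is plain Python behavior, easy to verify, and it also explains the len(int) TypeError above:

```python
from collections import deque

buf = deque()
buf.extend(b'abc')   # wrong: iterates the bytes, storing individual ints
assert list(buf) == [97, 98, 99]
# a later len(data) on a popped entry raises:
#   TypeError: object of type 'int' has no len()

buf = deque()
buf.append(b'abc')   # right: stores one bytes chunk
assert list(buf) == [b'abc']
```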

@homm (Contributor, author) commented Sep 8, 2015

@kxepal

s/extend/append/ solves the problem.

Unfortunately not, because there are also the _buffer_size and total_bytes properties. Why can't you use feed_data there?

I had the idea of creating a wrapper around collections.deque with bytearray syntax (extend and len) for backward compatibility. But I don't know whether that is really necessary for a private property of a 0.x-version library :—)
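
Such a wrapper might look roughly like this (purely a sketch of the idea; it was never implemented in aiohttp):

```python
from collections import deque


class ChunkDeque(deque):
    """A deque of chunks that accepts bytearray-style extend() and len()."""

    def extend(self, data):
        # Old callers passed raw bytes to a bytearray; store them
        # as a single chunk instead of one int per byte.
        self.append(bytes(data))

    def __len__(self):
        # Report the buffered byte count, as the bytearray buffer did.
        return sum(len(chunk) for chunk in super().__iter__())
```

Code poking at the private _buffer would keep working, at the cost of an O(n) len(); as noted below, it was judged not worth the effort for a private field.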

@kxepal (Member) commented Sep 8, 2015

@homm
Actually, I don't know why. Strange decisions happen eventually (: I'll investigate the public API for a better workaround.

As for the idea of making the deque mimic a bytearray, I don't think it's worth the effort. In the end, the field is private - no guarantees are provided for it.

@homm (Contributor, author) commented Sep 8, 2015

@kxepal

The problem happens with make check-couchdb

After that I get various exceptions from the new master:

  File "/Users/Master/Code/async/aiohttp/aiohttp/connector.py", line 299, in connect
    'Cannot connect to host %s:%s ssl:%s' % key) from exc
nose.proxy.ClientOSError: Cannot connect to host localhost:5984 ssl:False
----------------------------------------------------------------------
  File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/selectors.py", line 451, in __init__
    self._kqueue = select.kqueue()
OSError: [Errno 24] Too many open files

----------------------------------------------------------------------
Ran 305 tests in 0.621s

FAILED (SKIP=12, errors=255)

@kxepal (Member) commented Sep 8, 2015

@homm hm... what's your fd limit?

@fafhrd91 (Member) commented Sep 9, 2015

@homm i tested your patch in production, everything is good. we need to fix kxepal's tracebacks and update the tests, then let's merge the change.

regarding performance: in a system with normal load (small/fast requests, mostly requests to external subsystems) the difference is not visible, but that is expected. in a system with a lot of load (file upload/download) performance is slightly better, and memory usage is much better.

@asvetlov (Member) commented:

ping

@homm (Contributor, author) commented Sep 16, 2015

Sorry. I'll try to find some time for this tomorrow.

@homm (Contributor, author) commented Sep 16, 2015

Ok, done.

Instead of checking the internal stream._buffer, the tests now consume the stream to the end. If EOF has not been fed yet, it is fed right before reading.

@kxepal (Member) commented Sep 17, 2015

\o/ it's green! +1

asvetlov added a commit that referenced this pull request Sep 19, 2015
@asvetlov merged commit 69dd2d4 into aio-libs:master on Sep 19, 2015
@asvetlov (Member) commented:

Thanks!

asvetlov added a commit that referenced this pull request Sep 19, 2015
@homm deleted the optimize-streams branch on January 27, 2017