Memory leak with Python 3.11.2 #7252
Comments
If the memory leak appears only in a specific patch version of Python, I suspect the bug report will need to go to cpython.
Also, sharing the script would be useful.
tracemalloc profile after running for a while (this is Python 3.11.3, which also exhibits the leak): This doesn't accurately reflect the heap sizes I see, but I suspect that second line with 241614 allocations is the issue. This is the main body of the script I'm using to reproduce it; it's loading a set of around 700 private IP endpoints which respond in a variety of different ways.
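For readers who want to collect the kind of profile described above, a generic sketch using the standard-library `tracemalloc` module (not the commenter's exact code):

```python
import tracemalloc

tracemalloc.start(25)  # record up to 25 stack frames per allocation

# ... run the client workload for a while ...

snapshot = tracemalloc.take_snapshot()
# Group allocations by source line and print the largest contributors.
for stat in snapshot.statistics("lineno")[:10]:
    print(stat)
```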
I have an update. It seems that removing the `enable_cleanup_closed=True` flag stops the leak. I know I had to put that in at some earlier time to prevent a memory leak. I wonder if something has changed in recent Python versions that has fixed the cause of this, and now that flag ends up causing a leak.
I'm not familiar with the code, but at a glance it just appends to a list: Maybe check the size of that list? I think your tracemalloc output might be indicating something related to that too. No idea why a cpython change would affect the behaviour though.
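For context, the behaviour being described (closed SSL transports appended to a connector-level list and periodically aborted) works roughly like the following. This is a simplified paraphrase based on the discussion in this thread, with approximate names, not a verbatim copy of aiohttp's connector:

```python
# Simplified paraphrase of the cleanup_closed bookkeeping (names approximate).
class Connector:
    def __init__(self):
        # Grows as SSL connections are closed while cleanup_closed is enabled.
        self._cleanup_closed_transports = []

    def _release(self, transport, is_ssl, should_close):
        if should_close and is_ssl:
            # The transport is kept so it can be force-aborted later; it only
            # becomes eligible for garbage collection once _cleanup_closed() runs.
            self._cleanup_closed_transports.append(transport)

    def _cleanup_closed(self):
        # Periodic task: abort every tracked transport, then drop the list.
        for transport in self._cleanup_closed_transports:
            transport.abort()
        self._cleanup_closed_transports = []
```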
I'm seeing the leak as well with `enable_cleanup_closed=True`. I see a lot of SSL-related objects being retained.
python/cpython#98539 looks related
related elastic/rally#1712
related elastic/rally#1714
There is currently a relatively fast memory leak when using cpython 3.11.2+ and cleanup_closed with aiohttp. For my production instance it was leaking ~450MiB per day of `MemoryBIO`, `SSLProtocol`, `SSLObject`, `_SSLProtocolTransport`, `memoryview`, and `managedbuffer` objects. See aio-libs/aiohttp#7252 and python/cpython#98540.
Looks like this was backported via python/cpython@bd8b32b?
It was backported the same day that 3.11.0 was released. So, I think the suspicion is that the "fix" may actually be causing the memory leak? Because it must have been in 3.11.1+, but probably just missed the 3.11.0 release, which aligns with the reports.
related issue aio-libs/aiohttp#7252 related PR #93013
Going by elastic/rally#1714, it looks like we just need to add some checks for None. Before that change, it would never be None, so the code worked correctly. I guess something about that exception could also end up resulting in a memory leak?
I haven't seen the exception, but I have a Home Assistant instance leaking about 450MiB of RAM per day. I just turned off `enable_cleanup_closed` to see whether the leak stops.
The leaked objects are `MemoryBIO`, `SSLProtocol`, `SSLObject`, `_SSLProtocolTransport`, `memoryview`, and `managedbuffer`. That's about as far as I've gotten with the debugging.
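One generic way to get that kind of per-type breakdown of live objects, using only the standard library (a sketch, not necessarily the tooling used above):

```python
import gc
from collections import Counter

def count_objects_by_type(top: int = 15) -> None:
    # Tally every object the garbage collector currently tracks, by type name.
    counts = Counter(type(obj).__name__ for obj in gc.get_objects())
    for name, count in counts.most_common(top):
        print(f"{count:>8}  {name}")

count_objects_by_type()
```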
Well, if you could also test with the changes in #7280 later, that'd be great.
I don't know if the exceptions might get suppressed for some reason, but I suppose that if an exception occurs on that abort(), then none of the other transports will get aborted, and none of the transports will get removed from that list in order to be garbage collected. So, it makes sense that a memory leak could occur.
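Concretely, the failure mode described above would look roughly like this. The sketch reuses the approximate names from the paraphrase earlier in the thread; the `None` check mirrors what elastic/rally#1714 and #7280 add, and is an illustration rather than the exact patch:

```python
def _cleanup_closed(self):
    for transport in self._cleanup_closed_transports:
        # On the affected cpython versions the stored transport can be None;
        # None.abort() raises AttributeError, the loop dies here, and the
        # reset below never runs, so every tracked transport (and the SSL
        # buffers it holds) stays referenced forever.
        transport.abort()
    self._cleanup_closed_transports = []

# A defensive version skips the bad entries so one of them cannot prevent
# the rest from being aborted and released:
def _cleanup_closed_fixed(self):
    for transport in self._cleanup_closed_transports:
        if transport is not None:
            transport.abort()
    self._cleanup_closed_transports = []
```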
I'll switch my production instance to use #7280 after the
Setting `enable_cleanup_closed=False` seems to stop the leak. Trying #7280 now.
Hey all, seems like you all are triaging and attempting to solve the issue, so best of luck 🙏 For the raw delta, we saw an increase of ~600MB in Inactive Memory over ~6.5k HTTP requests via aiohttp.
While it's too soon to be sure (need to give it a few more hours), keeping `enable_cleanup_closed` off appears to have stopped the leak.
I think this change will only have an effect with `enable_cleanup_closed=True`.
I can no longer replicate the issue after the change in #7280. It would be great to get a 3.8.5 release, since this affects Home Assistant users in production.
I'm thinking it might be better to just get this fixed in cpython. 3.11.4 will be released in 3 weeks, so if we get it fixed there in the next week, it'll probably beat aiohttp to a new release anyway.
…93013)
* Disable cleanup_closed for aiohttp.TCPConnector with cpython 3.11.2+. There is currently a relatively fast memory leak when using cpython 3.11.2+ and cleanup_closed with aiohttp. For my production instance it was leaking ~450MiB per day of `MemoryBIO`, `SSLProtocol`, `SSLObject`, `_SSLProtocolTransport`, `memoryview`, and `managedbuffer` objects. See aio-libs/aiohttp#7252 and python/cpython#98540.
* Update homeassistant/helpers/aiohttp_client.py
Should be fixed in 3.11.4. So, the only affected versions are 3.11.1, 3.11.2 and 3.11.3.
Thanks 👍
* Disable cleanup_closed on cpython <= 3.11.3. Enabling cleanup_closed on Python 3.11.1+ and before 3.11.4 leaks memory relatively quickly (see aio-libs/aiohttp#7252).
Describe the bug
I am seeing a memory leak with aiohttp using Python 3.11.2.
When I downgrade to Python 3.11.0, it does not leak.
This is a script that does a large number of client requests to a large number of different servers. The script leaks hundreds of MB over the course of a few hours. I haven't yet gotten this to a form that can be easily reproduced and shared, but I thought I would get this entered in case anyone else is seeing similar symptoms.
To Reproduce
The script I am running creates about 700 https client sessions to different servers. Each session does a simple GET request, which most succeed but some fail in various ways. The script sleeps for 5 seconds and then repeats this.
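The script itself hasn't been shared yet; a minimal sketch of the pattern described (one session per endpoint, a simple GET each, repeat every 5 seconds), with `enable_cleanup_closed=True` on the connector since that is what the thread later identifies as the trigger, might look like the following. The endpoint list is a placeholder:

```python
import asyncio
import aiohttp

# Placeholder for the ~700 private HTTPS endpoints mentioned in the report.
ENDPOINTS = [f"https://10.0.0.{i}/" for i in range(1, 255)]

async def fetch(url: str) -> None:
    # One connector/session per endpoint, mirroring the description above.
    connector = aiohttp.TCPConnector(enable_cleanup_closed=True)
    try:
        async with aiohttp.ClientSession(connector=connector) as session:
            async with session.get(url, timeout=aiohttp.ClientTimeout(total=10)) as resp:
                await resp.read()
    except Exception:
        # Some endpoints fail in various ways; failures are expected here.
        pass

async def main() -> None:
    while True:
        await asyncio.gather(*(fetch(url) for url in ENDPOINTS))
        await asyncio.sleep(5)

asyncio.run(main())
```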
Expected behavior
This should not leak memory over time. And it doesn't with 3.11.0. (I don't currently have an easy way to run it with 3.11.1).
Logs/tracebacks
Python Version
Python 3.11.2
aiohttp Version
multidict Version
yarl Version
OS
Fedora release 37 (Thirty Seven)
Related component
Client
Additional context
No response
Code of Conduct