Memory leak when creating lots of AsyncClient contexts #978
That’s super surprising if so. Can you test against master and see if you’re getting the same results? There’s not anywhere I can think of on a first pass where that could be an issue for us, so any digging into it would be most welcome. What are you using to measure the memory usage?

I did test against master and have the same issue; can you reproduce?

It might be worth trying

Running the test with

Interesting. I wonder if simply creating instances of
I think I have a more minimal (if even weirder) example:
Removing

Edited the minimal example, you're right: just creating an SSLConfig reproduces the issue, but only if it's followed by an await before returning.

Found a more minimal example that doesn't use httpx or httpcore; switched back to asyncio so I can open the issue on CPython.

Just out of curiosity, does the bug exist in Python 3.7?

I have no idea, building
Useful points for further investigation...

Yes, which implies no to the other points. Can you reproduce the behavior I observe on my systems? I think it might depend on the system-level SSL library.
I'm on macOS 10.14.6, Python 3.8.2, and it seems I'm not able to reproduce the leak. I ran your latest script and monitored the memory usage on 1000 tasks. I see an increase of RAM usage up to ~700MB, then once the tasks complete the memory is freed. I also tried the script from the issue description, the one with
See https://bugs.python.org/issue40727 I also cannot reproduce the leak on Windows, only on Linux.
I can't exactly replicate that, no. It'll use 1.5GB of memory while it has 2000 instances simultaneously in memory, although it'll free up once they're out of scope.

One thing we might want to consider in any case is globally caching our SSL contexts. They're a little slow to create, and they're memory hungry, so it'd probably make sense for us to cache a small number of them so that users who are (unnecessarily) creating lots of clients aren't being negatively impacted.
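For reference, the caching idea suggested above could look something like this stdlib-only sketch. `cached_ssl_context` is a hypothetical helper, not part of the httpx API: the point is just that a context is built once per distinct configuration and the same object is handed back on every later call.

```python
import functools
import ssl
from typing import Optional

@functools.lru_cache(maxsize=8)
def cached_ssl_context(cafile: Optional[str] = None) -> ssl.SSLContext:
    # Expensive to build (loads and parses the CA bundle), so build it
    # once per distinct configuration and reuse the same object.
    return ssl.create_default_context(cafile=cafile)

# Every call with the same arguments returns the exact same context object.
a = cached_ssl_context()
b = cached_ssl_context()
```

Note that in a real implementation the cache key would have to capture every option that affects the context (verify mode, cert paths, TLS versions), otherwise two differently configured clients would wrongly share one context.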
Creating and using them synchronously doesn't run into memory issues (you don't keep them simultaneously in memory, and memory gets freed correctly); also, it takes just a few seconds to create thousands.
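A quick way to check the "memory gets freed correctly" claim for the synchronous case is a weak reference: once the last strong reference to a context is dropped, the object should become collectable. A minimal stdlib-only sketch:

```python
import gc
import ssl
import weakref

context = ssl.create_default_context()
ref = weakref.ref(context)   # observe the object without keeping it alive

del context                  # drop the only strong reference
gc.collect()                 # force a collection pass

collected = ref() is None    # True if the context was freed
```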
Just adding this so we've got some bearings on this.

```python
import ssl

import certifi

ca_path = certifi.where()

contexts = []
for i in range(2000):
    context = ssl.SSLContext(ssl.PROTOCOL_TLS)
    context.load_verify_locations(ca_path)
    contexts.append(context)
```

Will take about 1.5GB and nearly 15 seconds on my machine, which is fairly slow and hungry whatever. Might be a good argument in favour of caching SSLContexts.

In any case, I'll close this since it's not our issue. You don't need to create 2000 clients: create a single client, and reuse it throughout. You only need to create additional clients if you need a different configuration on them.
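To make the trade-off concrete, here is a small stdlib-only comparison (function names are illustrative): building a fresh, fully verified context per "client" pays the creation cost n times over, while sharing one context pays it once regardless of n. Only the structural difference is shown here; the timing and memory gap is what the numbers above illustrate.

```python
import ssl

def fresh_contexts(n):
    # One fully initialised context per "client": CA bundle parsing and
    # several MB of memory, repeated n times.
    return [ssl.create_default_context() for _ in range(n)]

def shared_contexts(n):
    # One context shared by every "client": the cost is paid exactly once.
    shared = ssl.create_default_context()
    return [shared for _ in range(n)]

fresh = fresh_contexts(5)    # five distinct SSLContext objects
shared = shared_contexts(5)  # five references to a single SSLContext
```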
I'm experiencing this too:
And it only happens when setting

You might want to dig into why that's not possible... There's no particularly graceful way for us to get around the fact that instantiating a bunch of
@UrbiJr I think

You can try to see if using that library would fix your issue; in that case httpx might consider doing the same if enough other people run into this. Also, I'm not sure your issue is related to what I ran into last year, as I was unable to reproduce that on Windows, but in a year I guess lots of things have changed.
I would use the same

I have too much code based on
Hrm. One option here might be to create a single global SSLContext instance...

```python
ssl_context = httpx.create_ssl_context()
```

And then to pass that to all the client instances...

```python
client = httpx.AsyncClient(verify=ssl_context)
```
Ok. Sorry, I did not think of that as I was focused on
…l.SSLContext before there was one ssl.SSLContext per client. see encode/httpx#978
```python
import httpx
import gc
import asyncio

print(f"httpx version: {httpx.__version__}")

async def make_async_client():
    async with httpx.AsyncClient(verify=False) as client:
        await client.request(method='get', url='https://gorest.co.in/public/v1/users')
        await asyncio.sleep(10)

async def main(n):
    tasks = []
    for _ in range(n):
        tasks.append(make_async_client())
    print(f"Creating {n} contexts, sleeping 10 secs")
    await asyncio.wait(tasks)

asyncio.run(main(2000))
print("Finished run, still using lots of memory")
gc.collect()
input("gc.collect() does not help :(")
```

Memory leaks again. What am I doing wrong? httpx: 0.18.2
> You don't need to create 2000 clients - create a single client, and reuse it throughout.

⬆️
If I create one client, how should I close it?
From the docs, so you can't create one client for your whole session; you then have to manually close it and create a new one, so it will continue to leak memory.

For example, if I have
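Concretely, the lifecycle being recommended is: open one client when the application starts, share it across all tasks, and close it exactly once on shutdown. A stdlib-only sketch of that shape (`DummyClient` is a hypothetical stand-in for `httpx.AsyncClient`, purely to show the pattern, not the real API):

```python
import asyncio

class DummyClient:
    """Hypothetical stand-in for httpx.AsyncClient, showing the lifecycle shape."""
    def __init__(self):
        self.closed = False
        self.requests = 0

    async def get(self, url):
        self.requests += 1   # pretend to perform a request
        return "ok"

    async def aclose(self):
        self.closed = True   # release pooled connections, SSL context, etc.

    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc):
        await self.aclose()

async def worker(client, url):
    # Tasks borrow the shared client instead of each opening their own.
    return await client.get(url)

async def main():
    # One client for the whole application lifetime; closed exactly once
    # when the async with block exits.
    async with DummyClient() as client:
        await asyncio.gather(
            *(worker(client, "https://example.org") for _ in range(100))
        )
    return client

client = asyncio.run(main())
```

With `httpx.AsyncClient` the same shape applies: enter the client once at the top of the program (or call `await client.aclose()` on shutdown) rather than opening a client per request.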
Checklist

- The bug is reproducible against `master`.

Describe the bug

After creating an AsyncClient context (with `async with`), it does not seem to be garbage collected; that can be a problem for very long-running services that might create a bunch of them and eventually run out of memory.

To reproduce

Comparison with aiohttp

Expected behavior

Memory gets freed after exiting the async context, like for aiohttp.

Actual behavior

Memory does not get freed, even after explicitly calling `gc.collect()`.

Debugging material

Environment

- `master`
- `asyncio` and `trio`

Additional context

I understand you typically need only one async ClientSession, but it shouldn't leak memory anyway; for very long-running processes it can be a problem.

Thanks for this great library! If you're interested, I can try to debug this issue and send a PR.