HTTP client leaks memory. #1321
Does it also occur with libasync?
No. Looks like libasync doesn't leak.
But we can't switch to libasync now, because it causes very strange behaviour and we are unable to understand what exactly is happening.
Strange behavior?
Might be the same as #1122
Thanks. It fixes the first case, but not the second (the server throwing an error).
I've had suspicions that Exceptions leaked. Are you talking about a leak on the client when both client and server are in the same process?
No. Client and server are different processes. The server doesn't leak.
Which tool should I use for debugging memory leaks?
I generally use  It's much more difficult to find a GC memory leak, though.
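One way to narrow a GC-side leak down (a sketch, not from the thread; `GC.stats` requires a reasonably recent druntime) is to snapshot heap usage between batches of requests and watch whether used memory grows monotonically:

```d
import core.memory : GC;
import std.stdio : writefln;

void reportHeap(string label)
{
    // Collect first so only still-reachable memory is counted.
    GC.collect();
    auto s = GC.stats();
    writefln("%s: used=%s bytes, free=%s bytes", label, s.usedSize, s.freeSize);
}

void main()
{
    reportHeap("before");
    foreach (i; 0 .. 1_000) {
        // ... perform one HTTP request here ...
    }
    // A usedSize that keeps climbing across runs of this loop
    // suggests references are being retained somewhere.
    reportHeap("after");
}
```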
Could the leak be caused by forced task interruption?
Also, interruption sometimes causes an assertion failure. Just got one (libasync driver):
If the client does this, it shouldn't currently cause a leak; your connection will be recycled. If the server runs into an exception, keep-alive is disabled and the connection is closed.
Looks like the task was interrupted and destroyed/freed the AsyncDNS while it had a mutex/condition copied in the DNS resolver thread. I might have to put that object on the GC, thanks.
Scratch that, the connection will not be recycled on an exception in the response handler. Here's the reference to the keep-alive behavior on the server side:
Our third-party data source servers are very buggy: dropped connections, half-open TCP connections, malformed JSON, slow downloads, various HTTP errors and so on.
Here are a few ideas, not tested; they may or may not be better suited for your purpose.

client.request("http://....",
    (scope req) { },
    (scope res) {
        Timer tm = setTimer(5.seconds, { res.disconnect(); });
        scope(exit) { if (tm.pending) { tm.stop(); tm.destroy(); } }
        writeFile("myFile.txt", res.bodyReader.readAll()); //...
    });

or

client.request("http://....",
    (scope req) { },
    (scope res) {
        auto fstream = openFile("myfile.txt", FileMode.createTrunc);
        ubyte[] buf = new ubyte[4096];
        while (res.bodyReader.waitForData(2.seconds)) {
            // only read and write as many bytes as are actually available
            auto len = min(buf.length, cast(size_t) res.bodyReader.leastSize);
            res.bodyReader.read(buf[0 .. len]);
            fstream.write(buf[0 .. len]); //...
        }
    });

If you'd like to time out on connect(), you'd need to implement it in the driver directly. I did it like this: https://github.com/etcimon/vibe.d/blob/master/source/vibe/core/drivers/libasync.d#L316 For DNS, you'd be better off adding the address to the hosts file if you think it's never going to change, or avoiding the DNS resolver altogether.
I'd like to time out on the request regardless of its stage.
It needs to be implemented at every stage; currently, interrupt is a reliable shortcut, although it doesn't work with the AsyncDNS.
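A whole-request timeout built on interrupt, as discussed above, could be sketched like this (names follow the vibe.d 0.7.x API; as noted, this will not cover the AsyncDNS stage):

```d
import vibe.core.core : runTask, setTimer;
import vibe.core.task : Task;
import vibe.http.client : requestHTTP;
import core.time : seconds;

void fetchWithTimeout(string url)
{
    // Run the request in its own task so it can be interrupted.
    Task t = runTask({
        try
            requestHTTP(url,
                (scope req) { },
                (scope res) { res.dropBody(); });
        catch (Exception e) {
            // includes InterruptException when the timer fires
        }
    });

    // Interrupt the task if it hasn't finished within 10 seconds.
    auto tm = setTimer(10.seconds, { if (t.running) t.interrupt(); });
    t.join();
    tm.stop();
}
```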
Gzipped data decompression also leaks, and it's actually more critical than the previous leaks. Code: https://github.com/japplegame/vibe-http-client-gzip-test Comment out https://github.com/japplegame/vibe-http-client-gzip-test/blob/master/server/source/app.d#L16 and the leaks disappear. The server works without leaks in both cases.
Yes, I fixed this in my fork and I proposed a fix here: |
HTTP client leak fix hasn't been merged/done yet. |
Free TCPContext on connection failure - fixes #1321
Sorry, I mistakenly thought that ca3e23c and zlib patches fixed it. |
Ping |
Closed (temporarily?), because I should give the bounty to @etcimon, and the HTTP client no longer leaks (at least for our project).
@s-ludwig you should claim this, I only changed some 3 lines of code although it prompted you to do a lot of refactoring and overall improvements. |
@etcimon: IMO, you should definitely claim it! You've invested quite some time in diagnosis and preparing pull requests, and my work isn't really directly related. I also still really need to take a deeper look at #1324. BTW, I remember now that you asked about libasync in conjunction with the new vibe-core package somewhere; I left a tab open but it got lost before I got back to it to answer. So I'm not yet fully decided on this one. There are some issues with libasync currently:
eventcore's downsides:
Ok, thanks! I need to look into each of these points to be sure, but I think the benchmark could be given a 2nd try. There was a lot of GC string appending done by mistake in low-level code due to
I'm sure vibe.d will stop supporting 2.066/2.067 eventually
That can be fixed with some micro-optimizations. For example,
This would belong in the micro-optimizations step. I think a good strategy would be, e.g., using an array for <10 consecutive connections and an rbtree for >10, and inlining as much of these operations as possible. Redis does this for its hashmap (hset) implementation to speed it up on small data.
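The hybrid small-array/tree strategy described above could look roughly like this (a sketch of the idea, not actual vibe.d code); below the threshold, a linear scan over a flat array is cache-friendly and usually beats the tree:

```d
import std.container.rbtree : RedBlackTree;

struct ConnSet(T, size_t threshold = 10)
{
    private T[] small;            // used while length <= threshold
    private RedBlackTree!T tree;  // promoted to once the set outgrows it

    void insert(T v)
    {
        if (tree !is null) { tree.insert(v); return; }
        small ~= v;
        if (small.length > threshold) {
            // Promote: move everything into the tree and drop the array.
            tree = new RedBlackTree!T(small);
            small = null;
        }
    }

    bool contains(T v)
    {
        if (tree !is null) return v in tree;
        foreach (e; small) if (e == v) return true;
        return false;
    }
}
```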
Phobos integration was brought up at the time I launched libasync, and the idea was to also integrate it into std.concurrency. I think this will also trickle down to std.socket and require some major work to make it mergeable. That's something I haven't been ready to do myself yet (because time), but you might agree it'll be considerably harder to maintain something that produces so much interlinking, unless you find someone else to make it ready and be committed to it.
I've always liked your APIs better, I just couldn't go through any back and forth on the design at the time because I was time constrained.
That's the definition of a WIP ;)
That's also because it's WIP. Honestly, I'm overburdened with development projects in production (using my/your tools), and I feel like these tools are mature right now, whereas some performance optimizations would help. I think it could be easier to run perf on libasync. My bottlenecks are currently in postgres (70%+ of CPU time is spent there, 20%+ in botan/openssl), so it's not to my advantage to do this currently, but I'd drop it anytime for eventcore once it's complete, because I've always admired your quality of code as being far superior in the way it's planned from the start.
This is for Windows, because the events don't carry the object pointer. It could also be useful to introduce a custom TLS bucket specifically for the event objects.
OK, I just added inlining for AsyncTCPConnection.send, recv, catchSocketError, etc., which means it should inline in the driver all the way down to the system calls.
Configuration: CentOS 7 x86_64, vibe.d-0.7.26, libevent driver
There is something wrong in HTTP client error handling.
Client code:
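The client snippet referenced here did not survive extraction; a minimal reproduction along the lines described (repeated requests against a local port, vibe.d 0.7.x API assumed, port 8080 hypothetical) would be:

```d
import vibe.core.core : runTask, runEventLoop, sleep;
import vibe.http.client : requestHTTP;
import core.time : msecs;

void main()
{
    runTask({
        while (true) {
            try
                requestHTTP("http://127.0.0.1:8080/",
                    (scope req) { },
                    (scope res) { res.dropBody(); });
            catch (Exception e) {
                // connection refused, timeouts, etc. -- just retry
            }
            sleep(10.msecs);
        }
    });
    runEventLoop();
}
```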
If you run this code without an HTTP server, the client starts leaking memory very fast.
But if you run a simple HTTP server:
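The server snippet was also lost in extraction; a trivial vibe.d server like this hypothetical sketch matches the description:

```d
import vibe.http.server;
import vibe.core.core : runEventLoop;

void main()
{
    auto settings = new HTTPServerSettings;
    settings.port = 8080; // assumed port, matching the client sketch above is not required

    // Answer every request with a minimal 200 response.
    listenHTTP(settings, (req, res) {
        res.writeBody("ok");
    });

    runEventLoop();
}
```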
the client immediately stops leaking. Stop the server and you will see the leak again.
If you change the server code to throw an error:
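The error-throwing variant of the server (again a hypothetical sketch; the original snippet was lost) would replace the handler with one that throws, which vibe.d turns into a 500 response and which disables keep-alive:

```d
import vibe.http.server;
import vibe.core.core : runEventLoop;

void main()
{
    auto settings = new HTTPServerSettings;
    settings.port = 8080; // assumed port

    listenHTTP(settings, (req, res) {
        // The uncaught exception becomes a 500 Internal Server Error;
        // the connection is then closed rather than kept alive.
        throw new Exception("simulated server error");
    });

    runEventLoop();
}
```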
the client leaks, but much more slowly than without a server. But it still leaks.