Many stale connections in state CLOSE_WAIT and FIN_WAIT2 #1040
Comments
I have the same issue, and from some digging around it seems to be a V8 engine bug (it affects libraries other than socket.io) - #1015 (comment) "I've been testing my current setup using 4GB of ram and I can only get to about 200-300 users before memory is sucked up in a few hours. I'm really not doing much other than some redis pubsub and relaying messages." - #1015 (comment) The suggested solution is to disable websockets, or to use node version 0.4.12 (which may break other things). Or, I guess, restart the server every so often (use monit?). Also see:
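For anyone else hitting this, the buildup is easy to quantify from the shell before deciding on a restart policy. A quick sketch for counting TCP connections per state on Linux (column positions assume classic `netstat` output; `ss -tan` reports the same information on newer systems):

```shell
# Count TCP connections grouped by state; stale CLOSE_WAIT / FIN_WAIT2
# entries show up immediately. Skips the two netstat header lines.
netstat -tan | awk 'NR > 2 { print $6 }' | sort | uniq -c | sort -rn
```

Running this a few times over an hour shows whether the CLOSE_WAIT / FIN_WAIT2 counts grow without bound.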
Thanks for the info.
I was just wrestling with what I think is this issue, on my own socket.io/redis app, on Nodejitsu. It definitely seems like it's socket.io's Redis store. I'm going to follow the OP's lead and switch to SockJS. Chat logs of me working through it are in the #nodejitsu support channel:
@konklone: Just to let you know, we didn't use Socket.IO's RedisStore.
Issue still occurring in engine.io with Node v0.10. I made the simplest program possible: the client just connects and sends data to the server every once in a while, and vice versa.
I too was seeing this issue with v0.10 using the Redis store. I eventually moved away from socket.io to Faye instead, which has proven much more reliable in production.
Experiencing the same thing with socket.io on Node v0.8. We end up with a thousand or so FIN_WAIT2 connections before memory maxes out.
@nategood (and others): we've had success resolving these sorts of issues in MemoryStore implementations (i.e. the default) using the latest Socket.io (v0.9.16?) (and, in our case, Node.js v0.10.xx). I think 0.9.14 may have contained the fix. RedisStore still has this issue, however, even with the latest Socket.io; we're going to try deploying the solution from #1303 in the next few days and see what's what.
Any updates on this? RedisSessionStore still leaking all over the place. |
+1 on this one, we're also seeing it on our chat application (nodejs v0.10.22 / socket.io 0.9.16) |
The FIN_WAIT2 issue can be avoided by using https://github.com/soplwang/node-ka-patch
Awesome! Looks like https://github.com/soplwang/node-ka-patch might be a working workaround (mind the wordplay! ;)) I built it into our node.js servers and TCP sockets seem to behave as expected now. |
Unfortunately, I have to revise my previous post: Although
I'm running a pretty basic Socket.IO application which just relays messages from a Redis pub/sub queue to clients.
The application uses the current Socket.IO 0.9.10 on Node.js 0.8.10, and all transports are enabled.
There are ~4000 simultaneous connections, but after some hours the server has four times that many TCP connections. These connections seem to be stale, sitting in the state CLOSE_WAIT or FIN_WAIT2.
The number of these undead connections grows linearly over time and results in high memory usage and load.
As far as I could find out via Google, this is a result of clients not correctly closing connections. My understanding is that the application (Socket.IO) should force-close these connections after some timeout. Is that correct? Is there a bug in Socket.IO?
Any ideas to further debug?
Thanks