
Improve the performance and reliability of redis store. #1260

Closed
wants to merge 1 commit into from

Conversation

@ifsnow commented Jun 14, 2013

Similar issue: #1133

  • Improve the performance of subscribe/unsubscribe/message event logic. (redis.js)
  • Remove duplicate clearHandlers calls. (transport.js)

I operate a system that has lots of simultaneous connections (with frequent reconnections). As time passes, performance and reliability drop dramatically.

After applying the changed code, performance and reliability improved dramatically.

I hope this is helpful to others as well.

@reatailret

Is this problem solved by #1167?

@ifsnow (Author) commented Jun 14, 2013

This is an improvement to the event-handling approach.

I did not use Node.js's native EventEmitter module; registering a separate event listener for every pub/sub call is quite inefficient.
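
For illustration, here is a minimal sketch of the idea (my own wording, not the actual patch): keep a single 'message' listener on the Redis subscriber client and dispatch through a plain object of handlers keyed by channel, rather than attaching a new EventEmitter listener for every subscription. The names `sub`, `handlers`, `subscribe`, and `unsubscribe` are illustrative only.

```js
// Sketch only -- not the code from this pull request.
var redis = require('redis');

var sub = redis.createClient();
var handlers = {}; // channel name -> array of callbacks

// A single 'message' listener, instead of one EventEmitter listener per subscription.
sub.on('message', function (channel, message) {
  var list = handlers[channel];
  if (!list) return;
  for (var i = 0; i < list.length; i++) {
    list[i](message);
  }
});

function subscribe(channel, fn) {
  if (!handlers[channel]) {
    handlers[channel] = [];
    sub.subscribe(channel); // only subscribe on Redis the first time
  }
  handlers[channel].push(fn);
}

function unsubscribe(channel, fn) {
  var list = handlers[channel];
  if (!list) return;
  var idx = list.indexOf(fn);
  if (idx !== -1) list.splice(idx, 1);
  if (list.length === 0) {
    delete handlers[channel];
    sub.unsubscribe(channel); // drop the Redis subscription when no handlers remain
  }
}
```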

Here is a simple test result. (A sketch of the test harness follows the numbers below.)

Repeated every 10 seconds:

  • Create 400 connections.
  • Reconnect all connections.
  • Measure the time until all connections succeed.

After 10 minutes.

  • 0.9.16 version - average time: 6,500 ms (connection losses occur)
  • patched version - average time: 1,400 ms (100% success)
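
For reference, a rough sketch of the kind of harness described above, assuming socket.io-client 0.9.x and a test server at http://localhost:3000 (both assumptions). It is simplified to open fresh connections each round rather than reconnecting existing sockets.

```js
// Sketch only -- rough load test, assuming socket.io-client 0.9.x and a local test server.
var io = require('socket.io-client');

var TOTAL = 400;
var URL = 'http://localhost:3000'; // assumed server address

function runRound() {
  var connected = 0;
  var start = Date.now();

  for (var i = 0; i < TOTAL; i++) {
    var socket = io.connect(URL, { 'force new connection': true });
    socket.on('connect', function () {
      connected++;
      if (connected === TOTAL) {
        console.log('all %d connections established in %d ms', TOTAL, Date.now() - start);
      }
    });
  }
}

// Repeat a round every 10 seconds.
setInterval(runRound, 10000);
runRound();
```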

@KasperTidemann

This is a great improvement and should definitely be merged, in my humble opinion.

@toblerpwn

Update: Pull request #1371 has essentially resolved the leaks for us in production with ~3,500 concurrent sockets. For our environment, that was the correct solution. See my comments there for more details, or read below for my findings on this pull request.

Hey everyone!

We deployed this in production with ~2,500 high-traffic concurrent socket.io connections, and have seen great (but not perfect) results.

To help illustrate, I've attached a few graphs of CPU usage in our staging/load testing environments. Note that this environment is using the Node.js Cluster functionality with 2 total workers, both using/listening with Socket.io redis-store.
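
For context, here is a minimal sketch of that kind of setup, assuming socket.io 0.9's RedisStore and Node's cluster module; the port and Redis connection details are assumptions, not our actual configuration.

```js
// Sketch only -- two cluster workers, each running socket.io 0.9 backed by RedisStore.
var cluster = require('cluster');
var http = require('http');
var sio = require('socket.io');
var redis = require('redis');

if (cluster.isMaster) {
  cluster.fork();
  cluster.fork(); // 2 workers, as in the environment described above
} else {
  var server = http.createServer();
  var io = sio.listen(server);

  io.set('store', new sio.RedisStore({
    redisPub: redis.createClient(),
    redisSub: redis.createClient(),
    redisClient: redis.createClient()
  }));

  server.listen(8080); // assumed port; workers share it via the cluster master
}
```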

First, in this environment, here is a side-by-side example of how well this fork handles our load vs. core socket.io#0.9.16:

[Graph: redis-store-mem-1 — CPU usage under load, this fork vs. socket.io 0.9.16]

Second, here is an exploration of what happens over time with this branch (and why we say it is still imperfect):

[Graph: redis-store-mem-2 — CPU usage over time with this branch]

(Note that, in this environment, our Node.js processes "peg" out at ~60% overall CPU usage. In other words: when the overall CPU usage approaches 60%, that means our Node.js processes are fully pegged, meaning that the socket.io "leak" has fully taken its toll, user interactions are suffering greatly, and the server will go down soon. In other other words: think of 60% as 100%.)

This pull request was closed.