Make use of redis pub-sub for improved performance #46
Conversation
Nice! Hopefully this solves #41.
Created a small repo to test this issue.
Thank you for your PR! I think
We can listen for the entire
Anyway, the implementations of `add` and `del` look wrong to me.
Sorry, it looks correct. I was misunderstanding the Redis client; I will check again later.
```js
  next();
}, function(err) {
  if (err) self.emit('error', err);
```
You move on even if there's an `err`. Add `return`.
And also pass it to the callback?
We can actually verify that this PR is needed. In the attached graph we have 3 instances on the same redis-adapter. The green and yellow ones are getting 50% of the traffic each, while the blue one is getting 0%. The high CPU on the blue instance shows there is a high baseline CPU for non-active nodes. Looking forward to this! Thanks @koszta
Guys, I merged and applied changes of my own. @koszta you are the man. Can you take a look at
I checked it out. Everything looks good! Lots of improvements to my code. :) Thanks to everyone who tested this or contributed to it!
@rauchg could you bump the package version on
We just deployed our upgrade to version 0.2.0 to production with this PR, and it fixed EVERYTHING! Immediately seeing a huge drop in CPU across our multiple servers, and our auto-scaling groups can now handle more sockets before needing to scale up. Thank you @koszta ! 💯 ❗ 💥
@objectivSee thanks for the amazing feedback! This is what we live for ❤️ 💓 ❤️
This follows socketio#46. Each node will now listen to only three channels:

- `socket.io#<namespace>#*`: used when broadcasting
- `socket.io-request#<namespace>#`: used for requesting information (e.g. get every room in the cluster)
- `socket.io-response#<namespace>#`: used for responding to requests

We keep the benefits of socketio#46 since:

- messages from other namespaces are ignored
- when emitting to a single room, the message is sent to `socket.io#<namespace>#<my-room>`, so listeners can check whether they have the room before unpacking the message (which is CPU-consuming)

But there is no need to subscribe/unsubscribe every time a socket joins or leaves a room (which is also CPU-consuming when there are thousands of subscriptions).
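As a rough sketch of the naming scheme above (the `channelNames` helper is illustrative, not the adapter's actual API), the three channels for a namespace can be built like this:

```javascript
// Hypothetical helper showing the three-channel naming scheme.
// `prefix` is assumed to be 'socket.io'; `nsp` is the namespace, e.g. '/'.
function channelNames(nsp, prefix = 'socket.io') {
  return {
    broadcastPattern: `${prefix}#${nsp}#*`,     // PSUBSCRIBE pattern for broadcasts
    request: `${prefix}-request#${nsp}#`,       // requests (e.g. list every room)
    response: `${prefix}-response#${nsp}#`,     // responses to those requests
    room: (room) => `${prefix}#${nsp}#${room}`, // publish target for a single room
  };
}
```

With this layout, a node subscribes to exactly three channels/patterns per namespace, regardless of how many rooms its sockets join.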
Previously, socket.io-redis listened on the whole pattern `socket.io#*`, and messages were sent to `socket.io#id`. This means every socket.io server received every message, even if the server did not use that namespace and even if no client on it was listening to any of those rooms. If some nodes send a lot of messages, all the socket.io servers hit 100% CPU really quickly.
I changed it to use `socket.io#namespace#room` so that messages can already be filtered on the Redis server, and each socket.io server receives only the messages it needs.
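A hedged sketch of the filtering this enables (`shouldDecode` and its parameters are illustrative names, not the adapter's API): a node receiving a `pmessage` can inspect the channel name alone and drop the message before paying the CPU cost of decoding its payload.

```javascript
// Hypothetical filter: decide from the channel name whether a message is
// worth decoding on this node. Assumed format: socket.io#<namespace>#<room>,
// with an empty <room> for namespace-wide broadcasts.
function shouldDecode(channel, myNamespace, myRooms) {
  const parts = channel.split('#');
  if (parts.length < 3 || parts[1] !== myNamespace) return false; // other namespace: ignore
  const room = parts[2];
  if (room === '') return true; // namespace-wide broadcast: always decode
  return myRooms.has(room);     // single-room message: decode only if we hold the room
}
```

The cheap string comparison replaces the expensive unpack-then-discard path that caused the baseline CPU load described above.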