Broker can stall when stress testing 40 * 10000 messages #101
Comments
Probably a worker per client makes sense, so that slow clients cannot slow down others. Maybe a priority queue per client would be the best approach. This way you could prioritize pending control packets over new messages, since new messages will generate even more pressure in the form of more control packets.
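A minimal sketch of what that per-client worker could look like, using two channels and a nested select so pending control packets are always drained before new messages. All of the names here (packet, clientWorker, controlCh, publishCh) are hypothetical and not part of the existing broker code; this only illustrates the prioritization idea:

```go
package main

import (
	"fmt"
	"time"
)

// Hypothetical types for illustration only; none of these exist in the broker.
type packet struct {
	id uint16
}

type clientWorker struct {
	controlCh chan packet   // pending control packets (higher priority)
	publishCh chan packet   // new messages (lower priority)
	done      chan struct{} // closed when the client disconnects
}

// run services a single client, so one slow client cannot block the others.
func (w *clientWorker) run(write func(packet)) {
	for {
		// Drain any pending control packets first, without blocking.
		select {
		case pk := <-w.controlCh:
			write(pk)
			continue
		default:
		}
		// Otherwise wait for whichever packet arrives next.
		select {
		case pk := <-w.controlCh:
			write(pk)
		case pk := <-w.publishCh:
			write(pk)
		case <-w.done:
			return
		}
	}
}

func main() {
	w := &clientWorker{
		controlCh: make(chan packet, 16),
		publishCh: make(chan packet, 1024),
		done:      make(chan struct{}),
	}
	go w.run(func(pk packet) { fmt.Println("wrote packet", pk.id) })
	w.controlCh <- packet{id: 1} // e.g. a pending ACK-style control packet
	w.publishCh <- packet{id: 2}
	time.Sleep(50 * time.Millisecond) // let the worker drain before stopping
	close(w.done)
}
```

The second select still races control against publish when both are ready at the same instant, but the non-blocking drain at the top of the loop keeps the backlog of pending control packets bounded, which is the pressure concern described above.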
Just a note to anyone who comes across this issue: I've begun a fairly significant refactor of the broker which is looking quite promising, so I would caution against making any sizeable PRs for the time being. I hope to resolve some of the longer-standing issues we've been experiencing.
This, and all issues related to this, are resolved in the upcoming v2.0.0 release. I hope to get it ready for pre-release in the near future.
This issue has been resolved in v2.0.0 |
When running
./mqtt-stresser -broker tcp://localhost:1883 -num-clients=40 -num-messages=10000
the broker appears to occasionally stall, although it does not freeze, and it continues to process other messages as expected. Testing against the latest https://github.com/fhmq/hmq, we find only a marginal performance difference.
This raises the question of whether there is any benefit to continuing to use circular buffers (which are difficult to maintain and control) now that the performance of channels has improved significantly, or whether the circular-buffer mechanism should be replaced with a worker pool (a rough sketch of what that might look like follows below). This would also alleviate the issues discussed in #95 and could potentially reduce the overall number of goroutines, as mentioned in #80.
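As a rough sketch of that channel-based alternative, assuming dispatch work can be modelled as independent jobs (the job and workerPool names and the sizes chosen below are illustrative assumptions, not the broker's actual API), a fixed pool of workers consuming from one buffered channel would replace the circular buffers while capping the number of goroutines:

```go
package main

import (
	"fmt"
	"sync"
)

// Hypothetical sketch of replacing the circular buffers with a buffered
// channel feeding a fixed-size worker pool. The job type and pool sizing
// are illustrative only, not part of the existing broker code.
type job struct {
	clientID string
	payload  []byte
}

type workerPool struct {
	jobs chan job
	wg   sync.WaitGroup
}

func newWorkerPool(workers, queueSize int, handle func(job)) *workerPool {
	p := &workerPool{jobs: make(chan job, queueSize)}
	for i := 0; i < workers; i++ {
		p.wg.Add(1)
		go func() {
			defer p.wg.Done()
			for j := range p.jobs { // drain until the channel is closed
				handle(j)
			}
		}()
	}
	return p
}

// submit blocks when the queue is full, applying natural backpressure
// instead of the head/tail bookkeeping a ring buffer requires.
func (p *workerPool) submit(j job) { p.jobs <- j }

// close stops accepting work and waits for in-flight jobs to finish.
func (p *workerPool) close() {
	close(p.jobs)
	p.wg.Wait()
}

func main() {
	pool := newWorkerPool(8, 1024, func(j job) {
		fmt.Printf("dispatching %d bytes for %s\n", len(j.payload), j.clientID)
	})
	for i := 0; i < 4; i++ {
		pool.submit(job{clientID: "stress-client", payload: []byte("hello")})
	}
	pool.close()
}
```

With this shape, backpressure comes for free from the buffered channel, and the goroutine count is bounded by the pool size rather than growing with the number of in-flight messages.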
Discussion invited :)