
Broker can stall when stress testing 40 * 10000 messages #101

Closed · mochi-co opened this issue Sep 10, 2022 · 4 comments
Labels: discussion (Something to be discussed)

Comments

mochi-co (Collaborator) commented Sep 10, 2022

When running `./mqtt-stresser -broker tcp://localhost:1883 -num-clients=40 -num-messages=10000`, the broker appears to stall occasionally, although it does not freeze and continues to process other messages as expected.

Testing against the latest https://github.com/fhmq/hmq, we find only a marginal performance difference.

This raises the question of whether there is any benefit to continuing with circular buffers (which are difficult to maintain and control) now that the performance of channels has improved significantly, or whether the circular-buffer mechanism should be replaced with a worker pool. This would also alleviate the issues discussed in #95 and could potentially reduce the overall number of goroutines, as mentioned in #80.
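To make the channel-based alternative concrete, here is a minimal sketch, not the broker's actual code: the `packet` and `clientWriter` names are invented for illustration. It shows a per-client buffered channel drained by a single worker goroutine in place of the circular buffer.

```go
package main

import (
	"fmt"
	"sync"
)

// packet stands in for an outbound MQTT packet.
type packet struct {
	payload []byte
}

// clientWriter replaces a per-client circular buffer with a plain
// buffered channel drained by one worker goroutine.
type clientWriter struct {
	out chan packet
	wg  sync.WaitGroup
}

func newClientWriter(buffer int) *clientWriter {
	w := &clientWriter{out: make(chan packet, buffer)}
	w.wg.Add(1)
	go func() {
		defer w.wg.Done()
		for pk := range w.out {
			// In a real broker this would write the packet to the
			// client's network connection.
			fmt.Printf("writing %d bytes\n", len(pk.payload))
		}
	}()
	return w
}

// send blocks once the buffer is full, which is the stall behaviour
// discussed in the comments below.
func (w *clientWriter) send(pk packet) {
	w.out <- pk
}

func (w *clientWriter) close() {
	close(w.out)
	w.wg.Wait()
}

func main() {
	w := newClientWriter(1024)
	w.send(packet{payload: []byte("hello")})
	w.close()
}
```

A plain blocking send like this still stalls when the buffer fills up, which is the trade-off raised in the discussion below.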

Discussion invited :)

mochi-co added the discussion (Something to be discussed) label Sep 10, 2022
mochi-co self-assigned this Sep 10, 2022
alexsporn (Contributor) commented Sep 12, 2022

A worker per client probably makes sense, so that slow clients cannot slow down the others.
Channels are a good alternative to the circular buffer, but they will also stall once full unless you allow messages to be dropped when the buffer fills up, as sketched below.
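As a rough illustration of dropping instead of stalling (the `trySend` helper is hypothetical, not part of the broker), a non-blocking send via `select` with a `default` case looks like this:

```go
package main

import "fmt"

// trySend performs a non-blocking enqueue onto a per-client buffered
// channel and reports whether the message had to be dropped because
// the buffer was full.
func trySend(out chan<- []byte, msg []byte) (dropped bool) {
	select {
	case out <- msg:
		return false
	default:
		// Buffer full: drop rather than block the publisher, so one
		// slow client cannot stall the others.
		return true
	}
}

func main() {
	out := make(chan []byte, 1)
	fmt.Println(trySend(out, []byte("a"))) // false: fits in the buffer
	fmt.Println(trySend(out, []byte("b"))) // true: buffer full, dropped
}
```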

Maybe a priority queue per client would be the best approach. That way you could prioritize pending control packets over new messages, since new messages will generate even more pressure in the form of additional control packets.
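A minimal sketch of that prioritisation, assuming two per-client queues rather than a full priority queue (all names here are illustrative, not the broker's API):

```go
package main

import "fmt"

// drainN services one client's two queues, always checking the
// control-packet queue first so pending acks are flushed before new
// application messages create further pressure. n is just the number
// of packets handled in this toy example.
func drainN(control, messages <-chan string, n int) {
	for i := 0; i < n; i++ {
		// Give control packets absolute priority.
		select {
		case pk := <-control:
			fmt.Println("control:", pk)
			continue
		default:
		}
		select {
		case pk := <-control:
			fmt.Println("control:", pk)
		case pk := <-messages:
			fmt.Println("message:", pk)
		}
	}
}

func main() {
	control := make(chan string, 4)
	messages := make(chan string, 4)
	messages <- "PUBLISH topic/a"
	control <- "PUBACK 1"
	drainN(control, messages, 2) // the PUBACK is handled before the PUBLISH
}
```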

mochi-co (Collaborator, Author) commented:

Just a note to anyone who comes across this issue - I've begun a fairly significant refactor of the broker which is looking quite promising, so I would caution against making any sizeable PRs for the time being. I hope to resolve some of the longer-standing issues we've been experiencing.

mochi-co (Collaborator, Author) commented:

This, and all issues related to it, are resolved in the upcoming v2.0.0 release. I hope to have it ready for pre-release in the near future.

mochi-co (Collaborator, Author) commented:

This issue has been resolved in v2.0.0
