disable buffer pooling in DotNetty transport #4252
Conversation
I actually do see a major difference in the numbers... The first line for a single client appears to be off by some serious factor?
@dnickless that's a byproduct of the batching system we added, which improved overall performance significantly: #4106

The first round of the benchmark puts very little pressure on either side of the wire, since it's a single actor emitting 20 messages at a time, beneath the default threshold of 30 messages per batch used by the batching system. As a result, we're depending on the recurring 40ms timer to flush out writes in that scenario, which is why the first round of the benchmark is a big outlier: the operating system can't guarantee exactly 40ms each time.

I added an article on how to performance-tune this new batching system to the Akka.NET documentation here: https://getakka.net/articles/remoting/performance.html

Just to give you some idea of the numbers:

Before: https://getakka.net/articles/remoting/performance.html#no-io-batching
Average performance: 82,539 msg/s. Standard deviation: 46,827 msg/s.

After: https://getakka.net/articles/remoting/performance.html#with-io-batching
Average performance: 141,091 msg/s. Standard deviation: 15,291 msg/s.

Those figures came from a different piece of hardware than the one I used for the benchmark yesterday (we haven't replaced our standard benchmarking setup since moving to Azure DevOps from our home-grown CI). The figures on the website came from a 2019-generation development laptop; the figures on this PR came from a 2012 laptop with much older hardware. Just now, I re-ran the benchmark with these changes on a third machine (my home office machine, an AMD Ryzen setup built in 2017), which has better hardware overall.
You'll see the big drop in the first round of performance there too; it's the same root issue: falling beneath the batching threshold. All of those batch thresholds (time, message count, total bytes) can be customized in Akka.NET v1.4.0, as described in the article I linked earlier; a rough sketch of what that tuning looks like is shown below.
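A minimal sketch of tuning those thresholds, assuming the HOCON keys described in the linked performance article (max-pending-writes, max-pending-bytes, flush-interval); the exact key names and defaults may vary by version, and the system name, hostname, and port here are hypothetical.

```csharp
using System;
using Akka.Actor;
using Akka.Configuration;

class BatchTuningExample
{
    static void Main()
    {
        // Assumed batching keys from the performance article; the defaults
        // mentioned above are 30 pending writes and a 40ms flush interval.
        var config = ConfigurationFactory.ParseString(@"
            akka {
                actor.provider = remote
                remote.dot-netty.tcp {
                    hostname = localhost
                    port = 8081
                    batching {
                        enabled = true
                        max-pending-writes = 30   # flush once this many messages are queued
                        max-pending-bytes = 16k   # ...or once this many bytes are queued
                        flush-interval = 40ms     # ...or when this timer fires
                    }
                }
            }");

        var system = ActorSystem.Create("tuned-remoting", config);
        Console.WriteLine("Remoting started with custom batching thresholds.");

        // ... run workload ...

        system.Terminate().Wait();
    }
}
```

Lowering max-pending-writes or flush-interval trades throughput for latency, which is the lever the first benchmark round is sensitive to.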
* disabled Ask_does_not_deadlock spec
* Delete Bug3370DotNettyLinuxBufferPoolSpec.cs (deleted since #4252 eliminated the need for it)
* relaxing timeout on FSMTimingSpec
close #3879
close #3273
close #4244
For reasons that are fundamentally structural, it's not safe for Akka.Remote to use DotNetty's buffer pooling of any kind. Serialization and deserialization are handled outside of the ChannelPipeline itself, so the bytes returned from the channel aren't safe for release until after they're successfully decoded by Akka.Remote's endpoint actors. In order to take advantage of buffer pooling, Akka.Remote will need to be redesigned with a more integrated serialization pipeline in mind - something we've discussed in #2378.
Ran some local benchmarks on my development machine here - no major changes in observed performance at all.
Before
After