Prioritize verification of higher fee TXs and do not starve RpcServer w/ Large mempool #356
Conversation
On MSDN, I saw a description like this:
But most of the operations in "Cache.cs" are writes, so I don't think this will improve performance. https://docs.microsoft.com/en-us/dotnet/api/system.threading.readerwriterlock
Without using it, readers block each other, which is bad. It is possible to implement read/write locks without significantly impacting write performance; if the C# implementation has issues we can write a proper implementation optimized for our needs, but a straight mutex on paths that allow concurrent readers is not going to be optimal.

It's true that ReaderWriterLock doesn't have the most efficient implementation. I switched the PR to use ReaderWriterLockSlim. Are you suggesting I remove the use of the R/W lock just for the Cache but leave it in the other places?
There seems to be some sort of issue after switching to ReaderWriterLockSlim that is causing my testing to deadlock. I'll update when I understand why.
Tested to be working now with the faster locks. They are actually quite fast: about 3x faster than ReaderWriterLock.
The code should also dispose the locks when their parent objects are disposed. I'll handle that in one more commit.
That should do it.
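For reference, a minimal sketch of the ReaderWriterLockSlim pattern discussed above, including the disposal point; the class and member names here are illustrative, not the PR's actual Cache.cs code:

using System;
using System.Collections.Generic;
using System.Threading;

// Illustrative cache guarded by a reader/writer lock.
internal class RwLockCache<TKey, TValue> : IDisposable
{
    private readonly Dictionary<TKey, TValue> items = new Dictionary<TKey, TValue>();
    private readonly ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();

    public bool TryGet(TKey key, out TValue value)
    {
        rwLock.EnterReadLock(); // concurrent readers do not block each other
        try
        {
            return items.TryGetValue(key, out value);
        }
        finally
        {
            rwLock.ExitReadLock();
        }
    }

    public void Add(TKey key, TValue value)
    {
        rwLock.EnterWriteLock(); // writers take the lock exclusively
        try
        {
            items[key] = value;
        }
        finally
        {
            rwLock.ExitWriteLock();
        }
    }

    // ReaderWriterLockSlim wraps OS synchronization resources, so it is
    // disposed together with its parent object, as noted above.
    public void Dispose()
    {
        rwLock.Dispose();
    }
}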
Transaction[] transactions;
lock (temp_pool)
{
    if (temp_pool.Count == 0) continue;
    transactions = temp_pool.ToArray();
    temp_pool.Clear();
    if (temp_pool.Count < 1000)
I should make the 1000 here a constant to make it more easily configurable.
It would be a bit more accurate if the condition were <= instead of <, but it doesn't make much of a difference; it just optimizes for the case where temp_pool.Count is exactly 1000.
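A minimal sketch of the suggested constant extraction and boundary check; the constant name, field, and helper methods are hypothetical, only the value 1000 and the temp_pool pattern come from the diff above:

// Hypothetical constant; the PR itself still uses the literal 1000.
private const int MaxTransactionsPerBatch = 1000;
private readonly List<Transaction> temp_pool = new List<Transaction>();

// Snapshot and clear the pending pool under the lock, so verification
// happens outside the critical section.
private Transaction[] TakeBatch()
{
    lock (temp_pool)
    {
        if (temp_pool.Count == 0) return new Transaction[0];
        Transaction[] transactions = temp_pool.ToArray();
        temp_pool.Clear();
        return transactions;
    }
}

// <= rather than < also treats a batch of exactly MaxTransactionsPerBatch
// transactions as small, per the comment above.
private static bool IsSmallBatch(Transaction[] batch)
{
    return batch.Length <= MaxTransactionsPerBatch;
}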
neo/Network/LocalNode.cs (Outdated)
.OrderBy(p => p.NetworkFee / p.Size)
.ThenBy(p => p.NetworkFee)
.ThenBy(p => new BigInteger(p.Hash.ToArray()))
.Take(1000)
Ditto, this should be the same constant mentioned above.
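A sketch of that selection with the hypothetical constant applied; mem_pool is an assumed collection of transactions, and the descending direction reflects the later comment in this thread that the sort had to be reversed so the highest fee-per-byte transactions are taken first (requires System.Linq and System.Numerics):

// Take the highest-priority batch: fee per byte first, then absolute fee,
// with the hash as a deterministic tie-breaker.
Transaction[] batch = mem_pool
    .OrderByDescending(p => p.NetworkFee / p.Size)
    .ThenByDescending(p => p.NetworkFee)
    .ThenBy(p => new BigInteger(p.Hash.ToArray()))
    .Take(MaxTransactionsPerBatch)
    .ToArray();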
// Use normal AddTransactionLoop to verify if there is a large mem_pool to avoid
// starvation of RpcServer with large mem_pool.
if (millisSinceLastBlock > 10000 && remain.Length < 1000)
I should probably make these numbers constants also.
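A sketch of pulling those magic numbers into constants; the constant and parameter names are hypothetical, only the values 10000 and 1000 come from the condition above:

// Hypothetical names; values copied from the condition in the diff.
private const int RecentBlockMillis = 10000;
private const int SmallBacklogThreshold = 1000;

// Defer to the normal AddTransactionLoop when the backlog is large so the
// RpcServer is not starved, per the comment in the diff above.
private static bool ShouldVerifyInline(long millisSinceLastBlock, int remaining)
{
    return millisSinceLastBlock > RecentBlockMillis
        && remaining < SmallBacklogThreshold;
}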
Ok, I'll start a node running 288 and start working on PRs against that PR. Sound good? In the meantime, this is here for anyone who needs it running 2.8.0.
I had copied the fee sorting for the transactions from the place that removes those greater than the max, and took for granted that the sorting needed to be in the opposite order where I am using it. This is now fixed. Also, getting a transaction's NetworkFee where this change introduces it can throw a null reference exception on line 8 of Transaction.cs:
Yeah, I'm closing it now.
I think that after a block, the code that reinserts the transactions back into the message queue should insert higher-priority transactions first, if it doesn't already do that.
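A hedged sketch of that reinsertion order, reusing the fee-per-byte sort from earlier in the thread; the helper name is hypothetical (requires System.Collections.Generic and System.Linq):

// Yield transactions highest-priority first so they re-enter the queue
// ahead of cheaper ones after a block.
private static IEnumerable<Transaction> InPriorityOrder(IEnumerable<Transaction> txs)
{
    return txs
        .OrderByDescending(p => p.NetworkFee / p.Size)
        .ThenByDescending(p => p.NetworkFee);
}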
When there is a large mem_pool, readers from the RpcServer should be able to do as much work concurrently as possible. This change is built on top of #355, which switches to using R/W locks for higher concurrency. It causes TXs to be verified in batches of 1000 instead of all at once, which starved the RpcServer.