
Prioritize verification of higher fee TXs and do not starve RpcServer w/ Large mempool #356

Conversation

jsolman (Contributor) commented Aug 24, 2018

When there is a large mem_pool, readers from the RpcServer should be able to proceed with as much concurrency as possible. This change is built on top of #355, which switches to R/W locks for higher concurrency. It causes TXs to be verified in batches of 1000 instead of all at once, which starves the RpcServer.
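A minimal sketch of the batching idea (illustrative names, not the PR's actual code; `Tx` stands in for Neo's `Transaction` and `FeePerByte` approximates `NetworkFee / Size`):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Verify at most BatchSize transactions per pass instead of draining the
// whole pool at once, releasing the lock between batches so RPC readers
// can get in.
class BatchedVerifier
{
    public const int BatchSize = 1000;
    public class Tx { public decimal FeePerByte; }

    readonly List<Tx> temp_pool = new List<Tx>();

    public void VerifyPending(Action<Tx> verify)
    {
        while (true)
        {
            Tx[] batch;
            lock (temp_pool)
            {
                if (temp_pool.Count == 0) return;
                batch = temp_pool
                    .OrderByDescending(t => t.FeePerByte) // highest fee first
                    .Take(BatchSize)
                    .ToArray();
                foreach (var t in batch) temp_pool.Remove(t);
            }
            foreach (var t in batch) verify(t); // verification outside the lock
        }
    }
}
```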

erikzhang (Member) commented:

In the MSDN docs, I saw this description:

ReaderWriterLock works best where most accesses are reads, while writes are infrequent and of short duration.

But most of the operations in Cache.cs are writes, so I don't think this will improve performance.

https://docs.microsoft.com/en-us/dotnet/api/system.threading.readerwriterlock

jsolman (Contributor, Author) commented Aug 25, 2018

Without it, readers block each other, which is bad. It is possible to implement read/write locks without significantly impacting write performance; if the C# implementation has issues, we can write one optimized for our needs, but a plain mutex on paths that allow concurrent readers is not going to be optimal.

It's true that ReaderWriterLock doesn't have the most efficient implementation. I switched the PR to use ReaderWriterLockSlim.

Are you suggesting I remove the use of the R/W lock just for the Cache but leave it for the other places?
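The pattern in question, as a minimal sketch (an illustrative cache, not the PR's Cache.cs):

```csharp
using System.Collections.Generic;
using System.Threading;

// Concurrent readers proceed in parallel; writers get exclusive access.
class RwCache<TKey, TValue>
{
    readonly ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();
    readonly Dictionary<TKey, TValue> dict = new Dictionary<TKey, TValue>();

    public bool TryGet(TKey key, out TValue value)
    {
        rwLock.EnterReadLock();            // many readers at once
        try { return dict.TryGetValue(key, out value); }
        finally { rwLock.ExitReadLock(); }
    }

    public void Set(TKey key, TValue value)
    {
        rwLock.EnterWriteLock();           // exclusive
        try { dict[key] = value; }
        finally { rwLock.ExitWriteLock(); }
    }
}
```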

jsolman (Contributor, Author) commented Aug 25, 2018

There seems to be some sort of issue after switching to ReaderWriterLockSlim that is causing my testing to deadlock. I'll update when I understand why.
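(For reference, one common trap with ReaderWriterLockSlim, hedged here since the actual cause above isn't stated, is trying to take the write lock while already holding a plain read lock; under the default NoRecursion policy that throws LockRecursionException. The safe upgrade path is an upgradeable read lock:)

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Hypothetical example, not the PR's code: an upgradeable read lock still
// allows other readers until it is actually upgraded to a write lock.
static class GetOrAddExample
{
    static readonly ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();
    static readonly Dictionary<string, int> cache = new Dictionary<string, int>();

    public static int GetOrAdd(string key, Func<int> factory)
    {
        rwLock.EnterUpgradeableReadLock();
        try
        {
            if (cache.TryGetValue(key, out int value)) return value;
            rwLock.EnterWriteLock();       // legal upgrade from upgradeable read
            try
            {
                value = factory();
                cache[key] = value;
                return value;
            }
            finally { rwLock.ExitWriteLock(); }
        }
        finally { rwLock.ExitUpgradeableReadLock(); }
    }
}
```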

jsolman (Contributor, Author) commented Aug 25, 2018

Tested to be working now with the faster locks. They are actually quite fast:

https://blogs.msdn.microsoft.com/pedram/2007/10/07/a-performance-comparison-of-readerwriterlockslim-with-readerwriterlock/

ReaderWriterLockSlim is about 3x faster than ReaderWriterLock. It is a little slower than a monitor, but the concurrency gained in cases like the mem_pool with the current code base far outweighs the slight difference between the write lock and the C# lock statement.
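A rough way to reproduce that kind of comparison locally (a sketch, not the linked benchmark):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

// Uncontended micro-benchmark: compares acquire/release cost of Monitor
// (the C# lock statement) with ReaderWriterLockSlim read and write locks.
class LockBench
{
    static void Main()
    {
        const int N = 10_000_000;
        object mon = new object();
        var rw = new ReaderWriterLockSlim();

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++) { lock (mon) { } }
        Console.WriteLine($"Monitor:         {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        for (int i = 0; i < N; i++) { rw.EnterReadLock(); rw.ExitReadLock(); }
        Console.WriteLine($"RWLS read lock:  {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        for (int i = 0; i < N; i++) { rw.EnterWriteLock(); rw.ExitWriteLock(); }
        Console.WriteLine($"RWLS write lock: {sw.ElapsedMilliseconds} ms");
    }
}
```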

jsolman (Contributor, Author) commented Aug 25, 2018

The code should also dispose the locks when their parent objects are disposed. I'll handle that in one more commit.
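Roughly this shape (illustrative class, not the PR's code):

```csharp
using System;
using System.Threading;

// ReaderWriterLockSlim implements IDisposable, so an object that owns one
// should release it in its own Dispose.
class LockedResource : IDisposable
{
    readonly ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();

    public void Dispose()
    {
        rwLock.Dispose(); // frees the lock's underlying event handles
    }
}
```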

jsolman (Contributor, Author) commented Aug 25, 2018

That should do it.

```diff
 Transaction[] transactions;
 lock (temp_pool)
 {
     if (temp_pool.Count == 0) continue;
-    transactions = temp_pool.ToArray();
-    temp_pool.Clear();
+    if (temp_pool.Count < 1000)
```
jsolman (Contributor, Author): I should make the 1000 here a constant to make it more easily configurable.

jsolman (Contributor, Author): It would be a bit more accurate if the condition were <= instead of <, but it doesn't make much of a difference; it just optimizes for the case where temp_pool.Count is exactly 1000.

```csharp
.OrderBy(p => p.NetworkFee / p.Size)
.ThenBy(p => p.NetworkFee)
.ThenBy(p => new BigInteger(p.Hash.ToArray()))
.Take(1000)
```
jsolman (Contributor, Author): Ditto, this should be the same constant mentioned above.


```csharp
// Use normal AddTransactionLoop to verify if there is a large mem_pool to avoid
// starvation of RpcServer with large mem_pool.
if (millisSinceLastBlock > 10000 && remain.Length < 1000)
```
jsolman (Contributor, Author): I should probably make these numbers constants also.
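For illustration, the constants might look like this (hypothetical names, not from the PR):

```csharp
// Hypothetical naming for the magic numbers in this diff:
internal static class VerificationSettings
{
    public const int MaxTransactionsPerBatch = 1000;  // replaces the literal 1000
    public const int RecentBlockMilliseconds = 10000; // replaces the literal 10000
}
```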

vncoelho (Member) commented:

@jsolman, we need you on #288.
Let's think about these kinds of improvements there. I think that is the best way to move forward.

jsolman (Contributor, Author) commented Aug 25, 2018

Ok, I'll start a node running #288 and start working on PRs against that PR. Sound good? In the meantime, this is here for anyone who needs it on 2.8.0.

EdgeDLT mentioned this pull request Aug 25, 2018
jsolman (Contributor, Author) commented Sep 4, 2018

I had copied the fee sorting for the transactions from the code that removes transactions when the pool exceeds its maximum size, and overlooked that the sorting needed to be in the opposite order where I am using it. This is now fixed.
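Roughly the distinction (stub types, not Neo's classes):

```csharp
using System.Collections.Generic;
using System.Linq;

// Eviction wants ascending order (drop the cheapest first), while selecting
// a batch to verify wants descending order (verify the most valuable first).
static class FeeOrdering
{
    public class Tx { public decimal NetworkFee; public int Size; }

    public static Tx[] SelectForVerification(IEnumerable<Tx> pool, int batchSize) =>
        pool.OrderByDescending(t => t.NetworkFee / t.Size) // fee per byte
            .ThenByDescending(t => t.NetworkFee)
            .Take(batchSize)
            .ToArray();
}
```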

Also, getting a transaction's NetworkFee where this change introduces it can throw a null reference exception on line 8 of Transaction.cs:

```csharp
Fixed8 input = References.Values.Where(p => p.AssetId.Equals(Blockchain.UtilityToken.Hash)).Sum(p => p.Value);
```

So I'm now guarding against that.
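The guard is roughly this shape (a stub, not Neo's actual Transaction or Fixed8):

```csharp
using System.Collections.Generic;
using System.Linq;

// References can be null when a referenced input can't be resolved, so the
// fee calculation must check it before summing.
class TxFeeStub
{
    // Maps each input to the output it spends; null if any input is unknown.
    public Dictionary<int, (string AssetId, decimal Value)> References;

    public decimal InputValue(string utilityTokenHash)
    {
        var refs = References;
        if (refs == null) return 0m; // guard; the PR may handle this differently
        return refs.Values
            .Where(p => p.AssetId == utilityTokenHash)
            .Sum(p => p.Value);
    }
}
```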

erikzhang (Member) commented:

Since #288 has been merged, can this be closed? @jsolman

jsolman (Contributor, Author) commented Sep 14, 2018

Yeah I’m closing it now.

jsolman closed this Sep 14, 2018
jsolman (Contributor, Author) commented Sep 14, 2018

I think the code that reinserts the transactions back into the message queue after a block needs to insert higher-priority transactions first, though, if it doesn't already do that.
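A sketch of that reinsertion order (illustrative types, assuming a simple FIFO queue):

```csharp
using System.Collections.Generic;
using System.Linq;

// After a block, requeue unverified transactions highest-fee-first so they
// are re-verified in priority order.
static class Requeue
{
    public class Tx { public decimal NetworkFee; public int Size; }

    public static void ReinsertAfterBlock(IEnumerable<Tx> unverified, Queue<Tx> queue)
    {
        foreach (var tx in unverified
                     .OrderByDescending(t => t.NetworkFee / t.Size)
                     .ThenByDescending(t => t.NetworkFee))
        {
            queue.Enqueue(tx); // highest priority gets re-verified first
        }
    }
}
```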

Thacryba pushed a commit to simplitech/neo that referenced this pull request Feb 17, 2020