remember recent transactions we've received and don't ask for them again #19287
base: main
Conversation
chia/full_node/full_node.py (outdated):

    # any invalid transactions we've seen recently, we don't need
    # to see again, so add those to the filter as well
    ...
    # first remove transactions that are older than 2 minutes
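The idea in this hunk can be sketched as a small time-pruned cache of recently seen transaction IDs. This is a hypothetical illustration of the approach, not the actual chia-blockchain code; the class and method names are made up, and the timeout here reflects the 10-minute value the PR later settled on:

```python
import time
from typing import Dict, Optional


class RecentlySeenTransactions:
    """Remember transaction IDs we've recently received (including ones
    that failed validation) so we don't request them from peers again.

    Hypothetical sketch; names and structure are illustrative only.
    """

    def __init__(self, timeout: float = 600.0):  # the PR bumped 2 min -> 10 min
        self.timeout = timeout
        self.seen: Dict[bytes, float] = {}  # tx_id -> timestamp first seen

    def add(self, tx_id: bytes, now: Optional[float] = None) -> None:
        # record the first time we saw this transaction
        self.seen.setdefault(tx_id, time.monotonic() if now is None else now)

    def prune(self, now: Optional[float] = None) -> None:
        # first remove transactions that are older than the timeout
        now = time.monotonic() if now is None else now
        cutoff = now - self.timeout
        self.seen = {tx: ts for tx, ts in self.seen.items() if ts >= cutoff}

    def contains(self, tx_id: bytes) -> bool:
        return tx_id in self.seen
```

Entries still in the cache after pruning would be added to the BIP158 filter sent to peers, so peers skip re-sending those transactions.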
Can you explain the 2 minute aspect of this here?
This was an arbitrary timeout; it might make sense to make it longer. It seemed unnecessary to keep old transactions around indefinitely, that's all.
I bumped it to 10 minutes
I can't reproduce the symptom that this is supposed to address. I'm having second thoughts about including this PR. I'm tempted to rebase it on top of main and possibly explore it further there.
I looked at a 30-second log in debug mode from yesterday and pulled out the lines which had double-spend transactions, then looked for duplicates among those transactions; a brief sample shows numerous duplicates of the same transaction. While I'm not certain this is the sole cause of my heavy CPU utilization, it shows reprocessing of data sometimes 15 seconds apart from the first time it was done, and the double spends seemed to be present most during my high-CPU-utilization periods. Note: Windows 10.

processing-duplicate-transactions-results.txt

Adding the unedited 30-second log below just in case you want to see other things within it for those timestamps.
@OverActiveBladderSystem there's a built-in CPU profiler that can be enabled in the config. To look at the CPU usage over time, assuming you have the chia-blockchain source, run the profiler script. To focus on a specific spike, run it with that time slot as an argument.

This will generate a gprof2dot graph and save it in the current directory. It requires gprof2dot to be installed.
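Concretely, the steps above might look roughly like the following. The setting name, script path, and profile directory here are recollections of the chia-blockchain tooling and may differ between versions, so treat them as assumptions to verify against your own config and source tree:

```shell
# In config.yaml, under the full_node section -- setting name is an
# assumption; check your config version:
#   enable_profiler: True

# Plot overall CPU usage over time (profile directory path is illustrative):
python chia/util/profiler.py ~/.chia/mainnet/profile-node

# Pass a specific slot number to generate the gprof2dot call graph
# for that spike:
python chia/util/profiler.py ~/.chia/mainnet/profile-node 1000
```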
My understanding of the cost of having these invalid transactions in your peers' mempools is that it's proportional to your peer churn. We only request mempool items when we connect to a peer: we send our BIP158 filter of the transactions we have in our mempool (it's like a bloom filter), and the peer responds with its 100 "best" transactions that we don't have. "Best" means high fee-per-cost, so in this case it will be a random 100 invalid transactions. According to your log, we validate each transaction in ~3 milliseconds (for 100 of them, that's 0.3 seconds). But then we'll never get any more bad transactions from that peer again, unless we disconnect and re-connect. During steady state, transactions are only propagated as we learn about new ones, validate them, and add them to our own mempool.
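The connect-time exchange described above can be sketched like this. It is a simplified illustration: a plain set of IDs stands in for the peer's BIP158 filter, and all names are hypothetical rather than the actual chia-blockchain API:

```python
from dataclasses import dataclass
from typing import List, Set


@dataclass
class MempoolItem:
    tx_id: bytes
    fee: int   # fee in mojos
    cost: int  # CLVM execution cost


def best_transactions_for_peer(
    mempool: List[MempoolItem],
    peer_filter: Set[bytes],  # stand-in for the peer's BIP158 filter
    limit: int = 100,
) -> List[MempoolItem]:
    # Skip anything the peer says it already has, then return the
    # highest fee-per-cost items, up to the limit.
    candidates = [item for item in mempool if item.tx_id not in peer_filter]
    candidates.sort(key=lambda item: item.fee / item.cost, reverse=True)
    return candidates[:limit]
```

The point of the comment above is that if the filter doesn't include recently rejected transactions, every fresh connection can receive up to `limit` invalid items and re-validate them all.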
55b50db to e4535c8
This pull request has conflicts, please resolve those before we can evaluate the pull request.
Conflicts have been resolved. A maintainer will review the pull request shortly.
Purpose:
Avoid requesting the same mempool transactions multiple times, especially ones that have failed validation.
Current Behavior:
New Behavior:
Testing Notes: