mempool/validation: mempool ancestor/descendant limits for packages #21800
Conversation
The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

Conflicts: Reviewers, this pull request conflicts with the following ones. If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.
Force-pushed from 24a625f to 95c8f11
Force-pushed from 95c8f11 to b80cd25
Concept ACK. Treating every transaction in the package as each other's ancestor and descendant is a good, conservative heuristic to use, since it can never underestimate the true number of ancestors/descendants. If it's too limiting, we could potentially implement looser limits in future.
Concept ACK. The simplification taken of assuming each transaction has as many more ancestors and descendants as there are transactions in the package is a clever way to avoid writing/repurposing a bunch of code that's not necessary for the ultimate purpose of allowing package relay.
However, the more the results returned by TMPA diverge from the results we'd get from submitting each transaction to our mempool individually, the more I think package validation should be using a different interface than individual transaction validation (e.g., a different parameter to TMPA or a different RPC altogether).
Reviewed all the code, ran the functional tests, and tweaked the tests to see if I could get something unexpected to happen (nothing did). I have one minor inline comment below about using a different error string when we're using different evaluation criteria.
I tried to think of additional tests, but the only thing I've got is that it might be nice to either replicate or convert the CPFP carve out tests from test/functional/mempool_package_onemore.py to use TMPA, e.g. as a variation of the current "A-shaped test" (I actually tried to do this real quick but couldn't get it to work; it was clearly my fault, as I got the same error whether I used submitrawtransaction or TMPA).
Overall, this PR LGTM. Thanks!
@harding thank you for the review!!!
I agree, and they will likely continue to diverge if we add
There's certainly no need for users to interact with packages before there's a reliable package relay mechanism, so starting with regtest-only seems like a good idea to me. If someone later comes along with a reason it should be user-facing, they can put in the (probably trivial) amount of work to make the RPC available on mainnet and the other networks.
For users, yes. I don't think anyone is using it today for L2 testing and I'm not sure it's really well suited to that---testmempoolaccept tells you whether your transaction would be accepted into the current mempool, but L2 testers really want to know whether the transaction will be accepted into a future mempool; a failure now is a reliable harbinger of failure later, but a success now doesn't guarantee success in the future (even ignoring that relay policy can be made more restrictive).
I'm not convinced this is true. It is impossible for L2 testers to know whether the package would be accepted into a future mempool, as fee levels could theoretically be anything up to infinite. So there is nothing to test. The L2 testing would be for checking that the package would be accepted at the current fee levels (e.g. possibly just before broadcasting the package, or for general testing at current fee levels).
Concept ACK
Force-pushed from b80cd25 to a837686
…heckLimits

This does not change existing behavior. The ancestor/descendant limits are inclusive of the entries themselves, but CalculateAncestorsAndCheckLimits() does not need access to them.
When calculating ancestor/descendant counts for transactions in the package, as a heuristic, count every transaction in the package as an ancestor and descendant of every other transaction in the package. This may overestimate, but will not underestimate, the ancestor/descendant counts. This shortcut still produces an accurate count for packages of 1 parent + 1 child.
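The heuristic can be sketched as follows. This is a simplified Python illustration, not the actual C++ in `CheckPackageLimits`; the function name and the representation of in-mempool ancestors/descendants as plain sets of txids are assumptions for illustration only:

```python
def package_limit_check(package_count, mempool_ancestors, mempool_descendants,
                        ancestor_limit=25, descendant_limit=25):
    """Worst-case heuristic: treat every package transaction as an ancestor
    and descendant of every other one, so the package itself contributes
    package_count transactions to the ancestor and descendant totals, on top
    of the combined in-mempool ancestors/descendants (sets of txids) of all
    package members.

    May overestimate the true counts, but never underestimates them, and is
    exact for a package of 1 parent + 1 child."""
    if len(mempool_ancestors) + package_count > ancestor_limit:
        return False
    if len(mempool_descendants) + package_count > descendant_limit:
        return False
    return True

# A 2-tx package whose members have 23 distinct in-mempool ancestors in
# total sits right at the default limit of 25; one more ancestor fails it.
assert package_limit_check(2, set(range(23)), set())
assert not package_limit_check(2, set(range(24)), set())
```

With this shortcut there is no need to compute per-transaction ancestor sets within the package; a single aggregate check covers every member conservatively.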
Force-pushed from 6d8f687 to 5d513c0
Code review ACK 5d513c0
https://cirrus-ci.com/task/6628050143019008:

Run tx_pool_standard with args ['/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-x86_64-pc-linux-gnu/src/test/fuzz/fuzz', '-runs=1', '/tmp/cirrus-ci-build/ci/scratch/qa-assets/fuzz_seed_corpus/tx_pool_standard']
INFO: Seed: 517370066
INFO: Loaded 1 modules (547316 inline 8-bit counters): 547316 [0x556b20760ed8, 0x556b207e68cc),
INFO: Loaded 1 PC tables (547316 PCs): 547316 [0x556b207e68d0,0x556b21040810),
INFO: 3949 files found in /tmp/cirrus-ci-build/ci/scratch/qa-assets/fuzz_seed_corpus/tx_pool_standard
INFO: -max_len is not provided; libFuzzer will not generate inputs larger than 1048576 bytes
INFO: seed corpus: files: 3949 min: 1b max: 1048576b total: 95048838b rss: 213Mb
#128 pulse cov: 17131 ft: 25503 corp: 122/895b exec/s: 64 rss: 225Mb
#256 pulse cov: 17352 ft: 36520 corp: 244/2752b exec/s: 64 rss: 247Mb
#512 pulse cov: 23370 ft: 57316 corp: 399/6397b exec/s: 56 rss: 292Mb
#1024 pulse cov: 25687 ft: 82402 corp: 673/19Kb exec/s: 44 rss: 400Mb
fuzz: test/fuzz/tx_pool.cpp:232: auto (anonymous namespace)::tx_pool_standard_fuzz_target(FuzzBufferType)::(anonymous class)::operator()() const: Assertion `"it != result_package.m_tx_results.end()" && check' failed.
==24597== ERROR: libFuzzer: deadly signal
#0 0x556b1cad4ab1 (/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-x86_64-pc-linux-gnu/src/test/fuzz/fuzz+0x278fab1)
#1 0x556b1ca1fc08 (/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-x86_64-pc-linux-gnu/src/test/fuzz/fuzz+0x26dac08)
#2 0x556b1ca04d53 (/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-x86_64-pc-linux-gnu/src/test/fuzz/fuzz+0x26bfd53)
#3 0x7f50944353bf (/lib/x86_64-linux-gnu/libpthread.so.0+0x153bf)
#4 0x7f509407918a (/lib/x86_64-linux-gnu/libc.so.6+0x4618a)
#5 0x7f5094058858 (/lib/x86_64-linux-gnu/libc.so.6+0x25858)
#6 0x7f5094058728 (/lib/x86_64-linux-gnu/libc.so.6+0x25728)
#7 0x7f5094069f35 (/lib/x86_64-linux-gnu/libc.so.6+0x36f35)
#8 0x556b1cf8b787 (/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-x86_64-pc-linux-gnu/src/test/fuzz/fuzz+0x2c46787)
#9 0x556b1cb00a77 (/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-x86_64-pc-linux-gnu/src/test/fuzz/fuzz+0x27bba77)
#10 0x556b1e4256f7 (/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-x86_64-pc-linux-gnu/src/test/fuzz/fuzz+0x40e06f7)
#11 0x556b1e4253a5 (/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-x86_64-pc-linux-gnu/src/test/fuzz/fuzz+0x40e03a5)
#12 0x556b1ca06411 (/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-x86_64-pc-linux-gnu/src/test/fuzz/fuzz+0x26c1411)
#13 0x556b1ca05b55 (/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-x86_64-pc-linux-gnu/src/test/fuzz/fuzz+0x26c0b55)
#14 0x556b1ca08477 (/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-x86_64-pc-linux-gnu/src/test/fuzz/fuzz+0x26c3477)
#15 0x556b1ca087d9 (/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-x86_64-pc-linux-gnu/src/test/fuzz/fuzz+0x26c37d9)
#16 0x556b1c9f74ae (/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-x86_64-pc-linux-gnu/src/test/fuzz/fuzz+0x26b24ae)
#17 0x556b1ca202f2 (/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-x86_64-pc-linux-gnu/src/test/fuzz/fuzz+0x26db2f2)
#18 0x7f509405a0b2 (/lib/x86_64-linux-gnu/libc.so.6+0x270b2)
#19 0x556b1c9cc24d (/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-x86_64-pc-linux-gnu/src/test/fuzz/fuzz+0x268724d)
NOTE: libFuzzer has rudimentary signal handlers.
Combine libFuzzer with AddressSanitizer or similar for better crash reports.
SUMMARY: libFuzzer: deadly signal
MS: 0 ; base unit: 0000000000000000000000000000000000000000
0xb3,0xb3,0xb3,0xb3,0x83,0x83,0x2f,0x2f,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xb,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x65,0x65,0x65,0x65,0x65,0x65,0x65,0x65,0x65,0x65,0x65,0xc9,0xc9,0xcb,0x1,0x0,0xcb,
\xb3\xb3\xb3\xb3\x83\x83//\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\x0b\x00\x00\x00\x00\x00\x00\x00eeeeeeeeeee\xc9\xc9\xcb\x01\x00\xcb
artifact_prefix='./'; Test unit written to ./crash-8bc5eec8ddffef064fc1834346b11548218b6153
Base64: s7Ozs4ODLy/Ly8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLCwAAAAAAAABlZWVlZWVlZWVlZcnJywEAyw==
MOVEONLY; no change in behavior. Rename because there is another helper function, chain_transaction, in test_framework.util.py.
Force-pushed from 5d513c0 to accf3d5
Thanks @fanquake. I've pushed a fix for what I expect is the culprit - single transaction ancestor/descendant limits can sometimes be slightly looser due to CPFP carve out; the decision to not return any tx results here would have caused that crash. I've gated
Addressed review comments and added tests for size limits (thanks @ariard for motivating). Ready for review again!
ACK accf3d5.
The conservative limits check for package acceptance implemented by this PR sounds correct to me for complying with the package limits as currently evaluated per-transaction. I also verified test coverage.
```
@@ -1079,6 +1079,19 @@ PackageMempoolAcceptResult MemPoolAccept::AcceptMultipleTransactions(const std::
        m_viewmempool.PackageAddTransaction(ws.m_ptx);
    }

    // Apply package mempool ancestor/descendant limits. Skip if there is only one transaction,
    // because it's unnecessary. Also, CPFP carve out can increase the limit for individual
```
Note, I still wonder if we need to make CPFP carve-out composable with package acceptance in the future. Otherwise a counterparty can build a branch of descendants from an output A on a multi-party unconfirmed ancestor to block package acceptance on an output B.
I don't think this is required for current LN safety, at least as long as implementation backends don't try to chain their commitment+CPFP transactions. But it might be useful for other second-layer applications (e.g. Lightning Pool's batch execution+commitment txn).
Can we use packages to just remove the carve out?
Ideally yes, carve-outs don't scale for multi-party (n > 2) fee-bumping.
Though extending the carve-out to safely chain CPFPs was argued by Matt on the LDK side (see https://lightningdevkit.slack.com/archives/CTBLT3CAU/p1625786787422100). Personally, I think this approach is a bit doomed, as we can assume the first stage of the CPFP chain can always be replaced by a valid claim of a counterparty, thus downgrading the feerate of your other broadcasted commitments; I prefer the concurrent domino fee-bumping approach even if it's more complexity swallowed by the LN backend.
Further, I think there is another multi-party contract issue we have to solve. Namely, the first-stage state transaction can be symmetric but the second-stage states asymmetric-though-composable. E.g. a CTV tree with a root transaction with branch A/branch B spending 2 isolated outputs. Alice broadcasts root+branch A and Bob broadcasts root+branch B, thus accidentally or maliciously evicting Alice's branch. I think we would like the mempool logic to aggregate those in-flight packages by conserving the highest-feerate subgraphs, and that would require some form of carve-out to avoid package limits interfering?
utACK accf3d5
CheckPackageLimits (and CheckPackage did too) computes the virtual size without taking the number of signature operations into account. It's not an issue for now, as PreChecks (which does) is always called before any code path that results in CheckPackageLimits or CheckPackage, but we need to be careful not to introduce a DoS vector in the future.
@darosior good point. I agree sigop limits should be properly enforced for packages. At the moment, since the sigops limit is checked in For
The signature ops check is not about avoiding checking too many signatures in the mempool, but to keep the mempool consistent with what could be included in the next blocks by rational miners.
My takeaway from this is that thinking of feerate as
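For context on the sigop discussion above: individual acceptance bills a transaction at a sigop-adjusted virtual size, roughly the larger of its BIP141 weight and its sigop cost expressed in weight units. The sketch below mirrors my understanding of Core's `GetVirtualTransactionSize`; the constant names and exact rounding are assumptions worth double-checking against the source:

```python
WITNESS_SCALE_FACTOR = 4
DEFAULT_BYTES_PER_SIGOP = 20  # policy default (-bytespersigop)

def sigop_adjusted_vsize(weight, sigop_cost, bytes_per_sigop=DEFAULT_BYTES_PER_SIGOP):
    """Virtual size used for fee/limit purposes: the larger of the
    transaction weight and the sigop cost converted to weight units,
    rounded up to whole virtual bytes."""
    adjusted_weight = max(weight, sigop_cost * bytes_per_sigop)
    return (adjusted_weight + WITNESS_SCALE_FACTOR - 1) // WITNESS_SCALE_FACTOR

# A 400-weight tx (100 vB) with sigop cost 80 is billed as
# max(400, 80 * 20) / 4 = 400 vB, not 100 vB.
assert sigop_adjusted_vsize(400, 0) == 100
assert sigop_adjusted_vsize(400, 80) == 400
```

If a package-level check sizes transactions by weight alone, a sigop-heavy package could look smaller to the package limits than to PreChecks, which is the divergence being flagged here.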
reACK accf3d5
I've left a bunch of small comments. None are critical, so feel free to ignore or take them in a follow-up PR.
```
        setAncestors, staged_ancestors,
        limitAncestorCount, limitAncestorSize,
        limitDescendantCount, limitDescendantSize, errString);
    // It's possible to overestimate the ancestor/descendant totals.
```
Now that this errString is only used for the PackageMempoolAcceptResult, I think you can just drop the "possibly" prefix. It may have been useful on an individual MempoolAcceptResult to disambiguate between failure because a single transaction definitely exceeded the limits and failure because a transaction was in a package that possibly exceeded limits. Now that we're not using it there, I think it should just be removed.
In fact, I don't think this string ever gets used (in logging or returned to users). Should testmempoolaccept be updated to return the reject reason and debug message for the transaction result and package result? (See ValidationState::ToString(), which indirectly gets called if sendrawtransaction fails.)
```python
    chain_hex.append(txhex)
    chain_txns.append(tx)

(chain_hex, chain_txns) = create_raw_chain(node, first_coin, self.address, self.privkeys)
```
No need for these parens. You can just assign to the individual variables:
```diff
-(chain_hex, chain_txns) = create_raw_chain(node, first_coin, self.address, self.privkeys)
+chain_hex, chain_txns = create_raw_chain(node, first_coin, self.address, self.privkeys)
```
Same for several other assignments below.
```python
signedtx = node.signrawtransactionwithkey(hexstring=rawtx, privkeys=privkeys, prevtxs=prevtxs)
assert signedtx["complete"]
tx = tx_from_hex(signedtx["hex"])
return (tx, signedtx["hex"], my_value, tx.vout[0].scriptPubKey.hex())
```
No need to construct a tuple here. In Python you can just use an expression list in a return statement:
```diff
-return (tx, signedtx["hex"], my_value, tx.vout[0].scriptPubKey.hex())
+return tx, signedtx["hex"], my_value, tx.vout[0].scriptPubKey.hex()
```
However, both of these are dangerous interfaces with an untyped language like Python, since users of this function could easily assign the return expressions to the wrong variables. A safer interface would be to return a named tuple so that the caller needs to explicitly unpack the wanted values from the return object.
Users of named tuples are not forced to access them by name, because they are also tuples as the name suggests:
>>> A = namedtuple('A', ['x','y'])
>>> A(1,2)
A(x=1, y=2)
>>> y, x = A(1,2)
>>> x, y
(2, 1)
Dataclasses seem closer to what you want:
>>> from dataclasses import dataclass
>>> @dataclass
... class A:
... x: int
... y: int
...
>>> A(1,2)
A(x=1, y=2)
>>> x,y = A(1,2)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: cannot unpack non-iterable A object
```python
    assert signedtx_child["complete"]
    return signedtx_child["hex"]

def create_raw_chain(node, first_coin, address, privkeys, chain_length=25):
```
The default chain_length parameter seems too specific to the packages test. Perhaps remove it and call the function with that specific value:
```diff
-def create_raw_chain(node, first_coin, address, privkeys, chain_length=25):
+def create_raw_chain(node, first_coin, address, privkeys, chain_length):
```
```
@@ -176,3 +181,75 @@ def create_self_transfer(self, *, fee_rate=Decimal("0.003"), from_node, utxo_to_
    def sendrawtransaction(self, *, from_node, tx_hex):
        from_node.sendrawtransaction(tx_hex)
        self.scan_tx(from_node.decoderawtransaction(tx_hex))

def make_chain(node, address, privkeys, parent_txid, parent_value, n=0, parent_locking_script=None, fee=DEFAULT_FEE):
```
make_chain() doesn't seem like the right name for this function. I'd expect such a function to return a chain of transactions, rather than an individual transaction.
It looks like the n parameter isn't ever used by any of the callers. Perhaps remove it and update the function documentation to say that the function spends the first output. The function can always be updated in future to allow n to be different.
```python
package_hex = []
parent_txns = []
parent_values = []
scripts = []
```
Why not parent_scripts to go with the other parent_... variables (or even parent_spks)?
```python
self.test_anc_count_limits_bushy()

# The node will accept our (nonstandard) extra large OP_RETURN outputs
self.restart_node(0, extra_args=["-acceptnonstdtxn=1"])
```
Is it possible to make this test work without using -acceptnonstdtxn, perhaps by having bulk_transaction just add standard outputs? Since this test is all about mempool policy, it seems better for the policy of the node being tested to be as close to realistic as possible.
I definitely agree; it's a shame to turn off standardness in a policy-based test. Will try to do this in a followup.
```python
assert_equal(2, node.getmempoolinfo()["size"])
testres_too_heavy = node.testmempoolaccept(rawtxs=[pc_hex, pd_hex])
for txres in testres_too_heavy:
    assert_equal(txres["package-error"], "package-mempool-limits")
```
If we improved the error messages being returned to the user, we could more precisely check that the package was rejected because it exceeds the ancestor size limits.
```python
package_hex = []
for j in range(2):  # Two legs (left and right)
    # Mempool transaction (Mb and Mc)
    mempool_tx = CTransaction()
```
No need to declare variables in Python.
```python
else:  # OP_TRUE
    inputs = [{"txid": txid, "vout": 1}]
    outputs = {self.address: value - high_fee}
    small_tx = tx_from_hex(node.createrawtransaction(inputs, outputs))
```
Maybe choose between tx_small and small_tx and stick to it :)
I think there's enough here to warrant a followup.
fa7db1c [test] checks descendant limits for second generation Package descendants (ritickgoenka)

Pull request description: This PR adds a new functional test to test the new descendant limits for packages that were proposed in #21800.

```
+----------------------+
|                      |
|        M1            |
|       ^  ^           |
|      M2   ^          |
|      .     ^         |
|      .      ^        |
|      .       ^       |
|      .        ^      |
|     M24        ^     |
|                 ^    |
|                 P1   |
|                 ^    |
|                 P2   |
|                      |
+----------------------+
```

This test is for checking that a transaction fails its descendant count limits because of a combination of mempool descendants, package direct descendants, and package indirect descendants. In this test, P1 has M1 as a mempool ancestor, P2 has no in-mempool ancestors, but when combined P2 has M1 as an ancestor and M1 exceeds descendant_limits (23 in-mempool descendants + 2 in-package descendants, a total of 26 including itself).

ACKs for top commit:
ryanofsky: Code review ACK fa7db1c. Only changes since last review were suggested ones: simplifying the test and dropping the P3 transaction as John suggested, and adding the assert_equal I suggested
glozow: ACK fa7db1c
jnewbery: ACK fa7db1c

Tree-SHA512: d1eb993550ac8ce31cbe42e17c6522a213ede66970d5d9391f31a116477ab5889fefa6ff2df6ceadd63a28c1be1ad893b0e8e449247e9ade2ca61dc636508d68
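The rejection arithmetic in this test can be checked directly, using the numbers from the description above (25 is Core's default descendant count limit, counting the transaction itself):

```python
DESCENDANT_LIMIT = 25  # default -limitdescendantcount, counting the tx itself

in_mempool_descendants = 23  # M2..M24 all descend from M1
in_package_descendants = 2   # P1 (direct) and P2 (indirect, via P1)

# Including M1 itself, accepting the package would give M1 a descendant
# set of size 26, which exceeds the limit, so the package must be rejected.
total = 1 + in_mempool_descendants + in_package_descendants
assert total == 26
assert total > DESCENDANT_LIMIT
```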
This PR implements a function to calculate mempool ancestors for a package and enforces ancestor/descendant limits on them as a whole. It reuses a portion of CalculateMemPoolAncestors(); there's also a small refactor to move the reused code into a generic helper function. Instead of calculating ancestors and descendants on every single transaction in the package and their ancestors, we use a "worst case" heuristic, treating every transaction in the package as each other's ancestor and descendant. This may overestimate everyone's counts, but is still pretty accurate in our main package use cases, in which at least one of the transactions in the package is directly related to all the others (e.g. 1 parent + 1 child, multiple parents with 1 child, or chains).

Note on terminology: While "package" is often used to describe groups of related transactions within the mempool, here I only use package to mean the group of not-in-mempool transactions we are currently validating.
Motivation
It would be a potential DoS vector to allow submission of packages to mempool without a proper guard for mempool ancestors/descendants. In general, the purpose of mempool ancestor/descendant limits is to limit the computational complexity of dealing with families during removals and additions. We want to be able to validate multiple transactions on top of the mempool, but also avoid these scenarios: