Bring back signature batch verification #351
Comments
IMHO no. It is only relevant on block import.
Adopt ed25519-zebra over ed25519-dalek (paritytech/substrate#8055) if you want batch verification of ed25519. I implemented half-aggregation for sr25519 (w3f/schnorrkel#68), which works like sr25519 batch verification but reduces signature size from 64 bytes to an amortized 32 bytes; however, it requires block producers to include data specific to the aggregation, and it makes signatures non-relocatable. Is halving the storage size for signatures that do not require relocation worth the extra logic for you guys? I guess no.. or not anytime soon.
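For reference, ed25519 batch verification with ed25519-zebra looks roughly like the sketch below. This follows that crate's documented batch API as I remember it, so exact names may differ between versions, and the keys/messages are made up for illustration:

```rust
// Rough sketch of ed25519 batch verification with the ed25519-zebra crate.
// The `batch::Verifier` usage mirrors the crate's README example; exact
// signatures may differ between versions.
use ed25519_zebra::{batch, SigningKey, VerificationKeyBytes};
use rand::thread_rng;

fn main() {
    let mut verifier = batch::Verifier::new();

    // Queue (verification key, signature, message) items instead of
    // verifying them one by one.
    for _ in 0..64 {
        let sk = SigningKey::new(thread_rng());
        let vk_bytes = VerificationKeyBytes::from(&sk);
        let msg = b"some extrinsic payload";
        let sig = sk.sign(&msg[..]);
        verifier.queue((vk_bytes, sig, &msg[..]));
    }

    // One randomized check covers the whole batch; on failure you only
    // learn that *some* queued signature was invalid, not which one.
    assert!(verifier.verify(thread_rng()).is_ok());
}
```

The trade-off discussed in this thread applies: this is cheaper per signature than verifying each one individually, but all signatures must be in hand before the batch check can run.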
The general question with batch verification is still whether we want the fastest approach or the most energy-efficient one ;) The fastest approach is to just validate each signature in the background as it comes in, while the energy-efficient approach is the one that uses batch verification. I'm still more for validating each signature as it comes, to have a faster sync ;)
I only envisioned batch signature verification within blocks, so in particular block seals would never be batch verified. And the [...] In effect, one 32 (1+n) byte [...]. I doubt this adds latency like you describe, because we have so many storage Merkle proofs to check in other threads. I'd expect an STVF would return "structurally valid" very quickly, which then leaves the final validity result waiting upon num_cpu_cores crypto threads, and these mostly verify all the storage claims about Merkle proofs. We'd just add another couple of threads doing batch signature verification. Now, if our sr25519 verification time exceeds our Merkle proof verification time divided by num_cpu_cores-1, then ideally the block producer should've placed multiple [...]. We could batch verify block seals too, maybe that's what you meant, which does add latency. Worse, this opens a denial-of-service vector and thus requires some fallback. I doubt the CPU savings warrant the complexity there.
What storage Merkle proofs do we have to check in other threads? I'm confused. We don't use any threading at all from the runtime. As the block producer cannot predict the number of extrinsics in the block, we cannot push any [...]. In general we could probably rewrite the [...].
If I understand correctly, our runtime code has no idea whether its host mutates real storage or checks Merkle proofs. In PoV verification, we have a minimized shadow of the storage attached, so parsing the block could trigger another thread to check all the attached hashes, and then [...]. We could similarly make [...]. It's easier to do signatures in another thread, of course. I'd previously thought Merkle proof verification should occupy more CPU than signature verification, but actually this sounds unlikely.
Hey, is anyone still working on this?
Due to the inactivity this issue has been automatically marked as stale. It will be closed if no further activity occurs. Thank you for your contributions.
@bkchr Anything new on signature batch verification?
As an aside, I've added the "half-aggregation" mentioned above (w3f/schnorrkel#68). You cannot extract and replay individual signatures from blocks that use it.
All this is not immediately relevant, I think, but maybe worth discussing this option one day, especially once we get Sassafras back on track---it being off track is entirely my fault.
Hey, we would be open to a PR for this. However, didn't you say that this doesn't work for your use case, because you need to reconstruct the sender?
Ok so maybe we'll propose one :)
Yes, that's right, but we can extract the sender on the client side and add this information to the Frontier extrinsic (made by the client from an Ethereum transaction), so it's not blocking.
Okay, fine :) I think we should not use any ed25519 (or similar) batch signature verification mechanisms here. While they are faster for doing batched signature verification, they require that you have all signatures "ready". In our case it would just be much better to do the verification in the background. Maybe we should rename this entire feature to "background signature verification". The advantage would be that we could run the extrinsic and do the signature verification at the same time :)
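To make the "background signature verification" idea concrete, here is a minimal sketch in plain Rust. The `Extrinsic` type and the `verify_one` helper are made up for illustration; this is not the actual Substrate host interface, and a real implementation would use a bounded worker pool rather than one thread per extrinsic:

```rust
use std::thread;

// Hypothetical stand-ins for the real runtime types and host calls.
struct Extrinsic {
    signature: Vec<u8>,
    payload: Vec<u8>,
}

// Made-up verification helper; a real implementation would call into
// sr25519/ed25519 verification here.
fn verify_one(signature: &[u8], payload: &[u8]) -> bool {
    !signature.is_empty() && !payload.is_empty()
}

fn execute(_ex: &Extrinsic) {
    // Apply the extrinsic's state changes here.
}

fn import_block(extrinsics: Vec<Extrinsic>) -> bool {
    // Kick off the signature checks in the background as extrinsics are
    // decoded...
    let handles: Vec<_> = extrinsics
        .iter()
        .map(|ex| {
            let (sig, payload) = (ex.signature.clone(), ex.payload.clone());
            thread::spawn(move || verify_one(&sig, &payload))
        })
        .collect();

    // ...while the extrinsics are executed on the current thread.
    for ex in &extrinsics {
        execute(ex);
    }

    // Only accept the block if every deferred verification succeeded.
    handles.into_iter().all(|h| h.join().unwrap_or(false))
}

fn main() {
    let block = vec![Extrinsic { signature: vec![1; 64], payload: vec![42] }];
    assert!(import_block(block));
}
```

The point is simply that execution and signature checking overlap, and the block is only accepted once every deferred check has succeeded.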
I definitely agree. Batch verification - in the cryptographic sense - is a different feature. Ideally, we would like to have both in the long run, but in the short run we propose to contribute to the background verification only.
@bkchr here is a first draft for the PR. I wrote some questions in the description, I hope you can answer them: paritytech/substrate#10353
paritytech/substrate#6616 removed the usage of batch verification, which had been introduced through new host functions that do the actual batch verification.
We need to bring back the runtime implementation of the batch verification.
This requires a new trait, which needs to be implemented for all the required types, and we need to switch `UncheckedExtrinsic` to use it.
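As a rough, hypothetical sketch of the shape such a trait could take (the names `BatchVerifiable` and `batch_verify`, the dummy extrinsic type, and the wiring are invented for illustration and are not the actual Substrate API):

```rust
// Hypothetical sketch only: a trait that lets a signed item defer its
// signature check to a batched/background verifier instead of verifying
// eagerly while the extrinsic is being checked.
pub trait BatchVerifiable {
    /// Submit this item's signature(s) for deferred verification.
    /// Returns `false` if the item is already structurally invalid.
    fn batch_verify(&self) -> bool;
}

/// Dummy signed extrinsic used only to illustrate an implementation.
pub struct DummyExtrinsic {
    pub signature: [u8; 64],
    pub payload: Vec<u8>,
}

impl BatchVerifiable for DummyExtrinsic {
    fn batch_verify(&self) -> bool {
        // In a real runtime this would push (signature, payload, signer)
        // to the batch-verification host functions mentioned in this issue
        // and return quickly; the combined result is collected per block.
        !self.payload.is_empty()
    }
}
```

`UncheckedExtrinsic` (and any other signed wrapper types) would then implement this trait, and block execution would only succeed once the whole batch has been checked.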