Spark multi-evaluation library and contract #20
Conversation
Note: on the subject of public parameter size, this is dealt with properly (at least according to the math) in microsoft/Nova#203: only SNARKs with computational commitments need total_nz > 0.
Nit: the moniker "Multi" designates evaluations of sparse / dense multilinear polynomials (see also the Spartan paper), rather than multiple evaluations (i.e. a batched process)
See also https://github.com/privacy-scaling-explorations/Nova/pull/16/files (with a grain of salt: see comments in review)
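For context, my reading of the Spark terminology from the Spartan paper (so treat the exact indexing conventions below as an assumption): the sparse multilinear extension of a matrix $M$ with nonzero entries $(i, j, v)$ is evaluated at a single point $(r_x, r_y)$ as

$$\widetilde{M}(r_x, r_y) = \sum_{(i,\, j,\, v) \in M} v \cdot \widetilde{\mathrm{eq}}(\mathrm{bits}(i), r_x) \cdot \widetilde{\mathrm{eq}}(\mathrm{bits}(j), r_y), \qquad \widetilde{\mathrm{eq}}(a, b) = \prod_{k} \bigl(a_k b_k + (1 - a_k)(1 - b_k)\bigr),$$

i.e. "multi" refers to the multilinear polynomial being evaluated, not to a batch of evaluations.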
Thanks, @huitseeker! This PR contains quite a bit of redundant code (the data contracts), whose only purpose is to make it possible to run the multi-evaluation as a unit test. Nonetheless, I would prefer to merge it, since it still contains useful pieces of code that I will reuse later while deriving the complete contract that performs e2e verification. Once we have an integration-testing platform (#22), we can drop the data contracts and make heavy use of global storage instead. Some ideas behind the eventual (tentative) structure of the repository can be found in the pp-spartan branch, where I'm working on an alternative implementation. The unit tests will serve only for isolated debugging and for initially deriving the building blocks required by the ultimate verifier's smart contract, while the actual e2e testing will be performed via a public-parameters and proof-data loader + forge (the basic Foundry tool for contract deployment) + anvil (the Foundry node) + cast (the Foundry tool for interacting with contracts) infrastructure.
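For illustration, the data-contract pattern mentioned above looks roughly like this; a minimal sketch with hypothetical names, not the actual code in this PR:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.16;

// Hypothetical data contract: hardcodes a small slice of test vectors
// (generated offline) so a unit test can call the multi-evaluation
// library without passing huge arrays through calldata.
contract SparkTestDataPrimary {
    function rows() public pure returns (uint256[] memory r) {
        r = new uint256[](3);
        r[0] = 0; r[1] = 1; r[2] = 2;
    }

    function cols() public pure returns (uint256[] memory c) {
        c = new uint256[](3);
        c[0] = 2; c[1] = 0; c[2] = 1;
    }

    function vals() public pure returns (uint256[] memory v) {
        v = new uint256[](3);
        v[0] = 7; v[1] = 11; v[2] = 13;
    }
}
```

A unit test then deploys this contract and feeds its arrays to the library, which is exactly the redundancy that global storage would remove.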
@huitseeker, after getting to the final IPA step of pp-spartan, which is going to be replaced by the more EVM-friendly KZG + Zeromorph, I realised that Spark multi-evaluation is not used there at all. Considering this, I currently think we can close this PR, as it introduces a building block that is no longer used. Alternatively, if this multi-evaluation still has a chance of being involved in some version of the e2e verifier contract, we can postpone merging this PR until we have integration testing with an Anvil self-hosted node wrapped into CI. With that in place, I could drop the redundant data contracts and keep just this small contract designed specifically for multi-evaluation testing.
During a recent discussion it was decided to keep this PR open and update it once integration testing is enabled (after #22 is solved). Even if Spark multi-evaluation is not used in pp-spartan, it is still worth keeping in the repository, since there is a non-zero chance it will appear in the reference Nova verifier.
Fixing the current CI failure can be quite complicated with the current infrastructure settings. The thing is that Spark requires quite a lot of the … Another alternative could be to try the Foundry cheat codes, which in theory allow altering Anvil's state from the client side: https://book.getfoundry.sh/reference/forge-std/std-storage. However, due to a lack of documentation, digesting this would take quite a lot of time. Since the Spark building block is not too important for now (it is not used by pp-spartan), I'm closing this PR, while keeping the …
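For reference, the cheat-code approach mentioned above would look roughly like the sketch below (the `Counter` target is hypothetical; only the forge-std `stdstore` helper is assumed to work as documented):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.16;

import "forge-std/Test.sol";

// Hypothetical target with a single storage slot we want to preload.
contract Counter {
    uint256 public value;
}

contract StorageCheatcodeTest is Test {
    using stdStorage for StdStorage;

    function testWriteSlotDirectly() public {
        Counter counter = new Counter();

        // Write 42 straight into the storage slot backing `value`,
        // bypassing any setter (and any gas-heavy loading path).
        stdstore.target(address(counter)).sig("value()").checked_write(42);

        assertEq(counter.value(), 42);
    }
}
```

Whether this scales to preloading the whole Spark data set from the client side is exactly the part that would need digging into.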
This PR introduces a new important building block of the Nova verifier: Spark multi-evaluation.
"Traditionally" it includes library that implements bare logic with unit-tests and data contracts (for primary and secondary parts of verification) along with Python script used for their generating. Since amount of the data used by this building block is quite large, we are switching to more powerful self-hosted runner on CI that makes feasible the compilation of big data contracts and unit-tests execution.
@huitseeker might remember that there was an additional issue with the large data volume required for Spark multi-evaluation. Essentially, it required pushing the data from the A, B and C matrices (stored inside the verifier key as (i, j, scalar) triples) into the VM memory inside the library, which makes the implementation impractical, since it is not possible to submit this data through function parameters due to an OutOfGas exception.
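To make the size issue concrete, here is a minimal sketch of what such an in-memory evaluation looks like; the names, the eq-helper and the choice of the BN254 scalar field are assumptions, not the code from this PR:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.16;

// Sketch of Spark-style sparse evaluation: each nonzero matrix entry is an
// (i, j, scalar) triple, and the evaluation is the sum over all triples of
// scalar * eq(bits(i), r_x) * eq(bits(j), r_y).
library SparseMultiEvalSketch {
    struct Entry {
        uint256 row;   // i
        uint256 col;   // j
        uint256 value; // scalar (field element)
    }

    // BN254 scalar field modulus (assumption; the real library may use another curve).
    uint256 internal constant P =
        0x30644e72e131a029b85045b68181585d2833e84879b9709143e1f593f0000001;

    // eq(bits(index), r) = prod_k (r_k if bit k of index is 1, else 1 - r_k), mod P.
    function eqAt(uint256 index, uint256[] memory r) internal pure returns (uint256 acc) {
        acc = 1;
        for (uint256 k = 0; k < r.length; k++) {
            uint256 term = ((index >> k) & 1) == 1 ? r[k] : addmod(1, P - r[k], P);
            acc = mulmod(acc, term, P);
        }
    }

    // The whole `entries` array has to sit in EVM memory, which is exactly
    // what becomes impractical for tens of thousands of triples.
    function evaluate(Entry[] memory entries, uint256[] memory rx, uint256[] memory ry)
        internal
        pure
        returns (uint256 result)
    {
        for (uint256 n = 0; n < entries.length; n++) {
            uint256 term = mulmod(entries[n].value, eqAt(entries[n].row, rx), P);
            term = mulmod(term, eqAt(entries[n].col, ry), P);
            result = addmod(result, term, P);
        }
    }
}
```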
To work around this, I spent some time playing with Foundry's infrastructure (and looking at forge-std), searching for a way to use global storage (the blockchain) instead of the limited EVM memory. Fortunately, Foundry contains `anvil`, a local EVM node with a bunch of configurable options that allow manipulating state and many other useful things. The `cast` tool allows interacting with `anvil` through RPC and uploading data to a deployed smart contract in a convenient way.

So in the context of this PR, the first version of integration testing is implemented for the Spark building block, and it can eventually be extended to the whole verifier, once we upload the whole verifier key and SNARK data to global storage and can extract the relevant parts for particular verification steps inside the corresponding functions of the verifier contract (or multiple contracts).
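As a rough illustration of that direction (all names are hypothetical; this is a sketch of the pattern, not the code in this PR), the triples are appended to a deployed contract's storage in chunks, and the verifier-side code later reads only the slice it needs:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.16;

// Hypothetical loader contract: the (i, j, scalar) triples of the verifier key
// are pushed into blockchain storage chunk by chunk, so no single transaction
// has to carry (or keep in memory) the whole data set.
contract VerifierKeyStorage {
    uint256[] public rows;
    uint256[] public cols;
    uint256[] public vals;

    function appendChunk(
        uint256[] calldata r,
        uint256[] calldata c,
        uint256[] calldata v
    ) external {
        for (uint256 n = 0; n < r.length; n++) {
            rows.push(r[n]);
            cols.push(c[n]);
            vals.push(v[n]);
        }
    }

    function length() external view returns (uint256) {
        return rows.length;
    }

    // A verifier contract reads only the entries relevant to the current step.
    function entryAt(uint256 n) external view returns (uint256, uint256, uint256) {
        return (rows[n], cols[n], vals[n]);
    }
}
```

Populating it is then a matter of repeated `cast send <address> "appendChunk(uint256[],uint256[],uint256[])" ...` calls against the `anvil` node, driven by the data loader.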