Transcript challenge generation optimizations #741

Closed
lucasxia01 opened this issue Oct 4, 2023 · 0 comments · Fixed by AztecProtocol/aztec-packages#8406

Comments


lucasxia01 commented Oct 4, 2023

We can truncate hashes to 128 bits, so that multiplication by challenges is cheaper.

Alternatively, we can generalize the transcript to take in a num_challenge_bits parameter, and then split each hash output into chunks of num_challenge_bits bits each, so that we maximize the number of challenges generated from a single hash. In practice, since we want 128-bit security, we can create 2 challenges out of each hash.

We can also generalize the type of hash we use by taking it in as a parameter.

Edit: in the new transcript update, we do not truncate hashes, so one optimization would be to truncate the hash.
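As a rough sketch of that proposal (not barretenberg's actual Transcript code), the transcript could take the hash as a template parameter and split each digest into fixed-width chunks. The `Hash` policy, its `OUTPUT_BYTES` constant, and the `TranscriptSketch` name below are hypothetical:

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical sketch: "Hash" is any policy type exposing a fixed-size digest
// (e.g. Hash::OUTPUT_BYTES == 32 for a 256-bit hash). This is not the real
// barretenberg Transcript interface.
template <typename Hash, size_t NUM_CHALLENGE_BITS = 128> struct TranscriptSketch {
    static constexpr size_t NUM_CHALLENGE_BYTES = NUM_CHALLENGE_BITS / 8;
    static constexpr size_t CHALLENGES_PER_HASH = Hash::OUTPUT_BYTES / NUM_CHALLENGE_BYTES;

    // Split one digest into CHALLENGES_PER_HASH challenges of NUM_CHALLENGE_BITS
    // bits each, e.g. two 128-bit challenges from one 256-bit hash.
    static std::vector<std::array<uint8_t, NUM_CHALLENGE_BYTES>> split_digest(
        const std::array<uint8_t, Hash::OUTPUT_BYTES>& digest)
    {
        std::vector<std::array<uint8_t, NUM_CHALLENGE_BYTES>> challenges(CHALLENGES_PER_HASH);
        for (size_t i = 0; i < CHALLENGES_PER_HASH; ++i) {
            std::copy(digest.begin() + i * NUM_CHALLENGE_BYTES,
                      digest.begin() + (i + 1) * NUM_CHALLENGE_BYTES,
                      challenges[i].begin());
        }
        return challenges;
    }
};
```

With a 256-bit hash (`OUTPUT_BYTES == 32`) and the default 128-bit challenge width, `CHALLENGES_PER_HASH` is 2, matching the "2 challenges out of each hash" above.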

lucasxia01 added a commit to AztecProtocol/aztec-packages that referenced this issue Oct 5, 2023
This PR updates challenge generation so that challenges are no longer
powers of each other, fixing AztecProtocol/barretenberg#583.
Before, we generated multiple challenges by hashing the round data and
the previous challenge to produce one challenge, and then taking powers
of that challenge as the remaining challenges. Now, we generate each
challenge by hashing the previous challenge (together with the round
data, which is only nonempty in the first iteration of
get_next_challenge_buffer() in a round).
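A minimal sketch of this flow, assuming a caller-supplied 256-bit hash and hypothetical names (the real get_next_challenge_buffer() differs in its buffer layout and types):

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <vector>

using Bytes = std::vector<uint8_t>;

// Sketch of the new per-round flow: each challenge is a fresh hash of the
// previous challenge, and the round data is only absorbed on the first
// iteration. The `hash` function (e.g. a 256-bit hash) is supplied by the
// caller and not defined here; the concatenation order is an assumption.
std::vector<Bytes> get_round_challenges(const std::function<Bytes(const Bytes&)>& hash,
                                        const Bytes& round_data,
                                        Bytes previous_challenge,
                                        size_t num_challenges)
{
    std::vector<Bytes> challenges;
    for (size_t i = 0; i < num_challenges; ++i) {
        Bytes buffer = previous_challenge;
        if (i == 0) {
            // Round data is only nonempty in the first iteration.
            buffer.insert(buffer.end(), round_data.begin(), round_data.end());
        }
        previous_challenge = hash(buffer); // fresh hash per challenge
        challenges.push_back(previous_challenge);
    }
    return challenges;
}
```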

Using powers of one challenge does not lead to soundness bugs in how we
currently use challenges, but theoretically, we would want challenges to
be "independent" of each other.

Challenges are 128 bits. I truncate every 256-bit hash to 128 bits for
two reasons. First, it is easier on the verifier to multiply by 128-bit
challenges as opposed to 256-bit ones. Second, the 256-bit challenges
were not actually uniformly distributed, because they get reduced modulo
the 254-bit scalar field modulus. With this approach, the challenges
should be distributed uniformly over 128-bit numbers.
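A sketch of the truncation step; which half of the digest is kept and the byte order are assumptions here, not necessarily barretenberg's convention:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// Sketch: keep only 128 bits of a 32-byte (256-bit) digest. A 128-bit value is
// always smaller than the 254-bit scalar field modulus, so no modular reduction
// occurs and the challenge stays uniform over 128-bit integers.
struct Uint128Sketch {
    uint64_t lo = 0;
    uint64_t hi = 0;
};

Uint128Sketch truncate_to_128_bits(const std::array<uint8_t, 32>& digest)
{
    Uint128Sketch out;
    for (size_t i = 0; i < 8; ++i) {
        out.lo |= static_cast<uint64_t>(digest[i]) << (8 * i);     // bytes 0..7
        out.hi |= static_cast<uint64_t>(digest[8 + i]) << (8 * i); // bytes 8..15
    }
    return out;
}
```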

Optimization (AztecProtocol/barretenberg#741) -
I could split every hash into 2 challenges, so that all 256 bits of a
hash are used (when that many challenges are needed) instead of
throwing away 128 bits every time.

# Checklist:
Remove the checklist to signal you've completed it. Enable auto-merge if
the PR is ready to merge.
- [x] If the pull request requires a cryptography review (e.g.
cryptographic algorithm implementations) I have added the 'crypto' tag.
- [x] I have reviewed my diff in github, line by line and removed
unexpected formatting changes, testing logs, or commented-out code.
- [x] Every change is related to the PR description.
- [x] I have
[linked](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue)
this pull request to relevant issues (if any exist).
AztecBot pushed a commit that referenced this issue Oct 6, 2023
AztecBot pushed a commit that referenced this issue Sep 11, 2024
This PR modifies our Transcript class to achieve the following:

- every time a hash function is used to generate challenges, the output is
split into two 128-bit field elements to generate 2 challenges per hash

This change gives us the following benefits:

1. the amount of hashing required to fold/verify proofs is reduced
2. where challenges map to Verifier scalar multiplications, those scalar
muls are now half-width and can be evaluated more efficiently (this
requires additional code to support; see the sketch after this list)
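To illustrate the second benefit, here is a generic double-and-add sketch (hypothetical `Group` interface, not barretenberg's curve API): the loop performs one doubling per scalar bit, so a 128-bit challenge costs roughly half the group operations of a full 256-bit scalar.

```cpp
#include <cstddef>
#include <cstdint>

// Sketch: `Group` is any type providing zero(), dbl() and add(); this is a
// hypothetical interface. Cost grows with num_bits, so 128-bit challenges
// roughly halve the work versus 256-bit scalars.
template <typename Group>
Group scalar_mul_sketch(const Group& base, const uint64_t* scalar_limbs, size_t num_bits)
{
    Group acc = Group::zero();
    for (size_t i = num_bits; i-- > 0;) {
        acc = acc.dbl();                               // one doubling per bit
        if ((scalar_limbs[i / 64] >> (i % 64)) & 1ULL) {
            acc = acc.add(base);                       // add when the bit is set
        }
    }
    return acc;
}
```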

Closes #741.

---------

Co-authored-by: lucasxia01 <lucasxia01@gmail.com>
Co-authored-by: Maxim Vezenov <mvezenov@gmail.com>