feat: process note logs in aztec-nr #10651

Merged

merged 30 commits into master from nv/process_note_logs on Jan 16, 2025

Changes from 7 commits

Commits (30)
a124c6a
Sketching out initial approach
nventuro Dec 12, 2024
ec92180
Success!
nventuro Dec 13, 2024
2f52d31
IT LIVES
nventuro Dec 13, 2024
a443446
Misc doc improvements
nventuro Dec 13, 2024
dae83d7
Some more minor comments
nventuro Dec 13, 2024
40d2dae
Remove old ts code
nventuro Dec 14, 2024
893ad80
noir formatting
nventuro Dec 14, 2024
4c88865
Merge branch 'master' into nv/process_note_logs
nventuro Jan 8, 2025
206444f
It works!
nventuro Jan 9, 2025
51a7f0b
Add some docs
nventuro Jan 9, 2025
212219a
Merge branch 'master' into nv/process_note_logs
nventuro Jan 9, 2025
646f5ff
Handle no note contracts
nventuro Jan 9, 2025
b771c98
Fix macro
nventuro Jan 9, 2025
ebf7412
Merge branch 'master' into nv/process_note_logs
nventuro Jan 9, 2025
eccd8b6
Fix import
nventuro Jan 9, 2025
da8408f
Remove extra file
nventuro Jan 10, 2025
d5fe202
Apply suggestions from code review
nventuro Jan 10, 2025
96af47d
Rename foreach
nventuro Jan 10, 2025
48fe292
Move files around
nventuro Jan 10, 2025
7f46d5a
Merge branch 'master' into nv/process_note_logs
nventuro Jan 10, 2025
30cbc8a
If I have to nargo fmt one more time
nventuro Jan 10, 2025
5205cc4
Oh god
nventuro Jan 10, 2025
76bbd1b
zzz
nventuro Jan 10, 2025
d44ec2e
kill me now
nventuro Jan 10, 2025
8b7d508
Add node methods to txe node
nventuro Jan 13, 2025
ff0127c
Merge branch 'master' into nv/process_note_logs
nventuro Jan 13, 2025
8f56981
Add sim prov
nventuro Jan 13, 2025
72ea7c4
Fix build error
nventuro Jan 13, 2025
36c29e8
fix: simulator oracle test
benesjan Jan 15, 2025
d8b24ab
Merge branch 'master' into nv/process_note_logs
benesjan Jan 16, 2025
35 changes: 19 additions & 16 deletions noir-projects/aztec-nr/aztec/src/note/note_interface.nr
@@ -1,6 +1,6 @@
use crate::context::PrivateContext;
use crate::note::note_header::NoteHeader;
use dep::protocol_types::traits::{Empty, Serialize};
use dep::protocol_types::traits::Empty;

pub trait NoteProperties<T> {
fn properties() -> T;
@@ -17,41 +17,44 @@ where
}

pub trait NullifiableNote {
// This function MUST be called with the correct note hash for consumption! It will otherwise silently fail and
// compute an incorrect value.
// The reason why we receive this as an argument instead of computing it ourselves directly is because the
// caller will typically already have computed this note hash, and we can reuse that value to reduce the total
// gate count of the circuit.
/// Returns the non-siloed nullifier, which will be later siloed by contract address by the kernels before being
/// committed to the state tree.
///
/// This function MUST be called with the correct note hash for consumption! It will otherwise silently fail and
/// compute an incorrect value. The reason why we receive this as an argument instead of computing it ourselves
/// directly is because the caller will typically already have computed this note hash, and we can reuse that value
/// to reduce the total gate count of the circuit.
///
/// This function receives the context since nullifier computation typically involves proving nullifying keys, and
/// we require the kernel's assistance to do this in order to prevent having to reveal private keys to application
/// circuits.
fn compute_nullifier(self, context: &mut PrivateContext, note_hash_for_nullify: Field) -> Field;

// Unlike compute_nullifier, this function does not take a note hash since it'll only be invoked in unconstrained
// contexts, where there is no gate count.
/// Same as compute_nullifier, but unconstrained. This version does not take a note hash because it'll only be
/// invoked in unconstrained contexts, where there is no gate count.
unconstrained fn compute_nullifier_without_context(self) -> Field;
}

// docs:start:note_interface
// Autogenerated by the #[note] macro

pub trait NoteInterface<let N: u32> {
// Autogenerated by the #[note] macro
fn serialize_content(self) -> [Field; N];

// Autogenerated by the #[note] macro
fn deserialize_content(fields: [Field; N]) -> Self;

// Autogenerated by the #[note] macro
fn get_header(self) -> NoteHeader;

// Autogenerated by the #[note] macro
fn set_header(&mut self, header: NoteHeader) -> ();

// Autogenerated by the #[note] macro
fn get_note_type_id() -> Field;

// Autogenerated by the #[note] macro
fn to_be_bytes(self, storage_slot: Field) -> [u8; N * 32 + 64];

// Autogenerated by the #[note] macro
/// Returns the non-siloed note hash, i.e. the inner hash computed by the contract during private execution. Note
/// hashes are later siloed by contract address and nonce by the kernels before being committed to the state tree.
///
/// This should be a commitment to the note contents, including the storage slot (for indexing) and some random
/// value (to prevent brute force trial-hashing attacks).
fn compute_note_hash(self) -> Field;
}
// docs:end:note_interface
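
To make the `compute_note_hash` requirement concrete, here is a minimal sketch (not part of this diff) of the kind of commitment the doc comment describes. The function name and parameters are hypothetical, and the use of the standard library's Poseidon2 hash is an assumption; the point is only the structure: commit to the storage slot, the note contents and a random value.

```
// Hypothetical sketch: commit to the storage slot (for indexing), the note contents, and some
// randomness (to prevent brute-force trial hashing), as described in the doc comment above.
use std::hash::poseidon2::Poseidon2;

fn example_note_hash(storage_slot: Field, value: Field, randomness: Field) -> Field {
    Poseidon2::hash([storage_slot, value, randomness], 3)
}
```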
163 changes: 163 additions & 0 deletions noir-projects/aztec-nr/aztec/src/oracle/management.nr
@@ -0,0 +1,163 @@
use std::static_assert;

use crate::{
context::unconstrained_context::UnconstrainedContext, note::note_header::NoteHeader,
utils::array,
};
use dep::protocol_types::{
address::AztecAddress,
constants::{MAX_NOTE_HASHES_PER_TX, PRIVATE_LOG_SIZE_IN_FIELDS},
hash::compute_note_hash_nonce,
};

// We reserve two fields in the note log that are not part of the note content: one for the storage slot, and one for
// the note type id.
global NOTE_LOG_RESERVED_FIELDS: u32 = 2;
global MAX_NOTE_SERIALIZED_LEN: u32 = PRIVATE_LOG_SIZE_IN_FIELDS - NOTE_LOG_RESERVED_FIELDS;

pub struct NoteHashesAndNullifier {
pub note_hash: Field,
pub siloed_note_hash: Field,
pub inner_nullifier: Field,
}

fn for_each_bounded_vec<T, let MaxLen: u32, Env>(
vec: BoundedVec<T, MaxLen>,
f: fn[Env](T, u32) -> (),
) {
for i in 0..MaxLen {
if i < vec.len() {
f(vec.get_unchecked(i), i);
}
}
}
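
A brief usage sketch of this helper (hypothetical test, not part of the diff), assuming a non-capturing closure: only the populated entries are visited, each together with its index.

```
#[test]
unconstrained fn example_for_each_bounded_vec() {
    // Capacity 8, but only three populated entries: the closure runs exactly three times.
    let vec = BoundedVec::<Field, 8>::from_array([10, 11, 12]);
    for_each_bounded_vec(vec, |element, i| {
        // Elements are visited in order, so the element at index i is 10 + i.
        assert_eq(element, 10 + (i as Field));
    });
}
```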

/// Processes a log given its plaintext by trying to find notes encoded in it. This involves discovering the nonce of
/// any such notes, which requires knowledge of the hash of the transaction in which the notes would've been created,
/// along with the list of siloed note hashes in said transaction.
///
/// Additionally, this requires a `compute_note_hash_and_nullifier` lambda that is able to compute these values for any
/// note in the contract given its contents. A typical implementation of such a function would look like this:
///
/// ```
/// |serialized_note_content, note_header, note_type_id| {
/// let hashes = if note_type_id == MyNoteType::get_note_type_id() {
/// assert(serialized_note_content.len() == MY_NOTE_TYPE_SERIALIZATION_LENGTH);
/// dep::aztec::note::utils::compute_note_hash_and_optionally_a_nullifier(
/// MyNoteType::deserialize_content,
/// note_header,
/// true,
/// serialized_note_content.storage(),
/// )
/// } else {
/// panic(f"Unknown note type id {note_type_id}")
/// };
///
/// Option::some(dep::aztec::oracle::management::NoteHashesAndNullifier {
/// note_hash: hashes[0],
/// siloed_note_hash: hashes[2],
/// inner_nullifier: hashes[3],
/// })
/// }
/// ```
pub unconstrained fn process_log<Env>(
context: UnconstrainedContext,
log_plaintext: BoundedVec<Field, PRIVATE_LOG_SIZE_IN_FIELDS>,
tx_hash: Field,
siloed_note_hashes_in_tx: BoundedVec<Field, MAX_NOTE_HASHES_PER_TX>,
recipient: AztecAddress,
compute_note_hash_and_nullifier: fn[Env](BoundedVec<Field, MAX_NOTE_SERIALIZED_LEN>, NoteHeader, Field) -> Option<NoteHashesAndNullifier>,
) {
let (storage_slot, note_type_id, serialized_note_content) =
destructure_log_plaintext(log_plaintext);

// We need to find the note's nonce, i.e. the nonce that results in one of the siloed note hashes of tx_hash
for_each_bounded_vec(
siloed_note_hashes_in_tx,
|expected_siloed_note_hash, i| {
let candidate_nonce = compute_note_hash_nonce(tx_hash, i);

let header = NoteHeader::new(context.this_address(), candidate_nonce, storage_slot);

// TODO: handle failed note_hash_and_nullifier computation
let hashes = compute_note_hash_and_nullifier(
serialized_note_content,
header,
note_type_id,
)
.unwrap();

if hashes.siloed_note_hash == expected_siloed_note_hash {
// TODO(#10726): push these into a vec to deliver all at once instead of having one oracle call per note
deliver_note(
context.this_address(), // TODO(#10727): allow other contracts to deliver notes
storage_slot,
candidate_nonce,
serialized_note_content,
hashes.note_hash,
hashes.inner_nullifier,
tx_hash,
recipient,
);

// We don't exit the loop - it is possible (though rare) for the same note content to be present
// multiple times in the same transaction with different nonces.
}
},
);
}

unconstrained fn destructure_log_plaintext(
log_plaintext: BoundedVec<Field, PRIVATE_LOG_SIZE_IN_FIELDS>,
) -> (Field, Field, BoundedVec<Field, MAX_NOTE_SERIALIZED_LEN>) {
assert(log_plaintext.len() >= NOTE_LOG_RESERVED_FIELDS);

static_assert(
NOTE_LOG_RESERVED_FIELDS == 2,
"unepxected value for NOTE_LOG_RESERVED_FIELDS",
);
let storage_slot = log_plaintext.get(0);
let note_type_id = log_plaintext.get(1);

let serialized_note_content = array::subbvec(log_plaintext, NOTE_LOG_RESERVED_FIELDS);

(storage_slot, note_type_id, serialized_note_content)
}
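
For illustration, a hypothetical test (not part of the diff) showing the layout this function assumes, with made-up values for the storage slot and note type id:

```
#[test]
unconstrained fn example_destructure_log_plaintext() {
    // Plaintext layout: [storage_slot, note_type_id, ...serialized_note_content]
    let log_plaintext =
        BoundedVec::<Field, PRIVATE_LOG_SIZE_IN_FIELDS>::from_array([42, 7, 1, 2, 3]);

    let (storage_slot, note_type_id, content) = destructure_log_plaintext(log_plaintext);

    assert_eq(storage_slot, 42);
    assert_eq(note_type_id, 7);
    assert_eq(content, BoundedVec::<_, MAX_NOTE_SERIALIZED_LEN>::from_array([1, 2, 3]));
}
```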

unconstrained fn deliver_note(
contract_address: AztecAddress,
storage_slot: Field,
nonce: Field,
content: BoundedVec<Field, MAX_NOTE_SERIALIZED_LEN>,
note_hash: Field,
nullifier: Field,
tx_hash: Field,
recipient: AztecAddress,
) {
// TODO(#10728): do something instead of failing (e.g. not advance tagging indices)
assert(
deliver_note_oracle(
contract_address,
storage_slot,
nonce,
content,
note_hash,
nullifier,
tx_hash,
recipient,
),
"Failed to deliver note",
);
}

#[oracle(deliverNote)]
unconstrained fn deliver_note_oracle(
contract_address: AztecAddress,
storage_slot: Field,
nonce: Field,
content: BoundedVec<Field, MAX_NOTE_SERIALIZED_LEN>,
note_hash: Field,
nullifier: Field,
tx_hash: Field,
recipient: AztecAddress,
) -> bool {}
1 change: 1 addition & 0 deletions noir-projects/aztec-nr/aztec/src/oracle/mod.nr
@@ -12,6 +12,7 @@ pub mod get_public_data_witness;
pub mod get_membership_witness;
pub mod keys;
pub mod key_validation_request;
pub mod management;
pub mod get_sibling_path;
pub mod random;
pub mod enqueue_public_function_call;
2 changes: 2 additions & 0 deletions noir-projects/aztec-nr/aztec/src/utils/array/mod.nr
@@ -1,5 +1,7 @@
mod collapse;
mod subarray;
mod subbvec;

pub use collapse::collapse;
pub use subarray::subarray;
pub use subbvec::subbvec;
24 changes: 13 additions & 11 deletions noir-projects/aztec-nr/aztec/src/utils/array/subarray.nr
@@ -1,18 +1,20 @@
/// Returns `DST_LEN` elements from a source array, starting at `offset`. `DST_LEN` must be large enough to hold all of
/// the elements past `offset`.
/// Returns `DST_LEN` elements from a source array, starting at `offset`. `DST_LEN` must not be larger than the number
/// of elements past `offset`.
///
/// Example:
/// Examples:
/// ```
/// let foo: [Field; 2] = subarray([1, 2, 3, 4, 5], 2);
/// assert_eq(foo, [3, 4]);
///
/// let bar: [Field; 5] = subarray([1, 2, 3, 4, 5], 2); // fails - we can't return 5 elements since only 3 remain
/// ```
pub fn subarray<let SRC_LEN: u32, let DST_LEN: u32>(
src: [Field; SRC_LEN],
pub fn subarray<T, let SRC_LEN: u32, let DST_LEN: u32>(
src: [T; SRC_LEN],
offset: u32,
) -> [Field; DST_LEN] {
assert(offset + DST_LEN <= SRC_LEN, "offset too large");
) -> [T; DST_LEN] {
assert(offset + DST_LEN <= SRC_LEN, "DST_LEN too large for offset");

let mut dst: [Field; DST_LEN] = std::mem::zeroed();
let mut dst: [T; DST_LEN] = std::mem::zeroed();
for i in 0..DST_LEN {
dst[i] = src[i + offset];
}
@@ -26,14 +28,14 @@ mod test {
#[test]
unconstrained fn subarray_into_empty() {
// In all of these cases we're setting DST_LEN to be 0, so we always get back an empty array.
assert_eq(subarray([], 0), []);
assert_eq(subarray::<Field, _, _>([], 0), []);
assert_eq(subarray([1, 2, 3, 4, 5], 0), []);
assert_eq(subarray([1, 2, 3, 4, 5], 2), []);
}

#[test]
unconstrained fn subarray_complete() {
assert_eq(subarray([], 0), []);
assert_eq(subarray::<Field, _, _>([], 0), []);
assert_eq(subarray([1, 2, 3, 4, 5], 0), [1, 2, 3, 4, 5]);
}

@@ -46,7 +48,7 @@ mod test {
assert_eq(subarray([1, 2, 3, 4, 5], 1), [2]);
}

#[test(should_fail)]
#[test(should_fail_with = "DST_LEN too large for offset")]
unconstrained fn subarray_offset_too_large() {
// With an offset of 1 we can only request up to 4 elements
let _: [_; 5] = subarray([1, 2, 3, 4, 5], 1);
92 changes: 92 additions & 0 deletions noir-projects/aztec-nr/aztec/src/utils/array/subbvec.nr
@@ -0,0 +1,92 @@
use crate::utils::array;

/// Returns up to `DST_MAX_LEN` elements from a source BoundedVec, starting at `offset`. `offset` must not be larger
/// than the original length, and `DST_MAX_LEN` must not be larger than the total number of elements past `offset`
/// (including the zeroed elements past `len()`).
///
/// Only elements at the beginning of the vector can be removed: it is not possible to also remove elements at the end
/// of the vector by passing a value for `DST_MAX_LEN` that is smaller than `len() - offset`.
///
/// Examples:
/// ```
/// let foo = BoundedVec::<_, 10>::from_array([1, 2, 3, 4, 5]);
/// assert_eq(subbvec(foo, 2), BoundedVec::<_, 8>::from_array([3, 4, 5]));
///
/// let bar: BoundedVec<_, 1> = subbvec(foo, 2); // fails - we can't return just 1 element since 3 remain
/// let baz: BoundedVec<_, 10> = subbvec(foo, 3); // fails - we can't return 10 elements since only 7 remain
/// ```
pub fn subbvec<T, let SRC_MAX_LEN: u32, let DST_MAX_LEN: u32>(
vec: BoundedVec<T, SRC_MAX_LEN>,
offset: u32,
) -> BoundedVec<T, DST_MAX_LEN> {
// from_parts_unchecked does not verify that the elements past len are zeroed, but that is not an issue in our case
// because we're constructing the new storage array as a subarray of the original one (which should have zeroed
// storage past len), guaranteeing correctness. This is because `subarray` does not allow extending arrays past
// their original length.
BoundedVec::from_parts_unchecked(array::subarray(vec.storage(), offset), vec.len() - offset)
}

mod test {
use super::subbvec;

#[test]
unconstrained fn subbvec_empty() {
let bvec = BoundedVec::<Field, 0>::from_array([]);
assert_eq(subbvec(bvec, 0), bvec);
}

#[test]
unconstrained fn subbvec_complete() {
let bvec = BoundedVec::<_, 10>::from_array([1, 2, 3, 4, 5]);
assert_eq(subbvec(bvec, 0), bvec);

let smaller_capacity = BoundedVec::<_, 5>::from_array([1, 2, 3, 4, 5]);
assert_eq(subbvec(bvec, 0), smaller_capacity);
}

#[test]
unconstrained fn subbvec_partial() {
let bvec = BoundedVec::<_, 10>::from_array([1, 2, 3, 4, 5]);

assert_eq(subbvec(bvec, 2), BoundedVec::<_, 8>::from_array([3, 4, 5]));
assert_eq(subbvec(bvec, 2), BoundedVec::<_, 3>::from_array([3, 4, 5]));
}

#[test]
unconstrained fn subbvec_into_empty() {
let bvec: BoundedVec<_, 10> = BoundedVec::from_array([1, 2, 3, 4, 5]);
assert_eq(subbvec(bvec, 5), BoundedVec::<_, 5>::from_array([]));
}

#[test(should_fail)]
unconstrained fn subbvec_offset_past_len() {
let bvec = BoundedVec::<_, 10>::from_array([1, 2, 3, 4, 5]);
let _: BoundedVec<_, 1> = subbvec(bvec, 6);
}

#[test(should_fail)]
unconstrained fn subbvec_insufficient_dst_len() {
let bvec = BoundedVec::<_, 10>::from_array([1, 2, 3, 4, 5]);

// We're not providing enough space to hold all of the items inside the original BoundedVec. subbvec can reduce the
// capacity, but not the length (which is always `len() - offset`).
let _: BoundedVec<_, 1> = subbvec(bvec, 2);
}

#[test(should_fail_with = "DST_LEN too large for offset")]
unconstrained fn subbvec_dst_len_causes_enlarge() {
let bvec = BoundedVec::<_, 10>::from_array([1, 2, 3, 4, 5]);

// subbvec does not support capacity increases
let _: BoundedVec<_, 11> = subbvec(bvec, 0);
}

#[test(should_fail_with = "DST_LEN too large for offset")]
unconstrained fn subbvec_dst_len_too_large_for_offset() {
let bvec = BoundedVec::<_, 10>::from_array([1, 2, 3, 4, 5]);

// This effectively requests a capacity increase: past offset 4 there are only 6 slots (one element plus the 5
// empty ones), which is less than the requested capacity of 7.
let _: BoundedVec<_, 7> = subbvec(bvec, 4);
}
}