Compute columns in post-PeerDAS checkpoint sync #6760

Open

wants to merge 1 commit into base: unstable
Conversation

dapplion
Collaborator

@dapplion dapplion commented Jan 7, 2025

Issue Addressed

Addresses #6026.

Post-PeerDAS, the DB expects to have data columns for the finalized block.

Proposed Changes

Instead of forcing the user to submit the columns, this PR computes the columns from the blobs that we can already fetch from the checkpointz server or with the existing CLI options.
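The column computation being proposed can be pictured as a transpose of the block's blobs: each (erasure-extended) blob is split into cells, and column `i` collects cell `i` from every blob. The sketch below is purely illustrative and is not the Lighthouse API: the names `DataColumn` and `compute_columns_from_blobs` are hypothetical, the cell count is shrunk for readability, and real code would erasure-extend each blob and compute KZG cell proofs rather than slicing raw bytes.

```rust
// Illustrative only: a real implementation erasure-extends each blob and
// attaches KZG cell proofs; here a "cell" is just a plain byte chunk.
const CELLS_PER_EXT_BLOB: usize = 8; // shrunk for illustration; mainnet uses 128

#[derive(Debug, PartialEq)]
struct DataColumn {
    index: usize,
    cells: Vec<Vec<u8>>, // one cell from each blob in the block
}

fn compute_columns_from_blobs(blobs: &[Vec<u8>]) -> Vec<DataColumn> {
    (0..CELLS_PER_EXT_BLOB)
        .map(|index| DataColumn {
            index,
            // Column `index` takes cell `index` from every blob (a transpose
            // of the blob/cell matrix).
            cells: blobs
                .iter()
                .map(|blob| {
                    let cell_size = blob.len() / CELLS_PER_EXT_BLOB;
                    blob[index * cell_size..(index + 1) * cell_size].to_vec()
                })
                .collect(),
        })
        .collect()
}
```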

Note 1: (EDIT) Pruning concern addressed

Note 2: I have not tested this feature

Note 3: @michaelsproul an alternative I recall is to not require the blobs / columns at this point and expect backfill to populate the finalized block

@dapplion dapplion added the das Data Availability Sampling label Jan 7, 2025
@jimmygchen jimmygchen self-assigned this Jan 7, 2025
@jimmygchen jimmygchen added the ready-for-review The code is ready for review label Jan 7, 2025
Member

@jimmygchen jimmygchen left a comment

Nice and simple! Should have done this ages ago 😅

Just a TODO comment to address, but we can probably start testing with this!

@jimmygchen jimmygchen added the ready-for-merge This PR is ready to merge. label and removed the ready-for-review The code is ready for review label Jan 9, 2025
@jimmygchen
Member

@michaelsproul I think this approach is simple and maintains the same behaviour as Deneb - but keen to hear if you have any preference for, or see benefits in, the alternatives discussed earlier (breaking the existing db invariant and populating the columns via backfill / p2p etc)?

Note 1: We should only store the columns that are meant to be in custody. Otherwise, later pruning won't delete the extras. However, accessing the NodeID and computing the custody indices is quite cumbersome as the logic is inside the network init. Ideas?

Yeah, it is a bit of a pain - this code is executed before the network is initialised, so we don't have the node ID yet, and I'm not sure it's worth doing a big refactor for this. Why wouldn't blob pruning delete the extras? It looks like it iterates through all columns for the root, unless I'm looking at the wrong thing?

let indices = self.get_data_column_keys(block_root)?;
if !indices.is_empty() {
    trace!(
        self.log,
        "Pruning data columns of block";
        "slot" => slot,
        "block_root" => ?block_root,
    );
    last_pruned_block_root = Some(block_root);
    ops.push(StoreOp::DeleteDataColumns(block_root, indices));
}

@dapplion I think this should work - would you mind giving this a quick test locally before we merge?

@jimmygchen jimmygchen added the under-review A reviewer has only partially completed a review. label Jan 9, 2025
@jimmygchen
Member

@michaelsproul I think this approach is simple and maintains the same behaviour as Deneb - but keen to hear if you have any preference for, or see benefits in, the alternatives discussed earlier (breaking the existing db invariant and populating the columns via backfill / p2p etc)?

Oh, I think the downside with this approach is that the checkpoint server would have to serve blobs, i.e. be a supernode. The latter approach above would allow syncing columns from peers instead, so the checkpointz server wouldn't have to store all columns (if we have 16 MB of blobs per block, that'd be ~2 TB of blob data that the checkpointz server would have to store).
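The ~2 TB figure checks out as back-of-envelope arithmetic. Assuming 16 MB of blob data per block, one block per 12-second slot, and an 18-day retention window (the function name and the exact retention period here are illustrative assumptions, not Lighthouse constants):

```rust
// Back-of-envelope storage estimate: blocks are one per 12 s slot, so
// bytes = (days * 86400 / 12) slots * mb_per_block * 1e6 bytes.
fn blob_storage_bytes(days: u64, mb_per_block: u64) -> u64 {
    let slots = days * 24 * 3600 / 12; // 7200 slots per day
    slots * mb_per_block * 1_000_000
}
```

With `blob_storage_bytes(18, 16)` this gives 129,600 slots × 16 MB ≈ 2.07 TB, matching the estimate above.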

Or, if we're ok with the checkpointz server serving blobs, we could potentially introduce a --checkpoint-server flag for servers to store checkpoint blobs only (so they can serve them) instead of having to store all columns for 18 days?

@jimmygchen
Member

This PR addresses #6026

@mrabino1

@jimmygchen forgive my ignorance of the specs here (and of the existing implementation). Quick question: instead of storing the blobs on a checkpoint server, wouldn't it be easier to store only the hash, such that clients on launch would query / gossip for those newly requested blobs over p2p? From my understanding, the objective of the checkpoint server is to let a newly launched node get going with minimal effort... but just like the state, LH still has to backfill... wouldn't similar logic apply here? Thanks.

@jimmygchen
Member

Hi @mrabino1

Yes, it is possible to download the blobs from peers, via the alternative @dapplion mentioned:

Note 3: @michaelsproul an alternative I recall is to not require the blobs / columns at this point and expect backfill to populate the finalized block

The proposed solution in this PR behaves similarly to Mainnet today - we download and store the checkpoint state, block and blobs - so this approach requires minimal changes in Lighthouse, and will likely work without additional effort - it may not be the final solution but could be useful for testing now.

We could consider the alternative to download from peers - it means we'd have to break the current invariant of storing complete block and blobs in the database, and will need to have some extra logic to populate the finalized block when backfill completes.

@realbigsean realbigsean requested review from jimmygchen and realbigsean and removed request for jimmygchen and realbigsean January 12, 2025 22:38
@michaelsproul
Member

I think breaking the invariant that we have all blobs/columns for blocks probably makes sense, seeing as there are multiple cases now where this is useful. The other main use case is importing partial blob history, e.g. all txns related to some L2. This is being worked on in:

I think it makes sense to restructure the DB storage as well so that we don't index by block_root -> list of blobs/data columns. For blobs it's too late (we need a schema upgrade), but we should change the data column schema before we ship PeerDAS on a permanent testnet (we can break schema versions between devnets without issue IMO).

@michaelsproul
Member

I think this PR is fine for now until we get our ducks in a row

@realbigsean realbigsean requested review from realbigsean and removed request for realbigsean January 13, 2025 00:16
@dapplion
Collaborator Author

@jimmygchen

Why wouldn't blob pruning delete the extras? It looks like it iterates through all columns for the root, unless I'm looking at the wrong thing?

You are right, pruning does clean them up.

@dapplion
Collaborator Author

@michaelsproul

I think it makes sense to restructure the DB storage as well so that we don't index by block_root -> list of blobs/data columns

The current DB indexes columns by [root:column_index] - what do you propose indexing by instead?
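For concreteness, the [root:column_index] layout under discussion can be sketched as a composite key: the block root followed by a big-endian column index, so all columns of a block sit adjacent on disk and are range-scannable by root. The function name and 8-byte index width below are illustrative assumptions, not the actual Lighthouse schema.

```rust
// Hypothetical sketch of a [block_root | column_index] composite key:
// 32 bytes of root, then the column index as 8 big-endian bytes, so
// lexicographic key order groups a block's columns together.
fn data_column_key(block_root: &[u8; 32], column_index: u64) -> Vec<u8> {
    let mut key = Vec::with_capacity(40);
    key.extend_from_slice(block_root);
    key.extend_from_slice(&column_index.to_be_bytes());
    key
}
```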

@jimmygchen jimmygchen requested review from jimmygchen and removed request for jimmygchen January 14, 2025 07:31
@jimmygchen jimmygchen added the waiting-on-author The reviewer has suggested changes and awaits their implementation. label and removed the under-review A reviewer has only partially completed a review. label Jan 15, 2025
Labels
das Data Availability Sampling, waiting-on-author The reviewer has suggested changes and awaits their implementation.