Import expired blobs #5391
We should try to include an option for this in the next release, because I think it'll be an important feature once blobs start expiring on mainnet. I'm not sure which path is easier/more useful, so I would appreciate feedback. We could end up implementing both, but I think starting with one for the next release makes sense.

RPC download

DB import/export

When Lighthouse is shut off:

On startup:
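The RPC-download path would presumably fetch expired blobs from a node that still stores them, via the standard beacon API blob-sidecars endpoint. A minimal sketch, assuming only the standard `/eth/v1/beacon/blob_sidecars/{block_id}` endpoint; the function name and provider URL are illustrative, not Lighthouse code:

```rust
// Hedged sketch of the "RPC download" path: build the standard beacon API
// request for a block's blob sidecars, to be sent to an archive/provider
// node that still stores expired blobs. Everything here besides the
// endpoint path (function name, provider URL) is illustrative.
fn blob_sidecars_url(provider_base: &str, block_id: &str) -> String {
    format!(
        "{}/eth/v1/beacon/blob_sidecars/{}",
        provider_base.trim_end_matches('/'),
        block_id
    )
}

fn main() {
    // Per the beacon API, block_id may be a slot, a block root, or "head".
    let url = blob_sidecars_url("http://archive-node:5052/", "head");
    println!("{}", url);
}
```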
Any updates regarding this?
@jonathanudd Not yet, sorry. We've been blocked a bit on implementing PeerDAS, but now that the bulk of that code is in.

One complication is that with PeerDAS most nodes will cease to store whole blobs, so only supernodes (nodes that opt to store all blobs) will be able to implement the HTTP API for fetching/exporting blobs. Every other node will just have fragments ("columns").

Another issue is partial blob storage. At the moment Lighthouse uses a single marker to track which blobs are available: the

One way forward might be:
Down the line we can:
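The per-block tracking problem described above could be sketched like this. This is a hypothetical schema, not Lighthouse's actual database layout: the idea is to replace a single global "blobs available" marker with per-slot availability records, so imported historic blobs can coexist with gaps and with PeerDAS column-only storage.

```rust
// Hypothetical sketch (not Lighthouse's actual schema): per-slot blob
// availability records instead of one global marker.
use std::collections::BTreeMap;

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum BlobAvailability {
    Full,        // all blobs for the block are stored
    ColumnsOnly, // only some data columns are stored (PeerDAS non-supernode)
}

#[derive(Default)]
struct BlobIndex {
    by_slot: BTreeMap<u64, BlobAvailability>,
}

impl BlobIndex {
    fn record(&mut self, slot: u64, avail: BlobAvailability) {
        self.by_slot.insert(slot, avail);
    }

    // Whole blobs can only be served over the HTTP API when fully stored.
    fn can_serve_blobs(&self, slot: u64) -> bool {
        self.by_slot.get(&slot) == Some(&BlobAvailability::Full)
    }
}

fn main() {
    let mut index = BlobIndex::default();
    index.record(100, BlobAvailability::Full);        // imported historic blobs
    index.record(200, BlobAvailability::ColumnsOnly); // PeerDAS fragments
    println!("{} {}", index.can_serve_blobs(100), index.can_serve_blobs(200));
}
```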
Mac and I are going to start working on this.
Some feedback based on the initial MVP:
New plan:
Things to check:
Notes from today's call:
Description

Set `min_epochs_for_blob_sidecars_requests` to `u64::MAX`, and add a `--blob-provider` flag that points to an archive node's beacon API.
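As a hedged illustration of why `u64::MAX` works as a "never prune" value (the function and parameter names below are illustrative, not Lighthouse's actual pruning code): with saturating arithmetic the prune cutoff stays pinned at epoch 0, so no blob ever becomes old enough to prune.

```rust
// Hedged sketch: why setting the minimum-retention parameter to u64::MAX
// effectively disables blob pruning. Names are illustrative only.
fn should_prune_blobs(current_epoch: u64, blob_epoch: u64, min_epochs: u64) -> bool {
    // With min_epochs = u64::MAX the cutoff saturates at epoch 0,
    // so no blob is ever older than the cutoff.
    let cutoff_epoch = current_epoch.saturating_sub(min_epochs);
    blob_epoch < cutoff_epoch
}

fn main() {
    println!("{}", should_prune_blobs(1_000_000, 0, u64::MAX)); // never prunes
    println!("{}", should_prune_blobs(100, 0, 10));             // normal pruning
}
```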