feat: add set code tx support #43

Status: Open · wants to merge 51 commits into base: main

Changes from 49 commits

Commits (51):
286f209 add initial CodecV6 and daBatchV6 (jonastheis, Dec 27, 2024)
6767845 feat: add codecv5 and codecv6 for Euclid fork (omerfirmak, Jan 7, 2025)
cc9561b implement blob encoding and decoding according to new blob layout (jonastheis, Jan 21, 2025)
8c2a5cc rename to CodecV7 (jonastheis, Jan 21, 2025)
9117170 add NewDABatchFromParams (jonastheis, Jan 22, 2025)
4ef7bfc add DecodeBlob to Codec (jonastheis, Jan 22, 2025)
bf16156 Update da.go (omerfirmak, Jan 27, 2025)
2817674 Update interfaces.go (omerfirmak, Jan 27, 2025)
7a60b34 Merge remote-tracking branch 'origin/omerfirmak/euclid' into feat/cod… (jonastheis, Jan 28, 2025)
64133ef fixes after merge (jonastheis, Jan 28, 2025)
1dde89a address review comments (jonastheis, Jan 29, 2025)
c9c1a44 add sanity checks for blob payload generation (jonastheis, Jan 30, 2025)
e980b3d fix few small bugs uncovered by unit tests (jonastheis, Jan 31, 2025)
0e930c6 upgrade to latest l2geth version and add correct getter for CodecV7 i… (jonastheis, Jan 31, 2025)
5d200f3 fix linter warnings (jonastheis, Jan 31, 2025)
5292e3c add unit tests (jonastheis, Jan 31, 2025)
3cfed43 go mod tidy (jonastheis, Jan 31, 2025)
eed341f fix linter warnings (jonastheis, Jan 31, 2025)
be6b422 add function MessageQueueV2ApplyL1MessagesFromBlocks to compute the L… (jonastheis, Feb 3, 2025)
d77916b fix lint and unit test errors (Feb 3, 2025)
b71c047 call checkCompressedDataCompatibility only once -> constructBlobPaylo… (jonastheis, Feb 4, 2025)
cbed8b2 address review comments (jonastheis, Feb 4, 2025)
392b6ff update BlobEnvelopeV7 documentation (jonastheis, Feb 4, 2025)
edaf5d2 add CodecV7 to general util functions (jonastheis, Feb 5, 2025)
894a93b add InitialL1MessageQueueHash and LastL1MessageQueueHash to encoding.… (jonastheis, Feb 5, 2025)
f3271d9 Merge remote-tracking branch 'origin/main' into feat/codec-v6 (jonastheis, Feb 7, 2025)
2611ae1 go mod tidy (jonastheis, Feb 7, 2025)
4d46aad upgrade go-ethereum dependency to latest develop (jonastheis, Feb 7, 2025)
f4b274c implement estimate functions (jonastheis, Feb 7, 2025)
3c106a2 update TestMain and run go mod tidy (Thegaram, Feb 7, 2025)
538036b add NewDAChunk to CodecV7 for easier use in relayer (jonastheis, Feb 9, 2025)
14d07e7 Merge branch 'feat/codec-v6' of github.com:scroll-tech/da-codec into … (jonastheis, Feb 9, 2025)
cfb316b add daChunkV7 type to calculate chunk hash (jonastheis, Feb 9, 2025)
c6ae41e allow batch.chunks but check consistency with batch.blocks (jonastheis, Feb 10, 2025)
d028c53 fix off-by-one error with L1 messages (jonastheis, Feb 10, 2025)
8fa5e27 Fix: rolling hash implementation (#42) (roynalnaruto, Feb 14, 2025)
4f13363 Apply suggestions from code review (jonastheis, Feb 18, 2025)
bcad556 rename initialL1MessageQueueHash -> prevL1MessageQueueHash and lastL1… (jonastheis, Feb 18, 2025)
7522931 address review comments (jonastheis, Feb 18, 2025)
32f5b49 address review comments (jonastheis, Feb 18, 2025)
0247443 add challenge digest computation for batch (jonastheis, Feb 18, 2025)
2043787 remove InitialL1MessageIndex from CodecV7 (jonastheis, Feb 19, 2025)
de09af4 address review comments (jonastheis, Feb 19, 2025)
f9608ed fix tests (jonastheis, Feb 19, 2025)
01bd9b5 refactoring to minimize duplicate code and increase maintainability (jonastheis, Feb 20, 2025)
fca406c fix nil pointer (jonastheis, Feb 20, 2025)
836dd1e feat: add setcode tx support (colinlyguo, Feb 20, 2025)
5b19a27 add AccessList and AuthList (colinlyguo, Feb 20, 2025)
05c0bbc go mod tidy (colinlyguo, Feb 20, 2025)
6ab79d4 Merge branch 'main' into feat/set-code-tx-encoding (colinlyguo, Feb 21, 2025)
273e28e fix conflict fix bugs (colinlyguo, Feb 21, 2025)
8 changes: 8 additions & 0 deletions encoding/codecv0.go
@@ -161,6 +161,10 @@ func (d *DACodecV0) DecodeTxsFromBlob(blob *kzg4844.Blob, chunks []*DAChunkRawTx
return nil
}

// DecodeBlob is a stub in DACodecV0; blob decoding is not supported by this codec version, so a nil payload is returned.
func (d *DACodecV0) DecodeBlob(blob *kzg4844.Blob) (DABlobPayload, error) {
return nil, nil
}

// NewDABatch creates a DABatch from the provided Batch.
func (d *DACodecV0) NewDABatch(batch *Batch) (DABatch, error) {
// this encoding can only support a fixed number of chunks per batch
@@ -223,6 +227,10 @@ func (d *DACodecV0) NewDABatchFromBytes(data []byte) (DABatch, error) {
), nil
}

// NewDABatchFromParams is a stub in DACodecV0; it returns a nil DABatch to satisfy the Codec interface.
func (d *DACodecV0) NewDABatchFromParams(_ uint64, _, _ common.Hash) (DABatch, error) {
return nil, nil
}

// EstimateBlockL1CommitCalldataSize calculates the calldata size in l1 commit for this block approximately.
func (d *DACodecV0) EstimateBlockL1CommitCalldataSize(b *Block) (uint64, error) {
var size uint64
361 changes: 361 additions & 0 deletions encoding/codecv7.go
@@ -0,0 +1,361 @@
package encoding

import (
"crypto/sha256"
"encoding/hex"
"encoding/json"
"errors"
"fmt"
"math"

"github.com/scroll-tech/go-ethereum/common"
"github.com/scroll-tech/go-ethereum/core/types"
"github.com/scroll-tech/go-ethereum/crypto/kzg4844"
"github.com/scroll-tech/go-ethereum/log"

"github.com/scroll-tech/da-codec/encoding/zstd"
)

type DACodecV7 struct{}

// Version returns the codec version.
func (d *DACodecV7) Version() CodecVersion {
return CodecV7
}

// MaxNumChunksPerBatch returns the maximum number of chunks per batch.
func (d *DACodecV7) MaxNumChunksPerBatch() int {
return math.MaxInt
}

// NewDABlock creates a new DABlock from the given Block and the total number of L1 messages popped before.
func (d *DACodecV7) NewDABlock(block *Block, totalL1MessagePoppedBefore uint64) (DABlock, error) {
return newDABlockV7FromBlockWithValidation(block, &totalL1MessagePoppedBefore)
}

// NewDAChunk creates a new DAChunk from the given Chunk and the total number of L1 messages popped before.
// Note: In DACodecV7 there is no notion of chunks; blobs contain the entire batch data without any chunk boundaries.
// However, for compatibility reasons this function is implemented to create a DAChunk from a Chunk.
// This way we can still uniquely identify a set of blocks and their L1 messages.
func (d *DACodecV7) NewDAChunk(chunk *Chunk, totalL1MessagePoppedBefore uint64) (DAChunk, error) {
if chunk == nil {
return nil, errors.New("chunk is nil")
}

if len(chunk.Blocks) == 0 {
return nil, errors.New("number of blocks is 0")
}

if len(chunk.Blocks) > math.MaxUint16 {
return nil, fmt.Errorf("number of blocks (%d) exceeds maximum allowed (%d)", len(chunk.Blocks), math.MaxUint16)
}

blocks := make([]DABlock, 0, len(chunk.Blocks))
txs := make([][]*types.TransactionData, 0, len(chunk.Blocks))

if err := iterateAndVerifyBlocksAndL1Messages(chunk.PrevL1MessageQueueHash, chunk.PostL1MessageQueueHash, chunk.Blocks, &totalL1MessagePoppedBefore, func(initialBlockNumber uint64) {}, func(block *Block, daBlock *daBlockV7) error {
blocks = append(blocks, daBlock)
txs = append(txs, block.Transactions)

return nil
}); err != nil {
return nil, fmt.Errorf("failed to iterate and verify blocks and L1 messages: %w", err)
}

daChunk := newDAChunkV7(
blocks,
txs,
)

return daChunk, nil
}

// NewDABatch creates a DABatch including blob from the provided Batch.
func (d *DACodecV7) NewDABatch(batch *Batch) (DABatch, error) {
if len(batch.Blocks) == 0 {
return nil, errors.New("batch must contain at least one block")
}

if err := checkBlocksBatchVSChunksConsistency(batch); err != nil {
return nil, fmt.Errorf("failed to check blocks batch vs chunks consistency: %w", err)
}

blob, blobVersionedHash, blobBytes, err := d.constructBlob(batch)
if err != nil {
return nil, fmt.Errorf("failed to construct blob: %w", err)
}

daBatch, err := newDABatchV7(CodecV7, batch.Index, blobVersionedHash, batch.ParentBatchHash, blob, blobBytes)
if err != nil {
return nil, fmt.Errorf("failed to construct DABatch: %w", err)
}

return daBatch, nil
}

func (d *DACodecV7) constructBlob(batch *Batch) (*kzg4844.Blob, common.Hash, []byte, error) {
blobBytes := make([]byte, blobEnvelopeV7OffsetPayload)

payloadBytes, err := d.constructBlobPayload(batch)
if err != nil {
return nil, common.Hash{}, nil, fmt.Errorf("failed to construct blob payload: %w", err)
}

compressedPayloadBytes, enableCompression, err := d.checkCompressedDataCompatibility(payloadBytes)
if err != nil {
return nil, common.Hash{}, nil, fmt.Errorf("failed to check batch compressed data compatibility: %w", err)
}

isCompressedFlag := uint8(0x0)
if enableCompression {
isCompressedFlag = 0x1
payloadBytes = compressedPayloadBytes
}

sizeSlice := encodeSize3Bytes(uint32(len(payloadBytes)))

blobBytes[blobEnvelopeV7OffsetVersion] = uint8(CodecV7)
copy(blobBytes[blobEnvelopeV7OffsetByteSize:blobEnvelopeV7OffsetCompressedFlag], sizeSlice)
blobBytes[blobEnvelopeV7OffsetCompressedFlag] = isCompressedFlag
blobBytes = append(blobBytes, payloadBytes...)

if len(blobBytes) > maxEffectiveBlobBytes {
log.Error("ConstructBlob: Blob payload exceeds maximum size", "size", len(blobBytes), "blobBytes", hex.EncodeToString(blobBytes))
return nil, common.Hash{}, nil, fmt.Errorf("blob exceeds maximum size: got %d, allowed %d", len(blobBytes), maxEffectiveBlobBytes)
}

// convert raw data to BLSFieldElements
blob, err := makeBlobCanonical(blobBytes)
if err != nil {
return nil, common.Hash{}, nil, fmt.Errorf("failed to convert blobBytes to canonical form: %w", err)
}

// compute blob versioned hash
c, err := kzg4844.BlobToCommitment(blob)
if err != nil {
return nil, common.Hash{}, nil, fmt.Errorf("failed to create blob commitment: %w", err)
}
blobVersionedHash := kzg4844.CalcBlobHashV1(sha256.New(), &c)

return blob, blobVersionedHash, blobBytes, nil
}
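The envelope layout written above (version byte, 3-byte payload size, compression flag, then the payload) is what DecodeBlob later reads back. A standalone round-trip sketch, using illustrative offset values mirroring the blobEnvelopeV7* constants (the actual constants live elsewhere in the package, and big-endian size encoding is an assumption here):

```go
package main

import (
	"bytes"
	"fmt"
)

// Illustrative offsets mirroring the blobEnvelopeV7* constants in the diff.
const (
	offsetVersion        = 0
	offsetByteSize       = 1
	offsetCompressedFlag = 4
	offsetPayload        = 5
)

// encodeEnvelope builds the envelope: [version | size (3 bytes, big-endian) | isCompressed | payload].
func encodeEnvelope(version byte, compressed bool, payload []byte) []byte {
	env := make([]byte, offsetPayload)
	env[offsetVersion] = version
	size := uint32(len(payload))
	env[offsetByteSize] = byte(size >> 16)
	env[offsetByteSize+1] = byte(size >> 8)
	env[offsetByteSize+2] = byte(size)
	if compressed {
		env[offsetCompressedFlag] = 0x1
	}
	return append(env, payload...)
}

// decodeEnvelope reverses encodeEnvelope, with the same bounds check DecodeBlob performs.
func decodeEnvelope(env []byte) (version byte, compressed bool, payload []byte, err error) {
	if len(env) < offsetPayload {
		return 0, false, nil, fmt.Errorf("envelope too short: %d bytes", len(env))
	}
	size := uint32(env[offsetByteSize])<<16 | uint32(env[offsetByteSize+1])<<8 | uint32(env[offsetByteSize+2])
	if offsetPayload+size > uint32(len(env)) {
		return 0, false, nil, fmt.Errorf("declared size %d exceeds envelope size %d", size, len(env))
	}
	return env[offsetVersion], env[offsetCompressedFlag] == 0x1, env[offsetPayload : offsetPayload+size], nil
}

func main() {
	env := encodeEnvelope(7, false, []byte("batch payload"))
	v, c, p, err := decodeEnvelope(env)
	if err != nil || v != 7 || c || !bytes.Equal(p, []byte("batch payload")) {
		panic("round-trip failed")
	}
	fmt.Println("round-trip ok")
}
```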

func (d *DACodecV7) constructBlobPayload(batch *Batch) ([]byte, error) {
blobPayload := blobPayloadV7{
prevL1MessageQueueHash: batch.PrevL1MessageQueueHash,
postL1MessageQueueHash: batch.PostL1MessageQueueHash,
blocks: batch.Blocks,
}

return blobPayload.Encode()
}

// NewDABatchFromBytes decodes the given byte slice into a DABatch.
// Note: This function only populates the batch header, it leaves the blob-related fields empty.
func (d *DACodecV7) NewDABatchFromBytes(data []byte) (DABatch, error) {
daBatch, err := decodeDABatchV7(data)
if err != nil {
return nil, fmt.Errorf("failed to decode DA batch: %w", err)
}

if daBatch.version != CodecV7 {
return nil, fmt.Errorf("codec version mismatch: expected %d but found %d", CodecV7, daBatch.version)
}

return daBatch, nil
}

func (d *DACodecV7) NewDABatchFromParams(batchIndex uint64, blobVersionedHash, parentBatchHash common.Hash) (DABatch, error) {
return newDABatchV7(CodecV7, batchIndex, blobVersionedHash, parentBatchHash, nil, nil)
}

func (d *DACodecV7) DecodeDAChunksRawTx(_ [][]byte) ([]*DAChunkRawTx, error) {
return nil, errors.New("DecodeDAChunksRawTx is not implemented for DACodecV7, use DecodeBlob instead")
}

func (d *DACodecV7) DecodeBlob(blob *kzg4844.Blob) (DABlobPayload, error) {
rawBytes := bytesFromBlobCanonical(blob)

// read the blob envelope header
version := rawBytes[blobEnvelopeV7OffsetVersion]
if CodecVersion(version) != CodecV7 {
return nil, fmt.Errorf("codec version mismatch: expected %d but found %d", CodecV7, version)
}

// read the data size
blobPayloadSize := decodeSize3Bytes(rawBytes[blobEnvelopeV7OffsetByteSize:blobEnvelopeV7OffsetCompressedFlag])
if blobPayloadSize+blobEnvelopeV7OffsetPayload > uint32(len(rawBytes)) {
return nil, fmt.Errorf("blob envelope size exceeds the raw data size: %d > %d", blobPayloadSize, len(rawBytes))
}

payloadBytes := rawBytes[blobEnvelopeV7OffsetPayload : blobEnvelopeV7OffsetPayload+blobPayloadSize]

// read the compressed flag and decompress if needed
compressed := rawBytes[blobEnvelopeV7OffsetCompressedFlag]
if compressed != 0x0 && compressed != 0x1 {
return nil, fmt.Errorf("invalid compressed flag: %d", compressed)
}
if compressed == 0x1 {
var err error
if payloadBytes, err = decompressV7Bytes(payloadBytes); err != nil {
return nil, fmt.Errorf("failed to decompress blob payload: %w", err)
}
}

// read the payload
payload, err := decodeBlobPayloadV7(payloadBytes)
if err != nil {
return nil, fmt.Errorf("failed to decode blob payload: %w", err)
}

return payload, nil
}
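makeBlobCanonical and bytesFromBlobCanonical are defined elsewhere in the package. A plausible sketch of the convention they appear to follow (each 32-byte field element keeps a zero leading byte so its value stays below the BLS12-381 scalar modulus, leaving 31 usable bytes, hence a maxEffectiveBlobBytes of 4096 * 31):

```go
package main

import (
	"bytes"
	"fmt"
)

const (
	fieldElements    = 4096 // field elements per EIP-4844 blob
	bytesPerElement  = 32
	usableBytes      = 31 // first byte of each element stays zero
	maxEffectiveSize = fieldElements * usableBytes
)

// toCanonical spreads raw bytes over the blob, writing 31 data bytes after a
// leading zero byte in every 32-byte field element.
func toCanonical(data []byte) ([]byte, error) {
	if len(data) > maxEffectiveSize {
		return nil, fmt.Errorf("data too large: %d > %d", len(data), maxEffectiveSize)
	}
	blob := make([]byte, fieldElements*bytesPerElement)
	for i := 0; i < len(data); i += usableBytes {
		end := i + usableBytes
		if end > len(data) {
			end = len(data)
		}
		elem := i / usableBytes
		copy(blob[elem*bytesPerElement+1:], data[i:end])
	}
	return blob, nil
}

// fromCanonical drops the leading zero byte of each field element.
func fromCanonical(blob []byte) []byte {
	out := make([]byte, 0, maxEffectiveSize)
	for elem := 0; elem < fieldElements; elem++ {
		out = append(out, blob[elem*bytesPerElement+1:(elem+1)*bytesPerElement]...)
	}
	return out
}

func main() {
	data := []byte("some envelope bytes")
	blob, err := toCanonical(data)
	if err != nil {
		panic(err)
	}
	if !bytes.Equal(fromCanonical(blob)[:len(data)], data) {
		panic("round-trip failed")
	}
	fmt.Println("canonical round-trip ok")
}
```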

// DecodeTxsFromBlob is a no-op for DACodecV7; use DecodeBlob to access the transactions in a blob.
func (d *DACodecV7) DecodeTxsFromBlob(blob *kzg4844.Blob, chunks []*DAChunkRawTx) error {
return nil
}

// checkCompressedDataCompatibility compresses the given blob payload and checks the compressed data's compatibility.
// It returns the compressed bytes and true if compression should be used, or false if the payload should stay uncompressed.
func (d *DACodecV7) checkCompressedDataCompatibility(payloadBytes []byte) ([]byte, bool, error) {
compressedPayloadBytes, err := zstd.CompressScrollBatchBytes(payloadBytes)
if err != nil {
return nil, false, fmt.Errorf("failed to compress blob payload: %w", err)
}

if err = checkCompressedDataCompatibility(compressedPayloadBytes); err != nil {
log.Warn("Compressed data compatibility check failed", "err", err, "payloadBytes", hex.EncodeToString(payloadBytes), "compressedPayloadBytes", hex.EncodeToString(compressedPayloadBytes))
return nil, false, nil
}

// if the compressed data is not strictly smaller than the original, compression is not worthwhile
if len(compressedPayloadBytes) >= len(payloadBytes) {
log.Warn("Compressed data is bigger or equal to the original data", "payloadBytes", hex.EncodeToString(payloadBytes), "compressedPayloadBytes", hex.EncodeToString(compressedPayloadBytes))
return nil, false, nil
}

return compressedPayloadBytes, true, nil
}

// CheckChunkCompressedDataCompatibility checks the compressed data compatibility for a batch built from a single chunk.
// Note: For DACodecV7 this is a no-op that always reports compatibility, since there is no notion of DAChunk in this
// version. Blobs contain the entire batch data, and it is up to the prover to decide the chunk sizes.
func (d *DACodecV7) CheckChunkCompressedDataCompatibility(_ *Chunk) (bool, error) {
return true, nil
}

// CheckBatchCompressedDataCompatibility checks the compressed data compatibility for a batch.
func (d *DACodecV7) CheckBatchCompressedDataCompatibility(b *Batch) (bool, error) {
if len(b.Blocks) == 0 {
return false, errors.New("batch must contain at least one block")
}

if err := checkBlocksBatchVSChunksConsistency(b); err != nil {
return false, fmt.Errorf("failed to check blocks batch vs chunks consistency: %w", err)
}

payloadBytes, err := d.constructBlobPayload(b)
if err != nil {
return false, fmt.Errorf("failed to construct blob payload: %w", err)
}

_, compatible, err := d.checkCompressedDataCompatibility(payloadBytes)
if err != nil {
return false, fmt.Errorf("failed to check batch compressed data compatibility: %w", err)
}

return compatible, nil
}

func (d *DACodecV7) estimateL1CommitBatchSizeAndBlobSize(batch *Batch) (uint64, uint64, error) {
blobBytes := make([]byte, blobEnvelopeV7OffsetPayload)

payloadBytes, err := d.constructBlobPayload(batch)
if err != nil {
return 0, 0, fmt.Errorf("failed to construct blob payload: %w", err)
}

compressedPayloadBytes, enableCompression, err := d.checkCompressedDataCompatibility(payloadBytes)
if err != nil {
return 0, 0, fmt.Errorf("failed to check batch compressed data compatibility: %w", err)
}

if enableCompression {
blobBytes = append(blobBytes, compressedPayloadBytes...)
} else {
blobBytes = append(blobBytes, payloadBytes...)
}

return blobEnvelopeV7OffsetPayload + uint64(len(payloadBytes)), calculatePaddedBlobSize(uint64(len(blobBytes))), nil
}
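calculatePaddedBlobSize is defined elsewhere in the package. As a rough mental model, spreading data over field elements with 31 usable bytes each inflates it by about 32/31. A hypothetical approximation (an assumption: the real helper may round the final partial element differently):

```go
package main

import "fmt"

// paddedBlobSize approximates the blob footprint of dataSize payload bytes:
// each 32-byte field element carries 31 usable bytes, so we take
// ceil(dataSize / 31) elements of 32 bytes each. Illustrative only.
func paddedBlobSize(dataSize uint64) uint64 {
	elements := (dataSize + 30) / 31 // ceil(dataSize / 31)
	return elements * 32
}

func main() {
	fmt.Println(paddedBlobSize(0))   // 0
	fmt.Println(paddedBlobSize(31))  // 32: one full element
	fmt.Println(paddedBlobSize(62))  // 64: two full elements
	fmt.Println(paddedBlobSize(100)) // 128: four elements, last one partial
}
```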

// EstimateChunkL1CommitBatchSizeAndBlobSize estimates the L1 commit batch size and blob size for a single chunk.
func (d *DACodecV7) EstimateChunkL1CommitBatchSizeAndBlobSize(chunk *Chunk) (uint64, uint64, error) {
return d.estimateL1CommitBatchSizeAndBlobSize(&Batch{
Blocks: chunk.Blocks,
PrevL1MessageQueueHash: chunk.PrevL1MessageQueueHash,
PostL1MessageQueueHash: chunk.PostL1MessageQueueHash,
})
}

// EstimateBatchL1CommitBatchSizeAndBlobSize estimates the L1 commit batch size and blob size for a batch.
func (d *DACodecV7) EstimateBatchL1CommitBatchSizeAndBlobSize(batch *Batch) (uint64, uint64, error) {
return d.estimateL1CommitBatchSizeAndBlobSize(batch)
}

// EstimateBlockL1CommitCalldataSize calculates the calldata size in l1 commit for this block approximately.
// Note: For CodecV7 the calldata size is constant, independent of how many blocks or batches are submitted.
func (d *DACodecV7) EstimateBlockL1CommitCalldataSize(block *Block) (uint64, error) {
return 0, nil
}

// EstimateChunkL1CommitCalldataSize calculates the calldata size needed for committing a chunk to L1 approximately.
// Note: For CodecV7 the calldata size is constant, independent of how many blocks or batches are submitted. There is
// no notion of chunks in this version.
func (d *DACodecV7) EstimateChunkL1CommitCalldataSize(chunk *Chunk) (uint64, error) {
return 0, nil
}

// EstimateBatchL1CommitCalldataSize calculates the calldata size in l1 commit for this batch approximately.
// Note: For CodecV7 the calldata size is constant, independent of how many blocks or batches are submitted.
// Version + BatchHeader
func (d *DACodecV7) EstimateBatchL1CommitCalldataSize(batch *Batch) (uint64, error) {
return 1 + daBatchV7EncodedLength, nil
}

// EstimateChunkL1CommitGas calculates the total L1 commit gas for this chunk approximately.
// Note: For CodecV7 the calldata size is constant, independent of how many blocks or batches are submitted. There is
// no notion of chunks in this version.
func (d *DACodecV7) EstimateChunkL1CommitGas(chunk *Chunk) (uint64, error) {
return 0, nil
}

// EstimateBatchL1CommitGas calculates the total L1 commit gas for this batch approximately.
func (d *DACodecV7) EstimateBatchL1CommitGas(batch *Batch) (uint64, error) {
// TODO: adjust this after contracts are implemented
var totalL1CommitGas uint64

// Add extra gas costs
totalL1CommitGas += extraGasCost // constant to account for ops like _getAdmin, _implementation, _requireNotPaused, etc
totalL1CommitGas += 4 * coldSloadGas // 4 one-time cold sload for commitBatch
totalL1CommitGas += sstoreGas // 1 time sstore
totalL1CommitGas += baseTxGas // base gas for tx
totalL1CommitGas += calldataNonZeroByteGas // version in calldata

return totalL1CommitGas, nil
}
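The estimate above is a sum of named constants defined elsewhere in the package. With illustrative values (cold SLOAD 2100 per EIP-2929, SSTORE 20000 for a fresh slot, intrinsic tx gas 21000, nonzero calldata byte 16 per EIP-2028, and an assumed contract-overhead figure), the arithmetic works out as follows; the package's actual constants may differ:

```go
package main

import "fmt"

// Illustrative stand-ins for the package's constants; extraGasCost in
// particular is an assumption made up for this sketch.
const (
	extraGasCost           = 100000 // assumed overhead for _getAdmin, _implementation, _requireNotPaused, ...
	coldSloadGas           = 2100   // EIP-2929 cold storage read
	sstoreGas              = 20000  // storage write to a fresh slot
	baseTxGas              = 21000  // intrinsic transaction gas
	calldataNonZeroByteGas = 16     // EIP-2028 nonzero calldata byte
)

func main() {
	total := uint64(extraGasCost) +
		4*coldSloadGas + // four one-time cold SLOADs in commitBatch
		sstoreGas + // one SSTORE
		baseTxGas + // base gas for the tx
		calldataNonZeroByteGas // the version byte in calldata
	fmt.Println(total) // 149416 with these illustrative values
}
```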

// JSONFromBytes converts the bytes to a DABatch and then marshals it to JSON.
func (d *DACodecV7) JSONFromBytes(data []byte) ([]byte, error) {
batch, err := d.NewDABatchFromBytes(data)
if err != nil {
return nil, fmt.Errorf("failed to decode DABatch from bytes: %w", err)
}

jsonBytes, err := json.Marshal(batch)
if err != nil {
return nil, fmt.Errorf("failed to marshal DABatch to JSON, version %d, hash %s: %w", batch.Version(), batch.Hash(), err)
}

return jsonBytes, nil
}