Merge pull request #8 from Layr-Labs/epociask--chore-cleanup-dead-code
chore(op-plasma-eigenda): Remove dead and unnecessary code
ethenotethan authored May 21, 2024
2 parents 1b50ef2 + 5cfb4fc commit 3fb8d8e
Showing 26 changed files with 275 additions and 2,551 deletions.
45 changes: 45 additions & 0 deletions .env.example
@@ -0,0 +1,45 @@
# Server listening address
EIGEN_PLASMA_SERVER_ADDR=127.0.0.1

# Server listening port (default: 3100)
EIGEN_PLASMA_SERVER_PORT=3100

# Directory path to SRS tables
EIGEN_PLASMA_SERVER_EIGENDA_CACHE_PATH=

# Directory path to g1.point file
EIGEN_PLASMA_SERVER_EIGENDA_KZG_G1_PATH=

# Directory path to g2.point.powerOf2 file
EIGEN_PLASMA_SERVER_EIGENDA_G2_TAU_PATH=

# RPC endpoint of the EigenDA disperser
EIGEN_PLASMA_SERVER_EIGENDA_RPC=

# Wait time between retries of EigenDA blob status queries (default: 5s)
EIGEN_PLASMA_SERVER_EIGENDA_STATUS_QUERY_INTERVAL=5s

# Timeout for aborting an EigenDA blob dispersal (default: 25m0s)
EIGEN_PLASMA_SERVER_EIGENDA_STATUS_QUERY_TIMEOUT=25m0s

# Use TLS when connecting to the EigenDA disperser (default: true)
EIGEN_PLASMA_SERVER_EIGENDA_GRPC_USE_TLS=true

# Color the log output if in terminal mode (default: false)
EIGEN_PLASMA_SERVER_LOG_COLOR=false

# Format the log output (default: text)
# Supported formats: 'text', 'terminal', 'logfmt', 'json', 'json-pretty'
EIGEN_PLASMA_SERVER_LOG_FORMAT=text

# The lowest log level that will be output (default: INFO)
EIGEN_PLASMA_SERVER_LOG_LEVEL=INFO

# Metrics listening address (default: 0.0.0.0)
EIGEN_PLASMA_SERVER_METRICS_ADDR=0.0.0.0

# Enable the metrics server (default: false)
EIGEN_PLASMA_SERVER_METRICS_ENABLED=false

# Metrics listening port (default: 7300)
EIGEN_PLASMA_SERVER_METRICS_PORT=7300
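
These variables presumably mirror the server's CLI flags through environment bindings; the sketch below shows how such a mapping is commonly wired with urfave/cli v2 (flag names and defaults here are illustrative assumptions, not the repository's exact definitions).

```go
package main

import (
	"log"
	"os"

	"github.com/urfave/cli/v2"
)

func main() {
	app := &cli.App{
		Name: "da-server",
		Flags: []cli.Flag{
			// Hypothetical bindings: each CLI flag falls back to an
			// EIGEN_PLASMA_SERVER_* environment variable when unset.
			&cli.StringFlag{Name: "addr", Value: "127.0.0.1", EnvVars: []string{"EIGEN_PLASMA_SERVER_ADDR"}},
			&cli.IntFlag{Name: "port", Value: 3100, EnvVars: []string{"EIGEN_PLASMA_SERVER_PORT"}},
			&cli.StringFlag{Name: "eigenda-rpc", EnvVars: []string{"EIGEN_PLASMA_SERVER_EIGENDA_RPC"}},
		},
		Action: func(ctx *cli.Context) error {
			// At this point values come from either the CLI or the environment.
			log.Printf("listening on %s:%d", ctx.String("addr"), ctx.Int("port"))
			return nil
		},
	}
	if err := app.Run(os.Args); err != nil {
		log.Fatal(err)
	}
}
```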
2 changes: 2 additions & 0 deletions .github/workflows/actions.yml
@@ -40,6 +42,8 @@ jobs:
- name: Run Unit Tests
id: unit
run: |
make submodules &&
make srs &&
make test

gosec:
1 change: 1 addition & 0 deletions .gitignore
@@ -21,6 +21,7 @@
go.work

/bin
.env

## kzg cache
test/resources/SRSTables/
3 changes: 3 additions & 0 deletions .gitmodules
@@ -0,0 +1,3 @@
[submodule "operator-setup"]
path = operator-setup
url = https://github.com/Layr-Labs/eigenda-operator-setup.git
13 changes: 11 additions & 2 deletions Makefile
@@ -23,12 +23,12 @@ clean:
test:
go test -v ./... -test.skip ".*E2E.*"

e2e-test:
e2e-test: submodules srs
go test -timeout 50m -v ./test/e2e_test.go

.PHONY: lint
lint:
@if ! command -v golangci-lint &> /dev/null; \
@if ! test -f &> /dev/null; \
then \
echo "golangci-lint command could not be found...."; \
echo "\nTo install, please run $(GET_LINT_CMD)"; \
@@ -42,6 +42,15 @@ gosec:
@echo "$(GREEN) Running security scan with gosec...$(COLOR_END)"
gosec ./...

submodules:
git submodule update --init --recursive


srs:
if ! test -f /operator-setup/resources/g1.point; then \
cd operator-setup && ./srs_setup.sh; \
fi

.PHONY: \
op-batcher \
clean \
42 changes: 26 additions & 16 deletions README.md
@@ -1,7 +1,6 @@
# EigenDA Plasma DA Server

## Introduction

This simple DA server implementation supports ephemeral storage via EigenDA.

## EigenDA Configuration
@@ -11,25 +10,20 @@ Additional cli args are provided for targeting an EigenDA network backend:
- `--eigenda-status-query-retry-interval`: (default: 5s) How often a client will attempt a retry when awaiting network blob finalization.
- `--eigenda-use-tls`: (default: true) Whether or not to use TLS for grpc communication with disperser.
- `eigenda-g1-path`: Directory path to g1.point file
- `eigenda-g2-path`: Directory path to g2.point file
- `eigenda-g2-power-of-tau`: Directory path to g2.point.powerOf2 file
- `eigenda-cache-path`: Directory path to dump cached SRS tables

## Running Locally
1. Compile binary: `make da-server`
2. Run binary; e.g: `./bin/da-server --addr 127.0.0.1 --port 5050 --eigenda-rpc 127.0.0.1:443 --eigenda-status-query-timeout 45m --eigenda-g1-path test/resources/g1.point --eigenda-g2-path test/resources/g2.point --eigenda-g2-tau-path test/resources/g2.point.powerOf2 --eigenda-use-tls true`

## Breaking changes from existing OP-Stack
2. Run binary; e.g: `./bin/da-server --addr 127.0.0.1 --port 5050 --eigenda-rpc 127.0.0.1:443 --eigenda-status-query-timeout 45m --eigenda-g1-path test/resources/g1.point --eigenda-g2-tau-path test/resources/g2.point.powerOf2 --eigenda-use-tls true`

### Server / Client
Unlike the keccak256 DA server implementation, where commitments can be generated by the batcher via hashing, EigenDA commitments are represented as a constituent tuple `(blob_certificate, commitment)`. Certificates are only derivable from the network **once** a blob has been successfully finalized (i.e., dispersed, confirmed, and submitted within a batch to Ethereum). The existing `op-plasma` schema in the monorepo, which assumes a precomputed key, was broken in the following ways:
* The POST `/put` endpoint was modified to remove the `commitment` query param and to return the generated `commitment` value in the response body
* `DaClient` was modified to use an alternative request/response flow with the server for inserting and fetching preimages

**NOTE:** Optimism has planned support for the aforementioned client-->server interaction scheme within plasma. These changes will eventually be rebased accordingly.
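
For illustration, a hedged sketch of that request/response flow from a client's point of view (the endpoint paths, content type, and response shape are assumptions rather than the exact `DaClient` behavior):

```go
package daclient

import (
	"bytes"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
)

// putPreimage POSTs raw data to the DA server and returns the commitment
// generated server-side (assumed here to be returned as the raw response body).
func putPreimage(server string, data []byte) ([]byte, error) {
	resp, err := http.Post(server+"/put", "application/octet-stream", bytes.NewReader(data))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("put failed: %s", resp.Status)
	}
	return io.ReadAll(resp.Body)
}

// getPreimage fetches the preimage back using the commitment as the lookup key
// (hex-encoded in the path; the actual route shape may differ).
func getPreimage(server string, commitment []byte) ([]byte, error) {
	resp, err := http.Get(fmt.Sprintf("%s/get/0x%s", server, hex.EncodeToString(commitment)))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("get failed: %s", resp.Status)
	}
	return io.ReadAll(resp.Body)
}
```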
**Env File**
An env file can be provided to the binary so its values are ingested at runtime; e.g.:
1. Create env: `cp .env.example .env`
2. Pass into binary: `ENV_PATH=.env ./bin/da-server`
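
Conceptually, this ingestion amounts to reading `ENV_PATH`, parsing `KEY=VALUE` pairs, and exporting them into the process environment before flags are parsed; a minimal stdlib-only sketch (assuming plain `KEY=VALUE` lines, not necessarily the binary's actual loader):

```go
package envfile

import (
	"bufio"
	"os"
	"strings"
)

// LoadEnvFile reads the file named by ENV_PATH (if set) and exports each
// KEY=VALUE line into the process environment. Comments and blank lines are
// skipped. This is an illustrative helper, not the server's own code.
func LoadEnvFile() error {
	path := os.Getenv("ENV_PATH")
	if path == "" {
		return nil // nothing to load
	}
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		key, value, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		if err := os.Setenv(strings.TrimSpace(key), strings.TrimSpace(value)); err != nil {
			return err
		}
	}
	return scanner.Err()
}
```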

### Commitment Schemas
An `EigenDACommitment` layer type has been added that supports verification against its respective pre-images. Otherwise this logic is pseudo-identical to the existing `Keccak256` commitment type. The commitment is encoded via the following byte array:
An `EigenDACommitment` layer type has been added that supports verification against its respective pre-images. The commitment is encoded via the following byte array:
```
0 1 2 3 4 N
|--------|--------|--------|--------|-----------------|
@@ -38,7 +32,7 @@

```

The raw commitment for EigenDA is encoding the following certificate and kzg fields:
The `raw commitment` for EigenDA encodes the following certificate and KZG fields:
```go
type Cert struct {
BatchHeaderHash []byte
@@ -49,17 +43,33 @@
}
```
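
For illustration, a versioned commitment of this shape can be assembled by prepending the header bytes to the opaque `raw commitment` and split back apart by slicing; the header length and byte values below are placeholders, not the actual constants used by the commitment type.

```go
package commitment

import "errors"

// headerLen mirrors the diagram above: a few type/version prefix bytes
// followed by the opaque raw commitment (serialized cert and KZG commitment).
const headerLen = 4

// Encode prepends placeholder commit-type/da-layer/type/version bytes.
func Encode(rawCommitment []byte) []byte {
	header := []byte{0x01, 0x00, 0x00, 0x00} // placeholder values
	return append(header, rawCommitment...)
}

// Decode splits a commitment back into its header and raw commitment parts.
func Decode(comm []byte) (header, rawCommitment []byte, err error) {
	if len(comm) <= headerLen {
		return nil, nil, errors.New("commitment too short")
	}
	return comm[:headerLen], comm[headerLen:], nil
}
```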

**NOTE:** Commitments are cryptographically verified against the data fetched from EigenDA for all `/get` calls. The server will respond with status `500` in the event that EigenDA provides falsified data that does not correspond to the client-provided commitment. This feature isn't flag-guarded and is part of standard operation.
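
A conceptual sketch of that server-side behavior (the `Store` interface and route shape here are hypothetical stand-ins, not the actual handler):

```go
package server

import (
	"context"
	"encoding/hex"
	"net/http"
	"strings"
)

// Store is a hypothetical stand-in for the plasma store: Get is expected to
// fetch the blob for a commitment and cryptographically verify it, returning
// an error when the retrieved data does not match the commitment.
type Store interface {
	Get(ctx context.Context, commitment []byte) ([]byte, error)
}

// getHandler sketches the described behavior: any verification failure is
// surfaced to the caller as a 500 rather than returning unverified data.
func getHandler(store Store) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		// Commitment assumed to be hex-encoded in the path, e.g. /get/0x<commitment>.
		raw := strings.TrimPrefix(strings.TrimPrefix(r.URL.Path, "/get/"), "0x")
		comm, err := hex.DecodeString(raw)
		if err != nil {
			http.Error(w, "malformed commitment", http.StatusBadRequest)
			return
		}
		data, err := store.Get(r.Context(), comm)
		if err != nil {
			http.Error(w, "commitment verification failed", http.StatusInternalServerError)
			return
		}
		_, _ = w.Write(data)
	}
}
```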

## Testing
Some unit tests have been introduced to assert correctness of encoding/decoding logic and mocked server interactions. These can be ran via `make test`.
Some unit tests have been introduced to assert the correctness of:
* DA Certificate encoding/decoding logic
* commitment verification logic

Unit tests can be run via `make test`.

Otherwise, E2E tests (`test/e2e_test.go`) exist which assert that a commitment can be generated when inserting some arbitrary data into the server, and that the data can be read back via the client using the commitment as the lookup key. These can be run via `make e2e-test`. Please **note** that this test uses the EigenDA Holesky network, which is subject to rate-limiting and slow confirmation times *(i.e., >10 minutes per blob confirmation)*. Please see EigenDA's [inabox](https://github.com/Layr-Labs/eigenda/tree/master/inabox#readme) if you'd like to spin up a local DA network for quicker iteration testing.


## Downloading SRS
KZG commitment verification requires constructing the SRS string from the proper trusted setup values (g1, g2, g2.power_of_tau). These values can be downloaded locally using the [srs_setup](https://github.com/Layr-Labs/eigenda-operator-setup/blob/master/srs_setup.sh) script in the operator setup repo.
## Downloading Mainnet SRS
KZG commitment verification requires constructing the SRS string from the proper trusted setup values (g1, g2, g2.power_of_tau). These values can be downloaded locally using the [operator-setup](https://github.com/Layr-Labs/eigenda-operator-setup) submodule via the following commands.

1. `make submodules`
2. `make srs`

## Hardware Requirements
The following specs are recommended for running on a single production server:
* 12 GB SSD (assuming SRS values are stored on instance)
* 16 GB RAM
* 1-2 cores CPU

## Resources
- [op-stack](https://github.com/ethereum-optimism/optimism)
- [plasma spec](https://specs.optimism.io/experimental/plasma.html)
- [eigen da](https://github.com/Layr-Labs/eigenda)


1,678 changes: 0 additions & 1,678 deletions bindings/dataavailabilitychallenge.go

This file was deleted.

73 changes: 0 additions & 73 deletions cli.go

This file was deleted.

47 changes: 16 additions & 31 deletions cmd/daserver/entrypoint.go
@@ -28,40 +28,25 @@ func StartDAServer(cliCtx *cli.Context) error {
log := oplog.NewLogger(oplog.AppOut(cliCtx), oplog.ReadCLIConfig(cliCtx)).New("role", "eigenda_plasma_server")
oplog.SetGlobalLogHandler(log.Handler())

log.Info("Initializing EigenDA Plasma DA server with config ...")
log.Info("Initializing EigenDA Plasma DA server...")

var store plasma.PlasmaStore
daCfg := cfg.EigenDAConfig

if cfg.FileStoreEnabled() {
log.Info("Using file storage", "path", cfg.FileStoreDirPath)
store = plasma_store.NewFileStore(cfg.FileStoreDirPath)
} else if cfg.S3Enabled() {
log.Info("Using S3 storage", "bucket", cfg.S3Bucket)
s3, err := plasma_store.NewS3Store(cliCtx.Context, cfg.S3Bucket)
if err != nil {
return fmt.Errorf("failed to create S3 store: %w", err)
}
store = s3
} else if cfg.EigenDAEnabled() {
daCfg := cfg.EigenDAConfig

v, err := verify.NewVerifier(daCfg.KzgConfig())
if err != nil {
return err
}
v, err := verify.NewVerifier(daCfg.KzgConfig())
if err != nil {
return err
}

eigenda, err := plasma_store.NewEigenDAStore(
cliCtx.Context,
eigenda.NewEigenDAClient(
log,
daCfg,
),
v,
)
if err != nil {
return fmt.Errorf("failed to create EigenDA store: %w", err)
}
store = eigenda
store, err := plasma_store.NewEigenDAStore(
cliCtx.Context,
eigenda.NewEigenDAClient(
log,
daCfg,
),
v,
)
if err != nil {
return fmt.Errorf("failed to create EigenDA store: %w", err)
}
server := plasma.NewDAServer(cliCtx.String(ListenAddrFlagName), cliCtx.Int(PortFlagName), store, log, m)
