diff --git a/ecosystem/opUSDC/op-usdc.md b/ecosystem/opUSDC/op-usdc.md index ba9aac4c..3bdcea23 100644 --- a/ecosystem/opUSDC/op-usdc.md +++ b/ecosystem/opUSDC/op-usdc.md @@ -67,7 +67,7 @@ Once everything is set, the adapters are ready to transfer USDC between domains. ### 2) Deposits & withdrawals -For an user to make a deposit, the process remains as simple as follows: +For a user to make a deposit, the process remains as simple as follows: 1. Users approve the `L1OpUSDCBridgeAdapter` to spend USDC. 2. Users proceed to deposit USDC by calling the contract. @@ -110,4 +110,4 @@ By doing so, Circle is fully onboarded in the OP Chain, and the old system is de # Challenges ahead 1. **Interop-compatibility**: A solution is still needed for cross-chain transfers between `BridgedUSDC` and other USDC versions implemented on other OP Chains, such as native ones. An intent-based design promises to be a viable solution for moving USDC across the Superchain. -2. **Token pathfinding:** OP Chains must collaborate with bridge UI maintainers, third-party operators, and token list maintainers to provide accurate token bridging paths. This is important when transferring from one OP Chain to another where the USDC used is not fungible between them. Users would utilize a combination of CCTP and `OpUSDCBridgeAdapter`, passing through L1 to reach the destination and receive the exact amount bridged. However, L2-L2 transfers can be economically more convenient through a combination of both CCTP and Interop via intents. A well-designed token pathfinding system should ultimately help users save costs and avoid potential mistakes. \ No newline at end of file +2. **Token pathfinding:** OP Chains must collaborate with bridge UI maintainers, third-party operators, and token list maintainers to provide accurate token bridging paths. This is important when transferring from one OP Chain to another where the USDC used is not fungible between them. 
Users would utilize a combination of CCTP and `OpUSDCBridgeAdapter`, passing through L1 to reach the destination and receive the exact amount bridged. However, L2-L2 transfers can be economically more convenient through a combination of both CCTP and Interop via intents. A well-designed token pathfinding system should ultimately help users save costs and avoid potential mistakes. diff --git a/ecosystem/sendRawTransactionConditional/proposal.md b/ecosystem/sendRawTransactionConditional/proposal.md index 60625c20..5697ebbb 100644 --- a/ecosystem/sendRawTransactionConditional/proposal.md +++ b/ecosystem/sendRawTransactionConditional/proposal.md @@ -4,7 +4,7 @@ To fully unlock ERC-4337 on the OP-Stack. # Summary -By providing an auxilliary transaction submission mechanism, [eth_sendRawTransactionConditional](https://notes.ethereum.org/@yoav/SkaX2lS9j), we can enable Bundlers to submit [Entrypoint](https://eips.ethereum.org/EIPS/eip-4337#entrypoint-definition) transactions with stronger guarantees, avoiding costly inadvertent reverts. The endpoint is an extension to `eth_sendRawTransaction` with the added ability to attach a conditional. The conditional are a set of options that specify requirements for inclusion, otherwise rejected out of protocol. +By providing an auxiliary transaction submission mechanism, [eth_sendRawTransactionConditional](https://notes.ethereum.org/@yoav/SkaX2lS9j), we can enable Bundlers to submit [Entrypoint](https://eips.ethereum.org/EIPS/eip-4337#entrypoint-definition) transactions with stronger guarantees, avoiding costly inadvertent reverts. The endpoint is an extension to `eth_sendRawTransaction` with the added ability to attach a conditional. The conditional is a set of options that specify requirements for inclusion; if they are not met, the transaction is rejected out of protocol. 
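To make the attached conditional concrete, the block-number and timestamp range options described in the linked spec can be sketched in Go as follows. This is an illustrative model only — the type and field names here are assumptions, and the real spec additionally supports account-state checks (nonces, storage slots, and storage roots) via `knownAccounts`:

```go
package main

import "fmt"

// Conditional models a subset of the inclusion options from the linked
// spec. Field names are illustrative; the real endpoint also accepts
// "knownAccounts" checks on account state.
type Conditional struct {
	BlockNumberMin, BlockNumberMax *uint64
	TimestampMin, TimestampMax     *uint64
}

// Check reports whether the given chain view still satisfies the
// conditional, returning an error describing the first violated option.
func (c *Conditional) Check(blockNumber, timestamp uint64) error {
	if c.BlockNumberMin != nil && blockNumber < *c.BlockNumberMin {
		return fmt.Errorf("block %d below minimum %d", blockNumber, *c.BlockNumberMin)
	}
	if c.BlockNumberMax != nil && blockNumber > *c.BlockNumberMax {
		return fmt.Errorf("block %d above maximum %d", blockNumber, *c.BlockNumberMax)
	}
	if c.TimestampMin != nil && timestamp < *c.TimestampMin {
		return fmt.Errorf("timestamp %d below minimum %d", timestamp, *c.TimestampMin)
	}
	if c.TimestampMax != nil && timestamp > *c.TimestampMax {
		return fmt.Errorf("timestamp %d above maximum %d", timestamp, *c.TimestampMax)
	}
	return nil
}

func main() {
	max := uint64(100)
	c := Conditional{BlockNumberMax: &max}
	fmt.Println(c.Check(90, 0) == nil)  // conditional still satisfied
	fmt.Println(c.Check(101, 0) == nil) // would be rejected out of protocol
}
```

A node would run such a check twice: once against the latest unsafe head at submission time, and again while building the block that would include the transaction.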
We can implement this endpoint in op-geth with an additional layer of external validation to ensure this endpoint is safe from DoS, does not supersede `eth_sendRawTransaction`, and can be deprecated if native account abstraction efforts in the future absolves the need for this endpoint. @@ -15,7 +15,7 @@ We can implement this endpoint in op-geth with an additional layer of external v Account Abstraction is a [growing ecosystem](https://dune.com/sixdegree/account-abstraction-overview) and solution to many wallet UX issues. Polygon currently dominates the market on the number of active smart accounts and we want to ensure the op-stack enables this growth in the superchain as the ecosystem continues to evolve. -Bundlers aggregate [UserOp](https://eips.ethereum.org/EIPS/eip-4337#useroperation)s from an external mempool into a single transaction sent to the enshrined 4337 [Entrypoint](https://eips.ethereum.org/EIPS/eip-4337#entrypoint-definition) contract. It is the Bundler responsibility to ensure every UserOp can succesfully execute its validation step, otherwise resulting in a reverted top-level transaction. With private Bundler mempools, the likelihood of these reverts are small as Bundlers are operating over different sets of UserOps. Unless a smart account frontruns a submitted UserOp to the Entrypoint, a Bundler doesn't have to worry about changes in network state such as an incremented smart account nonce invalidating their entire transaction during the building process. +Bundlers aggregate [UserOp](https://eips.ethereum.org/EIPS/eip-4337#useroperation)s from an external mempool into a single transaction sent to the enshrined 4337 [Entrypoint](https://eips.ethereum.org/EIPS/eip-4337#entrypoint-definition) contract. It is the Bundler's responsibility to ensure every UserOp can successfully execute its validation step, otherwise resulting in a reverted top-level transaction. 
With private Bundler mempools, the likelihood of these reverts is small as Bundlers are operating over different sets of UserOps. Unless a smart account frontruns a submitted UserOp to the Entrypoint, a Bundler doesn't have to worry about changes in network state such as an incremented smart account nonce invalidating their entire transaction during the building process. The account abstraction [roadmap](https://notes.ethereum.org/@yoav/AA-roadmap-May-2024) is chugging along and shared 4337 mempools are coming into [production](https://medium.com/etherspot/decentralized-future-erc-4337-shared-mempool-launches-on-ethereum-b6c860072f41), launched on Ethereum and some L2 testnets, decentralizing 4337 infrastructure. Multiple Bundlers operating on this mempool can create transactions including the same UserOp, increasing the likelihood of reverts due to changed account states, like account nonces. These reverts are too high of a cost for bundlers to operate. @@ -25,18 +25,18 @@ This problem is worked around on L1 through special block builders like Flashbot ## Do Nothing -4337 account abstraction is currently live on optimism. Dapps utilize the bundler endpoints that are come with a vendor-specific SDK -- Alchemy, Pimilico, Thirdweb, etc -- each with their own mempools. As 4337 infrastructure becomes more permissionless, we will later have to play catch-up to ensure the op-stack remains compatible while other L2 offerings have already moved towards supporting `eth_sendRawTransactionConditional`. +4337 account abstraction is currently live on Optimism. Dapps utilize the bundler endpoints that come with a vendor-specific SDK -- Alchemy, Pimlico, Thirdweb, etc -- each with their own mempools. As 4337 infrastructure becomes more permissionless, we will later have to play catch-up to ensure the op-stack remains compatible while other L2 offerings have already moved towards supporting `eth_sendRawTransactionConditional`. 
## Verticalize the OP-Stack -Rather than externalize the 4337 mempool, the op-stack could natively offer a UserOp mempool alongside the regular tx mempool. When creating a new block, the sequencer can pull from the two, ensuring the bundled UserOps do not conflict with the latest network state. However, this adds additional complexity to the stack, where the proposer-builder seperation in 4337 nicely keeps these concerns seperate. +Rather than externalize the 4337 mempool, the op-stack could natively offer a UserOp mempool alongside the regular tx mempool. When creating a new block, the sequencer can pull from the two, ensuring the bundled UserOps do not conflict with the latest network state. However, this adds additional complexity to the stack, where the proposer-builder separation in 4337 nicely keeps these concerns separate. Verticalization is possible in the proposed solution by configuring the allowlist of the authenticated `eth_sendRawTransactionConditional` endpoint to either a self-managed bundler or that of a partner, achieving the same outcome as native mempool without the complexity of a deeper change in the op-stack. # Proposed Solution -1. Implement `eth_sendRawTransactionConditional` in op-geth with support for the conditionals described in the [spec](https://notes.ethereum.org/@yoav/SkaX2lS9j), for which a draft implementation [exists](https://github.com/ethereum/go-ethereum/compare/master...tynes:go-ethereum:eip4337) but requires a refresh. The conditional attached to the transaction is checked against the latest unsafe head the prior to mempool submisison and re-checked when included in the block being built. +1. Implement `eth_sendRawTransactionConditional` in op-geth with support for the conditionals described in the [spec](https://notes.ethereum.org/@yoav/SkaX2lS9j), for which a draft implementation [exists](https://github.com/ethereum/go-ethereum/compare/master...tynes:go-ethereum:eip4337) but requires a refresh. 
The conditional attached to the transaction is checked against the latest unsafe head prior to mempool submission and re-checked when included in the block being built. * There exists implementations for [Arbitrum](https://github.com/OffchainLabs/go-ethereum/blob/da4c975e354648c7be814ab9667b42f1c19cdc0f/arbitrum/conditionaltx.go#L25) and [Polygon](https://github.com/maticnetwork/bor/blob/b8ad00095a9e3e508517d802c5358a5ce3e81ed3/internal/ethapi/bor_api.go#L70) conforming to the [spec](https://notes.ethereum.org/@yoav/SkaX2lS9j). On Polygon, the API is authenticated under the` bor` namespace but public on Arbitrum under the `eth` namespace. @@ -48,13 +48,13 @@ * **Only 4337 Entrypoint Contract Support**: `tx.to() == entrypoint_contract_address` - The rationale is to make it easier to rollback if deprecated in the future due to native account abstraction or better solutions. Otherwise new uses case might create unwanted dependencies on this endpoint. + The rationale is to make it easier to roll back if deprecated in the future due to native account abstraction or better solutions. Otherwise new use cases might create unwanted dependencies on this endpoint. There does exist different [versions](https://github.com/eth-infinitism/account-abstraction/releases) of the Entrypoint contract as the 4337 spec is iterated on. We'll need to stay up to date with these version as a part of op-stack [preinstalls](https://docs.optimism.io/builders/chain-operators/features/preinstalls) and pass through calls to all the supported versions of `EntryPoint`. * **Authentication**: Allowlist Policy - Requests to this endpoint MUST be authenticated with a secp256K1 keypair, similar to [flashbots authentication](https://docs.flashbots.net/flashbots-auction/advanced/rpc-endpoint#authentication). 
The [EIP-191](https://eips.ethereum.org/EIPS/eip-191) hash of the json-rpc payload must be signed and included in the the `X-Optimism-Signature` header of the request in a `:` format. + Requests to this endpoint MUST be authenticated with a secp256k1 keypair, similar to [flashbots authentication](https://docs.flashbots.net/flashbots-auction/advanced/rpc-endpoint#authentication). The [EIP-191](https://eips.ethereum.org/EIPS/eip-191) hash of the json-rpc payload must be signed and included in the `X-Optimism-Signature` header of the request in a `:` format. With the public key of the caller, we can implement an allowlist policy module, allowing the chain operator to verticalize by running their own bundler or delegating to partners. This allowlist module does NOT have to be enabled for permissionless bundler participation. @@ -85,13 +85,13 @@ With this initial set of validation rules, we should be in a good position to sa * **conditional mempool latency**: understanding of how long conditional transactions are sitting in the mempool. We would expect failed inclusion the longer a conditional tx remains in the mempool due to state changes since submission. Elevated latencies in combination with a low inclusion success rate will indicate if the proxy should be enforcing a higher minimum fee for these transactions to minimize mempool time. -The public keys of known bundlers should be collected and registered. With alerts setup on the metrics above, when in a state of degradation, the allowlist policy should first be enabled to avoid 4337 downtime while assessing next steps. If still in a degradaded state, the endpoint should then be fully shutoff, having bundlers revert to `sendRawTransaction` until further iteration. Both of these actions should occur in tandem with public comms. +The public keys of known bundlers should be collected and registered. 
With alerts set up on the metrics above, when in a state of degradation, the allowlist policy should first be enabled to avoid 4337 downtime while assessing next steps. If still in a degraded state, the endpoint should then be fully shut off, having bundlers revert to `sendRawTransaction` until further iteration. Both of these actions should occur in tandem with public comms. -Additional validation rules can be applied to boost performance of this endpoint. Here are a some extra applicable validation rules: +Additional validation rules can be applied to boost performance of this endpoint. Here are some extra applicable validation rules: * **Elevated minimum fee** - The longer a conditional transaction is in the mempool, the more likely it is to fail when included in a block due to state changes since submission. To minimize this latency, we may want to monitor the base fee of the network and add a premium for conditional transactions in order to minimize the time spent the mempool. + The longer a conditional transaction is in the mempool, the more likely it is to fail when included in a block due to state changes since submission. To minimize this latency, we may want to monitor the base fee of the network and add a premium for conditional transactions in order to minimize the time spent in the mempool. * **Local Rate Limiting** @@ -107,6 +107,6 @@ Additional validation rules can be applied to boost performance of this endpoint **Risk 2: Implemented validation isn't enough for permissionless bundler participation.** The listed validation rules are a starting point and there's room for exploration in horizontally scalable validation. However we can fallback to a permissioned allowlist for this endpoint which still enables 4337 shared mempools and likely makes no difference to dapp developers which already use a small subset of known infra providers. 
-**Risk 3: Generalized External Validation.** Validation policies should be DRY'd between interop, eth_sendRawTransactionConditional, and any future use cases. These policies that are implementated should work well between these usecases as this approach is adopted and scales. The tech-debt here can grow quickly if each solution has it's own methods of preventing DoS and validation, especially operationally. +**Risk 3: Generalized External Validation.** Validation policies should be DRY'd between interop, eth_sendRawTransactionConditional, and any future use cases. The policies that are implemented should work well across these use cases as this approach is adopted and scales. The tech-debt here can grow quickly if each solution has its own methods of preventing DoS and validation, especially operationally. -**Risk 4: Excessive Compute/Operational Requirements**. This endpoint is a feature provided out of protocol by the block builder -- the sequencer. With failed conditional transactions, the sequencer is not compensated with charged gas like when processing a reverted transaction, nor for the addtional checks of successful conditional transactions. There's also the added overhead of managing new services to mitigate DoS and increases the surface area where manual intervention will be required. +**Risk 4: Excessive Compute/Operational Requirements**. This endpoint is a feature provided out of protocol by the block builder -- the sequencer. With failed conditional transactions, the sequencer is not compensated with charged gas like when processing a reverted transaction, nor for the additional checks of successful conditional transactions. There's also the added overhead of managing new services to mitigate DoS, which increases the surface area where manual intervention will be required. 
The uncompensated compute or inability to effectively mitigate DoS may be a reason to roll back this feature. diff --git a/governance/delegation-interop.md b/governance/delegation-interop.md index 74394544..19811e01 100644 --- a/governance/delegation-interop.md +++ b/governance/delegation-interop.md @@ -10,12 +10,12 @@ There will be two main changes to the existing `GovernanceDelegation` [contract] As described in the [advanced delegation](advanced-delegation.md) design document, the `GovernanceToken` contract supports advanced delegation by integrating with the `GovernanceDelegation` contract. However, the `GovernanceDelegation` contract does not support interoperability, which means that when the `GovernanceToken` gets deployed across networks in the Superchain, the total voting supply in OP Mainnet may decrease as the token would be fragmented across other networks. -Therefore, this design document aims to modify the `GovernanceDelegation` contract to support cross-chain delegations of voting power, maximazing user experience and ensuring that relayers have the proper incentives to handle cross-chain messages of delegations. +Therefore, this design document aims to modify the `GovernanceDelegation` contract to support cross-chain delegations of voting power, maximizing user experience and ensuring that relayers have the proper incentives to handle cross-chain delegation messages. # Alternatives Considered -When originally considering this [problem](https://github.com/ethereum-optimism/specs/blob/5046a5b7f95e7a238cbfabc2b353709c9737b50b/specs/governance/alligator-interop.md), the idea was to create a hook during the `afterTokenTransfer` event. As alluded to above, this do not provide some of the desired functionality.It also makes a few assumptions about the behaviors of delegates that receive tokens and stifles their ability to create partial delegations in a low-cost manner. 
Moreover, if there was a want to later batch updates of voting power to mainnet it would be costly to those operating governance. +When originally considering this [problem](https://github.com/ethereum-optimism/specs/blob/5046a5b7f95e7a238cbfabc2b353709c9737b50b/specs/governance/alligator-interop.md), the idea was to create a hook during the `afterTokenTransfer` event. As alluded to above, this does not provide some of the desired functionality. It also makes a few assumptions about the behaviors of delegates that receive tokens and stifles their ability to create partial delegations in a low-cost manner. Moreover, if there was a want to later batch updates of voting power to mainnet it would be costly to those operating governance. Another active consideration is the use of block timestamp of each L2 instead of block number while implementing this new solution. The primary reason for block numbers to be used in its stead, stems from a concern of manipulation by sequencers and the possible delays that might need to be enforced between events to ensure correctness. @@ -53,8 +53,8 @@ The implementation should maintain the following invariants: # Risks & Uncertainties -1. The biggest risk in implementing this change is ensuring that all existing voting power is preserved such that the checkpoints that already exist on the contract is accessible. +1. The biggest risk in implementing this change is ensuring that all existing voting power is preserved such that the checkpoints that already exist on the contract are accessible. 2. Indexers must now accept a delay in updates if wishing to give an accurate voting status for a particular delegate. 3. Batch ordering of checkpoints across all L2 chains. Mainly with exposing a function that allows the Governor contract to batch process messages being received in the L2Crosschain Inbox. -4. 
`GovToken` holders on for each chain will need to submit delegations on that native chain they exist on due to the fact that voting power is now tracked seperately. +4. `GovToken` holders on each chain will need to submit delegations on the native chain they exist on, because voting power is now tracked separately. 5. Another potential complexity is when conducting a vote that will eventually be reflected on OP main net the block number must be within a certain range/threshold for checkpointing purposes. diff --git a/protocol/deputy-pause-module.md b/protocol/deputy-pause-module.md index e4c86749..8a683464 100644 --- a/protocol/deputy-pause-module.md +++ b/protocol/deputy-pause-module.md @@ -18,7 +18,7 @@ same configuration as the original Foundation Safe was made to be the Deputy Gua Even with this second Foundation Safe, pre-signed pauses are invalidated on a regular basis whenever an upgrade touches the `DeputyGuardianModule`. Gripes with the pre-signed pause system -could fill a whole role of toilet paper and are not just limited to the issues noted above. +could fill a whole roll of toilet paper and are not just limited to the issues noted above. ## Proposed Solution @@ -36,14 +36,14 @@ Foundation Operations Safe entirely, generally simplifying our multisig setup. We propose using an Externally Owned Account (EOA) instead of a smart contract as the deputy here. Using an EOA is simpler and easier to reason about. The private key for this EOA can be stored -securely and made accessible to a limited set of security personel. +securely and made accessible to a limited set of security personnel. ### Single Account vs Mapping We propose having the `DeputyPauseModule` use a single account instead of a mapping of accounts that are able to act as the deputy. A single account is easier to keep track of and having multiple accounts does not decrease the risk involved with this module, it simply spreads it across more -private keys. 
Having multiple keys be able to act as the deputy here might have some slighty +private keys. Having multiple keys be able to act as the deputy here might have some slight benefits but this begins to scope creep beyond the original intention of replacing the pre-signed pause functionality. diff --git a/protocol/disableInitializers-in-constructor.md b/protocol/disableInitializers-in-constructor.md index 4dee6299..6b39d533 100644 --- a/protocol/disableInitializers-in-constructor.md +++ b/protocol/disableInitializers-in-constructor.md @@ -48,4 +48,4 @@ No other alternatives considered. ### Consideration -- Another rule that might help this proposal is asserting via Semgrep that all `initialize()` functions have an `external` modifier and there's no occurence of `this.initialize()`. This would help in enforcing this rule. Is there any scenario where `initialize()` functions need to be called from within a contract or an inheriting contract? +- Another rule that might help this proposal is asserting via Semgrep that all `initialize()` functions have an `external` modifier and there's no occurrence of `this.initialize()`. This would help in enforcing this rule. Is there any scenario where `initialize()` functions need to be called from within a contract or an inheriting contract? diff --git a/protocol/dispute-game-creators.md b/protocol/dispute-game-creators.md index 567d8cb3..233eca89 100644 --- a/protocol/dispute-game-creators.md +++ b/protocol/dispute-game-creators.md @@ -10,7 +10,7 @@ The `DisputeGameFactory.sol` should be upgraded to allow for multiple "Creator" The Fault Dispute Game contracts are currently made up of a couple of core contracts, `AnchorStateRegistry.sol`, `DelayedWETH.sol`, `DisputeGameFactory.sol`, and `FaultDisputeGame.sol/PermissionedFaultDisputeGame.sol`. 
When deploying "standard" rollups we currently need to deploy a new set of all of these contracts because of specific immutable arguments in the implementation of `FaultDisputeGame.sol`, which in turn requires a specific `FaultDisputeGameFactory.sol` and in turn `AnchorStateRegistry.sol`. If `FaultDisputeGame.sol` implementations did not have chain-specific immutables, then they could simply be added at the time of cloning on a per-chain basis, requiring only one set of deployments and making the process simpler. -An important nuance here is that each `FaultDisputeGame.sol` deployment has both a minimal proxy with immutable args deployed that points at an implementation specific to the `FaultDisputeGameFactory.sol`. The two sets of immutable args are first specific to the fault dipuste game, then the second set on the implementation are specific to the rollup. +An important nuance here is that each `FaultDisputeGame.sol` deployment consists of a minimal proxy with immutable args that points at an implementation specific to the `FaultDisputeGameFactory.sol`. Of the two sets of immutable args, the first (on the proxy) is specific to the fault dispute game, while the second (on the implementation) is specific to the rollup. # Proposed Solution diff --git a/protocol/external-block-production.md b/protocol/external-block-production.md index 7bf54eb9..d7d53f0c 100644 --- a/protocol/external-block-production.md +++ b/protocol/external-block-production.md @@ -12,7 +12,7 @@ The purpose of this design-doc is to propose and get buy-in in on a first step t # Summary -This document proposes a sidecar to `op-node` for requesting block production from an external party. This sidecar has two roles: 1) obfuscate the presence of builder software from the `op-node` and `op-geth` software and 2) manage communication with a block builder and handle block delivery to `op-node`. 
The first role is achieved via the sidecar forwarding all API calls to it's local `op-geth` and delivering 1 block exactly for each block request from `op-node`. The second role is achieved by the sidecar implementing the communication protocol with the builder, including authentication, and payload selection rules. +This document proposes a sidecar to `op-node` for requesting block production from an external party. This sidecar has two roles: 1) obfuscate the presence of builder software from the `op-node` and `op-geth` software and 2) manage communication with a block builder and handle block delivery to `op-node`. The first role is achieved via the sidecar forwarding all API calls to its local `op-geth` and delivering exactly one block for each block request from `op-node`. The second role is achieved by the sidecar implementing the communication protocol with the builder, including authentication and payload selection rules. By decoupling the block construction process from the Sequencer's Execution Engine, operators can tailor transaction sequencing rules without diverging from the standard Optimism Protocol Client. This flexibility allows individual chains to experiment on sequencing features, providing a means for differentiation. This minimum viable design also includes a local block production fallback as a training wheel to ensure liveness and network performance in the event of local Block Builder failure. @@ -24,7 +24,7 @@ The tight coupling of proposer and sequencer roles in the `op-node` limits the a ## Context -As of September 2024, the `op-node` sofware in `sequencer` mode performs both the role of "proposer" and "sequencer". As the "proposer", the `op-node` propagates a proposal, with full authority, for the next block in the canonical L2 chain to the network. Unlike a layer 1 "proposer", it does not have a "vote" in the finality of that block, other than by committing it to the L1 chain. 
As a "sequencer", it is also responsible for the ordering of transactions in the L2 block's it proposes. Today, it uses a `op-geth`, a diff-minimized fork of the Layer 1 Execution client `geth`, and it's stock transaction ordering algorithm. +As of September 2024, the `op-node` software in `sequencer` mode performs both the role of "proposer" and "sequencer". As the "proposer", the `op-node` propagates a proposal, with full authority, for the next block in the canonical L2 chain to the network. Unlike a layer 1 "proposer", it does not have a "vote" in the finality of that block, other than by committing it to the L1 chain. As a "sequencer", it is also responsible for the ordering of transactions in the L2 blocks it proposes. Today, it uses `op-geth`, a diff-minimized fork of the Layer 1 Execution client `geth`, and its stock transaction ordering algorithm. On Ethereum Layer 1, a concept known as "Proposer Builder Separation" has become popularized as a client architecture decision to purposefully enable the "proposer" to request a block from an external party. These parties run modified versions of `geth` and newer clients like `reth` to build blocks with numerous ordering algorithms and features. On Layer 1, the communication between the proposer and the builder is achieved via the [`mev-boost` software](https://github.com/flashbots/mev-boost). @@ -74,7 +74,7 @@ Preemptively sending these API calls from the sidecar, instead of waiting for th This approach doubles the amount of bandwidth needed for sending a block to the `op-node` by accepting an external block from the block builder. However, the sidecar only forwards one payload to `op-node` based on its selection criteria. -## Software Maintence +## Software Maintenance Flashbots will develop and maintain the initial versions of this software in a modular and contributor friendly manner to the standards of our existing Ethereum L1 `mev-boost` sidecar. 
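The sidecar's payload selection and local-fallback behavior described in the summary can be illustrated with a minimal Go sketch. The `Payload` struct and the selection rule below are simplified assumptions, not the actual sidecar API: the point is only that exactly one payload is delivered per request, with the locally built block as the liveness fallback:

```go
package main

import "fmt"

// Payload is a simplified stand-in for an execution payload; only the
// fields needed for this selection sketch are modeled.
type Payload struct {
	Source   string
	GasUsed  uint64
	GasLimit uint64
	Valid    bool // result of validating the builder payload locally
}

// selectPayload prefers the external builder's payload when it validated
// successfully, otherwise falls back to the locally built block so that
// liveness is preserved on builder failure.
func selectPayload(local Payload, builder *Payload) Payload {
	if builder != nil && builder.Valid && builder.GasUsed <= builder.GasLimit {
		return *builder
	}
	return local
}

func main() {
	local := Payload{Source: "local", GasUsed: 10, GasLimit: 30_000_000, Valid: true}
	builder := Payload{Source: "builder", GasUsed: 20, GasLimit: 30_000_000, Valid: true}
	fmt.Println(selectPayload(local, &builder).Source) // builder
	fmt.Println(selectPayload(local, nil).Source)      // local fallback
}
```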
We will take a crawl, walk, run approach with this software by trial'ing it with one OP-stack chain outside of local testing. From there, we can decide if the feature set is standardized enough to begin efforts to merge into the OP-stack, or in the event we want to delay this decision further, Flashbots will contribute this sidecar to the docker compose setup of OP stack and assist in ensuring smooth operation during any hardfork related work as we have historically done on Ethereum L1. @@ -92,16 +92,16 @@ This solution provides a balance between enabling external block production and ### Costs 1. It breaks any existing and future assumptions around there being 1 execution layer for each consensus layer client in the OP Stack. -2. Adding software between a source and desintination will always incur some latency hit. -3. Without propoer illumination, it could make portions of the protocol opaque to the user, but this may be true of any custom ordering rule. +2. Adding software between a source and destination will always incur some latency hit. +3. Without proper illumination, it could make portions of the protocol opaque to the user, but this may be true of any custom ordering rule. 4. A working solution could delay an in-protocol solution indefinitely due to lack of urgency to merge in. ## Resource Usage -This approach doubles the amount of bandwidth needed for sending a block to the `op-node` by accepting an external block from the block builder. Bu the sidecar only forwards one payload to `op-node` based on it's selection criteria. +This approach doubles the amount of bandwidth needed for sending a block to the `op-node` by accepting an external block from the block builder. But the sidecar only forwards one payload to `op-node` based on its selection criteria. # Alternatives Considered -A variety of alternate desgns we're considered and some implemented. +A variety of alternate designs were considered and some implemented. 1. 
Proposer `op-node` <> `builder-op-geth` (payload attributes stream): - Proposer's `op-node` requests block from builder's op-geth. @@ -123,7 +123,7 @@ A variety of alternate desgns we're considered and some implemented. 4. Proposer `op-geth` <> Builder `op-geth`: - Proposer's op-geth requests block directly from builder's op-geth. - Pros: likely the fastest approach since proposers `op-geth` is the ultimate executor of the payload. - Cons: requires modification to `op-geth` which is in some ways more sacred than `op-geth` due to it's policy of a minized code diff to upstream geth. + - Cons: requires modification to `op-geth` which is in some ways more sacred than `op-node` due to its policy of a minimized code diff to upstream geth. # Risks & Uncertainties diff --git a/protocol/standard-l2-genesis.md b/protocol/standard-l2-genesis.md index 219fe8b9..26df6a66 100644 --- a/protocol/standard-l2-genesis.md +++ b/protocol/standard-l2-genesis.md @@ -101,13 +101,13 @@ from the `SuperchainConfig` allows for simple management of this very important stage 1 status. This is meant to simplify operations by removing the aliased L1 `ProxyAdmin` owner being set as the L2 `ProxyAdmin`. -Since the the L1 and L2 `ProxyAdmin` contracts are intended to have the same owner, an additional +Since the L1 and L2 `ProxyAdmin` contracts are intended to have the same owner, an additional improvement (which may be excluded to limit scope creep), would be to remove the `Ownable` dependency on the L1 `ProxyAdmin` contract, and instead have it read the `SuperchainConfig.upgrader()` role to authorize upgrades. This would also be in alignment with the Superchain strategy, as the Security Council should not manage the pause of chains which they are not also responsible for upgrading, and participation in the pause is a benefit that chains get when -they join the Superchain ecosystem.Regardless, in order to preserve the existing auth model we MUST +they join the Superchain ecosystem.
Regardless, in order to preserve the existing auth model we MUST ensure that the `upgrader` is the same account as the current L1 ProxyAdmin owner. The `data` and `gasLimit` are allowed to be specified since we don't fully know what sorts of calls we may have to do. @@ -204,7 +204,7 @@ will binary search over the possible function selectors, consuming a bit more ga ### L2Genesis Generation When a new predeploy release is created, the bytecode from each predeploy should be placed into a -an new autogenerated library which resembles the following: +new autogenerated library which resembles the following: ```solidity library HolocenePredeploys { diff --git a/protocol/supervisor-dataflow.md b/protocol/supervisor-dataflow.md index c504c8fa..918552ae 100644 --- a/protocol/supervisor-dataflow.md +++ b/protocol/supervisor-dataflow.md @@ -28,7 +28,7 @@ Leading up to this document were two documents and a meeting: # Proposed Solution(s) -This document contains a colleciton of moderate sized refactoring tasks for the OP Node and OP Supervisor which seek to make our abstractions and patterns better suited to the goals of Interop. +This document contains a collection of moderate sized refactoring tasks for the OP Node and OP Supervisor which seek to make our abstractions and patterns better suited to the goals of Interop. ## Distinguishing Node Relationships First, we should define two different modes of association an OP Node can have with an OP Supervisor: @@ -41,7 +41,7 @@ Owned Nodes are special OP Nodes which are paired to the Supervisor. There must ## Specifying External Node Architecture External Nodes have minimal bearing on the complexity of a Supervisor. That is because the External Nodes *only* make simple queries, and data held by that Node is never considered by the Supervisor. Their connections should be made from the Node to the Supervisor, and a one-way, unauthenticated RPC connection is fine.
-However, the OP Node *does* need to take protective measures if it is using an External Supervisor. It is possible the the OP Node in question is deriving its chain based on data which is *not consistent* with the data the External Supervisor is using. When this occurs, the OP Node **must** halt. There is no way to attribute fault, and so the Supervisor is effectively unusable by the Node for this period. The system would resume operation when one side or the other handles a reorg which re-aligns the views. +However, the OP Node *does* need to take protective measures if it is using an External Supervisor. It is possible that the OP Node in question is deriving its chain based on data which is *not consistent* with the data the External Supervisor is using. When this occurs, the OP Node **must** halt. There is no way to attribute fault, and so the Supervisor is effectively unusable by the Node for this period. The system would resume operation when one side or the other handles a reorg which re-aligns the views. ## Specifying Owned Node Architecture For Owned Nodes, there are several architecture decisions being made in this document: @@ -50,7 +50,7 @@ For Owned Nodes, there are several architecture decisions being made in this doc For Owned Nodes, the Supervisor should initiate the connection. The connection should be an authenticated two-way RPC which the units can communicate over. This gives us strong connectivity and instant signaling mechanisms from the Supervisor down to the OP Node. ### Supervisor as Orchestrator for Sync -In the current architecture, OP Nodes decide to send their updated heads to the Supervisor as part of Derivation. When this happens, the Supervisor reaches out and fetches receipt data. Both of these activites load data into the Supervisor's database, and this presents a problem -- how can the Supervisor interact with multiple Nodes without suffering redundant or conflicting writes? 
+In the current architecture, OP Nodes decide to send their updated heads to the Supervisor as part of Derivation. When this happens, the Supervisor reaches out and fetches receipt data. Both of these activities load data into the Supervisor's database, and this presents a problem -- how can the Supervisor interact with multiple Nodes without suffering redundant or conflicting writes? To solve this, we propose that the Supervisor should be in control of Synchronization. The Supervisor would now take an approach like this: @@ -76,7 +76,7 @@ Instead, we should use the Supervisor as the source of L1 data which Owned Nodes - Establish an L1 Provider component of the Supervisor which uses the L1 fetching OP Nodes already do. Match existing data formats such that the Supervisor acts mostly as a proxy, and does not introspect the data. - For Owned Nodes, disable L1 fetching for Derivation. - Add an RPC API to Owned Nodes which accepts the L1 data and starts Derivation on it. -When the Supervisor recieves a new L1 payload, it calls it down to the Owned Nodes to start their Derivation against the data. +When the Supervisor receives a new L1 payload, it sends it down to the Owned Nodes to start their Derivation against the data. By doing this, we ensure that the data used to construct the Supervisor Database is consistent, and we allow the OP Node to still perform derivation per usual. In addition, by putting Supervisor in control of the data administration, it is able to better predict and control updates to Safe data. Any time it sends new L1 data down, it can immediately listen back for the updates from that data. @@ -121,4 +121,4 @@ We could try to continue to chase stability and consistency issues into our curr ###### There are no risks, this team can do anything. There is only upside; WAGMI. -* Edge cases could inflate the complexities described above and make achieving this plan more difficult.
We have seen in development of this system that edge cases can significantly increase our complexity. However, as this document describes a *reduction* of complexity though an increase of explicit control, we should have this well in-hand. \ No newline at end of file +* Edge cases could inflate the complexities described above and make achieving this plan more difficult. We have seen in development of this system that edge cases can significantly increase our complexity. However, as this document describes a *reduction* of complexity through an increase of explicit control, we should have this well in-hand.