
General flow
  • The metadata is converted into lean modular form (vector of chunks)
  • A Merkle tree is constructed from the metadata chunks
• The root of the tree is merged with the hash of the MetadataDescriptor
• The resulting value is a constant to be included in additionalSigned to prove that the metadata seen by the cold device is genuine
Metadata modularization

The structure of types in the shortened metadata exactly matches the structure of types in scale-info at the MetadataV14 state, but the doc field is always empty

    struct Type {
       path: Path, // vector of strings
       type_params: Vec<TypeParams>,

The right node and then the left node are popped from the front of the nodes queue and merged; the result is pushed to the end of the queue.
  • Step (4) is repeated until only one node remains; this is the tree root.
    queue = empty_queue

    while leaves.len() > 1 {
      right = leaves.pop_last
      left = leaves.pop_last
      queue.push_back(merge(left, right))
    }

    if leaves.len() == 1 {
      queue.push_front(leaves.last)
    }

    while queue.len() > 1 {
      right = queue.pop_front
      left = queue.pop_front
      queue.push_back(merge(left, right))
    }

    return queue.pop_front
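For concreteness, here is a minimal runnable Rust sketch of the same queue-based construction. It assumes, purely as an illustration, that merge is blake3 over the concatenation of the two child hashes (using the blake3 crate); the function and type names are ours, not part of the specification:

    use std::collections::VecDeque;

    type Node = [u8; 32];

    // Assumed merge rule for illustration: blake3(left ++ right).
    fn merge(left: &Node, right: &Node) -> Node {
        let mut hasher = blake3::Hasher::new();
        hasher.update(left);
        hasher.update(right);
        *hasher.finalize().as_bytes()
    }

    // Mirrors the pseudocode above: pair up leaves from the end, queue the
    // merged results, then keep merging from the front of the queue until a
    // single node (the root) remains. Returns None for an empty leaf set.
    fn merkle_root(mut leaves: Vec<Node>) -> Option<Node> {
        let mut queue: VecDeque<Node> = VecDeque::new();
        while leaves.len() > 1 {
            let right = leaves.pop().expect("len > 1");
            let left = leaves.pop().expect("len > 1");
            queue.push_back(merge(&left, &right));
        }
        if let Some(leftover) = leaves.pop() {
            queue.push_front(leftover);
        }
        while queue.len() > 1 {
            let right = queue.pop_front().expect("len > 1");
            let left = queue.pop_front().expect("len > 1");
            queue.push_back(merge(&left, &right));
        }
        queue.pop_front()
    }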
The resulting tree for metadata consisting of 5 leaves (numbered from 0 to 4):
     
(diagram: the resulting Merkle tree, with the root at the top and leaves 0-4 at the bottom)

    Digest

• The root hash of this tree (left) is merged with the blake3 hash of the metadata descriptor (right); this is the metadata digest.
  • The version number and the resulting metadata digest MUST be included in the Signed Extensions as specified in the Chain Verification section below.


    Shortening


For shortening, an attempt to decode the transaction completely using the provided metadata is performed, with the same algorithm that would be used on the cold side. All chunks are associated with their leaf indices. An example of this protocol is proposed in metadata-shortener, which is based on the substrate-parser decoding protocol; any decoding protocol could be used here as long as the cold signer's design finds it appropriate for the given security model.


    Transmission


Shortened metadata chunks MAY be transmitted to the cold device together with the Merkle proof, in its entirety or in parts, depending on the memory capabilities of the cold device and its ability to reconstruct a larger fraction of the tree. This document does not specify the manner of transmission. The order of metadata chunks MAY be arbitrary; the only requirement is that the indices of the leaf nodes in the Merkle tree corresponding to the chunks MUST be communicated. The community MAY handle proof format standardization independently.


    Offline verification


The transmitted metadata chunks are hashed together with the proof lemmas to obtain a root that MAY be transmitted along with the rest of the payload. Verifying that the root transmitted with the message matches the calculated root is optional; the transmitted root SHOULD NOT be used in the signature, and the calculated root MUST be used. However, there is no mechanism to enforce this; it should be checked during an audit of the cold signer's code.

    Chain verification

The root of the metadata computed by the cold device MAY be included in the Signed Extensions; if it is included, the transaction will pass as valid iff the hash of the metadata as seen by the cold storage device is identical to the consensus hash of the metadata, ensuring a fair signing protocol.

The Signed Extension representing the metadata digest is a single byte representing both digest value inclusion and the shortening protocol version; this MUST be included in the Signed Extensions set. Depending on its value, a digest value is included as additionalSigned in the signature computation according to the following specification:

signed extension value | digest value | comment
0x00                   | (none)       | digest is not included


    Appendix A

    occupying. However, considering it's part of an unimported fork, the validator cannot call a runtime API on that block.

    Adding the core_index to the CandidateReceipt would solve this problem and would enable systematic recovery for all dispute scenarios.


    (source)


    RFC-0059: Add a discovery mechanism for nodes based on their capabilities

Start Date: 2023-12-18
Description: Nodes having certain capabilities register themselves in the DHT to be discoverable
Authors: Pierre Krieger

    Summary


    This RFC proposes to make the mechanism of RFC #8 more generic by introducing the concept of "capabilities".


    Implementations can implement certain "capabilities", such as serving old block headers or being a parachain bootnode.


    The discovery mechanism of RFC #8 is extended to be able to discover nodes of specific capabilities.


    Motivation


The Polkadot peer-to-peer network is made of nodes. Not all these nodes are equal. Some nodes store only the headers of recent blocks, some nodes store all the block headers and bodies since the genesis, some nodes store the storage of all blocks since the genesis, and so on.


It is currently not possible to know ahead of time (without connecting to it and asking) which nodes have which data available, and it is not easy to build a list of nodes that have a specific piece of data available.


If you want to download, for example, the header of block 500, you have to connect to a randomly-chosen node, ask it for block 500, and if it says that it doesn't have the block, disconnect and try another randomly-chosen node. In certain situations, such as downloading the storage of old blocks, nodes that have the information are relatively rare, and finding a node that has the data through trial and error can take a long time.


This RFC attempts to solve this problem by making it possible to build a list of nodes that are capable of serving specific data.


    Stakeholders


Low-level client developers. People interested in accessing the archive of the chain.


    Explanation


    Reading RFC #8 first might help with comprehension, as this RFC is very similar.


Please keep in mind while reading that everything below applies to both relay chains and parachains, except where mentioned otherwise.


    Capabilities


    This RFC defines a list of so-called capabilities:

• Head of chain provider. An implementation with this capability must be able to serve to other nodes block headers, block bodies, justifications, call proofs, and storage proofs of "recent" (see below) blocks, and, for relay chains, to serve to other nodes warp sync proofs where the starting block is a session change block, and must participate in Grandpa and Beefy gossip.
• History provider. An implementation with this capability must be able to serve to other nodes block headers and block bodies of any block since the genesis, and must be able to serve to other nodes justifications of any session change block since the genesis up until and including their currently finalized block.
• Archive provider. This capability is a superset of History provider. In addition to the requirements of History provider, an implementation with this capability must be able to serve call proof and storage proof requests of any block since the genesis up until and including their currently finalized block.
• Parachain bootnode (only for relay chains). An implementation with this capability must be able to serve the network request described in RFC 8.

In the context of the head of chain provider, the word "recent" means: any not-yet-finalized block that is equal to or an ancestor of a block that it has announced through a block announce, and any finalized block whose height is greater than its current finalized block minus 16. This does not include blocks that have been pruned because they're not a descendant of its current finalized block. In other words, blocks that aren't a descendant of the current finalized block can be thrown away. A gap of blocks is required due to race conditions: when a node finalizes a block, it takes some time for its peers to be made aware of this, during which they might send requests concerning older blocks. The exact gap is arbitrary.
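As a small illustration of the finality-gap rule above (the announced-block condition is omitted for brevity; the names are ours, not part of the RFC):

    /// A finalized block counts as "recent" if its height is greater than
    /// the current finalized height minus 16.
    fn finalized_block_is_recent(height: u32, current_finalized: u32) -> bool {
        height > current_finalized.saturating_sub(16)
    }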


Substrate is currently, by default, a head of chain provider. After it has finished warp syncing, it downloads the list of old blocks, after which it becomes a history provider. If Substrate is instead configured as an archive node, then it downloads the state of all blocks since the genesis, after which it becomes an archive provider, history provider, and head of chain provider. If block pruning is enabled and the chain is a relay chain, then Substrate unfortunately doesn't implement any of these capabilities, not even head of chain provider. This is considered a bug that should be fixed, see https://github.com/paritytech/polkadot-sdk/issues/2733.


    DHT provider registration


    This RFC heavily relies on the functionalities of the Kademlia DHT already in use by Polkadot. You can find a link to the specification here.


    Implementations that have the history provider capability should register themselves as providers under the key sha256(concat("history", randomness)).


    Implementations that have the archive provider capability should register themselves as providers under the key sha256(concat("archive", randomness)).


Implementations that have the parachain bootnode capability should register themselves as providers under the key sha256(concat(scale_compact(para_id), randomness)), as described in RFC 8.


    "Register themselves as providers" consists in sending ADD_PROVIDER requests to nodes close to the key, as described in the Content provider advertisement section of the specification.


    The value of randomness can be found in the randomness field when calling the BabeApi_currentEpoch function.
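As an illustrative sketch of the key derivations above (assuming the sha2 and parity-scale-codec crates; the helper names are ours):

    use parity_scale_codec::{Compact, Encode};
    use sha2::{Digest, Sha256};

    /// Provider key for the "history" capability:
    /// sha256(concat("history", randomness)).
    fn history_provider_key(randomness: &[u8; 32]) -> [u8; 32] {
        let mut hasher = Sha256::new();
        hasher.update(b"history");
        hasher.update(randomness);
        hasher.finalize().into()
    }

    /// Provider key for a parachain bootnode, as in RFC 8:
    /// sha256(concat(scale_compact(para_id), randomness)).
    fn parachain_provider_key(para_id: u32, randomness: &[u8; 32]) -> [u8; 32] {
        let mut hasher = Sha256::new();
        hasher.update(Compact(para_id).encode());
        hasher.update(randomness);
        hasher.finalize().into()
    }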


In order to avoid downtime when the key changes, nodes should also register themselves under a secondary key that uses a value of randomness equal to the randomness field returned by BabeApi_nextEpoch.


    Implementers should be aware that their implementation of Kademlia might already hash the key before XOR'ing it. The key is not meant to be hashed twice.


    Implementations must not register themselves if they don't fulfill the capability yet. For example, a node configured to be an archive node but that is still building its archive state in the background must register itself only after it has finished building its archive.


    Secondary DHTs


Implementations that have the history provider capability must also participate in a secondary DHT comprising only nodes with that capability. The protocol name of that secondary DHT must be /<genesis-hash>/kad/history.


Similarly, implementations that have the archive provider capability must also participate in a secondary DHT comprising only nodes with that capability, whose protocol name is /<genesis-hash>/kad/archive.
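For illustration, such a protocol name could be derived as follows (a trivial sketch; the hex form of the genesis hash is assumed to match the one used by the main /<genesis_hash>/kad protocol):

    /// Builds the protocol name of a capability-specific secondary DHT,
    /// e.g. "/<genesis-hash>/kad/history" or "/<genesis-hash>/kad/archive".
    fn secondary_dht_protocol_name(genesis_hash_hex: &str, capability: &str) -> String {
        format!("/{}/kad/{}", genesis_hash_hex, capability)
    }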


    Just like implementations must not register themselves if they don't fulfill their capability yet, they must also not participate in the secondary DHT if they don't fulfill their capability yet.


    Head of the chain providers


    Implementations that have the head of the chain provider capability do not register themselves as providers, but instead are the nodes that participate in the main DHT. In other words, they are the nodes that serve requests of the /<genesis_hash>/kad protocol.


    Any implementation that isn't a head of the chain provider (read: light clients) must not participate in the main DHT. This is already presently the case.


    Implementations must not participate in the main DHT if they don't fulfill the capability yet. For example, a node that is still in the process of warp syncing must not participate in the main DHT. However, assuming that warp syncing doesn't last more than a few seconds, it is acceptable to ignore this requirement in order to avoid complicating implementations too much.


    Drawbacks


    None that I can see.


    Testing, Security, and Privacy


    The content of this section is basically the same as the one in RFC 8.


    This mechanism doesn't add or remove any security by itself, as it relies on existing mechanisms.


Due to the way Kademlia works, it would become the responsibility of the 20 Polkadot nodes whose sha256(peer_id) is closest to the key (described in the explanations section) to store the list of nodes that have specific capabilities. Furthermore, when a large number of providers are registered, only the providers closest to the key are kept, up to a certain implementation-defined limit.


For this reason, an attacker can abuse this mechanism by randomly generating libp2p PeerIds until they find the 20 entries closest to the key representing the target capability. They are then in control of the list of nodes with that capability. While this is not directly harmful in itself, it could lead to eclipse attacks.


    Because the key changes periodically and isn't predictable, and assuming that the Polkadot DHT is sufficiently large, it is not realistic for an attack like this to be maintained in the long term.


    Performance, Ergonomics, and Compatibility


    Performance


    The DHT mechanism generally has a low overhead, especially given that publishing providers is done only every 24 hours.


    Doing a Kademlia iterative query then sending a provider record shouldn't take more than around 50 kiB in total of bandwidth for the parachain bootnode.


Assuming 1000 nodes with a specific capability, the 20 Polkadot full nodes corresponding to that capability will each receive a sudden spike of a few megabytes of networking traffic when the key rotates. Again, this is relatively negligible. If this becomes a problem, one can add a random delay before a node registers itself as the provider of the key corresponding to BabeApi_nextEpoch.


Maybe the biggest uncertainty is the traffic that the 20 Polkadot full nodes will receive from light clients that want to know the nodes with a capability. If this ever becomes a problem, this value of 20 is an arbitrary constant that can be increased for more redundancy.


    Ergonomics


    Irrelevant.


    Compatibility


    Irrelevant.


    Prior Art and References


    Unknown.


    Unresolved Questions


While it fundamentally doesn't change much in this RFC, using BabeApi_currentEpoch and BabeApi_nextEpoch might be inappropriate. I'm not familiar enough with good practices within the runtime to have an opinion here. Should it be an entirely new pallet?


This RFC would make it possible to reliably discover archive nodes, which would make it possible to reliably send archive node requests, something that isn't currently possible. This could solve the problem of finding archive RPC node providers by migrating archive-related requests to the native peer-to-peer protocol rather than JSON-RPC.


If we ever decide to break backwards compatibility, we could divide the "history" and "archive" capabilities in two, between nodes capable of serving older blocks and nodes capable of serving newer blocks. We could even add to the peer-to-peer network nodes that are only capable of serving older blocks (by reading from a database) but do not participate in the head of the chain, and that just exist for historical purposes.

    (source)


Summary

Currently, the Substrate runtime uses a simple allocator defined by the host side. Every runtime MUST import these allocator functions for normal execution. This situation makes runtime code less versatile than it could be.

This RFC therefore proposes a new specification for the allocator, to make the Substrate runtime more generic.

Motivation

Since this RFC defines a new approach to allocation, we refer to the old one as the legacy allocator. Because the allocator implementation details are defined by the Substrate client, a parachain/parathread cannot customize its memory allocation algorithm. The new specification allows the runtime to customize memory allocation and then export the allocator functions, according to the specification, for the client side to use. Another benefit is that new host functions can be designed without allocating memory on the client side, which may yield performance improvements. It will also help provide a unified and clean specification if the Substrate runtime supports multiple targets (e.g. RISC-V). There is a further potential benefit: many programming languages that compile to wasm are not friendly to external allocators, so this change makes it easier for other languages to enter the Substrate runtime ecosystem. The last and most important benefit is that, for offchain-context execution, the runtime can be pure wasm: all imported host functions can be stubbed out and never actually called, so the runtime's verification logic can run as pure wasm, which makes it possible to run block verification in other environments (such as browsers and other non-Substrate environments).

Stakeholders

      No attempt was made at convincing stakeholders.

Explanation

      Runtime side spec

This section contains a list of functions that should be exported by the Substrate runtime.

We define the spec as version 1, so the following dummy function v1 MUST be exported to hint that the runtime implements version 1 of the allocator spec.
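Purely as an illustration of the shape of such exports (the concrete names and signatures are those defined by the spec text, which is elided here; the identifiers below are hypothetical):

    /// Hypothetical dummy export hinting that the runtime implements
    /// version 1 of the allocator spec.
    #[no_mangle]
    pub extern "C" fn v1() {}

    /// Hypothetical allocation entry point: returns a pointer into the
    /// runtime's linear memory, or 0 on failure.
    #[no_mangle]
    pub extern "C" fn alloc(size: u32) -> u32 {
        // A real runtime would delegate to its own allocator here
        // (e.g. a bump or free-list allocator over linear memory).
        0
    }

    /// Hypothetical deallocation entry point.
    #[no_mangle]
    pub extern "C" fn dealloc(_ptr: u32) {}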

      Client side allocator.

    Detail-heavy explanation of the RFC, suitable for explanation to an implementer of the changeset. This should address corner cases in detail and provide justification behind decisions, and provide rationale for how the design meets the solution requirements.

Drawbacks

The allocator inside the runtime will make the code size bigger, though not noticeably so. The allocator inside the runtime may slow down (or speed up) the runtime, though again not noticeably.

We can ignore these drawbacks since they are not prominent. Execution efficiency is largely determined by the runtime developer; we cannot prevent poor efficiency if a developer chooses to accept it.

Testing, Security, and Privacy

Keep the legacy allocator runtime test cases, add a new feature to compile the test cases for the v1 allocator spec, and then update the test assertions.

Update the template runtime to enable the v1 spec. Once the dev network runs well, the spec can be considered correctly implemented.

Performance, Ergonomics, and Compatibility

Performance

As stated above, there is no obvious performance impact. polkadot-sdk could offer a best-practice allocator for all chains, and third parties could also customize their own, so performance could improve over time.

Ergonomics

Only runtime developers are affected: they just need to import a new crate and enable a new feature. It may also be convenient for other wasm-targeting languages to implement.

Compatibility

It is 100% compatible. Only some runtime configs and executor configs need to be deprecated.

To support the new runtime spec, we MUST first upgrade the client binary to support the client part of the new spec.

We SHALL add an optional primitive crate that enables the version 1 spec and disables the legacy allocator via a cargo feature. For the first year, we SHALL disable v1 by default, and enable it by default starting the following year.

Prior Art and References

Unresolved Questions

    None at this time.


The content discussed in RFC-0004 is basically orthogonal, but the two could still be considered together, and it is preferred that this RFC be implemented first.

This feature could make the Substrate runtime easier to support in other languages and to integrate into other ecosystems.

    (source)


Authors: Sourabh Niyogi

Summary

This RFC proposes to add the two dominant smart contract programming languages in the Polkadot ecosystem to AssetHub: EVM + ink!/Coreplay. The objective is to increase DOT revenue by making AssetHub accessible to (1) Polkadot Rollups;


    These changes in AssetHub are enabled by key Polkadot 2.0 technologies: PolkaVM supporting Coreplay, and hyper data availability in Blobs Chain.

Motivation

    EVM Contracts are pervasive in the Web3 blockchain ecosystem, while Polkadot 2.0's Coreplay aims to surpass EVM Contracts in ease-of-use using PolkaVM's RISC architecture.

Asset Hub for Polkadot does not have smart contract capabilities,


We believe AssetHub should support ink! as a precursor to supporting CorePlay's capabilities as soon as possible. To the best of our knowledge, release timelines are unknown, but having ink! inside AssetHub would be natural for Polkadot 2.0.

Stakeholders

• Asset Hub Users: Those who call any extrinsic on Asset Hub for Polkadot.
• DOT Token Holders: Those who hold DOT on any chain in the Polkadot ecosystem.
• Ethereum Rollups: Rollups that utilize Ethereum as a settlement layer and interactive fraud proofs or ZK proofs to secure their rollup, utilize Ethereum DA to record transactions, provide security for their rollup, and have rollup users settle on Ethereum.
• Polkadot Rollups: Rollups that utilize AssetHub as a settlement layer and interactive fraud proofs or ZK proofs on AssetHub and Blobs to record rollup transactions, provide security for their rollup, and have rollup users settle on AssetHub for Polkadot.
Explanation

    Limit Smart Contract Weight allocation

    AssetHub is a major component of the Polkadot 2.0 Minimal Relay Chain architecture. It is critical that smart contract developers not be able to clog AssetHub's blockspace for other mission critical applications, such as Staking and Governance.

As such, it is proposed that at most 50% of the available weight in AssetHub for Polkadot blocks be allocated to the smart contract pallets (EVM, ink! and/or Coreplay). While to date AssetHub has seen limited usage, it is believed (see here) that imposing this limit on the smart contract pallets would limit the effect on non-smart-contract usage. An excessively small weight limit like 10% or 20% may limit the attractiveness of Polkadot as a platform for Polkadot rollups and EVM Contracts. An excessively large weight like 90% or 100% may threaten AssetHub usage.
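A sketch of how such a cap might be expressed in a runtime configuration (illustrative only; this RFC does not prescribe pallet or parameter names):

    use sp_arithmetic::Perbill;

    /// Hypothetical cap on the fraction of AssetHub block weight available
    /// to the smart contract pallets, per the 50% figure above.
    pub fn smart_contract_weight_cap() -> Perbill {
        Perbill::from_percent(50)
    }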


Testing, Security, and Privacy

    Testing the mapping between assetIDs and EVM Contracts thoroughly will be critical.

    Having a complete working OP Stack chain using AssetHub for Kusama (1000) and Blobs on Kusama (3338) would be highly desirable, but is unlikely to be required.

Performance, Ergonomics, and Compatibility

Performance

    The weight limit of 50% is expected to be adequate to limit excess smart contract usage at this time.

Storage bloat is expected to be kept to a minimum with the nominal 0.01 DOT existential deposit.

Ergonomics

Note that the existential deposit is not 0 DOT; it is being lowered from 0.1 DOT to 0.01 DOT. Many developers routinely deploy their EVM contracts on many different EVM chains in parallel, and this non-zero ED may pose problems for some of them.

The 0.01 DOT (worth $0.075 USD) is unlikely to pose a significant issue.

Compatibility

It is believed that the EVM pallet (as deployed on Moonbeam + Astar) is sufficiently compatible with Ethereum, and that the ED of 0.01 DOT poses negligible issues.

The messaging architecture for rollups is not compatible with Polkadot XCM. It is not clear whether leading rollup platforms (OP Stack, Arbitrum Orbit, Polygon zkEVM) could be made compatible with XCM.

Unresolved Questions

    It is highly desirable to know the throughput of Polkadot DA with popular rollup architectures OP Stack and Arbitrum Orbit.
    This would enable CEXs and EVM L2 builders to choose Polkadot over Ethereum.


    If accepted, this RFC could pave the way for CorePlay on Asset Hub for Polkadot/Kusama, a major component of Polkadot 2.0's smart contract future.

    The importance of precompiles should

    (source)


RFC Authors: Gavin Wood

Summary

    This proposes a periodic, sale-based method for assigning Polkadot Coretime, the analogue of "block space" within the Polkadot Network. The method takes into account the need for long-term capital expenditure planning for teams building on Polkadot, yet also provides a means to allow Polkadot to capture long-term value in the resource which it sells. It supports the possibility of building rich and dynamic secondary markets to optimize resource allocation and largely avoids the need for parameterization.

Motivation

    Present System

    The Polkadot Ubiquitous Computer, or just Polkadot UC, represents the public service provided by the Polkadot Network. It is a trust-free, WebAssembly-based, multicore, internet-native omnipresent virtual machine which is highly resilient to interference and corruption.

    The present system of allocating the limited resources of the Polkadot Ubiquitous Computer is through a process known as parachain slot auctions. This is a parachain-centric paradigm whereby a single core is long-term allocated to a single parachain which itself implies a Substrate/Cumulus-based chain secured and connected via the Relay-chain. Slot auctions are on-chain candle auctions which proceed for several days and result in the core being assigned to the parachain for six months at a time up to 24 months in advance. Practically speaking, we only see two year periods being bid upon and leased.


The solution SHOULD avoid creating additional dependencies on functionality which the Relay-chain need not strictly provide for the delivery of the Polkadot UC.

    Furthermore, the design SHOULD be implementable and deployable in a timely fashion; three months from the acceptance of this RFC should not be unreasonable.

Stakeholders

    Primary stakeholder sets are:

    • Protocol researchers and developers, largely represented by the Polkadot Fellowship and Parity Technologies' Engineering division.


      Socialization:

The essentials of this proposal were presented at Polkadot Decoded 2023 Copenhagen on the Main Stage. A small amount of socialization at the Parachain Summit preceded it, and some substantial discussion followed it. The Parity Ecosystem team is currently soliciting views from ecosystem teams who would be key stakeholders.

Explanation

      Overview

      Upon implementation of this proposal, the parachain-centric slot auctions and associated crowdloans cease. Instead, Coretime on the Polkadot UC is sold by the Polkadot System in two separate formats: Bulk Coretime and Instantaneous Coretime.

      When a Polkadot Core is utilized, we say it is dedicated to a Task rather than a "parachain". The Task to which a Core is dedicated may change at every Relay-chain block and while one predominant type of Task is to secure a Cumulus-based blockchain (i.e. a parachain), other types of Tasks are envisioned.


      Rollout

    • Governance upgrade proposal(s).
    • Monitoring of the upgrade process.
    • -

      Performance, Ergonomics and Compatibility

      +

      Performance, Ergonomics and Compatibility

      No specific considerations.

      Parachains already deployed into the Polkadot UC must have a clear plan of action to migrate to an agile Coretime market.

      While this proposal does not introduce documentable features per se, adequate documentation must be provided to potential purchasers of Polkadot Coretime. This SHOULD include any alterations to the Polkadot-SDK software collection.

Testing, Security and Privacy

      Regular testing through unit tests, integration tests, manual testnet tests, zombie-net tests and fuzzing SHOULD be conducted.

      A regular security review SHOULD be conducted prior to deployment through a review by the Web3 Foundation economic research group.

      Any final implementation MUST pass a professional external security audit.

      The proposal introduces no new privacy concerns.


      RFC-3 proposes a means of implementing the high-level allocations within the Relay-chain.

RFC-5 proposes the API for interacting with the Relay-chain.

      Additional work should specify the interface for the instantaneous market revenue so that the Coretime-chain can ensure Bulk Coretime placed in the instantaneous market is properly compensated.


Prior Art and References

Robert Habermeier initially wrote on the subject of a blockspace-centric Polkadot in the article Polkadot Blockspace over Blockchains. While not going into details, the article served as an early reframing piece for moving beyond one-slot-per-chain models and building out secondary market infrastructure for resource allocation.

      (source)


Summary

      In the Agile Coretime model of the Polkadot Ubiquitous Computer, as proposed in RFC-1 and RFC-3, it is necessary for the allocating parachain (envisioned to be one or more pallets on a specialised Brokerage System Chain) to communicate the core assignments to the Relay-chain, which is responsible for ensuring those assignments are properly enacted.

      This is a proposal for the interface which will exist around the Relay-chain in order to communicate this information and instructions.

Motivation

      The background motivation for this interface is splitting out coretime allocation functions and secondary markets from the Relay-chain onto System parachains. A well-understood and general interface is necessary for ensuring the Relay-chain receives coretime allocation instructions from one or more System chains without introducing dependencies on the implementation details of either side.

      Requirements


• The interface MUST allow for the allocating chain to instruct changes to the number of cores which it is able to allocate.
      • The interface MUST allow for the allocating chain to be notified of changes to the number of cores which are able to be allocated by the allocating chain.
Stakeholders

      Primary stakeholder sets are:

      • Developers of the Relay-chain core-management logic.


        Socialization:

The content of this RFC was discussed in the Polkadot Fellows channel.

Explanation

        The interface has two sections: The messages which the Relay-chain is able to receive from the allocating parachain (the UMP message types), and messages which the Relay-chain is able to send to the allocating parachain (the DMP message types). These messages are expected to be able to be implemented in a well-known pallet and called with the XCM Transact instruction.

        Future work may include these messages being introduced into the XCM standard.

        UMP Message Types


        Realistic Limits of the Usage

For request_revenue_info, a successful request should be possible if when is no less than the Relay-chain block number on arrival of the message, minus 100,000.

        For assign_core, a successful request should be possible if begin is no less than the Relay-chain block number on arrival of the message plus 10 and workload contains no more than 100 items.
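Expressed as simple guard clauses (a sketch; the function names are ours, following the message fields referenced above):

    /// A request_revenue_info request is serviceable if `when` reaches back
    /// no more than 100,000 blocks before its arrival.
    fn revenue_request_is_valid(when: u32, arrival_block: u32) -> bool {
        when >= arrival_block.saturating_sub(100_000)
    }

    /// An assign_core request is serviceable if it starts at least 10 blocks
    /// after arrival and its workload has at most 100 items.
    fn assign_core_is_valid(begin: u32, arrival_block: u32, workload_len: usize) -> bool {
        begin >= arrival_block + 10 && workload_len <= 100
    }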

Performance, Ergonomics and Compatibility

        No specific considerations.

Testing, Security and Privacy

        Standard Polkadot testing and security auditing applies.

        The proposal introduces no new privacy concerns.


        RFC-1 proposes a means of determining allocation of Coretime using this interface.

        RFC-3 proposes a means of implementing the high-level allocations within the Relay-chain.

        Drawbacks, Alternatives and Unknowns

        None at present.

Prior Art and References

        None.

        (source)


Summary

As core functionality moves from the Relay Chain into system chains, the network's reliance on the liveness of these chains increases. It is not economically scalable, nor necessary from a game-theoretic perspective, to pay collators large rewards. This RFC proposes a mechanism -- part technical and part social -- for ensuring reliable collator sets that are resilient to attempts to stop any subsystem of the Polkadot protocol.

Motivation

In order to guarantee access to Polkadot's system, the collators on its system chains must propose blocks (provide liveness) and allow all transactions to eventually be included. That is, some collators may censor transactions, but there must exist one collator in the set who will include a

Collators selected by governance SHOULD have a reasonable expectation that the Treasury will reimburse their operating costs.

Stakeholders

      • Infrastructure providers (people who run validator/collator nodes)
      • Polkadot Treasury
Explanation

This protocol builds on the existing Collator Selection pallet and its notion of Invulnerables. Invulnerables are collators (identified by their AccountIds) who

      Set Size

    • of which 15 are Invulnerable, and
    • five are elected by bond.
Drawbacks

    The primary drawback is a reliance on governance for continued treasury funding of infrastructure costs for Invulnerable collators.

Testing, Security, and Privacy

The vast majority of cases can be covered by unit testing. Integration tests should ensure that the Collator Selection UpdateOrigin, which has permission to modify the Invulnerables and desired number of Candidates, can handle updates over XCM from the system's governance location.

Performance, Ergonomics, and Compatibility

    This proposal has very little impact on most users of Polkadot, and should improve the performance of system chains by reducing the number of missed blocks.

Performance

    As chains have strict PoV size limits, care must be taken in the PoV impact of the session manager. Appropriate benchmarking and tests should ensure that conservative limits are placed on the number of Invulnerables and Candidates.

Ergonomics

    The primary group affected is Candidate collators, who, after implementation of this RFC, will need to compete in a bond-based election rather than a race to claim a Candidate spot.

Compatibility

    This RFC is compatible with the existing implementation and can be handled via upgrades and migration.

Prior Art and References

    Written Discussions

    • GitHub: Collator Selection Roadmap

Unresolved Questions

      None at this time.


      There may exist in the future system chains for which this model of collator selection is not appropriate. These chains should be evaluated on a case-by-case basis.

      (source)


Authors: Pierre Krieger

Summary

The full nodes of the Polkadot peer-to-peer network maintain a distributed hash table (DHT), which is currently used for full node discovery and validator discovery purposes.

      This RFC proposes to extend this DHT to be used to discover full nodes of the parachains of Polkadot.

Motivation

      The maintenance of bootnodes has long been an annoyance for everyone.

When a bootnode is newly deployed or removed, every chain specification must be updated in order to take the update into account. This has led to various non-optimal solutions, such as pulling chain specifications from GitHub repositories. When it comes to RPC nodes, UX developers often have trouble finding up-to-date addresses of parachain RPC nodes. With the ongoing migration from RPC nodes to light clients, similar problems would happen with chain specifications as well.


Because the list of bootnodes in chain specifications is so annoying to modify, the consequence is that the number of bootnodes is rather low (typically between 2 and 15). In order to better resist downtime and DoS attacks, a better solution would be to use every node of a certain chain as a potential bootnode, rather than special-casing some specific nodes.

While this RFC doesn't solve these problems for relay chains, it aims to solve them for parachains by storing the list of all the full nodes of a parachain on the relay chain DHT.

      Assuming that this RFC is implemented, and that light clients are used, deploying a parachain wouldn't require more work than registering it onto the relay chain and starting the collators. There wouldn't be any need for special infrastructure nodes anymore.

Stakeholders

      This RFC has been opened on my own initiative because I think that this is a good technical solution to a usability problem that many people are encountering and that they don't realize can be solved.

Explanation

The content of this RFC only applies to parachains and parachain nodes that are "Substrate-compatible". It is in no way mandatory for parachains to comply with this RFC.

      Note that "Substrate-compatible" is very loosely defined as "implements the same mechanisms and networking protocols as Substrate". The author of this RFC believes that "Substrate-compatible" should be very precisely specified, but there is controversy on this topic.

      While a lot of this RFC concerns the implementation of parachain nodes, it makes use of the resources of the Polkadot chain, and as such it is important to describe them in the Polkadot specification.

      This RFC adds two mechanisms: a registration in the DHT, and a new networking protocol.

DHT provider registration

      This RFC heavily relies on the functionalities of the Kademlia DHT already in use by Polkadot. You can find a link to the specification here.

      Full nodes of a parachain registered on Polkadot should register themselves onto the Polkadot DHT as the providers of a key corresponding to the parachain that they are serving, as described in the Content provider advertisement section of the specification. This uses the ADD_PROVIDER system of libp2p-kademlia.


Drawbacks

The peer_id and addrs fields are in theory not strictly needed, as the PeerId and addresses could always be equal to the PeerId and addresses of the node being registered as the provider and serving the response. However, the Cumulus implementation currently uses two different networking stacks, one for the parachain and one for the relay chain, with two separate PeerIds and addresses, and as such the PeerId and addresses of the other networking stack must be indicated. Asking them to use only one networking stack wouldn't be feasible in a realistic time frame.

      The values of the genesis_hash and fork_id fields cannot be verified by the requester and are expected to be unused at the moment. Instead, a client that desires connecting to a parachain is expected to obtain the genesis hash and fork ID of the parachain from the parachain chain specification. These fields are included in the networking protocol nonetheless in case an acceptable solution is found in the future, and in order to allow use cases such as discovering parachains in a not-strictly-trusted way.
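For clarity, the fields discussed above can be pictured as follows (a hypothetical shape only; the RFC's authoritative wire encoding is defined elsewhere and is elided in this excerpt):

    /// Hypothetical shape of one discovered-bootnode entry.
    struct BootnodeRecord {
        /// PeerId of the parachain-side networking stack.
        peer_id: Vec<u8>,
        /// Multiaddresses on which that peer is reachable.
        addrs: Vec<Vec<u8>>,
        /// Genesis hash of the parachain; not verifiable by the requester.
        genesis_hash: Vec<u8>,
        /// Optional fork identifier; same caveat as `genesis_hash`.
        fork_id: Option<String>,
    }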

Testing, Security, and Privacy

      Because not all nodes want to be used as bootnodes, implementers are encouraged to provide a way to disable this mechanism. However, it is very much encouraged to leave this mechanism on by default for all parachain nodes.

      This mechanism doesn't add or remove any security by itself, as it relies on existing mechanisms. However, if the principle of chain specification bootnodes is entirely replaced with the mechanism described in this RFC (which is the objective), then it becomes important whether the mechanism in this RFC can be abused in order to make a parachain unreachable.


Performance, Ergonomics, and Compatibility

Performance

      The DHT mechanism generally has a low overhead, especially given that publishing providers is done only every 24 hours.

      Doing a Kademlia iterative query then sending a provider record shouldn't take more than around 50 kiB in total of bandwidth for the parachain bootnode.

      Assuming 1000 parachain full nodes, the 20 Polkadot full nodes corresponding to a specific parachain will each receive a sudden spike of a few megabytes of networking traffic when the key rotates. Again, this is relatively negligible. If this becomes a problem, one can add a random delay before a parachain full node registers itself to be the provider of the key corresponding to BabeApi_next_epoch.

Maybe the biggest uncertainty is the traffic that the 20 Polkadot full nodes will receive from light clients that want to know the bootnodes of a parachain. Light clients are generally encouraged to cache the peers that they use between restarts, so they should only query these 20 Polkadot full nodes at their first initialization. If this ever becomes a problem, this value of 20 is an arbitrary constant that can be increased for more redundancy.

Ergonomics

      Irrelevant.

Compatibility

      Irrelevant.

Prior Art and References

      None.

Unresolved Questions

While it fundamentally doesn't change much in this RFC, using BabeApi_currentEpoch and BabeApi_nextEpoch might be inappropriate. I'm not familiar enough with good practices within the runtime to have an opinion here. Should it be an entirely new pallet?


      It is possible that in the future a client could connect to a parachain without having to rely on a trusted parachain specification.

      (source)


Authors: Joe Petrowski

Summary

      Since the introduction of the Collectives parachain, many groups have expressed interest in forming new -- or migrating existing groups into -- on-chain collectives. While adding a new collective is relatively simple from a technical standpoint, the Fellowship will need to merge new pallets into the Collectives parachain for each new collective. This RFC proposes a means for the network to ratify a new collective, thus instructing the Fellowship to instate it in the runtime.

Motivation

      Many groups have expressed interest in representing collectives on-chain. Some of these include:

      • Parachain technical fellowship (new)

Stakeholders

        • Polkadot stakeholders who would like to organize on-chain.
        • Technical Fellowship, in its role of maintaining system runtimes.
Explanation

        The group that wishes to operate an on-chain collective should publish the following information:

Stakeholders

        • Parachain teams
        • Parachain users
Explanation

        Status quo

A parachain can either be locked or unlocked3. With the parachain locked, the parachain manager does not have any privileges. With the parachain unlocked, the parachain manager can perform the following actions with the paras_registrar pallet:


          Migration

• The parachain never produced a block, including from expired leases.
• The parachain manager never explicitly locked the parachain.
Drawbacks

Parachain locks are designed in such a way as to ensure the decentralization of parachains. If parachains are not locked when they should be, it could introduce centralization risk for new parachains.

For example, one possible scenario is that a collective may decide to launch a parachain fully decentralized. However, if the parachain is unable to produce blocks, the parachain manager will be able to replace the wasm and genesis without the consent of the collective.

This risk is considered tolerable, as it requires the wasm/genesis to be invalid in the first place. It is not yet practically possible to develop a parachain without any centralization risk.

Another case is that a parachain team may decide to use a crowdloan to help secure a slot lease. Previously, creating a crowdloan would lock a parachain. This means crowdloan participants would know exactly the genesis of the parachain for the crowdloan they are participating in. However, this actually provides little assurance to crowdloan participants. For example, if the genesis block is determined before a crowdloan is started, it is not possible to have an onchain mechanism to enforce reward distributions for crowdloan participants. They always have to rely on the parachain team to fulfill the promise after the parachain is live.

        Existing operational parachains will not be impacted.

Testing, Security, and Privacy

        The implementation of this RFC will be tested on testnets (Rococo and Westend) first.

An audit may be required to ensure the implementation does not introduce unwanted side effects.

There are no privacy-related concerns.

Performance

        This RFC should not introduce any performance impact.

Ergonomics

This RFC should improve the developer experience for new and existing parachain teams.

Compatibility

This RFC is fully compatible with existing interfaces.

Prior Art and References

        • Parachain Slot Extension Story: https://github.com/paritytech/polkadot/issues/4758
• Allow a parachain to renew its lease without actually running another parachain: https://github.com/paritytech/polkadot/issues/6685
• Always treat a parachain that has never produced a block for a significant amount of time as unlocked: https://github.com/paritytech/polkadot/issues/7539
Unresolved Questions

        None at this stage.


This RFC is only intended to be a short-term solution. Slots will be removed in the future, and the lock mechanism is likely to be replaced with a more generalized parachain management & recovery system. Therefore, the long-term impacts of this RFC are not considered.

        1

https://github.com/paritytech/cumulus/issues/377

Summary

Encointer has been a system chain on Kusama since January 2022 and has been developed and maintained by the Encointer association. This RFC proposes to treat Encointer like any other system chain and include it in the fellowship repo with this PR.

Motivation

        Encointer does not seek to be in control of its runtime repository. As a decentralized system, the fellowship has a more suitable structure to maintain a system chain runtime repo than the Encointer association does.

        Also, Encointer aims to update its runtime in batches with other system chains in order to have consistency for interoperability across system chains.

Stakeholders

        • Fellowship: Will continue to take upon them the review and auditing work for the Encointer runtime, but the process is streamlined with other system chains and therefore less time-consuming compared to the separate repo and CI process we currently have.
        • Kusama Network: Tokenholders can easily see the changes of all system chains in one place.
• Encointer Association: Further decentralization of Encointer Network necessities, such as devops.
        • Encointer devs: Being able to work directly in the Fellowship runtimes repo to streamline and synergize with other developers.
Explanation

        Our PR has all details about our runtime and how we would move it into the fellowship repo.

        Noteworthy: All Encointer-specific pallets will still be located in encointer's repo for the time being: https://github.com/encointer/pallets

It will still be the duty of the Encointer team to keep its runtime up to date and provide adequate test fixtures. Frequent dependency bumps with Polkadot releases would be beneficial for interoperability and could be streamlined with other system chains, but that will not be a duty of the fellowship. Whenever possible, all system chains could be upgraded jointly (including Encointer) with a batch referendum.


• Encointer will publish all its crates on crates.io
      • Encointer does not carry out external auditing of its runtime nor pallets. It would be beneficial but not a requirement from our side if Encointer could join the auditing process of other system chains.
Drawbacks

Unlike the other system chains, development and maintenance of the Encointer Network is mainly financed by the KSM Treasury and possibly the DOT Treasury in the future. Encointer is dedicated to maintaining its network and runtime code for as long as possible, but there is a dependency on funding which is not in the hands of the fellowship. The only risk in the context of funding, however, is that the Encointer runtime will see less frequent updates if there is less funding.

      Testing, Security, and Privacy

      No changes to the existing system are proposed, only changes to how maintenance is organized.

      Performance, Ergonomics, and Compatibility

      No changes

      Prior Art and References

      Existing Encointer runtime repo

      Unresolved Questions

      None identified


      More info on Encointer: encointer.org

      (source)


      Authors: Joe Petrowski, Gavin Wood

      Summary

      The Relay Chain contains most of the core logic for the Polkadot network. While this was necessary prior to the launch of parachains and development of XCM, most of this logic can exist in parachains. This is a proposal to migrate several subsystems into system parachains.

      Motivation

      Polkadot's scaling approach allows many distinct state machines (known generally as parachains) to operate with common guarantees about the validity and security of their state transitions. Polkadot provides these common guarantees by executing the state transitions on a strict subset (a backing group) of the validator set. […]

      By minimising state transition logic on the Relay Chain by migrating it into "system chains" -- a set of parachains that, with the Relay Chain, make up the Polkadot protocol -- the Polkadot Ubiquitous Computer can maximise its primary offering: secure blockspace.

      Stakeholders

      • Parachains that interact with affected logic on the Relay Chain;
      • Core protocol and XCM format developers;
      • Tooling, block explorer, and UI developers.
      Explanation

      The following pallets and subsystems are good candidates to migrate from the Relay Chain:

      • Identity

        Kusama

        Staking is the subsystem most constrained by PoV limits. Ensuring that elections, payouts, session changes, offences/slashes, etc. work in a parachain on Kusama -- with its larger validator set -- will give confidence to the chain's robustness on Polkadot.

        Drawbacks

        These subsystems will have fewer resources available in cores than they had on the Relay Chain. Staking in particular may require some optimizations to deal with these constraints.

        Testing, Security, and Privacy

        Standard audit/review requirements apply. More powerful multi-chain integration test tools would be useful in development.

        Performance, Ergonomics, and Compatibility

        Describe the impact of the proposal on the exposed functionality of Polkadot.

        Performance

        This is an optimization. The removal of public/user transactions on the Relay Chain ensures that its primary resources are allocated to system performance.

        Ergonomics

        This proposal alters very little for coretime users (e.g. parachain developers). Application developers will need to interact with multiple chains, making ergonomic light client tools particularly important for application development.

        Existing parachains that interact with these subsystems will need to configure their runtimes to recognize the new locations in the network.

        Compatibility

        Implementing this proposal will require some changes to pallet APIs and/or a pub-sub protocol. Application developers will need to interact with multiple chains in the network.

        Prior Art and References

        Unresolved Questions

        There remain some implementation questions, like how to use balances for both Staking and Governance. See, for example, Moving Staking off the Relay Chain.


        Ideally the Relay Chain becomes transactionless, such that not even balances are represented there. With Staking and Governance off the Relay Chain, this is not an unreasonable next step.

        With Identity on Polkadot, Kusama may opt to drop its People Chain.

        Summary

        The Fellowship Manifesto states that members should receive a monthly allowance on par with gross income in OECD countries. This RFC proposes concrete amounts.

        Motivation

        One motivation for the Technical Fellowship is to provide an incentive mechanism that can induct and retain technical talent for the continued progress of the network.

        In order for members to uphold their commitment to the network, they should receive support to […]

        Note: Goals of the Fellowship, expectations for each Dan, and conditions for promotion and demotion are all explained in the Manifesto. This RFC is only to propose concrete values for allowances.

        Stakeholders

        • Fellowship members
        • Polkadot Treasury
        Explanation

        This RFC proposes agreeing on salaries relative to a single level, the III Dan. As such, changes to the amount or asset used would only be made to a single value, and all the others would adjust relative to it. A III Dan is someone whose contributions match the expectations of a full-time individual contributor. […]
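        As an illustration of this relative structure, consider the sketch below; the per-Dan ratios and the base value are hypothetical placeholders, not the RFC's actual table.

        // Hypothetical sketch: every salary derives from the single III Dan base
        // value, so a governance change to the base (or to the asset paid)
        // cascades to every rank automatically. Ratios are placeholders.
        fn salaries(iii_dan_base: f64) -> Vec<(&'static str, f64)> {
            let ratios = [
                ("I Dan", 0.125),
                ("II Dan", 0.25),
                ("III Dan", 1.0),
                ("IV Dan", 1.5),
                ("V Dan", 2.0),
                ("VI Dan", 2.5),
            ];
            ratios
                .iter()
                .map(|&(rank, ratio)| (rank, iii_dan_base * ratio))
                .collect()
        }

        fn main() {
            // Updating the one base value rescales the whole table.
            for (rank, amount) in salaries(80_000.0) {
                println!("{rank}: {amount:.0} per year");
            }
        }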

        Projections

        Updates

        Updates to these levels, whether relative ratios, the asset used, or the amount, shall be done via RFC.

        Drawbacks

        By not using DOT for payment, the protocol relies on the stability of other assets and the ability to acquire them. However, the asset of choice can be changed in the future.

        Testing, Security, and Privacy

        N/A.

        Performance, Ergonomics, and Compatibility

        Performance

        N/A

        Ergonomics

        N/A

        Compatibility

        N/A

        Prior Art and References

        Unresolved Questions

        None at present.

        (source)


        Summary

        When two peers connect to each other, they open (amongst other things) a so-called "notifications protocol" substream dedicated to gossiping transactions to each other.

        Each notification on this substream currently consists of a SCALE-encoded Vec<Transaction>, where Transaction is defined in the runtime.

        This RFC proposes to modify the format of the notification to become (Compact(1), Transaction). This maintains backwards compatibility, as this new format decodes as a Vec of length equal to 1.
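        As a quick sanity check of that claim, a minimal sketch using the parity-scale-codec crate is shown below; the three-byte vector standing in for a transaction is purely illustrative.

        // Sketch: prepending Compact(1) to a single SCALE-encoded transaction
        // produces bytes that old receivers still decode as a Vec of length 1.
        use parity_scale_codec::{Compact, Decode, Encode};

        fn main() {
            // Stand-in for an opaque runtime-defined transaction.
            let transaction: Vec<u8> = vec![0xAA, 0xBB, 0xCC];

            // New format on the wire: (Compact(1), Transaction).
            let mut notification = Compact(1u32).encode();
            notification.extend(transaction.encode());

            // An old receiver decodes the very same bytes as Vec<Transaction>.
            let decoded = Vec::<Vec<u8>>::decode(&mut &notification[..]).unwrap();
            assert_eq!(decoded, vec![transaction]);
        }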

        Motivation

        There exist three motivations behind this change:

        • […]
        • It makes the implementation considerably more straightforward by not having to repeat code related to back-pressure. See the explanations below.

        Stakeholders

        Low-level developers.

        Explanation

        To give an example, if you send one notification with three transactions, the bytes that are sent on the wire are:

        concat(
             leb128(total-size-in-bytes-of-the-rest),
             […]
        )

        This is equivalent to forcing the Vec<Transaction> to always have a length of 1, and I expect the Substrate implementation to simply modify the sending side to add a for loop that sends one notification per item in the Vec.

        As explained in the motivation section, this allows extracting scale(transaction) items without having to know how to decode them.

        By "flattening" the two-steps hierarchy, an implementation only needs to back-pressure individual notifications rather than back-pressure notifications and transactions within notifications.

        Drawbacks

        This RFC chooses to maintain backwards compatibility at the cost of introducing a very small wart (the Compact(1)).

        An alternative could be to introduce a new version of the transactions notifications protocol that sends one Transaction per notification, but this is significantly more complicated to implement and can always be done later in case the Compact(1) is bothersome.

        Testing, Security, and Privacy

        Irrelevant.

        Performance, Ergonomics, and Compatibility

        Performance

        Irrelevant.

        Ergonomics

        Irrelevant.

        Compatibility

        The change is backwards compatible if done in two steps: modify the sender to always send one transaction per notification, then, after a while, modify the receiver to enforce the new format.

        Prior Art and References

        Irrelevant.

        Unresolved Questions

        None.


        None. This is a simple isolated change.

        (source)


        Summary

        Update the runtime-host interface to no longer make use of a host-side allocator.

        Motivation

        The heap allocation of the runtime is currently controlled by the host using a memory allocator on the host side.

        The API of many host functions consists of allocating a buffer. For example, when calling ext_hashing_twox_256_version_1, the host allocates a 32-byte buffer using the host allocator, and returns a pointer to this buffer to the runtime. The runtime later has to call ext_allocator_free_version_1 on this pointer in order to free the buffer.

        Even though no benchmark has been done, it is pretty obvious that this design is very inefficient. To continue with the example of ext_hashing_twox_256_version_1, it would be more efficient to instead write the output hash to a buffer that was allocated by the runtime on its stack and passed by pointer to the function. Allocating a buffer on the stack in the worst-case scenario simply consists of decreasing a number, and in the best-case scenario is free. Doing so would save many Wasm memory reads and writes by the allocator, and would save a function call to ext_allocator_free_version_1.
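        To make the contrast concrete, here is a hedged sketch of the two calling conventions; the version-2 name, its signature, and the parameter packing are hypothetical stand-ins, not the RFC's concrete host-function definitions.

        // Hypothetical sketch; names and signatures are illustrative only.

        // Old style: the host allocates the 32-byte output buffer and returns a
        // pointer into linear memory; the runtime must later free it via
        // ext_allocator_free_version_1.

        // New style: the runtime hands the host a pointer to a buffer it owns.
        extern "C" {
            fn ext_hashing_twox_256_version_2(data_ptr: i32, data_len: i32, out_ptr: i32);
        }

        fn twox_256(input: &[u8]) -> [u8; 32] {
            // Stack allocation: in the worst case just a stack-pointer adjustment.
            let mut out = [0u8; 32];
            unsafe {
                ext_hashing_twox_256_version_2(
                    input.as_ptr() as i32,
                    input.len() as i32,
                    out.as_mut_ptr() as i32,
                );
            }
            out // no ext_allocator_free_version_1 call needed
        }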

        Furthermore, the existence of the host-side allocator has become questionable over time. It is implemented in a very naive way, and for determinism and backwards compatibility reasons it needs to be implemented exactly identically in every client implementation. Runtimes make substantial use of heap memory allocations, and each allocation needs to go twice through the runtime <-> host boundary (once for allocating and once for freeing). Moving the allocator to the runtime side, while it would increase the size of the runtime, would be a good idea. But before the host-side allocator can be deprecated, all the host functions that make use of it need to be updated to not use it.

        Stakeholders

        No attempt was made at convincing stakeholders.

        Explanation

        New host functions

        This section contains a list of new host functions to introduce.

        (func $ext_storage_read_version_2
            […]

        Other changes
      • ext_allocator_free_version_1
      • ext_offchain_network_state_version_1
      Drawbacks

      This RFC might be difficult to implement in Substrate due to the internal code design. It is not clear to the author of this RFC how difficult it would be.

      Prior Art

      The API of these new functions was heavily inspired by the APIs used in the C programming language.

      Unresolved Questions

      The changes in this RFC would need to be benchmarked. This involves implementing the RFC and measuring the speed difference.

      It is expected that most host functions will be faster than, or equal in speed to, their deprecated counterparts, with the following exceptions:


        License: MIT

        Summary

        This RFC proposes a dynamic pricing model for the sale of Bulk Coretime on the Polkadot UC. The proposed model updates the regular price of cores for each sale period, taking into account the number of cores sold in the previous sale, as well as a limit of cores and a target number of cores sold. It ensures a minimum price and limits price growth to a maximum price increase factor, while also giving governance control over the steepness of the price change curve. It allows governance to address challenges arising from changing market conditions and should offer predictable and controlled price adjustments.

        Accompanying visualizations are provided at [1].

        Motivation

        RFC-1 proposes periodic Bulk Coretime Sales as a mechanism to sell continuous regions of blockspace (suggested to be 4 weeks in length). A number of Blockspace Regions (compare RFC-1 & RFC-3) are provided for sale to the Broker-Chain each period and shall be sold in a way that provides value-capture for the Polkadot network. The exact pricing mechanism is out of scope for RFC-1 and shall be provided by this RFC.

        A dynamic pricing model is needed. A limited number of Regions are offered for sale each period. The model needs to find the price for a period based on supply and demand of the previous period.

        The model shall give Coretime consumers predictability about upcoming price developments and confidence that Polkadot governance can adapt the pricing model to changing market conditions.


      • The solution SHOULD provide a maximum factor of price increase should the limit of Regions sold per period be reached.
      • The solution SHOULD allow governance to control the steepness of the price function.
      • -

        Stakeholders

        +

        Stakeholders

        The primary stakeholders of this RFC are:

        Explanation

        Overview

        The dynamic pricing model sets the new price based on supply and demand in the previous period. The model is a function of the number of Regions sold, piecewise-defined by two power functions.
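    To illustrate the general shape, here is a hedged sketch of such a piecewise update rule; the parameter names, exponents, and clamping below are assumptions for illustration, not the RFC's concrete formula.

    // Illustrative sketch of a piecewise price update of the general shape
    // described above; the exact exponents and parameter handling are
    // assumptions, not the RFC's concrete formula.
    fn next_price(
        old_price: f64,
        sold: u32,       // Regions sold in the previous period
        target: u32,     // target number of Regions sold
        limit: u32,      // maximum Regions offered per period
        min_price: f64,  // price floor
        max_factor: f64, // price multiplier reached when sold == limit
        steepness: f64,  // governance-controlled curve steepness
    ) -> f64 {
        let factor = if sold <= target {
            // Undersold: price decays towards the floor as a power function.
            let shortfall = (target - sold) as f64 / target as f64;
            1.0 - shortfall.powf(steepness)
        } else {
            // Oversold: price grows, capped at max_factor at the limit.
            let excess = (sold - target) as f64 / (limit - target) as f64;
            1.0 + (max_factor - 1.0) * excess.powf(steepness)
        };
        (old_price * factor).max(min_price)
    }

    fn main() {
        // Demand exactly at target keeps the price flat; selling out raises it.
        println!("{}", next_price(100.0, 40, 40, 50, 10.0, 3.0, 2.0)); // 100
        println!("{}", next_price(100.0, 50, 40, 50, 10.0, 3.0, 2.0)); // 300
    }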

    Drawbacks

    None at present.

    Prior Art and References

    This pricing model is based on the requirements from the basic linear solution proposed in RFC-1, which is a simple dynamic pricing model intended only as a proof of concept. The present model adds additional considerations to make it more adaptable under real conditions.

    Future Possibilities

    This RFC, if accepted, shall be implemented in conjunction with RFC-1.

    Summary

    Improve the networking messages that query storage items from the remote, in order to reduce the bandwidth usage and number of round trips of light clients.

    Motivation

    Clients on the Polkadot peer-to-peer network can be divided into two categories: full nodes and light clients. So-called full nodes are nodes that store the content of the chain locally on their disk, while light clients are nodes that don't. In order to access, for example, the balance of an account, a full node can do a disk read, while a light client needs to send a network message to a full node and wait for it to reply with the desired value. This reply is in the form of a Merkle proof, which makes it possible for the light client to verify the exactness of the value.

    Unfortunately, this network protocol suffers from several issues:


    General flow
  • The metadata is converted into lean modular form (vector of chunks)
  • A Merkle tree is constructed from the metadata chunks
  • The root of the tree is merged with the hash of the MetadataDescriptor
  • Resulting value is a constant to be included in additionalSigned to prove that the metadata seen by cold device is genuine
  • Metadata modularization

    The structure of types in the shortened metadata exactly matches the structure of types in scale-info at the MetadataV14 state, but the doc field is always empty:

    struct Type {
       path: Path, // vector of strings
       type_params: Vec<TypeParams>,
       type_def: TypeDef,
       docs: Vec<String>, // always empty in shortened metadata
    }

  • The right node and then the left node are popped from the front of the nodes queue and merged; the result is sent to the end of the queue.
  • Step (4) is repeated until only one node remains; this is the tree root.
  • +
    queue = empty_queue
    +
    +while (leaves.length>1) {
    +  right = leaves.pop_last
    +  left = leaves.pop_last
    +  queue.push_back(merge(left, right))
    +}
    +
    +if leaves.length == 1 {
    +  queue.push_front(leaves.last)
    +}
    +
    +while queue.len() > 1 {
    +  right = queue.pop_front
    +  left = queue.pop_front
    +  queue.push_back(merge(left, right))
    +}
    +
    +return queue.pop
    +
    Resulting tree for metadata consisting of 5 nodes (numbered from 0 to 4):
     
    [tree diagram elided]

    Digest

  • The root hash of this tree (left) is merged with the blake3 hash of the metadata descriptor (right); this is the metadata digest.
  • The version number and the corresponding metadata digest MUST be included in the Signed Extensions as specified in the Chain Verification section below.
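    A compact Rust rendering of the tree folding and the digest step may help; this is a sketch under stated assumptions: merge is taken to be blake3 over the concatenation of the two child hashes (mirroring the descriptor's blake3 hash above), and the names root and digest are illustrative, not normative.

    // Sketch of the queue-based tree folding, plus the final digest step
    // (tree root merged with the blake3 hash of the metadata descriptor).
    // Assumes merge = blake3 over the concatenated child hashes; the exact
    // node-hashing rule is fixed by the specification.
    use std::collections::VecDeque;

    type Hash = [u8; 32];

    fn merge(left: &Hash, right: &Hash) -> Hash {
        let mut hasher = blake3::Hasher::new();
        hasher.update(left);
        hasher.update(right);
        *hasher.finalize().as_bytes()
    }

    fn root(mut leaves: Vec<Hash>) -> Option<Hash> {
        let mut queue: VecDeque<Hash> = VecDeque::new();
        // Pair leaves from the right; merged nodes go to the back of the queue.
        while leaves.len() > 1 {
            let right = leaves.pop()?;
            let left = leaves.pop()?;
            queue.push_back(merge(&left, &right));
        }
        // A single unpaired leaf jumps to the front of the queue.
        if let Some(odd) = leaves.pop() {
            queue.push_front(odd);
        }
        // Fold from the front until only the root remains.
        while queue.len() > 1 {
            let right = queue.pop_front()?;
            let left = queue.pop_front()?;
            queue.push_back(merge(&left, &right));
        }
        queue.pop_front()
    }

    fn digest(tree_root: &Hash, descriptor: &[u8]) -> Hash {
        merge(tree_root, blake3::hash(descriptor).as_bytes())
    }

    For the five-leaf example above, root first merges leaves (3,4) and (1,2), the unpaired leaf 0 jumps to the front of the queue, and the queue is then folded from the front until a single root remains.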

    Shortening

    For shortening, an attempt to decode the transaction completely using the provided metadata is performed, with the same algorithm that would be used on the cold side. All chunks are associated with their leaf indices. An example of this protocol is proposed in metadata-shortener, which is based on the substrate-parser decoding protocol; any decoding protocol could be used here as long as the cold signer's design finds it appropriate for the given security model.

    Transmission

    Shortened metadata chunks MAY be transmitted to the cold device together with the Merkle proof, either in their entirety or in parts, depending on the memory capabilities of the cold device and its ability to reconstruct a larger fraction of the tree. This document does not specify the manner of transmission. The order of metadata chunks MAY be arbitrary; the only requirement is that the indices of the leaf nodes in the Merkle tree corresponding to the chunks MUST be communicated. The community MAY handle proof format standardization independently.

    Offline verification

    The transmitted metadata chunks are hashed together with the proof lemmas to obtain a root that MAY be transmitted along with the rest of the payload. Verification that the root transmitted with the message matches the calculated root is optional; the transmitted root SHOULD NOT be used in the signature; the calculated root MUST be used. However, there is no mechanism to enforce this, so it should be checked during the audit of the cold signer's code.

    Chain verification

    The root of the metadata computed by the cold device MAY be included in the Signed Extensions; if it is included, the transaction will pass as valid iff the hash of the metadata as seen by the cold storage device is identical to the consensus hash of the metadata, ensuring a fair signing protocol.

    The Signed Extension representing the metadata digest is a single byte representing both digest value inclusion and the shortening protocol version; this MUST be included in the Signed Extensions set. Depending on its value, a digest value is included as additionalSigned in the signature computation according to the following specification:

    signed extension value | digest value | comment
    0x00 | (none) | digest is not included