Messages v2 included migration (#1738)
# Goal
The goal of this PR is to propose and implement messages v2 compatible
with PoV

Closes #198 

# Discussion
- Refactored the Messages pallet to minimize its PoV usage
- Added storage migration (single block)

# Migration Details
- Based on data from rococo and main-net, and on our calculations, we don't
need a multi-block migration: only around 15% of the block is being used.
- Was not able to test the upgrade locally due to errors when running relay nodes
- Successfully ran the try-runtime CLI tool against rococo
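To make the single-block approach concrete, the shape of such a migration can be sketched outside the pallet. Everything below is an illustrative stand-in, not the PR's actual code: the `OldMessage` type, the key layout, and the `migrate` helper are assumptions, and plain `BTreeMap`s stand in for pallet storage.

```rust
use std::collections::BTreeMap;

// Hypothetical pre-migration value type: `index` lived inside the stored message.
#[derive(Clone, Debug, PartialEq)]
pub struct OldMessage {
    pub index: u16,
    pub payload: Vec<u8>,
}

/// Translate every old entry, keyed by (block_number, schema_id), into the new
/// layout where the in-block index becomes the third key. A real FRAME migration
/// would do this drain-and-reinsert inside `on_runtime_upgrade` and return the
/// consumed weight; here the translation logic is shown in isolation.
pub fn migrate(
    old: &BTreeMap<(u32, u16), Vec<OldMessage>>,
) -> BTreeMap<(u32, u16, u16), Vec<u8>> {
    let mut new = BTreeMap::new();
    for (&(block, schema), messages) in old {
        for m in messages {
            // One storage item per message keeps each read/proof small (less PoV).
            new.insert((block, schema, m.index), m.payload.clone());
        }
    }
    new
}
```

Because every old entry maps to a bounded number of small new entries, the whole translation fits comfortably in one block at the stated ~15% utilization.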

# Checklist
- [x] Chain spec updated
- [x] Design doc(s) updated
- [x] Tests added
- [x] Benchmarks added
- [x] Weights updated
aramikm authored Nov 10, 2023
1 parent 8f55a11 commit 85b3c2e
Showing 17 changed files with 407 additions and 242 deletions.
1 change: 1 addition & 0 deletions Cargo.lock


47 changes: 47 additions & 0 deletions designdocs/message_storage_v2.md
@@ -0,0 +1,47 @@
# On Chain Message Storage

## Context and Scope
The proposed feature consists of changes that are going to be one (or more) pallet(s) in the runtime of a
Substrate-based blockchain, and it will be used in all environments including production.

## Problem Statement
After the introduction of **Proof of Validity** (**PoV**) in runtime weights, all pallets should be
re-evaluated and refactored if necessary to minimize **PoV** usage. This is to ensure all
important operations are scalable.
This document proposes changes to the **Messages** pallet to optimize its **PoV** size.

## Goals
- Minimize weights, including **execution time** and **PoV** size.

## Proposal
Store messages on chain using **BlockNumber**, **SchemaId**, and **MessageIndex** as primary, secondary,
and tertiary keys of a [StorageNMap](https://paritytech.github.io/substrate/master/frame_support/storage/trait.StorageNMap.html), a data structure provided by Substrate.

### Main Storage types
- **MessagesV2**
- _Type_: `StorageNMap<(BlockNumber, SchemaId, MessageIndex), Message>`
  - _Purpose_: The main structure storing all messages for a given block number, schema id, and index
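For illustration, such a map might be declared in FRAME roughly as follows; the hashers, visibility, and the exact `Message` value type are assumptions, not the pallet's actual declaration:

```rust
#[pallet::storage]
pub(super) type MessagesV2<T: Config> = StorageNMap<
    _,
    (
        storage::Key<Twox64Concat, BlockNumberFor<T>>, // primary: block number
        storage::Key<Twox64Concat, SchemaId>,          // secondary: schema id
        storage::Key<Twox64Concat, MessageIndex>,      // tertiary: in-block index
    ),
    Message<T::AccountId>,
    OptionQuery,
>;
```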


### On Chain Structure
Following is a proposed data structure for storing a Message on chain.
```rust
/// Only `index` is removed from the old structure
pub struct Message<AccountId> {
pub payload: Vec<u8>, // Serialized data in a user-defined schema format
pub provider_key: AccountId, // Public key (account id) of the provider that signed the transaction
pub msa_id: u64, // Message source account id (the original source of the message)
}
```
## Description

The idea is to use the existing **whitelisted** storage with a `BlockMessageIndex` type to store and retrieve
each message's index, so it can serve as the third key of the `StorageNMap`.

We would store each message separately in the `StorageNMap` with the following keys:
- the primary key is `block_number`
- the secondary key is `schema_id`
- the tertiary key is the message's `index` within the current block, starting from 0
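To make the key scheme concrete, here is a small self-contained sketch using plain Rust collections rather than pallet storage. It is simplified to a single block (on chain the whitelisted counter would reset each block), but it shows how one monotonically increasing in-block index serves as the tertiary key while still letting all messages of one schema in one block be read back in order:

```rust
use std::collections::BTreeMap;

/// (block_number, schema_id, in-block message index)
type Key = (u32, u16, u16);

/// Simplified single-block stand-in for the pallet storage; `next_index`
/// mirrors the whitelisted `BlockMessageIndex` value.
pub struct Messages {
    map: BTreeMap<Key, Vec<u8>>,
    next_index: u16,
}

impl Messages {
    pub fn new() -> Self {
        Messages { map: BTreeMap::new(), next_index: 0 }
    }

    /// Store a message under (block, schema, index); the index is shared
    /// across all schemas in the block and starts from 0.
    pub fn store(&mut self, block: u32, schema: u16, payload: Vec<u8>) -> u16 {
        let index = self.next_index;
        self.map.insert((block, schema, index), payload);
        self.next_index += 1;
        index
    }

    /// Read back every message of one schema in one block, in index order,
    /// without touching entries for other schemas or blocks.
    pub fn get(&self, block: u32, schema: u16) -> Vec<&[u8]> {
        self.map
            .range((block, schema, 0)..=(block, schema, u16::MAX))
            .map(|(_, v)| v.as_slice())
            .collect()
    }
}
```

Note that indices for one schema need not be contiguous (other schemas' messages consume indices too); the range scan over the composite key handles that transparently.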


4 changes: 2 additions & 2 deletions e2e/capacity/transactions.test.ts
@@ -218,7 +218,7 @@ describe('Capacity Transactions', function () {

const { eventMap } = await call.payWithCapacity();
assertEvent(eventMap, 'capacity.CapacityWithdrawn');
-assertEvent(eventMap, 'messages.MessagesStored');
+assertEvent(eventMap, 'messages.MessagesInBlock');
});

it('successfully pays with Capacity for eligible transaction - addOnchainMessage', async function () {
@@ -227,7 +227,7 @@ describe('Capacity Transactions', function () {
const call = ExtrinsicHelper.addOnChainMessage(capacityKeys, dummySchemaId, '0xdeadbeef');
const { eventMap } = await call.payWithCapacity();
assertEvent(eventMap, 'capacity.CapacityWithdrawn');
-assertEvent(eventMap, 'messages.MessagesStored');
+assertEvent(eventMap, 'messages.MessagesInBlock');
const get = await ExtrinsicHelper.apiPromise.rpc.messages.getBySchemaId(dummySchemaId, {
from_block: starting_block,
from_index: 0,
8 changes: 2 additions & 6 deletions e2e/messages/addIPFSMessage.test.ts
@@ -107,9 +107,7 @@ describe('Add Offchain Message', function () {
const f = ExtrinsicHelper.addIPFSMessage(keys, schemaId, ipfs_cid_64, ipfs_payload_len);
const { target: event } = await f.fundAndSend(fundingSource);

-assert.notEqual(event, undefined, 'should have returned a MessagesStored event');
-assert.deepEqual(event?.data.schemaId, schemaId, 'schema ids should be equal');
-assert.notEqual(event?.data.blockNumber, undefined, 'should have a block number');
+assert.notEqual(event, undefined, 'should have returned a MessagesInBlock event');
});

it('should successfully retrieve added message and returned CID should have Base32 encoding', async function () {
@@ -130,9 +128,7 @@ describe('Add Offchain Message', function () {
const f = ExtrinsicHelper.addOnChainMessage(keys, dummySchemaId, '0xdeadbeef');
const { target: event } = await f.fundAndSend(fundingSource);

-assert.notEqual(event, undefined, 'should have returned a MessagesStored event');
-assert.deepEqual(event?.data.schemaId, dummySchemaId, 'schema ids should be equal');
-assert.notEqual(event?.data.blockNumber, undefined, 'should have a block number');
+assert.notEqual(event, undefined, 'should have returned a MessagesInBlock event');

const get = await ExtrinsicHelper.apiPromise.rpc.messages.getBySchemaId(dummySchemaId, {
from_block: starting_block,
2 changes: 1 addition & 1 deletion e2e/package-lock.json


4 changes: 2 additions & 2 deletions e2e/scaffolding/extrinsicHelpers.ts
@@ -483,7 +483,7 @@ export class ExtrinsicHelper {
return new Extrinsic(
() => ExtrinsicHelper.api.tx.messages.addIpfsMessage(schemaId, cid, payload_length),
keys,
-ExtrinsicHelper.api.events.messages.MessagesStored
+ExtrinsicHelper.api.events.messages.MessagesInBlock
);
}

@@ -668,7 +668,7 @@ export class ExtrinsicHelper {
return new Extrinsic(
() => ExtrinsicHelper.api.tx.messages.addOnchainMessage(null, schemaId, payload),
keys,
-ExtrinsicHelper.api.events.messages.MessagesStored
+ExtrinsicHelper.api.events.messages.MessagesInBlock
);
}

2 changes: 2 additions & 0 deletions node/cli/Cargo.toml
@@ -29,6 +29,7 @@ cli-opt = { default-features = false, path = "../cli-opt" }

# Substrate
frame-benchmarking-cli = { git = "https://github.com/paritytech/polkadot-sdk", optional = true, branch = "release-polkadot-v1.1.0" }
+frame-benchmarking = { git = "https://github.com/paritytech/polkadot-sdk", optional = true, branch = "release-polkadot-v1.1.0" }
frame-support = { git = "https://github.com/paritytech/polkadot-sdk", default-features = false, branch = "release-polkadot-v1.1.0" }
frame-system = { git = "https://github.com/paritytech/polkadot-sdk", default-features = false, branch = "release-polkadot-v1.1.0" }
pallet-balances = { git = "https://github.com/paritytech/polkadot-sdk", default-features = false, branch = "release-polkadot-v1.1.0" }
@@ -70,6 +71,7 @@ cli = [
"sc-cli",
"sc-service",
"frame-benchmarking-cli",
+"frame-benchmarking",
"try-runtime-cli"
]
default = ["std", "cli"]
32 changes: 17 additions & 15 deletions node/cli/src/command.rs
@@ -371,22 +371,24 @@ pub fn run() -> Result<()> {

#[cfg(feature = "try-runtime")]
Some(Subcommand::TryRuntime(cmd)) => {
-use sc_executor::{sp_wasm_interface::ExtendedHostFunctions, NativeExecutionDispatch};
+use common_runtime::constants::MILLISECS_PER_BLOCK;
+use try_runtime_cli::block_building_info::timestamp_with_aura_info;

let runner = cli.create_runner(cmd)?;
-runner.async_run(|config| {
-// we don't need any of the components of new_partial, just a runtime, or a task
-// manager to do `async_run`.
-let registry = config.prometheus_config.as_ref().map(|cfg| &cfg.registry);
-let task_manager =
-sc_service::TaskManager::new(config.tokio_handle.clone(), registry)
-.map_err(|e| sc_cli::Error::Service(sc_service::Error::Prometheus(e)))?;
-Ok((
-cmd.run::<Block, ExtendedHostFunctions<
-sp_io::SubstrateHostFunctions,
-<ExecutorDispatch as NativeExecutionDispatch>::ExtendHostFunctions,
->>(),
-task_manager,
-))

+type HostFunctions =
+(sp_io::SubstrateHostFunctions, frame_benchmarking::benchmarking::HostFunctions);

+// grab the task manager.
+let registry = &runner.config().prometheus_config.as_ref().map(|cfg| &cfg.registry);
+let task_manager =
+sc_service::TaskManager::new(runner.config().tokio_handle.clone(), *registry)
+.map_err(|e| format!("Error: {:?}", e))?;

+let info_provider = timestamp_with_aura_info(MILLISECS_PER_BLOCK);

+runner.async_run(|_| {
+Ok((cmd.run::<Block, HostFunctions, _>(Some(info_provider)), task_manager))
})
},
Some(Subcommand::ExportRuntimeVersion(cmd)) => {
25 changes: 9 additions & 16 deletions pallets/messages/src/benchmarking.rs
@@ -15,6 +15,7 @@ use sp_runtime::traits::One;

const IPFS_SCHEMA_ID: u16 = 50;
const IPFS_PAYLOAD_LENGTH: u32 = 10;
+const MAX_MESSAGES_IN_BLOCK: u32 = 500;

fn onchain_message<T: Config>(schema_id: SchemaId) -> DispatchResult {
let message_source_id = DelegatorId(1);
@@ -62,8 +63,6 @@ fn create_schema<T: Config>(location: PayloadLocation) -> DispatchResult {
}

benchmarks! {
-// this is temporary to avoid massive PoV sizes which will break the chain until rework on messages
-#[pov_mode = Measured]
add_onchain_message {
let n in 0 .. T::MessagesMaxPayloadSizeBytes::get() - 1;
let message_source_id = DelegatorId(2);
@@ -78,21 +77,17 @@ benchmarks! {
assert_ok!(T::MsaBenchmarkHelper::set_delegation_relationship(ProviderId(1), message_source_id.into(), [schema_id].to_vec()));

let payload = vec![1; n as usize];
-let average_messages_per_block: u32 = T::MaxMessagesPerBlock::get() / 2;
-for j in 1 .. average_messages_per_block {
+for j in 1 .. MAX_MESSAGES_IN_BLOCK {
assert_ok!(onchain_message::<T>(schema_id));
}
}: _ (RawOrigin::Signed(caller), Some(message_source_id.into()), schema_id, payload)
verify {
-assert_eq!(
-MessagesPallet::<T>::get_messages(
-BlockNumberFor::<T>::one(), schema_id).len(),
-average_messages_per_block as usize
+assert_eq!(MessagesPallet::<T>::get_messages_by_schema_and_block(
+schema_id, PayloadLocation::OnChain, BlockNumberFor::<T>::one()).len(),
+MAX_MESSAGES_IN_BLOCK as usize
);
}

-// this is temporary to avoid massive PoV sizes which will break the chain until rework on messages
-#[pov_mode = Measured]
add_ipfs_message {
let caller: T::AccountId = whitelisted_caller();
let cid = "bafkreidgvpkjawlxz6sffxzwgooowe5yt7i6wsyg236mfoks77nywkptdq".as_bytes().to_vec();
@@ -102,16 +97,14 @@ benchmarks! {
assert_ok!(create_schema::<T>(PayloadLocation::IPFS));
}
assert_ok!(T::MsaBenchmarkHelper::add_key(ProviderId(1).into(), caller.clone()));
-let average_messages_per_block: u32 = T::MaxMessagesPerBlock::get() / 2;
-for j in 1 .. average_messages_per_block {
+for j in 1 .. MAX_MESSAGES_IN_BLOCK {
assert_ok!(ipfs_message::<T>(IPFS_SCHEMA_ID));
}
}: _ (RawOrigin::Signed(caller),IPFS_SCHEMA_ID, cid, IPFS_PAYLOAD_LENGTH)
verify {
-assert_eq!(
-MessagesPallet::<T>::get_messages(
-BlockNumberFor::<T>::one(), IPFS_SCHEMA_ID).len(),
-average_messages_per_block as usize
+assert_eq!(MessagesPallet::<T>::get_messages_by_schema_and_block(
+IPFS_SCHEMA_ID, PayloadLocation::IPFS, BlockNumberFor::<T>::one()).len(),
+MAX_MESSAGES_IN_BLOCK as usize
);
}

