
Dependency (Substrate/Polkadot/Frontier/Cumulus/...) update to v0.9.40 #2202

Merged: 67 commits (May 25, 2023)
Commits
6d394d0
update to v0.9.40
nbaztec Apr 1, 2023
d26b671
update benchmarking weight template
nbaztec Apr 3, 2023
2821987
fix build
nbaztec Apr 3, 2023
1ed2a6d
make test compile
nbaztec Apr 3, 2023
fb2df2a
Includes page heap fixes
crystalin Apr 5, 2023
c9c9faf
compile runtime-benchmarks
nbaztec Apr 18, 2023
02b81f3
Merge branch 'upgrade-v0.9.40' of github.com:PureStake/moonbeam into …
nbaztec Apr 18, 2023
2882824
make warp sync work
nbaztec Apr 19, 2023
7fd8681
merge master
nbaztec Apr 19, 2023
a723757
toml sort
nbaztec Apr 19, 2023
471a4d2
fix editorconfig
nbaztec Apr 19, 2023
e10550f
use new substrate version
nbaztec Apr 19, 2023
0fb006c
fix warp sync
nbaztec Apr 21, 2023
7d58b9d
merge conflicts
nbaztec Apr 21, 2023
d76bc6b
sort
nbaztec Apr 21, 2023
a4b0848
fix --dev
nbaztec Apr 25, 2023
0e45f47
remove duplicate SetMembersOrigin
nbaztec Apr 25, 2023
56ae6f8
toml-sort
nbaztec Apr 25, 2023
0ccee4a
remove kitchensink-runtime
nbaztec Apr 27, 2023
7da9ed8
fix builkd
nbaztec Apr 27, 2023
a1dead6
Merge branch 'master' into upgrade-v0.9.40
nbaztec Apr 28, 2023
eea7baa
use new weights
nbaztec Apr 28, 2023
aa5e0cc
Merge branch 'master' into upgrade-v0.9.40
nbaztec May 2, 2023
8091cee
set manual weights for xcm fungible
librelois May 2, 2023
817adaa
use Weight::from_parts
nbaztec May 3, 2023
f329f16
Merge branch 'upgrade-v0.9.40' of github.com:PureStake/moonbeam into …
nbaztec May 3, 2023
d41c37a
use 0 pov_size for ref_time weight
nbaztec May 3, 2023
6dbca4e
update nimbus
nbaztec May 3, 2023
d8136c1
exclude generated weight files from editorconfig
nbaztec May 4, 2023
5e801c5
fmt
nbaztec May 4, 2023
466953f
fmt
nbaztec May 4, 2023
57387ca
fix rust tests
nbaztec May 4, 2023
c3917d3
Merge branch 'master' into upgrade-v0.9.40
nbaztec May 4, 2023
2bfe372
fix import
nbaztec May 4, 2023
62df5a7
fix tests
nbaztec May 4, 2023
1527111
use Weight part pov_size to 0
nbaztec May 4, 2023
18aa311
make dalek test work
nbaztec May 5, 2023
f5f5e76
fix transfer tests
nbaztec May 8, 2023
4f94a30
merge master
nbaztec May 11, 2023
f3ab63d
use BoundedVec for auto compound delegations
nbaztec May 12, 2023
e42730c
fix modexp test
nbaztec May 12, 2023
9927390
fix modexp test
nbaztec May 12, 2023
fec3ec1
fix tests
nbaztec May 15, 2023
bcb720c
fix weight tests
nbaztec May 15, 2023
648606b
Merge branch 'upgrade-v0.9.40' of github.com:PureStake/moonbeam into …
nbaztec May 15, 2023
e7dd0a1
fix staking tests via chunking
nbaztec May 15, 2023
fde72fd
fix modexp test
nbaztec May 15, 2023
19b4228
fix lint and test
nbaztec May 15, 2023
8dff3c7
fix rust weight tests
nbaztec May 15, 2023
d243bb5
fix partial ts tests
nbaztec May 15, 2023
cd87118
Merge branch 'master' into upgrade-v0.9.40
crystalin May 15, 2023
b3049e7
temp fix for xcm v2
crystalin May 15, 2023
df27699
Fixes weight until benchmarking is fixed
crystalin May 16, 2023
7ef16e7
set manual weight, fix ts tests
nbaztec May 16, 2023
6669928
Adds temp hack for xcm tests
crystalin May 22, 2023
0d1f026
Use RefundSurplus as the no-op for saturating the queue, which does n…
girazoki May 23, 2023
882b85e
Update evm to 0.39
tgmichel May 23, 2023
389f757
Revert "Update evm to 0.39"
tgmichel May 23, 2023
d831644
upgrade polkadot for better support of xcm v2
librelois May 23, 2023
63c005a
prettier
librelois May 23, 2023
ea906b9
prettier
librelois May 23, 2023
f7d6b37
Revert temp fix for XCM weight
crystalin May 23, 2023
f54cc1b
upgrade polkadot fork
librelois May 23, 2023
6c044fc
Fixing hrmp-mock tests
girazoki May 24, 2023
1608234
clean up
girazoki May 24, 2023
a874fc4
Merge branch 'master' into upgrade-v0.9.40
librelois May 25, 2023
0ca0b68
prettier
librelois May 25, 2023
2,878 changes: 1,808 additions & 1,070 deletions Cargo.lock

Large diffs are not rendered by default.

326 changes: 170 additions & 156 deletions Cargo.toml

Large diffs are not rendered by default.

70 changes: 46 additions & 24 deletions benchmarking/frame-weight-template.hbs
@@ -14,30 +14,30 @@
// You should have received a copy of the GNU General Public License
// along with Moonbeam. If not, see <http://www.gnu.org/licenses/>.

{{header}}
//! Autogenerated weights for {{pallet}}
//!
//! THIS FILE WAS AUTO-GENERATED USING THE SUBSTRATE BENCHMARK CLI VERSION {{version}}
//! DATE: {{date}}, STEPS: `{{cmd.steps}}`, REPEAT: {{cmd.repeat}}, LOW RANGE: `{{cmd.lowest_range_values}}`, HIGH RANGE: `{{cmd.highest_range_values}}`
//! DATE: {{date}}, STEPS: `{{cmd.steps}}`, REPEAT: `{{cmd.repeat}}`, LOW RANGE: `{{cmd.lowest_range_values}}`, HIGH RANGE: `{{cmd.highest_range_values}}`
//! WORST CASE MAP SIZE: `{{cmd.worst_case_map_values}}`
//! HOSTNAME: `{{hostname}}`, CPU: `{{cpuname}}`
//! EXECUTION: {{cmd.execution}}, WASM-EXECUTION: {{cmd.wasm_execution}}, CHAIN: {{cmd.chain}}, DB CACHE: {{cmd.db_cache}}

// Executed Command:
{{#each args as |arg|~}}
{{#each args as |arg|}}
// {{arg}}
{{/each}}

#![cfg_attr(rustfmt, rustfmt_skip)]
#![allow(unused_parens)]
#![allow(unused_imports)]

use frame_support::{
traits::Get,
weights::{constants::RocksDbWeight, Weight},
};
use frame_support::{traits::Get, weights::{Weight, constants::RocksDbWeight}};
use sp_std::marker::PhantomData;

/// Weight functions needed for {{pallet}}.
pub trait WeightInfo {
{{#each benchmarks as |benchmark|}}
#[rustfmt::skip]
fn {{benchmark.name~}}
(
{{~#each benchmark.components as |c| ~}}
@@ -48,33 +48,46 @@ pub trait WeightInfo {

/// Weights for {{pallet}} using the Substrate node and recommended hardware.
pub struct SubstrateWeight<T>(PhantomData<T>);
{{#if (eq pallet "frame_system")}}
impl<T: crate::Config> WeightInfo for SubstrateWeight<T> {
{{else}}
impl<T: frame_system::Config> WeightInfo for SubstrateWeight<T> {
{{/if}}
{{#each benchmarks as |benchmark|}}
{{#each benchmark.comments as |comment|}}
// {{comment}}
/// {{comment}}
{{/each}}
{{#each benchmark.component_ranges as |range|}}
/// The range of component `{{range.name}}` is `[{{range.min}}, {{range.max}}]`.
{{/each}}
#[rustfmt::skip]
fn {{benchmark.name~}}
(
{{~#each benchmark.components as |c| ~}}
{{~#if (not c.is_used)}}_{{/if}}{{c.name}}: u32, {{/each~}}
) -> Weight {
Weight::from_ref_time({{underscore benchmark.base_weight}} as u64)
// Proof Size summary in bytes:
// Measured: `{{benchmark.base_recorded_proof_size}}{{#each benchmark.component_recorded_proof_size as |cp|}} + {{cp.name}} * ({{cp.slope}} ±{{underscore cp.error}}){{/each}}`
// Estimated: `{{benchmark.base_calculated_proof_size}}{{#each benchmark.component_calculated_proof_size as |cp|}} + {{cp.name}} * ({{cp.slope}} ±{{underscore cp.error}}){{/each}}`
// Minimum execution time: {{underscore benchmark.min_execution_time}}_000 picoseconds.
Weight::from_parts({{underscore benchmark.base_weight}}, {{benchmark.base_calculated_proof_size}})
{{#each benchmark.component_weight as |cw|}}
// Standard Error: {{underscore cw.error}}
.saturating_add(Weight::from_ref_time({{underscore cw.slope}} as u64).saturating_mul({{cw.name}} as u64))
.saturating_add(Weight::from_parts({{underscore cw.slope}}, 0).saturating_mul({{cw.name}}.into()))
{{/each}}
{{#if (ne benchmark.base_reads "0")}}
.saturating_add(T::DbWeight::get().reads({{benchmark.base_reads}} as u64))
.saturating_add(T::DbWeight::get().reads({{benchmark.base_reads}}_u64))
{{/if}}
{{#each benchmark.component_reads as |cr|}}
.saturating_add(T::DbWeight::get().reads(({{cr.slope}} as u64).saturating_mul({{cr.name}} as u64)))
.saturating_add(T::DbWeight::get().reads(({{cr.slope}}_u64).saturating_mul({{cr.name}}.into())))
{{/each}}
{{#if (ne benchmark.base_writes "0")}}
.saturating_add(T::DbWeight::get().writes({{benchmark.base_writes}} as u64))
.saturating_add(T::DbWeight::get().writes({{benchmark.base_writes}}_u64))
{{/if}}
{{#each benchmark.component_writes as |cw|}}
.saturating_add(T::DbWeight::get().writes(({{cw.slope}} as u64).saturating_mul({{cw.name}} as u64)))
.saturating_add(T::DbWeight::get().writes(({{cw.slope}}_u64).saturating_mul({{cw.name}}.into())))
{{/each}}
{{#each benchmark.component_calculated_proof_size as |cp|}}
.saturating_add(Weight::from_parts(0, {{cp.slope}}).saturating_mul({{cp.name}}.into()))
{{/each}}
}
{{/each}}
@@ -84,31 +97,40 @@ impl<T: frame_system::Config> WeightInfo for SubstrateWeight<T> {
impl WeightInfo for () {
{{#each benchmarks as |benchmark|}}
{{#each benchmark.comments as |comment|}}
// {{comment}}
/// {{comment}}
{{/each}}
{{#each benchmark.component_ranges as |range|}}
/// The range of component `{{range.name}}` is `[{{range.min}}, {{range.max}}]`.
{{/each}}
#[rustfmt::skip]
fn {{benchmark.name~}}
(
{{~#each benchmark.components as |c| ~}}
{{~#if (not c.is_used)}}_{{/if}}{{c.name}}: u32, {{/each~}}
) -> Weight {
Weight::from_ref_time({{underscore benchmark.base_weight}} as u64)
// Proof Size summary in bytes:
// Measured: `{{benchmark.base_recorded_proof_size}}{{#each benchmark.component_recorded_proof_size as |cp|}} + {{cp.name}} * ({{cp.slope}} ±{{underscore cp.error}}){{/each}}`
// Estimated: `{{benchmark.base_calculated_proof_size}}{{#each benchmark.component_calculated_proof_size as |cp|}} + {{cp.name}} * ({{cp.slope}} ±{{underscore cp.error}}){{/each}}`
// Minimum execution time: {{underscore benchmark.min_execution_time}}_000 picoseconds.
Weight::from_parts({{underscore benchmark.base_weight}}, {{benchmark.base_calculated_proof_size}})
{{#each benchmark.component_weight as |cw|}}
// Standard Error: {{underscore cw.error}}
.saturating_add(Weight::from_ref_time({{underscore cw.slope}} as u64).saturating_mul({{cw.name}} as u64))
.saturating_add(Weight::from_parts({{underscore cw.slope}}, 0).saturating_mul({{cw.name}}.into()))
{{/each}}
{{#if (ne benchmark.base_reads "0")}}
.saturating_add(RocksDbWeight::get().reads({{benchmark.base_reads}} as u64))
.saturating_add(RocksDbWeight::get().reads({{benchmark.base_reads}}_u64))
{{/if}}
{{#each benchmark.component_reads as |cr|}}
.saturating_add(RocksDbWeight::get().reads(({{cr.slope}} as u64).saturating_mul({{cr.name}} as u64)))
.saturating_add(RocksDbWeight::get().reads(({{cr.slope}}_u64).saturating_mul({{cr.name}}.into())))
{{/each}}
{{#if (ne benchmark.base_writes "0")}}
.saturating_add(RocksDbWeight::get().writes({{benchmark.base_writes}} as u64))
.saturating_add(RocksDbWeight::get().writes({{benchmark.base_writes}}_u64))
{{/if}}
{{#each benchmark.component_writes as |cw|}}
.saturating_add(RocksDbWeight::get().writes(({{cw.slope}} as u64).saturating_mul({{cw.name}} as u64)))
.saturating_add(RocksDbWeight::get().writes(({{cw.slope}}_u64).saturating_mul({{cw.name}}.into())))
{{/each}}
{{#each benchmark.component_calculated_proof_size as |cp|}}
.saturating_add(Weight::from_parts(0, {{cp.slope}}).saturating_mul({{cp.name}}.into()))
{{/each}}
}
{{/each}}
}
}
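The template change above is mechanical but worth spelling out: `Weight::from_ref_time(x)` became `Weight::from_parts(ref_time, proof_size)`, because weights now track a second PoV-size dimension alongside execution time. A minimal std-only sketch of the arithmetic (the real type lives in `sp_weights`/`frame_support`; this mock and its numbers are illustrative assumptions, not the actual API):

```rust
// Minimal mock of the two-dimensional Weight used by the new template.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub struct Weight {
    pub ref_time: u64,   // computational time, picoseconds
    pub proof_size: u64, // PoV (proof-of-validity) size, bytes
}

impl Weight {
    pub const fn from_parts(ref_time: u64, proof_size: u64) -> Self {
        Self { ref_time, proof_size }
    }

    pub fn saturating_add(self, other: Self) -> Self {
        Self {
            ref_time: self.ref_time.saturating_add(other.ref_time),
            proof_size: self.proof_size.saturating_add(other.proof_size),
        }
    }

    pub fn saturating_mul(self, n: u64) -> Self {
        Self {
            ref_time: self.ref_time.saturating_mul(n),
            proof_size: self.proof_size.saturating_mul(n),
        }
    }
}

// Shape of a generated weight function after the template change: the base
// weight carries a measured proof size, and each component adds its ref-time
// slope and proof-size slope as separate terms (numbers are made up).
pub fn example_benchmark_weight(c: u64) -> Weight {
    Weight::from_parts(25_000_000, 3_541)
        .saturating_add(Weight::from_parts(1_000_000, 0).saturating_mul(c))
        .saturating_add(Weight::from_parts(0, 2_250).saturating_mul(c))
}
```

This mirrors why the template emits the proof-size components as a separate `{{#each benchmark.component_calculated_proof_size}}` loop: each dimension saturates independently.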
28 changes: 13 additions & 15 deletions client/rpc/debug/src/lib.rs
@@ -13,7 +13,7 @@

// You should have received a copy of the GNU General Public License
// along with Moonbeam. If not, see <http://www.gnu.org/licenses/>.
use futures::{SinkExt, StreamExt};
use futures::StreamExt;
use jsonrpsee::core::{async_trait, RpcResult};
pub use moonbeam_rpc_core_debug::{DebugServer, TraceParams};

@@ -71,13 +71,12 @@ impl DebugServer for Debug {
transaction_hash: H256,
params: Option<TraceParams>,
) -> RpcResult<single::TransactionTrace> {
let mut requester = self.requester.clone();
let requester = self.requester.clone();

let (tx, rx) = oneshot::channel();
// Send a message from the rpc handler to the service level task.
requester
.send(((RequesterInput::Transaction(transaction_hash), params), tx))
.await
.unbounded_send(((RequesterInput::Transaction(transaction_hash), params), tx))
.map_err(|err| {
internal_err(format!(
"failed to send request to debug service : {:?}",
@@ -99,13 +98,12 @@
id: RequestBlockId,
params: Option<TraceParams>,
) -> RpcResult<Vec<single::TransactionTrace>> {
let mut requester = self.requester.clone();
let requester = self.requester.clone();

let (tx, rx) = oneshot::channel();
// Send a message from the rpc handler to the service level task.
requester
.send(((RequesterInput::Block(id), params), tx))
.await
.unbounded_send(((RequesterInput::Block(id), params), tx))
.map_err(|err| {
internal_err(format!(
"failed to send request to debug service : {:?}",
@@ -329,7 +327,7 @@ where
};

// Get parent blockid.
let parent_block_id = BlockId::Hash(*header.parent_hash());
let parent_block_hash = *header.parent_hash();

let schema = fc_storage::onchain_storage_schema::<B, C, BE>(client.as_ref(), hash);

@@ -363,11 +361,11 @@

// Trace the block.
let f = || -> RpcResult<_> {
api.initialize_block(&parent_block_id, &header)
api.initialize_block(parent_block_hash, &header)
.map_err(|e| internal_err(format!("Runtime api access error: {:?}", e)))?;

let _result = api
.trace_block(&parent_block_id, exts, eth_tx_hashes)
.trace_block(parent_block_hash, exts, eth_tx_hashes)
.map_err(|e| {
internal_err(format!(
"Blockchain error when replaying block {} : {:?}",
@@ -460,7 +458,7 @@ where
_ => return Err(internal_err("Block header not found")),
};
// Get parent blockid.
let parent_block_id = BlockId::Hash(*header.parent_hash());
let parent_block_hash = *header.parent_hash();

// Get block extrinsics.
let exts = blockchain
@@ -470,7 +468,7 @@

// Get DebugRuntimeApi version
let trace_api_version = if let Ok(Some(api_version)) =
api.api_version::<dyn DebugRuntimeApi<B>>(&parent_block_id)
api.api_version::<dyn DebugRuntimeApi<B>>(parent_block_hash)
{
api_version
} else {
@@ -499,12 +497,12 @@ where
let transactions = block.transactions;
if let Some(transaction) = transactions.get(index) {
let f = || -> RpcResult<_> {
api.initialize_block(&parent_block_id, &header)
api.initialize_block(parent_block_hash, &header)
.map_err(|e| internal_err(format!("Runtime api access error: {:?}", e)))?;

if trace_api_version >= 4 {
let _result = api
.trace_transaction(&parent_block_id, exts, &transaction)
.trace_transaction(parent_block_hash, exts, &transaction)
.map_err(|e| {
internal_err(format!(
"Runtime api access error (version {:?}): {:?}",
@@ -518,7 +516,7 @@
ethereum::TransactionV2::Legacy(tx) =>
{
#[allow(deprecated)]
api.trace_transaction_before_version_4(&parent_block_id, exts, &tx)
api.trace_transaction_before_version_4(parent_block_hash, exts, &tx)
.map_err(|e| {
internal_err(format!(
"Runtime api access error (legacy): {:?}",
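The recurring channel change in this file — `requester.send(..).await` on a `mut` clone replaced by `unbounded_send(..)` — works because `sc_utils::mpsc::TracingUnboundedSender` is unbounded: sending never blocks, so no `async`/`mut` is needed and the only failure mode is a disconnected receiver. A std-only analogy using `std::sync::mpsc` (a stand-in chosen for illustration; the real sender is futures-based):

```rust
use std::sync::mpsc;
use std::thread;

// The RPC handler clones a sender and pushes (request, reply-channel) pairs;
// a service-level task drains the queue and replies on the oneshot-style
// channel bundled with each request.
type Request = (u64, mpsc::Sender<String>);

fn spawn_service(rx: mpsc::Receiver<Request>) -> thread::JoinHandle<()> {
    thread::spawn(move || {
        for (id, reply) in rx {
            // Process the request and send the result back to the caller.
            let _ = reply.send(format!("traced #{id}"));
        }
    })
}

fn request_trace(requester: &mpsc::Sender<Request>, id: u64) -> Result<String, String> {
    let (tx, rx) = mpsc::channel();
    // No `mut`, no `.await`: an unbounded send either enqueues immediately
    // or fails because the receiver is gone — mirroring
    // `unbounded_send(..).map_err(..)` in the diff above.
    requester
        .send((id, tx))
        .map_err(|e| format!("failed to send request to debug service : {e:?}"))?;
    rx.recv()
        .map_err(|e| format!("debug service dropped the reply: {e:?}"))
}
```

The trade-off is the usual one for unbounded queues: callers lose backpressure, so the service task must keep up with request volume.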
25 changes: 11 additions & 14 deletions client/rpc/trace/src/lib.rs
@@ -24,7 +24,7 @@
//! - For each traced block an async task responsible to wait for a permit, spawn a blocking
//! task and waiting for the result, then send it to the main `CacheTask`.

use futures::{select, stream::FuturesUnordered, FutureExt, SinkExt, StreamExt};
use futures::{select, stream::FuturesUnordered, FutureExt, StreamExt};
use std::{collections::BTreeMap, future::Future, marker::PhantomData, sync::Arc, time::Duration};
use tokio::{
sync::{mpsc, oneshot, Semaphore},
@@ -34,7 +34,7 @@ use tracing::{instrument, Instrument};

use sc_client_api::backend::{Backend, StateBackend, StorageProvider};
use sc_utils::mpsc::TracingUnboundedSender;
use sp_api::{ApiExt, BlockId, Core, HeaderT, ProvideRuntimeApi};
use sp_api::{ApiExt, Core, HeaderT, ProvideRuntimeApi};
use sp_block_builder::BlockBuilder;
use sp_blockchain::{
Backend as BlockchainBackend, Error as BlockChainError, HeaderBackend, HeaderMetadata,
@@ -282,14 +282,13 @@ impl CacheRequester {
#[instrument(skip(self))]
pub async fn start_batch(&self, blocks: Vec<H256>) -> Result<CacheBatchId, String> {
let (response_tx, response_rx) = oneshot::channel();
let mut sender = self.0.clone();
let sender = self.0.clone();

sender
.send(CacheRequest::StartBatch {
.unbounded_send(CacheRequest::StartBatch {
sender: response_tx,
blocks,
})
.await
.map_err(|e| {
format!(
"Failed to send request to the trace cache task. Error : {:?}",
@@ -312,14 +311,13 @@
#[instrument(skip(self))]
pub async fn get_traces(&self, block: H256) -> TxsTraceRes {
let (response_tx, response_rx) = oneshot::channel();
let mut sender = self.0.clone();
let sender = self.0.clone();

sender
.send(CacheRequest::GetTraces {
.unbounded_send(CacheRequest::GetTraces {
sender: response_tx,
block,
})
.await
.map_err(|e| {
format!(
"Failed to send request to the trace cache task. Error : {:?}",
@@ -342,13 +340,12 @@
/// this batch and still in the waiting pool will be discarded.
#[instrument(skip(self))]
pub async fn stop_batch(&self, batch_id: CacheBatchId) {
let mut sender = self.0.clone();
let sender = self.0.clone();

// Here we don't care if the request has been accepted or refused, the caller can't
// do anything with it.
let _ = sender
.send(CacheRequest::StopBatch { batch_id })
.await
.unbounded_send(CacheRequest::StopBatch { batch_id })
.map_err(|e| {
format!(
"Failed to send request to the trace cache task. Error : {:?}",
@@ -787,7 +784,7 @@ where
.ok_or_else(|| format!("Subtrate block {} don't exist", substrate_hash))?;

let height = *block_header.number();
let substrate_parent_id = BlockId::<B>::Hash(*block_header.parent_hash());
let substrate_parent_hash = *block_header.parent_hash();

let schema =
fc_storage::onchain_storage_schema::<B, C, BE>(client.as_ref(), substrate_hash);
@@ -829,11 +826,11 @@

// Trace the block.
let f = || -> Result<_, String> {
api.initialize_block(&substrate_parent_id, &block_header)
api.initialize_block(substrate_parent_hash, &block_header)
.map_err(|e| format!("Runtime api access error: {:?}", e))?;

let _result = api
.trace_block(&substrate_parent_id, extrinsics, eth_tx_hashes)
.trace_block(substrate_parent_hash, extrinsics, eth_tx_hashes)
.map_err(|e| format!("Blockchain error when replaying block {} : {:?}", height, e))?
.map_err(|e| {
tracing::warn!(
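The other pattern repeated across these RPC files is the `sp_api` signature change: runtime API methods now take the block hash by value instead of a `&BlockId<Block>` wrapper, so call sites like `initialize_block` and `trace_block` drop the `BlockId::Hash(..)` construction. A simplified sketch of the before/after shape (the types here are stand-ins, not the real `sp_api` traits):

```rust
// Stand-in block hash type; the real one is the chain's `Block::Hash`.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
struct Hash([u8; 4]);

// Pre-0.9.40 style: every call site wrapped the hash in this enum.
enum BlockId {
    Hash(Hash),
    Number(u32),
}

struct RuntimeApi;

impl RuntimeApi {
    // Old signature: `fn initialize_block(&self, at: &BlockId, ...)`.
    fn initialize_block_old(&self, at: &BlockId) -> String {
        match at {
            BlockId::Hash(h) => format!("init at {:?}", h),
            BlockId::Number(n) => format!("init at #{n}"),
        }
    }

    // New signature: the hash is passed directly — nothing to wrap or match.
    fn initialize_block(&self, at: Hash) -> String {
        format!("init at {:?}", at)
    }
}
```

Both calls resolve the same block; the new form simply removes a layer of indirection, which is why the diff can rename `parent_block_id`/`substrate_parent_id` to plain `*_hash` variables.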
25 changes: 12 additions & 13 deletions client/rpc/txpool/src/lib.rs
@@ -24,7 +24,7 @@ use sc_transaction_pool::{ChainApi, Pool};
use sc_transaction_pool_api::InPoolTransaction;
use serde::Serialize;
use sha3::{Digest, Keccak256};
use sp_api::{ApiExt, BlockId, ProvideRuntimeApi};
use sp_api::{ApiExt, ProvideRuntimeApi};
use sp_blockchain::{Error as BlockChainError, HeaderBackend, HeaderMetadata};
use sp_runtime::traits::Block as BlockT;
use std::collections::HashMap;
@@ -73,20 +73,19 @@
.collect();

// Use the runtime to match the (here) opaque extrinsics against ethereum transactions.
let best_block: BlockId<B> = BlockId::Hash(self.client.info().best_hash);
let best_block = self.client.info().best_hash;
let api = self.client.runtime_api();
let api_version = if let Ok(Some(api_version)) =
api.api_version::<dyn TxPoolRuntimeApi<B>>(&best_block)
{
api_version
} else {
return Err(internal_err(
"failed to retrieve Runtime Api version".to_string(),
));
};
let api_version =
if let Ok(Some(api_version)) = api.api_version::<dyn TxPoolRuntimeApi<B>>(best_block) {
api_version
} else {
return Err(internal_err(
"failed to retrieve Runtime Api version".to_string(),
));
};
let ethereum_txns: TxPoolResponse = if api_version == 1 {
#[allow(deprecated)]
let res = api.extrinsic_filter_before_version_2(&best_block, txs_ready, txs_future)
let res = api.extrinsic_filter_before_version_2(best_block, txs_ready, txs_future)
.map_err(|err| {
internal_err(format!("fetch runtime extrinsic filter failed: {:?}", err))
})?;
@@ -103,7 +102,7 @@
.collect(),
}
} else {
api.extrinsic_filter(&best_block, txs_ready, txs_future)
api.extrinsic_filter(best_block, txs_ready, txs_future)
.map_err(|err| {
internal_err(format!("fetch runtime extrinsic filter failed: {:?}", err))
})?
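The txpool diff also shows the version-gating idiom used throughout this upgrade: query the runtime's declared API version, then dispatch to either the deprecated or the current method so the node keeps working against runtimes built before the upgrade. A std-only sketch of that control flow (the `Api` type and methods are stand-ins for the real `TxPoolRuntimeApi`):

```rust
// Stand-in for a runtime API that may be an old (v1) or current (v2+) build.
struct Api {
    version: u32,
}

impl Api {
    // Mirrors `api.api_version::<dyn TxPoolRuntimeApi<B>>(best_block)`.
    fn api_version(&self) -> Option<u32> {
        Some(self.version)
    }

    // Legacy entry point kept for runtimes that predate version 2.
    fn extrinsic_filter_before_version_2(&self, txs: &[u8]) -> Vec<u8> {
        txs.to_vec()
    }

    // Current entry point.
    fn extrinsic_filter(&self, txs: &[u8]) -> Vec<u8> {
        txs.to_vec()
    }
}

fn filter(api: &Api, txs: &[u8]) -> Result<Vec<u8>, String> {
    // Fail loudly if the version cannot be read, as the RPC handler does.
    let version = api
        .api_version()
        .ok_or_else(|| "failed to retrieve Runtime Api version".to_string())?;
    if version == 1 {
        // Old runtime: fall back to the deprecated method.
        Ok(api.extrinsic_filter_before_version_2(txs))
    } else {
        Ok(api.extrinsic_filter(txs))
    }
}
```

In the real code the legacy branch also re-maps the old response type into the current `TxPoolResponse`; that conversion is elided here.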