---
title: Benchmarking
description: Learn how to use FRAME's benchmarking framework to benchmark your custom pallets and provide correct weights for on-chain computation and execution of extrinsics.
---

## Introduction

Along with the development and testing capabilities that the Polkadot SDK provides, a crucial part of pallet development is **benchmarking**. Benchmarking your pallet ensures that an accurate [weight](../../../polkadot-protocol/glossary.md#weight) is assigned to your pallet's extrinsics. Each block has an allotted amount of time for executing extrinsics. The weight of an extrinsic is determined by the time it takes to execute and the storage reads/writes that it performs. Without the ability to know the computational resources that an extrinsic may take, it may run indefinitely or present an opportunity for a Denial of Service (DoS) attack that may halt block production.

FRAME provides a benchmarking framework ([`frame_benchmarking`](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=_blank}), which is a suite of tools that contain a set of macros (similar to conventional unit tests), a CLI for executing benchmarks, and linear regression analysis for processing benchmark data. These tools allow each extrinsic to be benchmarked and assigned an accurate weight within the runtime.

## Why Benchmark Pallets

Including or excluding transactions based on available resources ensures that the runtime can continue to produce and import blocks without service interruptions. For example, suppose you have a function call that requires particularly intensive computation. In that case, executing the call might exceed the maximum time allowed for producing or importing a block, disrupting the block handling process or stopping blockchain progress altogether. Benchmarking helps you validate that the execution time required for different functions is within reasonable boundaries.

Similarly, a malicious user might attempt to disrupt network service by repeatedly executing a function call that requires intensive computation or doesn't accurately reflect the necessary computation. If the cost of executing a function call doesn't accurately reflect the computation involved, there's no incentive to deter a malicious user from attacking the network.

!!!info "Benchmarking helps with predictable fees and computational outcomes."
Because benchmarking helps you evaluate the weight associated with executing transactions, it also helps you to determine appropriate transaction fees. Based on your benchmarks, you can set fees representing the resources consumed by executing specific calls on the blockchain.

### Benchmarking and Weight

Polkadot SDK-based chains use weight to represent the time it takes to execute the transactions in a block. The time required to execute any particular call in a transaction depends on several factors, including the following:

- Computational complexity
- Storage complexity
- Database read and write operations required
- Hardware used

To calculate an appropriate weight for a transaction, you can use benchmark parameters to measure the time it takes to execute the function calls on different hardware, using different values and repeating them multiple times. You can then use the results of the benchmarking tests to establish an approximate worst-case weight to represent the resources required to execute each function call and each code path. Fees are then based on the worst-case weight. If the actual call performs better than the worst case, the weight is adjusted, and any excess fees can be returned.

Because weight is a generic unit of measurement based on computation time for a specific physical machine, the weight of any function can change based on the specific hardware used for benchmarking.

By modeling the expected weight of each runtime function, the blockchain can calculate the number of transactions or system-level calls it can execute within a certain period of time.

Within FRAME, each function call that is dispatched must have a `#[weight]` annotation that can return the expected weight for the worst-case scenario execution of that function given its inputs. The benchmarking framework automatically generates a file with those formulas for you, which you can use in that annotation within your pallet.
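In FRAME v2 pallets, this annotation is written with the `#[pallet::weight]` attribute, typically referencing a function from the pallet's `WeightInfo` trait. A minimal sketch (the `do_something` call and its `WeightInfo` method are illustrative, not taken from this document's pallet):

```rust
#[pallet::call]
impl<T: Config> Pallet<T> {
    // The weight annotation returns the benchmarked worst-case weight
    // for this call, as produced by the benchmarking framework.
    #[pallet::weight(T::WeightInfo::do_something())]
    pub fn do_something(origin: OriginFor<T>, value: u32) -> DispatchResult {
        let _who = ensure_signed(origin)?;
        // ... pallet logic using `value` ...
        Ok(())
    }
}
```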

## Benchmarking Steps

Benchmarking a pallet involves the following steps:

1. Creating a `benchmarking.rs` file within your pallet's structure
2. Writing a benchmarking test for each extrinsic
3. Executing the benchmarking tool to produce a linear model that measures each extrinsic's weight (execution time and storage usage)

The benchmarking tool runs each of these functions multiple times with different possible parameter values, and the approximate **worst-case** result is applied as the extrinsic's weight. At execution time, if the actual cost is lower than this benchmark, the weight is adjusted and the excess fees refunded. By default, the benchmarking pipeline is disabled; enable it by passing the `runtime-benchmarks` feature flag when compiling your runtime.

## Preparing Your Environment

Before writing benchmarks, you need to ensure the `frame-benchmarking` crate is included in your pallet's `Cargo.toml`:

```toml
frame-benchmarking = { version = "37.0.0", default-features = false }
```

You must also ensure that you add the `runtime-benchmarks` feature flag as follows under the `[features]` section of your pallet's `Cargo.toml`:

```toml
runtime-benchmarks = [
"frame-benchmarking/runtime-benchmarks",
"frame-support/runtime-benchmarks",
"frame-system/runtime-benchmarks",
"sp-runtime/runtime-benchmarks",
]
```

Lastly, ensure that `frame-benchmarking` is included in `std = []`:

```toml
std = [
# ...
"frame-benchmarking?/std",
# ...
]
```

Once this is complete, you have the required dependencies for writing benchmarks for your pallet.

## Writing Benchmarks

Create a `benchmarking.rs` file in your pallet's `src/`. You may copy the barebones template below to get started:

!!!note "This example is from the pallet template"
    Take a look at the pallets in the [`polkadot-sdk-parachain-template`](https://github.com/paritytech/polkadot-sdk-parachain-template/tree/master/pallets){target=\_blank} repository

```rust
//! Benchmarking setup for pallet-template
#![cfg(feature = "runtime-benchmarks")]

use super::*;
use frame_benchmarking::v2::*;

#[benchmarks]
mod benchmarks {
    use super::*;
    #[cfg(test)]
    use crate::pallet::Pallet as Template;
    use frame_system::RawOrigin;

    #[benchmark]
    fn do_something() {
        let caller: T::AccountId = whitelisted_caller();
        #[extrinsic_call]
        do_something(RawOrigin::Signed(caller), 100);

        assert_eq!(Something::<T>::get().map(|v| v.block_number), Some(100u32.into()));
    }

    impl_benchmark_test_suite!(Template, crate::mock::new_test_ext(), crate::mock::Test);
}
```

The function `do_something` is a placeholder; you must write your own function that tests your extrinsic. As with unit tests, you have access to the mock runtime and can use helpers such as `whitelisted_caller()` (an account whitelisted for DB reads/writes) to sign transactions in the benchmarking context.

There are a couple of practices to note:

- The `#[extrinsic_call]` macro marks the call to the extrinsic itself. This macro is a required part of a benchmarking function; [see the Rust docs](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html#extrinsic_call-and-block){target=_blank} for more details
- The `assert_eq` expression ensures that the extrinsic is working properly within the benchmark context
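When an extrinsic's cost scales with an input, the v2 framework also lets you declare a range parameter such as `Linear<A, B>`, so the tool varies that value across runs and fits a linear weight model to it. A hedged sketch, assuming a hypothetical `do_many_things` extrinsic that takes a count argument:

```rust
#[benchmark]
fn do_many_things(n: Linear<1, 100>) {
    // The tool runs this benchmark with `n` sampled between 1 and 100,
    // producing a weight formula that depends on `n`.
    let caller: T::AccountId = whitelisted_caller();
    #[extrinsic_call]
    do_many_things(RawOrigin::Signed(caller), n);
}
```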

## Adding Benchmarks to Runtime

The last step before running the benchmarking tool is to ensure that the benchmarks are configured as part of your runtime.

Create another file in your runtime's `src/` directory called `benchmarks.rs`. This is where the pallets you wish to benchmark are registered. The file should contain the following macro, which registers all pallets for benchmarking, along with their respective configurations:

```rust
frame_benchmarking::define_benchmarks!(
    [frame_system, SystemBench::<Runtime>]
    [pallet_balances, Balances]
    [pallet_session, SessionBench::<Runtime>]
    [pallet_timestamp, Timestamp]
    [pallet_message_queue, MessageQueue]
    [pallet_sudo, Sudo]
    [pallet_collator_selection, CollatorSelection]
    [cumulus_pallet_parachain_system, ParachainSystem]
    [cumulus_pallet_xcmp_queue, XcmpQueue]
);
```

For example, if the pallet named `pallet_template` is ready to be benchmarked, it may be added as follows:

```rust
frame_benchmarking::define_benchmarks!(
    // ...
    [pallet_template, Template]
    // ...
);
```

!!!warning
    If a pallet isn't included in the `define_benchmarks!` macro, the CLI won't be able to access or benchmark it later.

Navigate to the runtime's `lib.rs` file, and add the import for `benchmarks.rs` as follows:

```rust
#[cfg(feature = "runtime-benchmarks")]
mod benchmarks;
```

!!!info
The `runtime-benchmarks` feature gate ensures that the benchmarking operations aren't a part of the production runtime.

## Running Benchmarks

You can now compile your node with the `runtime-benchmarks` feature flag enabled:

```bash
cargo build --features runtime-benchmarks --release
```

Once it is compiled with the correct feature set, you can run the benchmarking tool:

```bash
./target/release/<node-binary-name> benchmark pallet \
--runtime <path-to-wasm-runtime> \
--pallet <name-of-the-pallet> \
--extrinsic '*' \
--steps 20 \
--repeat 10 \
--output weights.rs
```

- `--runtime` - The path to your runtime's Wasm
- `--pallet` - The name of the pallet you wish to benchmark. This pallet must be configured in your runtime and defined in `define_benchmarks!`
- `--extrinsic` - Which extrinsic to test. Using `*` benchmarks all extrinsics
- `--steps` - The number of samples to take across the range of each benchmark component
- `--repeat` - The number of times each benchmark is repeated to average out noise
- `--output` - The file to which the auto-generated weights are written

The result should be a `weights.rs` file containing an implementation of the `WeightInfo` trait, which you can use to assign accurate, benchmarked weights to your extrinsics in your runtime.
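The generated file is then typically wired into the runtime through the pallet's `WeightInfo` associated type. A sketch of what that looks like, assuming the pallet is named `pallet_template` and the generated weights live in its `weights` module (both names are illustrative and depend on where you saved the output):

```rust
impl pallet_template::Config for Runtime {
    type RuntimeEvent = RuntimeEvent;
    // Point the pallet at the benchmarked weight implementation.
    type WeightInfo = pallet_template::weights::SubstrateWeight<Runtime>;
}
```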

## What's Next

- View the Rust Docs for a more comprehensive, low-level view of the [FRAME V2 Benchmarking Suite](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=\_blank}