
Fuzzing the entire runtime #4374

Closed
matklad opened this issue Jun 16, 2021 · 2 comments
Assignees
Labels
A-testing Area: Unit testing / integration testing Node Node team T-node Team: issues relevant to the node experience team

Comments

@matklad
Contributor

matklad commented Jun 16, 2021

[OKR 2021Q4] At a high level, our runtime is a pure function: it takes some state and a bunch of actions, interprets those actions, and returns a new state. We also care a lot about the runtime being correct even in the face of adversarial inputs. So it behooves us to implement fuzzing of the runtime. This article (and its bibliography) gives a good overview of the state of the art in fuzzing in Rust: https://fitzgeraldnick.com/2020/08/24/writing-a-test-case-generator.html.

The TL;DR is that the best approach is structured, coverage-guided fuzzing. We use something like libFuzzer to generate random raw inputs (`&[u8]`), use each input as a seed to generate a random sequence of valid actions, and feed that sequence into the runtime. The fuzzer observes code coverage as the runtime executes the input, uses that information to generate better seeds that cover more branches, and minimizes failing inputs for free.
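To make the seed-to-actions step concrete, here is a minimal, self-contained sketch. It hand-rolls the byte-consuming role that the `arbitrary` crate's `Unstructured` type plays in practice, and the `Action` type is a made-up stand-in for the runtime's real actions. Note how an account pool keeps the generated sequence well-formed:

```rust
// Illustrative only: decode an opaque fuzzer byte string into a
// sequence of well-formed actions. In the real implementation the
// `arbitrary` crate's `Unstructured` type plays the role of `Bytes`.

#[derive(Debug, PartialEq)]
enum Action {
    CreateAccount { id: u8 },
    Transfer { from: u8, to: u8, amount: u32 },
}

// A cursor over the raw fuzzer input.
struct Bytes<'a> {
    data: &'a [u8],
    pos: usize,
}

impl<'a> Bytes<'a> {
    fn new(data: &'a [u8]) -> Self {
        Bytes { data, pos: 0 }
    }
    fn u8(&mut self) -> Option<u8> {
        let b = *self.data.get(self.pos)?;
        self.pos += 1;
        Some(b)
    }
    fn u32(&mut self) -> Option<u32> {
        let mut v = 0u32;
        for _ in 0..4 {
            v = (v << 8) | self.u8()? as u32;
        }
        Some(v)
    }
}

/// Decode bytes into actions, maintaining a pool of existing accounts
/// so that transfers always reference valid accounts (well-formedness).
fn decode_actions(data: &[u8]) -> Vec<Action> {
    let mut bytes = Bytes::new(data);
    let mut accounts: Vec<u8> = Vec::new();
    let mut actions = Vec::new();
    while let Some(tag) = bytes.u8() {
        match tag % 2 {
            0 => {
                let id = accounts.len() as u8;
                accounts.push(id);
                actions.push(Action::CreateAccount { id });
            }
            1 if !accounts.is_empty() => {
                let (f, t, amount) = match (bytes.u8(), bytes.u8(), bytes.u32()) {
                    (Some(f), Some(t), Some(a)) => (f, t, a),
                    _ => break, // input exhausted
                };
                // Index into the pool so both endpoints are valid accounts.
                let from = accounts[f as usize % accounts.len()];
                let to = accounts[t as usize % accounts.len()];
                actions.push(Action::Transfer { from, to, amount });
            }
            _ => {} // tag says Transfer, but no accounts exist yet: skip
        }
    }
    actions
}

fn main() {
    let actions = decode_actions(&[0, 0, 1, 0, 1, 0, 0, 0, 42]);
    println!("{:?}", actions);
}
```

Because every byte string decodes to *some* valid sequence, the fuzzer's mutations of the raw input translate into meaningful mutations of the action sequence, which is exactly what lets coverage feedback work.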

Practically, that means that we should:

  • define a data structure describing a series of inputs to the runtime
  • implement https://docs.rs/arbitrary/1.0.1/arbitrary/trait.Arbitrary.html for this data structure. Care must be taken to generate reasonably well-formed sequences: maintain a pool of active accounts, use valid signatures, etc. Of course, generating invalid inputs is also something we need to do from time to time. The linked blog post (and the wasm-smith crate it describes) is a good model here. We should actually use wasm-smith directly for the DeployContract action.
  • implement a fuzzing target for cargo-fuzz
  • run the fuzzing locally and check that the coverage is reasonable. That is, deliberately introduce panicking bugs into the runtime and verify that fuzzing catches them
  • set up some kind of CI job to run the fuzzing. Perhaps even apply to https://google.github.io/oss-fuzz/ ?
  • optional: look into "swarm testing" (see the link in the post)
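The steps above can be sketched end-to-end with a toy stand-in for the runtime. Everything here is illustrative, not the real API: the actual target would wrap the body of `run` in `libfuzzer_sys::fuzz_target!` and drive the real runtime, and the supply-conservation assertion is a made-up example of the kind of invariant the fuzzer would check on every step:

```rust
use std::collections::HashMap;

// Toy stand-in for the runtime: a pure function from (state, action)
// to a new state, matching the "pure function" framing above.

#[derive(Clone, Debug)]
enum Action {
    CreateAccount { id: u8, balance: u64 },
    Transfer { from: u8, to: u8, amount: u64 },
}

type State = HashMap<u8, u64>;

fn apply(mut state: State, action: &Action) -> State {
    match *action {
        Action::CreateAccount { id, balance } => {
            // Creating an existing account is a no-op.
            state.entry(id).or_insert(balance);
        }
        Action::Transfer { from, to, amount } => {
            // Clamp to the available balance so the toy runtime
            // never underflows; the real runtime would reject instead.
            let available = state.get(&from).copied().unwrap_or(0);
            let amount = amount.min(available);
            if from != to {
                *state.entry(from).or_insert(0) -= amount;
                *state.entry(to).or_insert(0) += amount;
            }
        }
    }
    state
}

/// What a fuzz target body would do: apply every action and assert an
/// invariant after each step (here: total supply is conserved).
fn run(actions: &[Action]) -> State {
    let mut state = State::new();
    let mut supply: u64 = 0;
    for action in actions {
        if let Action::CreateAccount { id, balance } = *action {
            if !state.contains_key(&id) {
                supply += balance;
            }
        }
        state = apply(state, action);
        assert_eq!(state.values().sum::<u64>(), supply, "supply invariant broken");
    }
    state
}

fn main() {
    let actions = [
        Action::CreateAccount { id: 0, balance: 100 },
        Action::CreateAccount { id: 1, balance: 0 },
        Action::Transfer { from: 0, to: 1, amount: 30 },
    ];
    println!("{:?}", run(&actions));
}
```

A deliberately introduced bug (say, dropping the `from != to` check while still debiting) would trip the assertion, which is exactly the "introduce panicking bugs and check that fuzzing catches them" sanity test from the list above.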
@bowenwang1996 bowenwang1996 added the A-testing Area: Unit testing / integration testing label Jun 16, 2021
@janewang janewang added the T-node Team: issues relevant to the node experience team label Jun 23, 2021
@janewang janewang added the S-blocked Status: blocked label Jul 13, 2021
@bowenwang1996 bowenwang1996 removed the S-blocked Status: blocked label Jul 19, 2021
@bowenwang1996
Collaborator

Regarding fuzz testing function calls, I think we should focus on the following:

@posvyatokum
Member

Pushed more code to #4546
Right now switching to #4550 to run the current fuzzer in CI

posvyatokum added a commit that referenced this issue Nov 8, 2021
pmnoxx pushed a commit that referenced this issue Nov 20, 2021
@gmilescu gmilescu added the Node Node team label Oct 19, 2023