This repository contains infrastructure for benchmarking and performance profiling of the xDSL compiler framework.
airspeed velocity (asv) is a tool for benchmarking Python packages over their lifetime. Runtime, memory consumption and even custom-computed values may be tracked. The results are displayed in an interactive web frontend that requires only a basic static webserver to host.
We use it in CI to benchmark commits made to the `main` branch of the xDSL repository.
Every day, on the cron schedule `0 4 * * *`, a GitHub Actions workflow runs ASV to benchmark the 15 most recent commits to the xDSL repository and commits the results to the `.asv/results/github-action` directory of this repository. The interactive web frontend is then built from these results together with the results committed by previous workflow runs, and finally deployed to GitHub Pages[^1][^2][^3][^4][^5][^6]. The frontend can be found at https://xdsl.dev/xdsl-bench/.
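For reference, ASV discovers benchmarks by naming convention: `time_*` functions and methods are timed, `track_*` functions record an arbitrary value, and `setup` runs outside the measured region. Below is a minimal sketch with a placeholder workload, not this repository's actual benchmark code:

```python
# benchmarks/example.py -- a minimal, hypothetical ASV benchmark suite.
class TimeSuite:
    def setup(self):
        # ASV calls `setup` before each measurement; it is excluded
        # from the reported timing.
        self.source = "%0 = arith.constant 0 : i32\n" * 1_000

    def time_split_lines(self):
        # ASV measures the wall-clock time of methods named `time_*`.
        self.source.splitlines()


def track_source_length():
    # `track_*` functions may return any number, which ASV plots
    # across commits alongside the timing results.
    return len("%0 = arith.constant 0 : i32\n" * 1_000)
```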
Running the profiling benchmarks locally rather than via ASV also requires installing xdsl into the virtual environment. This happens by default when syncing without extra flags, but can also be done explicitly with `uv sync --group profile`, which points to the submodule directory.
The general approach is to reuse the benchmarks defined for ASV, avoiding duplication, while setting up `cProfile` tracing in the `if __name__ == "__main__":` block, as sketched below. As such, ASV can run the benchmarks as usual, but directly running the files with Python performs custom profiling.
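A minimal sketch of this dual-use pattern follows; the benchmark body and output path are hypothetical, not this repository's actual code:

```python
# benchmarks/lexer.py -- sketch of the dual-use pattern: ASV discovers
# and times the benchmark function, while running the file directly
# profiles the same body with cProfile.
import cProfile
from pathlib import Path


def time_lex_example():
    # Benchmark body that ASV discovers and times as usual
    # (placeholder workload).
    "%0 = arith.constant 0 : i32\n".splitlines()


if __name__ == "__main__":
    # Running `python benchmarks/lexer.py` profiles the benchmark body
    # and dumps stats in a format that snakeviz can open.
    Path("profiles").mkdir(exist_ok=True)
    cProfile.run("time_lex_example()", filename="profiles/lexer__example.prof")
```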
The generated profile files can then be viewed using `snakeviz`, which draws an interactive call graph of the execution.
For example, the following commands profile lexing the `apply_pdl_extra_file.mlir` file using the lexer benchmark:

```sh
uv run python benchmarks/lexer.py
uv run snakeviz profiles/lexer__apply_pdl_extra_file.prof
```
You can also use `flameprof` to visualise the profile data as follows, but the generated SVG files are not interactive and are less readable:

```sh
uv run python benchmarks/lexer.py
uv run flameprof profiles/lexer__apply_pdl_extra_file__lex_only.prof \
    > profiles/lexer__apply_pdl_extra_file__lex_only.svg
```
An alternative to profiling with `cProfile` and visualising with `snakeviz` is the end-to-end profiler `viztracer`. For example, the commands to profile an end-to-end run of `xdsl-opt` on an empty MLIR program with `viztracer` are shown below:

```sh
uv run viztracer \
    -o profiles/empty_program.json \
    xdsl/xdsl/tools/xdsl_opt.py \
    xdsl/tests/xdsl_opt/empty_program.mlir
uv run vizviewer profiles/empty_program.json
```
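`viztracer` can also be embedded programmatically via its context-manager API; a minimal sketch with a placeholder workload and an illustrative output path:

```python
# Sketch: tracing a code region with viztracer's context manager.
from viztracer import VizTracer

with VizTracer(output_file="profiles/region.json"):
    # Placeholder workload; any code executed here is traced.
    sum(i * i for i in range(100_000))
# The resulting JSON can be opened with `vizviewer profiles/region.json`.
```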
## TODO

- Fix ASV virtual environment issues due to versioneer with submodules
- Get ASV running locally
- Get ASV running on GitHub Actions
- Add ASV machine description
- Deploy ASV website to GitHub Pages
- Fix committing results so the graph can have multiple points
- Identify why submodule checkout fails for any commit other than HEAD
- Move repo to the xDSL organisation
- Support multiple Python versions
- ? Consider moving committed ASV runs to their own branch so they don't interfere with other things
- ? Consider inverting submodules to move the benchmarks back into the main repo and instead keep artifacts in a submodule
## Benchmarks

- Importing
  - `xDSLOptMain`
- Lexing
- End-to-end optimisation
- Parsing
- Printing
- Loading dialects
  - `builtin.py`
  - `arith.py`
  - ...
- Rewriting optimisations
  - `Builder`
  - `Rewriter`
  - `PatternRewriter`
  - ...
- Package installation time
## Profilers

- `cProfile` + `snakeviz`
- `viztracer`
- `scalene`
- Memory profilers
## Further reading

- https://cerfacs.fr/coop/python-profiling
- https://www.petermcconnell.com/posts/perf_eng_with_py12/
- https://danmackinlay.name/notebook/python_debug
- https://www.brendangregg.com/blog/index.html
- https://superfastpython.com/benchmark-python-function/
- https://github-pages.arc.ucl.ac.uk/python-tooling/pages/benchmarking-profiling.html
- https://discuss.python.org/t/python-benchmarking-in-unstable-environments/22334
- https://switowski.com/blog/how-to-benchmark-python-code/
[^1]: https://speakerdeck.com/anissa111/benchmarking-your-scientific-python-packages-using-asv-and-github-actions
[^2]: https://github.com/airspeed-velocity/asv_samples/blob/main/.github/workflows/build_test.yml
[^3]: https://labs.quansight.org/blog/2021/10/re-engineering-cicd-pipelines-for-scipy
[^4]: https://labs.quansight.org/blog/2021/08/github-actions-benchmarks
[^5]: https://github.com/man-group/ArcticDB/wiki/Running-ASV-Benchmarks
[^6]: https://github.com/man-group/ArcticDB/blob/master/.github/workflows/benchmark_commits.yml