Add CI workflow for performance benchmarks #1983

Merged: 2 commits, Jan 21, 2024
77 changes: 77 additions & 0 deletions .github/workflows/benchmarks.yml
@@ -0,0 +1,77 @@
name: Performance benchmarks

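# Benchmarks run when a PR targeting main is opened or marked ready for review,
# and can be re-triggered by commenting '/rerun-benchmarks'.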
on:
  pull_request_target:
    types: [opened, ready_for_review]
    branches:
      - main
  issue_comment:
    types: [created]

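# Write access is needed so the workflow can post the timing comparison as a PR comment.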
permissions:
  issues: write
  pull-requests: write

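# Benchmark main, benchmark the PR branch, compare the timings, and post the result on the PR.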
jobs:
  run-benchmarks:
    if: >
      github.event_name == 'pull_request_target' ||
      (github.event_name == 'issue_comment' && contains(github.event.comment.body, '/rerun-benchmarks'))
    runs-on: ubuntu-latest
    steps:
      - name: Checkout PR branch
        uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.12'
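      # Make the checked-out source importable by the benchmark scripts without installing the package.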
      - name: Add project directory to PYTHONPATH
        run: echo "PYTHONPATH=$PYTHONPATH:$(pwd)" >> $GITHUB_ENV
      - name: Install dependencies
        run: pip install numpy pandas tqdm tabulate
      - name: Checkout main branch
        uses: actions/checkout@v4
        with:
          ref: main
          repository: EwoutH/mesa
      - name: Run benchmarks on main branch
        working-directory: benchmarks
        run: python global_benchmark.py
      - name: Upload benchmark results
        uses: actions/upload-artifact@v4
        with:
          name: timings-main
          path: benchmarks/timings_1.pickle
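      # Switch the workspace to the PR head; the main-branch timings are preserved in the uploaded artifact.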
      - name: Checkout PR branch
        uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.ref }}
          repository: ${{ github.event.pull_request.head.repo.full_name }}
          fetch-depth: 0
          persist-credentials: false
          clean: false
      - name: Download benchmark results
        uses: actions/download-artifact@v4
        with:
          name: timings-main
          path: benchmarks
      - name: Run benchmarks on PR branch
        working-directory: benchmarks
        run: python global_benchmark.py
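      # Base64-encode the multi-line comparison output so it fits in a single environment variable.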
      - name: Run compare timings and encode output
        working-directory: benchmarks
        run: |
          TIMING_COMPARISON=$(python compare_timings.py | base64 -w 0) # Base64 encode the output
          echo "TIMING_COMPARISON=$TIMING_COMPARISON" >> $GITHUB_ENV
      - name: Comment PR
        uses: actions/github-script@v7
        with:
          script: |
            const output = Buffer.from(process.env.TIMING_COMPARISON, 'base64').toString('utf-8');
            const issue_number = context.issue.number;
            github.rest.issues.createComment({
              issue_number: issue_number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: 'Performance benchmarks:\n\n' + output
            });