Split accuracy & speed benchmark github workflows #2763

Merged (1 commit) · Jan 5, 2024
```diff
@@ -1,22 +1,15 @@
-name: Performance Benchmark Test
+name: Performance-Accuracy Benchmark Test
 
 on:
   workflow_dispatch: # run on request (no need for PR)
     inputs:
-      benchmark-type:
-        type: choice
-        description: Benchmark type
-        options:
-          - accuracy
-          - speed
-        required: true
       model-type:
         type: choice
         description: Model type to run benchmark
         options:
           - default # speed, balance, accuracy models only
           - all # default + other models
-        default: all
+        default: default
       data-size:
         type: choice
         description: Dataset size to run benchmark
@@ -39,10 +32,10 @@ on:
           - train
           - export
           - optimize
-        default: train
+        default: optimize
 
 jobs:
-  Regression-Tests:
+  Perf-Accuracy-Benchmark-Tests:
     strategy:
       fail-fast: false
       matrix:
@@ -57,23 +50,23 @@ jobs:
             task: "anomaly"
           - toxenv_task: "cls"
             task: "classification"
-    name: Perf-Test-py310-${{ matrix.toxenv_task }}
+    name: Perf-Accuracy-Benchmark-Test-py310-${{ matrix.toxenv_task }}
     uses: ./.github/workflows/run_tests_in_tox.yml
     with:
       python-version: "3.10"
       toxenv-pyver: "py310"
       toxenv-task: ${{ matrix.toxenv_task }}
       tests-dir: >
         tests/perf/test_${{ matrix.task }}.py
-        -k ${{ inputs.benchmark-type }}
+        -k accuracy
         --model-type ${{ inputs.model-type }}
         --data-root /home/validation/data/new/
         --data-size ${{ inputs.data-size }}
         --num-repeat ${{ inputs.num-repeat }}
         --num-epoch ${{ inputs.num-epoch }}
-        --summary-csv .tox/perf-${{ inputs.benchmark-type }}-benchmark-${{ matrix.toxenv_task }}.csv
+        --summary-csv .tox/perf-accuracy-benchmark-${{ matrix.toxenv_task }}.csv
       runs-on: "['self-hosted', 'Linux', 'X64', 'dmount']"
       task: ${{ matrix.task }}
       timeout-minutes: 8640
       upload-artifact: true
-      artifact-prefix: perf-${{ inputs.benchmark-type }}-benchmark
+      artifact-prefix: perf-accuracy-benchmark
```
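With the `benchmark-type` input removed, this workflow now runs accuracy benchmarks only: `-k accuracy` is hardcoded into the pytest selection, and the job, CSV, and artifact names carry a fixed `perf-accuracy` prefix. A minimal sketch of dispatching it with the GitHub CLI follows; the diff above does not show this file's path, so the sketch references the workflow by its `name:` value instead, and sets only inputs visible in the diff (omitted inputs keep their declared defaults):

```sh
# Hedged sketch: trigger the accuracy benchmark via workflow_dispatch.
# The YAML file path of this modified workflow is not shown in the diff,
# so we reference it by workflow name rather than file name.
gh workflow run "Performance-Accuracy Benchmark Test" \
  -f model-type=default \
  -f eval-upto=optimize
```

Each matrix entry (e.g. `cls`/`classification`) then invokes the reusable `run_tests_in_tox.yml` workflow, so one tox job runs per task.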
.github/workflows/perf-speed.yml (new file: 58 additions, 0 deletions)

```diff
@@ -0,0 +1,58 @@
+name: Performance-Speed Benchmark Test
+
+on:
+  workflow_dispatch: # run on request (no need for PR)
+    inputs:
+      model-type:
+        type: choice
+        description: Model type to run benchmark
+        options:
+          - default # speed, balance, accuracy models only
+          - all # default + other models
+        default: default
+      data-size:
+        type: choice
+        description: Dataset size to run benchmark
+        options:
+          - small
+          - medium
+          - large
+          - all
+        default: large
+      num-repeat:
+        description: Overrides default per-data-size number of repeat setting
+        default: 0
+      num-epoch:
+        description: Overrides default per-model number of epoch setting
+        default: 0
+      eval-upto:
+        type: choice
+        description: The last operation to evaluate. 'optimize' means all.
+        options:
+          - train
+          - export
+          - optimize
+        default: optimize
+
+jobs:
+  Perf-Speed-Benchmark-Tests:
+    name: Perf-Speed-Benchmark-Test-py310-all
+    uses: ./.github/workflows/run_tests_in_tox.yml
+    with:
+      python-version: "3.10"
+      toxenv-pyver: "py310"
+      toxenv-task: all
+      tests-dir: >
+        tests/perf/
+        -k speed
+        --model-type ${{ inputs.model-type }}
+        --data-root /home/validation/data/new/
+        --data-size ${{ inputs.data-size }}
+        --num-repeat ${{ inputs.num-repeat }}
+        --num-epoch ${{ inputs.num-epoch }}
+        --summary-csv .tox/perf-speed-benchmark-all.csv
+      runs-on: "['self-hosted', 'Linux', 'X64', 'dmount']"
+      task: all
+      timeout-minutes: 8640
+      upload-artifact: true
+      artifact-prefix: perf-speed-benchmark
```
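The speed workflow's path is confirmed by the diff, so it can be dispatched by file name. A sketch, again setting only inputs declared above; any omitted input falls back to its default (`data-size: large`, `num-repeat: 0`, `num-epoch: 0`, `eval-upto: optimize`):

```sh
# Hedged sketch: trigger the speed benchmark, overriding a few defaults.
gh workflow run perf-speed.yml \
  -f model-type=all \
  -f data-size=medium \
  -f eval-upto=train
```

Unlike the accuracy workflow, which fans out one tox job per task through the matrix, the speed run uses `toxenv-task: all` and collects every `-k speed` test under `tests/perf/` in a single job, writing one `perf-speed-benchmark-all.csv` summary.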