Add performance benchmark config: MPS 8da4w #8429
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/8429
Note: Links to docs will display an error until the docs builds have been completed.
❌ 8 New Failures, 1 Cancelled Job — as of commit 7d3ca20 with merge base 931bb8b.
NEW FAILURES - The following jobs have failed:
CANCELLED JOB - The following job was cancelled. Please retry:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@manuelcandales Here are the instructions for how to trigger an on-demand benchmark job on your PR using the newly added benchmark config:
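As a rough illustration (not the exact steps from the comment above), an on-demand run of a `workflow_dispatch`-based benchmark workflow can also be dispatched through the GitHub REST API. In the sketch below, the workflow file name `apple-perf.yml` and the input names `models` and `benchmark_configs` are assumptions; the real field names come from the `workflow_dispatch` inputs declared in the workflow file on the selected branch.

```python
# Sketch: dispatch an on-demand benchmark run via the GitHub REST API.
# The workflow file name and the input names are assumptions for illustration;
# mirror whatever the workflow's workflow_dispatch section actually declares.
import os

import requests

OWNER, REPO = "pytorch", "executorch"
WORKFLOW = "apple-perf.yml"      # assumed workflow file name
BRANCH = "my-benchmark-branch"   # the (non-fork) PR branch to benchmark

resp = requests.post(
    f"https://api.github.com/repos/{OWNER}/{REPO}/actions/workflows/{WORKFLOW}/dispatches",
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    },
    json={
        "ref": BRANCH,
        "inputs": {
            # Assumed input names.
            "models": "llama",
            "benchmark_configs": "mps_8da4w",
        },
    },
    timeout=30,
)
resp.raise_for_status()  # GitHub returns 204 No Content on success
```

The same dispatch can also be started from the Actions tab via the "Run workflow" button, which is the flow referenced in the follow-up comments below.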
@manuelcandales Please note that all infra backed by PyTorch Dev Infra, including the benchmarking infra, can only run on a non-forked PR. If your PR is created from your own fork (which appears to be the case), you will have to recreate it as a non-fork PR.
@guangy10 Thank you for the detailed instructions. I created a non-fork PR #8461. However, when I select my branch under Run Workflow, it doesn't show the Backend delegates field. Notice that when I select the bench-debug branch from your example, I can see it. Do you know why these templates are different from branch to branch? Am I missing something on my branch?
Adds a new performance benchmark config to keep track of performance on the MPS backend when running Llama 3.2 1B inference with 8da4w quantization.
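For context, here is a hypothetical sketch of the kind of entry such a benchmark config might introduce; the dict layout, key names, and metric names are illustrative assumptions, not the actual config added by this PR.

```python
# Hypothetical illustration of a benchmark config entry; the layout and key
# names are assumptions, not the actual config added by this PR.
MPS_8DA4W_BENCHMARK = {
    "name": "mps_8da4w",
    "model": "llama3_2_1b",   # Llama 3.2 1B
    "backend": "mps",         # Apple MPS delegate
    "quantization": "8da4w",  # 8-bit dynamic activations, 4-bit weights
    "metrics": ["tokens_per_sec", "load_time_ms", "peak_memory_mb"],
}
```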