Commit

fix the benches
Signed-off-by: Yu Chin Fabian Lim <flim@sg.ibm.com>
fabianlim committed Aug 21, 2024
1 parent 5c3b4f2 commit b036263
Showing 5 changed files with 19 additions and 3 deletions.
2 changes: 1 addition & 1 deletion plugins/accelerated-moe/README.md
@@ -15,5 +15,5 @@ Known Issues
Currently, databricks megablocks does not have a PyPI repository or a proper release, so we have to install it from the GitHub repository as shown below. Note that installing from GitHub requires the CUDA Toolkit in order to build.

 ```
-pip install git+https://github.com/databricks/megablocks.git@bce5d7b2aaf5038bc93b36f76c2baf51c2939bd2
+pip install -r requirements_mb.txt
 ```
@@ -0,0 +1 @@
+pip install git+https://github.com/databricks/megablocks.git@bce5d7b2aaf5038bc93b36f76c2baf51c2939bd2
5 changes: 5 additions & 0 deletions scripts/benchmarks/accelerator-config.json
@@ -0,0 +1,5 @@
+{
+    "gradient_accumulation_kwargs": {
+        "sync_each_batch": true
+    }
+}
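For context, here is a minimal sketch of how a benchmark harness could consume this file. The hand-off to HF `TrainingArguments(accelerator_config=...)` (accepted as a dict or JSON path in transformers >= 4.38) is an assumption for illustration; the diff only shows the JSON itself.

```python
import json

# Content of scripts/benchmarks/accelerator-config.json, copied from the diff above.
CONFIG_TEXT = """
{
    "gradient_accumulation_kwargs": {
        "sync_each_batch": true
    }
}
"""

cfg = json.loads(CONFIG_TEXT)

# sync_each_batch=True asks Accelerate to synchronize gradients on every
# micro-batch while accumulating, trading some speed for lower peak memory.
assert cfg["gradient_accumulation_kwargs"]["sync_each_batch"] is True

# Hypothetical hand-off (not shown in the diff; transformers >= 4.38 accepts
# either a dict or a path to a JSON file of this shape):
# args = TrainingArguments(..., accelerator_config="scripts/benchmarks/accelerator-config.json")
```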
9 changes: 7 additions & 2 deletions scripts/benchmarks/scenarios.yaml
@@ -98,8 +98,13 @@ scenarios:
 framework_config:
   - moe-megablocks
 arguments:
-  learning_rate: 2e-4
-  bf16: True
+  learning_rate: 5e-5
+  torch_dtype: bfloat16
+  accelerator_config: scripts/benchmarks/accelerator-config.json
+  gradient_accumulation_steps: 16
+  logging_steps: 1
+  packing: False
+  adam_epsilon: 1e-8
 
 model_name_or_path:
   - 'mistralai/Mixtral-8x7B-Instruct-v0.1'
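The `gradient_accumulation_steps: 16` added above multiplies the effective batch size seen by the optimizer. A small worked example; the per-device batch size and GPU count are illustrative assumptions, not values from the diff:

```python
# Illustrative numbers only: per_device_train_batch_size and num_gpus are
# assumptions for the arithmetic, not values taken from scenarios.yaml.
per_device_train_batch_size = 4
num_gpus = 2
gradient_accumulation_steps = 16  # value added in this commit

# The optimizer steps once per accumulation cycle, so it effectively sees
# the product of all three factors.
effective_batch_size = (
    per_device_train_batch_size * num_gpus * gradient_accumulation_steps
)
print(effective_batch_size)  # 4 * 2 * 16 -> 128
```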
5 changes: 5 additions & 0 deletions tox.ini
@@ -39,6 +39,11 @@ commands =
 python -m fms_acceleration.cli install -e {toxinidir}/plugins/accelerated-peft
 python -m fms_acceleration.cli install -e {toxinidir}/plugins/fused-ops-and-kernels
 python -m fms_acceleration.cli install -e {toxinidir}/plugins/attention_and_distributed_packing
+python -m fms_acceleration.cli install -e {toxinidir}/plugins/accelerated-moe
+
+# need to install some optional dependencies
+# - the megablocks dependency
+pip install -r {toxinidir}/plugins/accelerated-moe/requirements-mb.txt
 
 # run the benchmark script
 bash scripts/run_benchmarks.sh {posargs:"1 2" benchmark_outputs}
