
[Userbenchmark] Add configuration support for test_bench #2592

Open · wants to merge 5 commits into main
Conversation

@shink shink commented Feb 12, 2025

fixes: #2590

The configuration file might look like this:

```yaml
devices:
  - "foo"

models:
  - model: BERT_pytorch
    batch_size: 1

  - model: yolov3
    skip: true

extra_args:
  - "--accuracy"
```
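Not part of the PR itself, but as a sketch of how such a config could be resolved into per-model settings once parsed (the inline dict and the `resolve` helper are hypothetical, standing in for the YAML above after loading):

```python
# Hypothetical sketch: resolve per-model entries from a parsed config,
# applying defaults for models that are not listed.
parsed = {
    "devices": ["foo"],
    "models": [
        {"model": "BERT_pytorch", "batch_size": 1},
        {"model": "yolov3", "skip": True},
    ],
    "extra_args": ["--accuracy"],
}

def resolve(parsed, model_name):
    """Return (batch_size, skip, tests) for a model, applying defaults."""
    cfg = next((m for m in parsed["models"] if m["model"] == model_name), None)
    if cfg is None:
        # Model not mentioned in the config: use defaults.
        return None, False, ["eval"]
    return cfg.get("batch_size"), cfg.get("skip", False), cfg.get("tests", ["eval"])

print(resolve(parsed, "BERT_pytorch"))  # (1, False, ['eval'])
print(resolve(parsed, "yolov3"))        # (None, True, ['eval'])
print(resolve(parsed, "resnet50"))      # (None, False, ['eval'])
```

Unlisted models fall through to the defaults, which matches the intent that the config only overrides settings for the models it names.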

Comment on lines 122 to 139
```python
model_set = set(list_models(internal=False))
device = config_obj["device"]
configs = []
for model in model_set:
    # Find this model's entry in the config, if any.
    cfg = next(filter(lambda c: c["model"] == model, config_obj["models"]), None)
    # Default to a list, not the bare string "eval", so the loop below
    # iterates over test names rather than characters.
    tests = cfg.get("tests", ["eval"]) if cfg is not None else ["eval"]
    for test in tests:
        config = TorchBenchModelConfig(
            name=model,
            device=device,
            test=test,
            batch_size=cfg.get("batch_size", None) if cfg is not None else None,
            extra_args=[],
            skip=cfg is not None and cfg.get("skip", False),
        )
        print(config)
        configs.append(config)
return configs
```
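One subtlety worth noting in the `tests` default (a general Python pitfall, not specific to this PR): if `dict.get` falls back to a bare string rather than a one-element list, iterating over the result yields individual characters. A minimal standalone illustration:

```python
# Pitfall: dict.get with a string default, then iterating the result.
cfg = {"model": "BERT_pytorch"}  # no "tests" key present

tests_wrong = cfg.get("tests", "eval")    # -> the string "eval"
tests_right = cfg.get("tests", ["eval"])  # -> the list ["eval"]

print(list(tests_wrong))  # ['e', 'v', 'a', 'l'] -- iterates characters
print(list(tests_right))  # ['eval']             -- one test, as intended
```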
@shink (Contributor, Author):
This means that we always list all models and apply the configuration to them.

@shink (Contributor, Author) commented Feb 13, 2025

@xuzhao9 Could you please help review this? Any comments are welcome!

Successfully merging this pull request may close these issues.

Missing test_bench configuration support
2 participants