Add otx micro benchmark #3762
Conversation
Thanks for your work. I left some comments. Please take a look.
Thank you for this valuable feature.
I have a few comments/proposals. I checked this functionality with segmentation models.
- Currently, all OTX CLI commands require `--data_root`. Since this functionality doesn't require any dataset, could we add a default datamodule (used when no other is provided) so that `otx benchmark` can run without the `--data_root` option?
- It would also be a good option to integrate `otx export` (OV) into `otx benchmark`. For example, if I want to test both torch and OV models, right now I have to train them first (or remove the checkpoint-loading logic in OTX and use default weights), then export, and only then run `otx benchmark`. That ends up being 2-3 commands plus searching for checkpoint paths. Could we add something like a "convert_to_openvino" or "test_openvino" option that calls export and then runs the benchmark again with the OV model?
Thanks for your hard work! It seems the feature is what we really need.
I left some minor comments.
This is a nice addition. Looks good from Anomaly perspective.
Thank you for this feature, overall looks good
Summary

The `otx benchmark` command aims to provide micro-benchmarking capabilities for OTX users. Given a model (OV or torch), `otx benchmark` can quickly estimate realistic deployment performance and explore the effect of varying batch size. The evaluation runs on synthetic data provided by the `OTXModel.get_dummy_input(batch_size)` method. Additionally, for torch models, theoretical complexity and the number of parameters are reported.

Issues:

`get_dummy_input()` must be implemented for each model type. The current limitation is hard-coded per-model input resolutions; the same problem applies to export.

How to test
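To illustrate the idea behind the feature, the benchmarking loop can be sketched roughly as below. This is a minimal, hypothetical illustration, not the PR's actual code: `micro_benchmark`, `make_dummy_input`, and the toy model are stand-ins for the OTX implementation and `OTXModel.get_dummy_input(batch_size)`.

```python
import time

def micro_benchmark(model, make_dummy_input, batch_sizes, warmup=3, iters=10):
    """Time `model` on synthetic inputs and report latency/throughput per batch size."""
    results = {}
    for bs in batch_sizes:
        batch = make_dummy_input(bs)
        for _ in range(warmup):      # warm-up runs, excluded from timing
            model(batch)
        start = time.perf_counter()
        for _ in range(iters):
            model(batch)
        latency = (time.perf_counter() - start) / iters
        results[bs] = {"latency_s": latency, "fps": bs / latency}
    return results

# Toy stand-ins for a model and for get_dummy_input(batch_size)
dummy_model = lambda batch: [x * 2 for x in batch]
make_input = lambda bs: list(range(bs))

for bs, stats in micro_benchmark(dummy_model, make_input, [1, 8, 32]).items():
    print(f"batch={bs:2d}  latency={stats['latency_s']:.6f}s  fps={stats['fps']:.0f}")
```

Because no real dataset is loaded, such a loop isolates pure inference cost, which is why the synthetic-input approach works without `--data_root`.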
Checklist
License
Feel free to contact the maintainers if that's a concern.