
Add otx micro benchmark #3762

Merged: 48 commits, Aug 1, 2024
Conversation

@sovrasov (Contributor) commented Jul 24, 2024

Summary

The otx benchmark command aims to provide micro-benchmarking capabilities for OTX users.
Given a model (OpenVINO or torch), otx benchmark can quickly estimate realistic deployment performance and explore the effect of varying batch size. The evaluation runs on synthetic data provided by the OTXModel.get_dummy_input(batch_size) method. For torch models, the theoretical complexity and the number of parameters are also reported.
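The latency estimation described above can be sketched as follows. This is a minimal stand-alone illustration, not the actual OTX implementation: `micro_benchmark` and its arguments are hypothetical names, and the toy callable stands in for a real model's forward pass on a synthetic batch.

```python
import time
import statistics

def micro_benchmark(infer_fn, dummy_input, warmup=5, iterations=20):
    """Estimate per-batch latency of ``infer_fn`` on a synthetic input.

    ``infer_fn`` stands in for a model's forward pass; ``dummy_input``
    mimics what a get_dummy_input(batch_size)-style helper would return.
    """
    for _ in range(warmup):            # warm-up runs are excluded from timing
        infer_fn(dummy_input)
    latencies = []
    for _ in range(iterations):
        start = time.perf_counter()
        infer_fn(dummy_input)
        latencies.append(time.perf_counter() - start)
    return {
        "mean_ms": statistics.mean(latencies) * 1000,
        "stdev_ms": statistics.stdev(latencies) * 1000,
    }

# Toy stand-in for a model: sums a nested list "batch" of 8x1024 zeros.
stats = micro_benchmark(lambda batch: sum(map(sum, batch)),
                        dummy_input=[[0.0] * 1024 for _ in range(8)])
print(sorted(stats))  # prints ['mean_ms', 'stdev_ms']
```

Sweeping the batch size then amounts to calling the same loop with dummy inputs of different sizes and comparing the resulting mean latencies.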

Issues:

  • get_dummy_input() has to be implemented for each model type. The current limitation is hard-coded per-model input resolutions; the same problem applies to export.
  • torch 2.1 mishandles Tensor types in dispatch mode, so a workaround was added. torch 2.3 may resolve the issue.
  • Model coverage is incomplete.
  • The console output model in Engine is unclear.
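The first issue above (per-model get_dummy_input with hard-coded resolutions) can be illustrated with a minimal sketch. The class names are hypothetical, and the real implementation would return framework tensors rather than plain shape descriptions:

```python
class OTXModelSketch:
    """Hypothetical base class illustrating the per-model dummy-input hook.

    input_size is hard-coded per model type here, which is exactly the
    limitation noted above: ideally it would come from the model recipe.
    """
    input_size = (224, 224)  # hard-coded default resolution (the limitation)

    def get_dummy_input(self, batch_size: int) -> dict:
        h, w = self.input_size
        # A real implementation would allocate a random image tensor;
        # here we only describe its NCHW shape.
        return {"images": ("zeros", (batch_size, 3, h, w))}

class SegmentationModelSketch(OTXModelSketch):
    input_size = (512, 512)  # each model type overrides its resolution

batch = SegmentationModelSketch().get_dummy_input(4)
print(batch["images"][1])  # prints (4, 3, 512, 512)
```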

How to test

Checklist

  • I have added unit tests to cover my changes.
  • I have added integration tests to cover my changes.
  • I have run e2e tests and there are no issues.
  • I have added the description of my changes to the CHANGELOG in my target branch (e.g., CHANGELOG in develop).
  • I have updated the documentation in my target branch accordingly (e.g., documentation in develop).
  • I have linked related issues.

License

  • I submit my code changes under the same Apache License that covers the project.
    Feel free to contact the maintainers if that's a concern.
  • I have updated the license header for each file (see an example below).
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

@sovrasov sovrasov changed the title Add otx benchmark Add otx micro benchmark Jul 24, 2024
@sovrasov sovrasov added this to the 2.2.0 milestone Jul 24, 2024
@eunwoosh (Contributor) left a comment

Thanks for your work. I left some comments. Please take a look.

Review comments were left on the following files (all resolved):

  • src/otx/core/model/base.py
  • src/otx/core/model/classification.py
  • src/otx/engine/engine.py
@kprokofi (Collaborator) left a comment

Thank you for this valuable feature.

I have a few comments/proposals. I checked this functionality with segmentation models.

  1. Currently, data_root has to be provided for all OTX CLI commands. Since this functionality doesn't require any datasets, can we add some default datamodule (if no other is provided) so that otx benchmark can run without the --data_root option?

  2. It would be a good option to integrate otx export (OV) into otx benchmark. For example, I want to test both torch and OV models. For now, I have to train them first (or remove the checkpoint-loading logic in OTX and use default weights), then export, and only after that use otx benchmark. That ends up being 2-3 commands plus searching for checkpoint paths. Could we add something like a "convert_to_openvino" or "test_openvino" option that calls export and then runs the benchmark again with the OV model?

@sovrasov (Contributor, Author) replied:

  1. That's a limitation of the current OTX implementation. Without a dataset, one would have to perform full model auto-configuration manually, including, for instance, setting the number of classes. In theory, the export command also doesn't require any data, but we just don't have a mechanism to skip the creation of an actual dataloader. I'll think about an API-based benchmark, so we could streamline the workflow at least there.
  2. The best we can do now is to benchmark a torch model without --checkpoint specified (i.e., no training is required for a torch model). The otx * tools are supposed to be orthogonal, so none of them implicitly uses another. This keeps the dependencies of each otx * tool clear, but it can take more time to get a result. If we relax the otx export requirements (make --checkpoint optional), that could mitigate the need for pretrained weights, but two calls would still be required.

@github-actions github-actions bot added the TEST Any changes in tests label Jul 29, 2024
@sungchul2 (Contributor) left a comment

Thanks for your hard work! This feature seems to be exactly what we need.
I left some minor comments.

Review comments were left on the following files (all resolved):

  • src/otx/algo/segmentation/litehrnet.py
  • src/otx/core/model/instance_segmentation.py
  • src/otx/engine/engine.py
  • src/otx/core/model/segmentation.py
  • src/otx/core/model/anomaly.py
@ashwinvaidya17 (Collaborator) left a comment

This is a nice addition. Looks good from the Anomaly perspective.

@kprokofi previously approved these changes Jul 31, 2024

@kprokofi (Collaborator) left a comment

Thank you for this feature; overall it looks good.

@harimkang previously approved these changes Aug 1, 2024
@sovrasov sovrasov added this pull request to the merge queue Aug 1, 2024
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Aug 1, 2024
@sovrasov sovrasov added this pull request to the merge queue Aug 1, 2024
Merged via the queue into openvinotoolkit:develop with commit 1de7b52 Aug 1, 2024
18 checks passed
@sovrasov sovrasov deleted the vs/micro_benchmark branch August 1, 2024 16:09
Labels: DOC (Improvements or additions to documentation), TEST (Any changes in tests)