
Commit

Merge branch 'develop' into mergeback/2.0.0
yunchu authored Jun 20, 2024
2 parents dfd3b10 + a684378 commit 8af2d12
Showing 40 changed files with 609 additions and 140 deletions.
4 changes: 2 additions & 2 deletions .github/workflows/publish.yaml
@@ -68,10 +68,10 @@ jobs:
file_glob: true
- name: Publish package distributions to PyPI
if: ${{ steps.check-tag.outputs.match != '' }}
uses: pypa/gh-action-pypi-publish@e53eb8b103ffcb59469888563dc324e3c8ba6f06 # v1.8.12
uses: pypa/gh-action-pypi-publish@ec4db0b4ddc65acdf4bff5fa45ac92d78b56bdf0 # v1.9.0
- name: Publish package distributions to TestPyPI
if: ${{ steps.check-tag.outputs.match == '' }}
uses: pypa/gh-action-pypi-publish@e53eb8b103ffcb59469888563dc324e3c8ba6f06 # v1.8.12
uses: pypa/gh-action-pypi-publish@ec4db0b4ddc65acdf4bff5fa45ac92d78b56bdf0 # v1.9.0
with:
repository-url: https://test.pypi.org/legacy/
verbose: true
65 changes: 65 additions & 0 deletions docs/source/_static/css/custom.css
@@ -68,6 +68,71 @@
--pst-icon-admonition-important: var(--pst-icon-exclamation-circle);
}

/* Main Block Width to 90% */
.bd-page-width {
width: 90%;
}

@media (min-width: 960px) {
.bd-page-width {
max-width: 90%;
}
}

/* Reduce Sidebar Width to 20% */
.bd-sidebar-primary {
background-color: var(--pst-color-background);
border-right: 1px solid var(--pst-color-border);
/* display:flex; */
flex: 0 0 20%;
flex-direction: column;
gap: 1rem;
max-height: calc(100vh - var(--pst-header-height));
max-width: 20%;
overflow-y: auto;
padding: 2rem 1rem 1rem;
position: sticky;
top: var(--pst-header-height);
}

/* Main Width to 100% */
.bd-main .bd-content .bd-article-container {
display: flex;
flex-direction: column;
justify-content: start;
max-width: 100%;
overflow-x: auto;
padding: 1rem;
width: 100%;
}

/* Hide Section Navigation Title */
nav.bd-links p.bd-links__title {
display: none;
}

/* Smaller current page side bar */
.bd-sidebar-secondary {
background-color: var(--pst-color-background);
display: flex;
flex-direction: column;
flex-shrink: 0;
max-height: calc(100vh - var(--pst-header-height));
order: 2;
overflow-y: auto;
padding: 2rem 1rem 1rem;
position: sticky;
top: var(--pst-header-height);
/* width:var(--pst-sidebar-secondary); */
}

/* Hide Search Button */
@media (min-width: 960px) {
.navbar-persistent--container {
display: none;
}
}

.navbar {
background: #0095ca !important;
}
2 changes: 2 additions & 0 deletions docs/source/conf.py
@@ -71,6 +71,8 @@

html_theme_options = {
"navbar_center": [],
"navbar_end": ["search-field.html", "theme-switcher.html", "navbar-icon-links.html"],
"search_bar_text": "Search",
"logo": {
"image_light": "logos/otx-logo.png",
"image_dark": "logos/otx-logo.png",
@@ -1,6 +1,14 @@
Auto-configuration
==================

|
.. figure:: ../../../../utils/images/auto_config.png
:align: center
:width: 100%

|
Auto-configuration for a deep learning framework means automatically finding the most appropriate settings for the training parameters, based on the dataset and the specific task at hand.
Auto-configuration helps save time, eases interaction with OpenVINO™ Training Extensions, and provides a better baseline for the given dataset.
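For illustration, a minimal use of auto-configuration through the API might look like the sketch below; the dataset path is a placeholder, and the recipe and hyperparameters are resolved from the data as described above.

.. code-block:: python

    from otx.engine import Engine

    # No model or recipe is specified explicitly: the task and recipe are
    # selected automatically from the dataset found under data_root.
    engine = Engine(data_root="data/flower_photos")
    engine.train()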

@@ -84,13 +92,16 @@ To use this feature, add the following parameter:

.. code-block:: python
Need to update!
from otx.engine import Engine
engine = Engine(data_root="<path_to_data_root>")
engine.train(adaptive_bs="Safe")
.. tab-item:: CLI

.. code-block:: bash
Need to update!
(otx) ...$ otx train ... --adaptive_bs Safe
2. Find the maximum executable batch size (`Full` mode)

@@ -107,13 +118,16 @@ To use this feature, add the following parameter:

.. code-block:: python
Need to update!
from otx.engine import Engine
engine = Engine(data_root="<path_to_data_root>")
engine.train(adaptive_bs="Full")
.. tab-item:: CLI

.. code-block:: bash
Need to update!
(otx) ...$ otx train ... --adaptive_bs Full
.. Warning::


8 changes: 8 additions & 0 deletions docs/source/guide/get_started/cli_commands.rst
@@ -9,6 +9,14 @@ All possible OpenVINO™ Training Extensions CLI commands are presented below al
Also, by default, the OpenVINO™ Training Extensions CLI is written using jsonargparse; see jsonargparse or LightningCLI.
Please refer to the `Jsonargparse Documentation <https://jsonargparse.readthedocs.io/en/v4.27.4/#configuration-files>`_.

|
.. figure:: ../../../utils/images/cli.png
:align: center
:width: 100%

|
*****
Help
*****
10 changes: 9 additions & 1 deletion docs/source/guide/get_started/introduction.rst
@@ -9,12 +9,20 @@ Introduction

**OpenVINO™ Training Extensions** is a low-code transfer learning framework for Computer Vision.

The CLI commands of the framework allow users to train, infer, optimize and deploy models easily and quickly even with low expertise in the deep learning field. OpenVINO™ Training Extensions offers diverse combinations of model architectures, learning methods, and task types based on `PyTorch <https://pytorch.org/>`_, `Lightning <https://lightning.ai/>`_ and `OpenVINO™ toolkit <https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/overview.html>`_.
The CLI commands of the framework or API allow users to train, infer, optimize and deploy models easily and quickly even with low expertise in the deep learning field. OpenVINO™ Training Extensions offers diverse combinations of model architectures, learning methods, and task types based on `PyTorch <https://pytorch.org/>`_, `Lightning <https://lightning.ai/>`_ and `OpenVINO™ toolkit <https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/overview.html>`_.

OpenVINO™ Training Extensions provides a `recipe <https://github.com/openvinotoolkit/training_extensions/tree/develop/src/otx/recipe>`_ for every supported task type, which consolidates the necessary information to build a model. Model templates are validated on various datasets and serve as a one-stop shop for obtaining the best models in general. If you are an experienced user, you can configure your own model based on `torchvision <https://pytorch.org/vision/stable/index.html>`_, `mmcv <https://github.com/open-mmlab/mmcv>`_ and `OpenVINO Model Zoo (OMZ) <https://github.com/openvinotoolkit/open_model_zoo>`_ frameworks.

Furthermore, OpenVINO™ Training Extensions provides :doc:`automatic configuration <../explanation/additional_features/auto_configuration>` of task types and hyperparameters. The framework will identify the most suitable recipe based on your dataset and choose the best hyperparameter configuration. The development team is continuously extending functionality to make training as simple as possible, so that a single CLI command can produce accurate, efficient and robust models ready to be integrated into your project.

|
.. figure:: ../../../utils/images/diagram_otx.png
:align: center
:width: 100%

|
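For example, the end-to-end flow described above can be sketched with the API as follows; the paths are placeholders, and the ``test``/``export`` calls follow the ``Engine`` methods used in the tutorials.

.. code-block:: python

    from otx.engine import Engine

    # A recipe and hyperparameters are chosen automatically from the dataset.
    engine = Engine(data_root="data/flower_photos", work_dir="otx-workspace")
    engine.train(max_epochs=10)
    engine.test()    # evaluate the trained model
    engine.export()  # export to OpenVINO IR for deployment
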
************
Key Features
************
1 change: 1 addition & 0 deletions docs/source/guide/tutorials/advanced/index.rst
@@ -5,5 +5,6 @@ Advanced Tutorials
:maxdepth: 1

configuration
semi_supervised_learning

.. Once we have enough material, we might need to categorize these into `data`, `model learning` sections.
167 changes: 167 additions & 0 deletions docs/source/guide/tutorials/advanced/semi_supervised_learning.rst
@@ -0,0 +1,167 @@
############################
Use Semi-Supervised Learning
############################

This tutorial provides an example of how to use semi-supervised learning with OpenVINO™ Training Extensions on a specific dataset.

OpenVINO™ Training Extensions now offers semi-supervised learning, which combines labeled and unlabeled data during training to improve model accuracy when only a small amount of annotated data is available. Currently, this type of training is available for multi-class classification.

If you want to learn more about the algorithms used in semi-supervised learning, please refer to the explanation section below:

- `Multi-class Classification <../../explanation/algorithms/classification/multi_class_classification.html#semi-supervised-learning>`__

In this tutorial, we use the MobileNet-V3-large model for multi-class classification as an example of semi-supervised learning.

The process has been tested on the following configuration:

- Ubuntu 20.04
- NVIDIA GeForce RTX 3090
- Intel(R) Core(TM) i9-11900
- CUDA Toolkit 11.8

.. note::

To learn how to export the trained model, refer to `classification export <../base/how_to_train/classification.html#export>`__.

To learn how to optimize the trained model (.xml) with OpenVINO™ PTQ, refer to `classification optimization <../base/how_to_train/classification.html#optimization>`__.

This tutorial explains how to train a model in semi-supervised learning mode and how to evaluate the resulting model.

*************************
Setup virtual environment
*************************

1. You can follow the installation process from a :doc:`quick start guide <../../get_started/installation>`
to create a universal virtual environment for OpenVINO™ Training Extensions.

2. Activate your virtual
environment:

.. code-block:: shell
. .otx/bin/activate
# or use this line if you created the environment using tox
. venv/otx/bin/activate
***************************
Dataset preparation
***************************

We use the same dataset, `flowers dataset <https://www.tensorflow.org/hub/tutorials/image_feature_vector#the_flowers_dataset>`_, as we do in :doc:`classification tutorial <../base/how_to_train/classification>`.

Since it is assumed that we have additional unlabeled images,
we use ``tests/assets/classification_semisl_dataset/unlabeled`` for this purpose as an example.

Please keep exactly the same names for the train/val/test folders so that the dataset can be identified.

.. code-block:: shell
flower_photos
├──labeled
| ├──train
| | ├── daisy
| | ├── dandelion
| | ├── roses
| | ├── sunflowers
| | ├── tulips
| ├──val
| | ├── daisy
| | ├── ...
| ├──test
| | ├── daisy
| | ├── ...
├──unlabeled
*********
Training
*********

1. The recipes that provide Semi-SL training can be found below.

.. tab-set::

.. tab-item:: CLI

.. code-block:: shell
(otx) ...$ otx find --task MULTI_CLASS_CLS --pattern semisl
┏━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Task ┃ Model Name ┃ Recipe Path ┃
┡━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ MULTI_CLASS_CLS │ tv_efficientnet_v2_l_semisl │ src/otx/recipe/classification/multi_class_cls/tv_efficientnet_v2_l_semisl.yaml │
│ MULTI_CLASS_CLS │ mobilenet_v3_large_semisl │ src/otx/recipe/classification/multi_class_cls/mobilenet_v3_large_semisl.yaml │
│ MULTI_CLASS_CLS │ efficientnet_b0_semisl │ src/otx/recipe/classification/multi_class_cls/efficientnet_b0_semisl.yaml │
│ MULTI_CLASS_CLS │ tv_efficientnet_b3_semisl │ src/otx/recipe/classification/multi_class_cls/tv_efficientnet_b3_semisl.yaml │
│ MULTI_CLASS_CLS │ efficientnet_v2_semisl │ src/otx/recipe/classification/multi_class_cls/efficientnet_v2_semisl.yaml │
│ MULTI_CLASS_CLS │ deit_tiny_semisl │ src/otx/recipe/classification/multi_class_cls/deit_tiny_semisl.yaml │
│ MULTI_CLASS_CLS │ dino_v2_semisl │ src/otx/recipe/classification/multi_class_cls/dino_v2_semisl.yaml │
│ MULTI_CLASS_CLS │ tv_mobilenet_v3_small_semisl │ src/otx/recipe/classification/multi_class_cls/tv_mobilenet_v3_small_semisl.yaml│
└─────────────────┴───────────────────────────────────────┴─────────────────────────────────────────────────────────────────────────────────┘
.. tab-item:: API

.. code-block:: python
from otx.engine.utils.api import list_models
model_lists = list_models(task="MULTI_CLASS_CLS", pattern="*semisl")
print(model_lists)
'''
[
'tv_efficientnet_b3_semisl',
'efficientnet_b0_semisl',
'efficientnet_v2_semisl',
...
]
'''
2. We will use the MobileNet-V3-large model for multi-class classification in semi-supervised learning mode.

.. tab-set::

.. tab-item:: CLI (with config)

.. code-block:: shell
(otx) ...$ otx train \
--config src/otx/recipe/classification/multi_class_cls/mobilenet_v3_large_semisl.yaml \
--data_root data/flower_photos/labeled \
--data.config.unlabeled_subset.data_root data/flower_photos/unlabeled
.. tab-item:: API (from_config)

.. code-block:: python
from otx.engine import Engine
data_root = "data/flower_photos"
recipe = "src/otx/recipe/classification/multi_class_cls/mobilenet_v3_large_semisl.yaml"
overrides = {"data.config.unlabeled_subset.data_root": "data/flower_photos/unlabeled"}
engine = Engine.from_config(
config_path=recipe,
data_root=data_root,
work_dir="otx-workspace",
**overrides,
)
engine.train(...)
.. tab-item:: API

.. code-block:: python
from otx.core.config.data import DataModuleConfig, UnlabeledDataConfig
from otx.core.data.module import OTXDataModule
from otx.engine import Engine
data_config = DataModuleConfig(..., unlabeled_subset=UnlabeledDataConfig(data_root="data/flower_photos/unlabeled", ...))
datamodule = OTXDataModule(..., config=data_config)
engine = Engine(..., datamodule=datamodule)
engine.train(max_epochs=200)
The rest of the commands are the same as the original Classification tutorial.
Please refer to the :doc:`classification tutorial <../base/how_to_train/classification>` for more details.
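For instance, evaluating the Semi-SL model can be sketched with the same API; the checkpoint path below is a placeholder for the best checkpoint produced by training.

.. code-block:: python

    from otx.engine import Engine

    # Reuse the workspace produced by training and evaluate the Semi-SL model.
    engine = Engine(data_root="data/flower_photos/labeled", work_dir="otx-workspace")
    engine.test(checkpoint="otx-workspace/best_checkpoint.ckpt")
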
@@ -140,7 +140,7 @@ The list of supported recipes for classification is available with the command l
print(model_lists)
'''
[
'otx_efficientnet_b0',
'efficientnet_b0',
'efficientnet_v2_light',
'efficientnet_b0_light',
...
@@ -160,7 +160,7 @@ Let's check the multi-class classification configuration running the following c

.. code-block:: shell
(otx) ...$ otx train --config src/otx/recipe/classification/multi_class_cls/otx_mobilenet_v3_large.yaml --data_root data/flower_photos --print_config
(otx) ...$ otx train --config src/otx/recipe/classification/multi_class_cls/mobilenet_v3_large.yaml --data_root data/flower_photos --print_config
...
data_root: data/flower_photos
@@ -204,7 +204,7 @@ Here are the main outputs can expect with CLI:

.. code-block:: shell
(otx) ...$ otx train --config src/otx/recipe/classification/multi_class_cls/otx_mobilenet_v3_large.yaml --data_root data/flower_photos
(otx) ...$ otx train --config src/otx/recipe/classification/multi_class_cls/mobilenet_v3_large.yaml --data_root data/flower_photos
.. tab-item:: API (from_config)

@@ -213,7 +213,7 @@ Here are the main outputs can expect with CLI:
from otx.engine import Engine
data_root = "data/flower_photos"
recipe = "src/otx/recipe/classification/multi_class_cls/otx_mobilenet_v3_large.yaml"
recipe = "src/otx/recipe/classification/multi_class_cls/mobilenet_v3_large.yaml"
engine = Engine.from_config(
config_path=recipe,
Binary file added docs/utils/images/auto_config.png
Binary file added docs/utils/images/cli.png
Binary file added docs/utils/images/diagram_otx.png
Binary file added docs/utils/images/semi-sl-algo.png
Binary file added docs/utils/images/semi-sl-effnet-b0.png
Binary file added docs/utils/images/semi-sl-effnet-v2.png
Binary file added docs/utils/images/semi-sl-mv3-large.png