Update for_developer md files #2829

Merged
merged 3 commits into from
Jan 23, 2024
198 changes: 198 additions & 0 deletions for_developers/cli_guide.md
@@ -0,0 +1,198 @@
# How to use OTX CLI

## Installation

Please see [setup_guide.md](setup_guide.md).

## otx help

```console
otx --help
```

```powershell
╭─ Arguments ─────────────────────────────────────────────────────────────────────────────────────╮
│ Usage: otx [-h] [-v] {install,train,test,predict,export} ... │
│ │
│ │
│ OpenVINO Training-Extension command line tool │
│ │
│ │
│ Options: │
│ -h, --help Show this help message and exit. │
│ -v, --version Display OTX version number. │
│ │
│ Subcommands: │
│ For more details of each subcommand, add it as an argument followed by --help. │
│ │
│ │
│ Available subcommands: │
│ install Install OTX requirements. │
│ train Trains the model using the provided LightningModule and OTXDataModule. │
│ test Run the testing phase of the engine. │
│ predict Run predictions using the specified model and data. │
│     export          Export the trained model to OpenVINO Intermediate Representation (IR) or│
│                     ONNX formats.                                                           │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────────╯
```
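The subcommand layout shown above follows a familiar argparse-style pattern. As a rough, hypothetical sketch (not the actual OTX implementation; the names are copied from the help output, and the version string is a placeholder), such a CLI can be wired like this:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    # Mirrors the help output above; purely illustrative, not the real OTX code.
    parser = argparse.ArgumentParser(
        prog="otx", description="OpenVINO Training-Extension command line tool"
    )
    parser.add_argument("-v", "--version", action="version", version="0.0.0")  # placeholder
    subcommands = parser.add_subparsers(dest="subcommand")
    for name, help_text in [
        ("install", "Install OTX requirements."),
        ("train", "Train the model."),
        ("test", "Run the testing phase of the engine."),
        ("predict", "Run predictions using the specified model and data."),
        ("export", "Export the trained model to OpenVINO IR or ONNX formats."),
    ]:
        subcommands.add_parser(name, help=help_text)
    return parser


args = build_parser().parse_args(["train"])
print(args.subcommand)  # train
```

Calling the script with no arguments, or with `--help`, would then produce a usage message analogous to the one above.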

Each subcommand provides its own help output. For basic subcommand help, the Verbosity Level is 0; in this case, the CLI shows a Quick-Guide in markdown.

```console
# otx {subcommand} --help
otx train --help
```

```powershell
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ OpenVINO™ Training Extensions CLI Guide ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛

Github Repository:
https://github.com/openvinotoolkit/training_extensions.

A better guide is provided by the documentation.
╭─ Quick-Start ─────────────────────────────────────────────────────────╮
│ │
│ 1 you can train with data_root only. then OTX will provide default │
│ model. │
│ │
│ │
│ otx train --data_root <DATASET_PATH> │
│ │
│ │
│ 2 you can pick a model or datamodule as Config file or Class. │
│ │
│ │
│ otx train │
│ --data_root <DATASET_PATH> │
│ --model <CONFIG | CLASS_PATH_OR_NAME> --data <CONFIG | │
│ CLASS_PATH_OR_NAME> │
│ │
│ │
│ 3 Of course, you can override the various values with commands. │
│ │
│ │
│ otx train │
│ --data_root <DATASET_PATH> │
│ --max_epochs <EPOCHS, int> --checkpoint <CKPT_PATH, str> │
│ │
│ │
│ 4 If you have a complete configuration file, run it like this. │
│ │
│ │
│ otx train --data_root <DATASET_PATH> --config <CONFIG_PATH, str> │
│ │
│ │
│ To get more overridable argument information, run the command below. │
│ │
│ │
│ # Verbosity Level 1 │
│ otx train [optional_arguments] -h -v │
│ # Verbosity Level 2 │
│ otx train [optional_arguments] -h -vv │
│ │
╰───────────────────────────────────────────────────────────────────────╯
```

At Verbosity Level 1, the CLI shows the Quick-Guide and the essential arguments.

```console
otx train --help -v
```

```powershell
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ OpenVINO™ Training Extensions CLI Guide ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛

Github Repository:
https://github.com/openvinotoolkit/training_extensions.

A better guide is provided by the documentation.
╭─ Quick-Start ─────────────────────────────────────────────────────────╮
│ ... │
╰───────────────────────────────────────────────────────────────────────╯
╭─ Arguments ─────────────────────────────────────────────────────────────────────────────────────╮
│ Usage: otx [options] train [-h] [-c CONFIG] [--print_config [=flags]] │
│ [--data_root DATA_ROOT] [--task TASK] │
│ [--engine CONFIG] │
│ [--engine.work_dir WORK_DIR] │
│ [--engine.checkpoint CHECKPOINT] │
│ [--engine.device {auto,gpu,cpu,tpu,ipu,hpu,mps}] │
│ [--model.help CLASS_PATH_OR_NAME] │
│ [--model CONFIG | CLASS_PATH_OR_NAME | .INIT_ARG_NAME VALUE] │
│ [--data CONFIG] │
│ [--optimizer CONFIG | CLASS_PATH_OR_NAME | .INIT_ARG_NAME VALUE] │
│ [--scheduler CONFIG | CLASS_PATH_OR_NAME | .INIT_ARG_NAME VALUE] │
│ │
...
```

At Verbosity Level 2, the CLI shows all available arguments.

```console
otx train --help -vv
```

## otx {subcommand} --print_config

This previews all of the configuration values that the command will run with.

```console
otx train --config <config-file-path> --print_config
```

```yaml
data_root: tests/assets/car_tree_bug
callback_monitor: val/map_50
engine:
  task: DETECTION
  work_dir: ./otx-workspace
  device: auto
model:
  class_path: otx.algo.detection.atss.ATSS
  init_args:
    num_classes: 1000
    variant: mobilenetv2
optimizer: ...
scheduler: ...
data:
  task: DETECTION
  config:
    data_format: coco_instances
    train_subset: ...
    val_subset: ...
    test_subset: ...
    mem_cache_size: 1GB
    mem_cache_img_max_size: null
    image_color_channel: RGB
    include_polygons: false
max_epochs: 2
deterministic: false
precision: 16
callbacks: ...
logger: ...
```
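Since the printed configuration is plain YAML, it can also be inspected or adjusted programmatically before a run. A minimal sketch, assuming PyYAML is installed and using a trimmed-down version of the config above:

```python
import yaml  # PyYAML, assumed to be installed

# A trimmed version of the configuration printed above.
config_text = """
engine:
  task: DETECTION
  work_dir: ./otx-workspace
model:
  class_path: otx.algo.detection.atss.ATSS
  init_args:
    num_classes: 1000
    variant: mobilenetv2
max_epochs: 2
"""

config = yaml.safe_load(config_text)

# Adjust values before saving the config for a real run.
config["model"]["init_args"]["num_classes"] = 3
config["max_epochs"] = 50

print(yaml.safe_dump(config, sort_keys=False))
```

The dumped text can then be written to a file and passed back to the CLI with `--config`.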

Users can also pre-generate a config file by redirecting the output, as in the example below.

```console
otx train --config <config-file-path> --print_config > config.yaml
```

## otx {subcommand}

Use a configuration file:

```console
otx train --config <config-file-path> --data_root <dataset-root>
```

Override parameters:

```console
otx train ... --model.num_classes <num-classes> --max_epochs <max-epochs>
```
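Dotted flags such as `--model.num_classes` address nested keys of the configuration. The following is an illustrative sketch of that mapping only, not the CLI's actual parsing logic:

```python
# Illustrative sketch of how a dotted CLI override could be applied to a
# nested config dict; not the actual OTX parsing code.
def apply_override(config: dict, dotted_key: str, value) -> None:
    keys = dotted_key.split(".")
    node = config
    for key in keys[:-1]:
        node = node.setdefault(key, {})  # descend, creating nodes as needed
    node[keys[-1]] = value


config = {"model": {"num_classes": 1000}, "max_epochs": 2}
apply_override(config, "model.num_classes", 3)
apply_override(config, "max_epochs", 50)
print(config)  # {'model': {'num_classes': 3}, 'max_epochs': 50}
```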
6 changes: 2 additions & 4 deletions for_developers/dir_structure.md
@@ -2,7 +2,7 @@
root/
algo/ # Custom algo (e.g., hierarchical_cls_head)
cli/ # CLI entrypoints
config/ # Default YAML config files
engine/ # OTX Engine with Entry Point
core/
config/ # Structured data type object for configurations
data/ # Data related things
@@ -17,9 +17,6 @@ root/
transform_libs/ # To support transform libraries (e.g., MMCV)
factory.py # Factory to instantiate data related objects
module.py # OTXDataModule
engine/ # PyTorchLightning engine
train.py
...
model/ # Model related things
entity/ # OTXModel
base.py
@@ -32,6 +29,7 @@ root/
types/ # Enum definitions (e.g. OTXTaskType)
utils/ # Utility functions
recipe/ # Recipe YAML config for each model we support
_base_/ # Default YAML config files
detection/ # (e.g., rtmdet_tiny)
...
tools/ # Python runnable scripts for some TBD use cases
36 changes: 11 additions & 25 deletions for_developers/setup_guide.md
@@ -12,15 +12,11 @@ conda activate otx-v2
# Install PyTorch and TorchVision
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia

# Install core dependency
pip install lightning datumaro omegaconf hydra-core

# Install mmcv (mmdet)
pip install -U openmim
mim install mmengine "mmcv>=2.0.0" mmdet

# Install this package
# Install otx with core requirements
pip install -e .

# otx install (install mmX)
otx install -v
```

### With PIP & 'otx install'
@@ -37,12 +33,8 @@ pip install -e .
otx --help

# Install torch & lightning base on user environments
otx install
# or 'otx install -v' (Verbose mode)

# Install other mmlab library or optional-dependencies
otx install --option dev
# or 'otx install --option mmpretrain'
otx install -v
# or 'otx install' (non-verbose mode)
```

Please see [requirements-lock.txt](requirements-lock.txt). It was generated with `pip freeze` after completing the installation steps above.
@@ -52,31 +44,25 @@ Please see [requirements-lock.txt](requirements-lock.txt). This is what I got af
- Launch detection task ATSS-R50-FPN template

```console
otx train +recipe=detection/atss_r50_fpn base.data_dir=tests/assets/car_tree_bug model.otx_model.config.bbox_head.num_classes=3 trainer.max_epochs=50 trainer.check_val_every_n_epoch=10 trainer=gpu base.work_dir=outputs/test_work_dir base.output_dir=outputs/test_output_dir
otx train --config src/otx/recipe/detection/atss_r50_fpn.yaml --data_root tests/assets/car_tree_bug --model.num_classes=3 --max_epochs=50 --check_val_every_n_epoch=10 --engine.device gpu --engine.work_dir ./otx-workspace
```

- Change subset names, e.g., "train" -> "train_16" (for training)

```console
otx train ... data.train_subset.subset_name=<arbitrary-name> data.val_subset.subset_name=<arbitrary-name> data.test_subset.subset_name=<arbitrary-name>
```

- Do test with the best validation model checkpoint

```console
otx train ... test=true
otx train ... --data.config.train_subset.subset_name <arbitrary-name> --data.config.val_subset.subset_name <arbitrary-name> --data.config.test_subset.subset_name <arbitrary-name>
```

- Do train with the existing model checkpoint for resume

```console
otx train ... checkpoint=<checkpoint-path>
otx train ... --checkpoint <checkpoint-path>
```

- Do experiment with deterministic operations and the fixed seed

```console
otx train ... trainer.deterministic=True seed=<arbitrary-seed>
otx train ... --deterministic True --seed <arbitrary-seed>
```

- Do test with the existing model checkpoint
@@ -85,4 +71,4 @@ Please see [requirements-lock.txt](requirements-lock.txt). This is what I got af
otx test ... checkpoint=<checkpoint-path>
```

`trainer.deterministic=True` might affect the model performance. Please see [this link](https://lightning.ai/docs/pytorch/stable/common/trainer.html#deterministic). Therefore, it is not recommended to turn on this option when comparing model performance.
`--deterministic True` might affect the model performance. Please see [this link](https://lightning.ai/docs/pytorch/stable/common/trainer.html#deterministic). Therefore, it is not recommended to turn on this option when comparing model performance.