
Model preparation

Intel® Distribution of OpenVINO™ Toolkit

To prepare models and data for benchmarking, please follow the instructions below. A worked example for a single model is given after the list.

  1. Create a <working_dir> directory which will contain models and datasets.

    mkdir <working_dir>
  2. Download models to the <working_dir> directory using the OpenVINO Model Downloader tool:

    omz_downloader --all --output_dir <working_dir> --cache_dir <cache_dir>
  3. Convert models using the OpenVINO Model Converter tool; the converted models are stored in the <working_dir> directory:

    omz_converter --output_dir <working_dir> --download_dir <working_dir>
  4. (Optional) Convert models to the INT8 precision:

    1. Prepare configuration files in accordance with src/configs/quantization_configuration_file_template.xml. Please use the GUI application (src/config_maker).

    2. Quantize models to the INT8 precision using the script src/quantization/quantization.py in accordance with src/quantization/README.md.

      python3 ~/dl-benchmark/src/quantization/quantization.py -c <config_path>
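
For illustration, the whole sequence for a single model might look as follows. This is a minimal sketch: the model name resnet-50-tf and the directory names are examples only, and any Open Model Zoo model name and writable directories can be used instead.

mkdir ~/models
omz_downloader --name resnet-50-tf --output_dir ~/models --cache_dir ~/models_cache
omz_converter --name resnet-50-tf --output_dir ~/models --download_dir ~/models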

Intel® Optimization for Caffe

[TBD]

Intel® Optimizations for TensorFlow

We used OMZ models for performance analysis. To download these models, please use the OpenVINO Model Downloader tool described above and specify TensorFlow models by name.

omz_downloader --name <model_name> --output_dir <working_dir> --cache_dir <cache_dir>

Models stored in the pb- or meta-format can be inferred directly. Models stored in the h5-format should be converted to the pb-format using omz_converter.

omz_converter --name <model_name> --output_dir <working_dir> \
              --download_dir <working_dir>

For several models it is required to export PYTHONPATH before model conversion.

export PYTHONPATH=`pwd`:`pwd`/<model_name>/models/research/slim

TensorFlow Lite

We used TensorFlow models from the OpenVINO™ Toolkit Open Model Zoo. We converted these models into the tflite-format using the internal converter (src/model_converters/tflite_converter.py).

python tflite_converter.py --model-path <model_path> \
                           --input-names <inputs> \
                           --output-names <outputs> \
                           --source-framework tf

We also inferred several models from TF Hub.

ONNX Runtime

omz_converter supports exporting PyTorch models to the ONNX format. For more information, see Exporting a PyTorch Model to ONNX Format.

  1. Create a <working_dir> directory which will contain models and datasets.

    mkdir <working_dir>
  2. Download models to the <working_dir> directory using the OpenVINO Model Downloader tool:

    omz_downloader --all --output_dir <working_dir> \
                   --cache_dir <cache_dir>

    or

    omz_downloader --name <model_name> --output_dir <working_dir> \
                   --cache_dir <cache_dir>
  3. Convert models using the OpenVINO Model Converter tool; the converted models are stored in the <working_dir> directory:

    omz_converter --output_dir <working_dir> --download_dir <working_dir>

    The output_dir will contain the models converted to the ONNX format. An example for a single model is given after this list.
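
For example, to obtain the ONNX representation of a single model (the name resnet-50-pytorch below is only an illustration of an Open Model Zoo PyTorch model name; substitute the model you need):

omz_downloader --name resnet-50-pytorch --output_dir <working_dir> --cache_dir <cache_dir>
omz_converter --name resnet-50-pytorch --output_dir <working_dir> --download_dir <working_dir>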

MXNet

To infer MXNet models, it is required to install the GluonCV Python package.

pip install gluoncv[full]
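
An optional sanity check (not part of the benchmark itself) verifies that the package is importable:

python3 -c "import gluoncv; print(gluoncv.__version__)"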

OpenCV DNN

We used TensorFlow, Caffe and ONNX models from the OpenVINO™ Toolkit Open Model Zoo.

PyTorch

To infer PyTorch models, it is required to install the TorchVision Python package, or use models from the OpenVINO™ Toolkit Open Model Zoo.

pip install torchvision
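
A similar optional check verifies the TorchVision installation:

python3 -c "import torchvision; print(torchvision.__version__)"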