
Model preparation

Intel® Distribution of OpenVINO™ Toolkit

To prepare models and data for benchmarking, please follow the instructions below.

  1. Create a <working_dir> directory that will contain models and datasets.

    mkdir <working_dir>
  2. Download models to the <working_dir> directory using the OpenVINO Model Downloader tool:

    omz_downloader --all --output_dir <working_dir> --cache_dir <cache_dir>
  3. Convert models to the <working_dir> directory using the OpenVINO Model Converter tool:

    omz_converter --output_dir <working_dir> --download_dir <working_dir>
  4. (Optional) Convert models to the INT8 precision:

    1. Prepare configuration files in accordance with src/configs/quantization_configuration_file_template.xml. Please use the GUI application (src/config_maker).

    2. Quantize models to the INT8 precision using the script src/quantization/quantization.py in accordance with src/quantization/README.md.

      python3 ~/dl-benchmark/src/quantization/quantization.py -c <config_path>

Intel® Optimization for Caffe

In the initial versions, we used models from the OpenVINO™ Toolkit Open Model Zoo repository.

Intel® Optimizations for TensorFlow

We used OMZ models for performance analysis. To download these models, use the OpenVINO Model Downloader tool described earlier and download TensorFlow models by name.

omz_downloader --name <model_name> --output_dir <working_dir> --cache_dir <cache_dir>

Models stored in the pb or meta format can be inferred directly. Models stored in the h5 format should be converted to the pb format using omz_converter.

omz_converter --name <model_name> --output_dir <working_dir> \
              --download_dir <working_dir>

For several models, it is required to export PYTHONPATH before model conversion.

export PYTHONPATH=`pwd`:`pwd`/<model_name>/models/research/slim

TensorFlow Lite

We used TensorFlow models from the OpenVINO™ Toolkit Open Model Zoo. We converted these models to the tflite format using the internal converter (src/model_converters/tflite_converter.py).

python tflite_converter.py --model-path <model_path> \
                           --input-names <inputs> \
                           --output-names <outputs> \
                           --source-framework tf

We also inferred several models from TF Hub.

ONNX Runtime

omz_converter supports exporting PyTorch models to the ONNX format. For more information, see Exporting a PyTorch Model to ONNX Format; a sketch of the underlying export call is shown after the steps below.

  1. Create a <working_dir> directory that will contain models and datasets.

    mkdir <working_dir>
  2. Download models to the <working_dir> directory using the OpenVINO Model Downloader tool:

    omz_downloader --all --output_dir <working_dir> \
                   --cache_dir <cache_dir>

    or

    omz_downloader --name <model_name> --output_dir <working_dir> \
                   --cache_dir <cache_dir>
  3. Convert models to the <working_dir> directory using the OpenVINO Model Converter tool:

    omz_converter --output_dir <working_dir> --download_dir <working_dir>

    The output directory will contain the models converted to the ONNX format.
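
For reference, the export that omz_converter performs for PyTorch models essentially wraps a torch.onnx.export call. A minimal sketch, assuming a recent torchvision and resnet50 as an example model:

import torch
import torchvision.models as models

# Instantiate the model and switch it to inference mode
model = models.resnet50(weights=None)
model.eval()

# Trace the model with a dummy input and serialize it to the ONNX format
dummy_input = torch.rand(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, 'resnet50.onnx')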

MXNet

To infer MXNet models, it is required to install the GluonCV Python package.

pip install gluoncv[full]
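
As a quick smoke test, a pretrained model can be pulled from the GluonCV model zoo and run on a dummy input. A minimal sketch, with resnet50_v1 as an example model name:

import mxnet as mx
from gluoncv import model_zoo

# Download a pretrained model from the GluonCV model zoo
net = model_zoo.get_model('resnet50_v1', pretrained=True)

# Run the model on a random dummy input
x = mx.nd.random.uniform(shape=(1, 3, 224, 224))
out = net(x)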

OpenCV DNN

We used TensorFlow, Caffe and ONNX models from the OpenVINO™ Toolkit Open Model Zoo.
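
For illustration, inference with the OpenCV DNN module reduces to loading a model and calling forward. A minimal sketch for an ONNX model, where the file name model.onnx and the 1x3x224x224 input shape are assumptions:

import cv2
import numpy as np

# Load the network from an ONNX file
net = cv2.dnn.readNetFromONNX('model.onnx')

# Feed a random dummy blob and run inference
blob = np.random.rand(1, 3, 224, 224).astype(np.float32)
net.setInput(blob)
out = net.forward()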

PyTorch

To infer PyTorch models, it is required to install the torchvision Python package or to download models from the OpenVINO™ Toolkit Open Model Zoo.

pip install torchvision
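
A minimal inference sketch, assuming a recent torchvision and resnet50 as an example model:

import torch
import torchvision.models as models

# Load a pretrained model and switch it to inference mode
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

# Run the model on a random dummy input
with torch.no_grad():
    out = model(torch.rand(1, 3, 224, 224))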

Apache TVM

We used Caffe, TensorFlow, PyTorch and ONNX models from the OpenVINO™ Toolkit Open Model Zoo.
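
For illustration, a model exported to the ONNX format can be compiled with TVM through the Relay frontend. A minimal sketch, where the file name model.onnx, the input name data, and the input shape are assumptions:

import onnx
import tvm
from tvm import relay

# Import the ONNX model into Relay
onnx_model = onnx.load('model.onnx')
shape_dict = {'data': (1, 3, 224, 224)}
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Compile the model for the CPU
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target='llvm', params=params)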

DGL

To infer DGL models, it is required to install the PyTorch and DGL Python packages. Two model files are required: a checkpoint in the PyTorch format (.pt, .pth) and a .py file with a description of the architecture. This repository provides an example of the described files.
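
A minimal restore sketch, where the GCN class stands in for whatever architecture the downloaded .py file defines and model.pt is a hypothetical checkpoint name:

import torch
import torch.nn as nn
from dgl.nn import GraphConv

# Stand-in for the architecture description from the downloaded .py file
class GCN(nn.Module):
    def __init__(self, in_feats, hidden_feats, n_classes):
        super().__init__()
        self.conv1 = GraphConv(in_feats, hidden_feats)
        self.conv2 = GraphConv(hidden_feats, n_classes)

    def forward(self, g, x):
        h = torch.relu(self.conv1(g, x))
        return self.conv2(g, h)

# Restore the weights from the PyTorch checkpoint
net = GCN(in_feats=10, hidden_feats=16, n_classes=2)
net.load_state_dict(torch.load('model.pt'))
net.eval()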

Spektral

Spektral uses the Keras save/load system implemented in newer versions of TensorFlow, based on the special .keras file format. Additionally, the .py file with the class implementation of the graph neural network is required. Examples can be found here.
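
A minimal load sketch, where GNNModel and the gnn_model module are hypothetical names for the class and the .py file shipped with the model, and model.keras is a hypothetical file name:

import tensorflow as tf
from gnn_model import GNNModel  # hypothetical architecture file

# Register the custom class so that Keras can deserialize the model
model = tf.keras.models.load_model(
    'model.keras',
    custom_objects={'GNNModel': GNNModel},
)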

RKNN

The RKNN launcher supports models in the .rknn format. To obtain models in this format, RKNN Toolkit 2 must be used. It supports conversion from various frameworks (e.g. ONNX, TensorFlow Lite, PyTorch).
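
A minimal conversion sketch with RKNN Toolkit 2, where the ONNX source model and the rk3588 target platform are assumptions:

from rknn.api import RKNN

rknn = RKNN()
rknn.config(target_platform='rk3588')

# Import the source model and build it for the target
rknn.load_onnx(model='model.onnx')
rknn.build(do_quantization=False)

# Serialize the result to the .rknn format
rknn.export_rknn('model.rknn')
rknn.release()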

ncnn

To infer ncnn models, it is required to install the ncnn Python package.

pip install ncnn
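
A minimal inference sketch with the ncnn Python bindings, where the file names and the blob names in0/out0 are assumptions that depend on the particular model:

import numpy as np
import ncnn

# Load the network structure and weights
net = ncnn.Net()
net.load_param('model.param')
net.load_model('model.bin')

# Feed a random dummy input and extract the output blob
ex = net.create_extractor()
ex.input('in0', ncnn.Mat(np.random.rand(3, 224, 224).astype(np.float32)))
ret, out = ex.extract('out0')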

PaddlePaddle

You can download models directly from the PaddleClas repository. The list of models is available here. To download and unpack a model, use the following commands.

wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/<model_name>.tar
tar -xf <model_name>.tar
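
The unpacked directory can then be loaded through the Paddle Inference API. A minimal sketch, assuming the usual inference.pdmodel / inference.pdiparams layout inside the archive:

from paddle.inference import Config, create_predictor

# Point the config at the unpacked model and parameter files
config = Config('<model_name>/inference.pdmodel',
                '<model_name>/inference.pdiparams')
predictor = create_predictor(config)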