# ModelPreparing

To prepare models and data for benchmarking, please follow the instructions below.
- Create the `<working_dir>` directory which will contain models and datasets:

  ```bash
  mkdir <working_dir>
  ```
- Download models using the OpenVINO Model Downloader tool to the `<working_dir>` directory:

  ```bash
  omz_downloader --all --output_dir <working_dir> --cache_dir <cache_dir>
  ```
- Convert models using the OpenVINO Model Converter tool to the `<working_dir>` directory (a sketch for loading the resulting IR follows this list):

  ```bash
  omz_converter --output_dir <working_dir> --download_dir <working_dir>
  ```
- (Optional) Convert models to the INT8 precision:

  - Prepare configuration files in accordance with `src/configs/quantization_configuration_file_template.xml`. Please use the GUI application (`src/config_maker`).
  - Quantize models to the INT8 precision using the script `src/quantization/quantization.py` in accordance with `src/quantization/README.md`:

    ```bash
    python3 ~/dl-benchmark/src/quantization/quantization.py -c <config_path>
    ```
- [TBD]
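To check that the download and conversion steps succeeded, here is a minimal sketch that loads a converted IR with the OpenVINO runtime, assuming a recent OpenVINO Python package; the IR path follows the typical `omz_converter` output layout and is an assumption:

```python
import openvino as ov

# Load the converted IR and compile it for CPU (the path is an assumption
# based on the usual omz_converter output layout).
core = ov.Core()
model = core.read_model("<working_dir>/public/<model_name>/FP16/<model_name>.xml")
compiled = core.compile_model(model, "CPU")
print(compiled.inputs, compiled.outputs)
```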
We used OMZ-models for performance analysis. To download these models, please use the OpenVINO Model Downloader tool described earlier and load TensorFlow-models by name:

```bash
omz_downloader --name <model_name> --output_dir <working_dir> --cache_dir <cache_dir>
```
Models stored in the pb- or meta-format can be inferred directly. Models stored in the h5-format should be converted to the pb-format using `omz_converter`:

```bash
omz_converter --name <model_name> --output_dir <working_dir> \
    --download_dir <working_dir>
```
For several models it is required to export `PYTHONPATH` before model conversion:

```bash
export PYTHONPATH=`pwd`:`pwd`/<model_name>/models/research/slim
```
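As an illustration of inferring a pb-format model directly, here is a minimal sketch that loads a frozen TensorFlow graph; the file path and the use of the v1 compatibility API are assumptions:

```python
import tensorflow as tf

# Read the serialized GraphDef from the frozen .pb file (path is an assumption).
with tf.io.gfile.GFile("<working_dir>/<model_name>.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

# Import the graph so its tensors can be fed and fetched via a v1 session.
with tf.Graph().as_default() as graph:
    tf.compat.v1.import_graph_def(graph_def, name="")
```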
We used TensorFlow-models from the OpenVINO™ Toolkit Open Model Zoo. These models were converted into the tflite-format using the internal converter (`src/model_converters/tflite_converter.py`):
```bash
python tflite_converter.py --model-path <model_path> \
                           --input-names <inputs> \
                           --output-names <outputs> \
                           --source-framework tf
```
We also inferred several models from TensorFlow Hub.
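To sanity-check a converted tflite-model, here is a minimal sketch using the standard TensorFlow Lite interpreter; the file name is an assumption:

```python
import numpy as np
import tensorflow as tf

# Load the converted model and allocate its tensors.
interpreter = tf.lite.Interpreter(model_path="<model_name>.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a zero-filled tensor of the expected shape and run inference.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]["index"])
```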
`omz_converter` supports exporting PyTorch-models to the ONNX-format. For more info see Exporting a PyTorch Model to ONNX Format.
- Create the `<working_dir>` directory which will contain models and datasets:

  ```bash
  mkdir <working_dir>
  ```
- Download models using the OpenVINO Model Downloader tool to the `<working_dir>` directory:

  ```bash
  omz_downloader --all --output_dir <working_dir> \
      --cache_dir <cache_dir>
  ```

  or

  ```bash
  omz_downloader --name <model_name> --output_dir <working_dir> \
      --cache_dir <cache_dir>
  ```
- Convert models using the OpenVINO Model Converter tool to the `<working_dir>` directory:

  ```bash
  omz_converter --output_dir <working_dir> --download_dir <working_dir>
  ```

  `output_dir` will contain the model converted to the ONNX-format (see the inference sketch after this list).
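Here is a minimal sketch for running the exported model with ONNX Runtime; the file path, input shape, and data type are assumptions:

```python
import numpy as np
import onnxruntime as ort

# Open the exported ONNX model on CPU (the path is an assumption).
session = ort.InferenceSession(
    "<working_dir>/public/<model_name>/<model_name>.onnx",
    providers=["CPUExecutionProvider"],
)

# Feed a random tensor of an assumed image-classification shape.
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: dummy})
```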
To infer MXNet-models it is required to install the GluonCV Python package:

```bash
pip install gluoncv[full]
```
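Here is a minimal sketch of loading a pretrained GluonCV model and running a dummy forward pass; the model name and input shape are assumptions:

```python
import mxnet as mx
from gluoncv import model_zoo

# Fetch a pretrained model from the GluonCV model zoo (name is an assumption).
net = model_zoo.get_model("resnet50_v1", pretrained=True)

# Run a zero-filled input of an assumed image-classification shape.
output = net(mx.nd.zeros((1, 3, 224, 224)))
```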
We used TensorFlow, Caffe and ONNX models from the OpenVINO™ Toolkit Open Model Zoo.
To infer PyTorch-models it is required to install the torchvision Python package or to download models from the OpenVINO™ Toolkit Open Model Zoo:

```bash
pip install torchvision
```
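Here is a minimal sketch of loading a pretrained torchvision model and running a dummy forward pass; the model choice and input shape are assumptions:

```python
import torch
import torchvision.models as models

# Load a pretrained classification model (the choice is an assumption).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

# Run a zero-filled input of an assumed image-classification shape.
with torch.no_grad():
    output = model(torch.zeros(1, 3, 224, 224))
```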
We used Caffe, TensorFlow, PyTorch and ONNX models from the OpenVINO™ Toolkit Open Model Zoo.
To infer DGL-models it is required to install the PyTorch and DGL Python packages. Among the model files, you have to download a checkpoint in the PyTorch format (`.pt`, `.pth`) and a `.py` file with a description of the architecture. This repository provides an example of the described files.
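Here is a minimal sketch of restoring a DGL-model from the described pair of files; the module name, class name, constructor arguments, and checkpoint name are all hypothetical:

```python
import torch

# Hypothetical: `gcn.py` is the downloaded architecture description
# and `gcn.pt` is the downloaded checkpoint.
from gcn import GCN

net = GCN(in_feats=1433, hidden_size=16, num_classes=7)  # hypothetical arguments
net.load_state_dict(torch.load("gcn.pt", map_location="cpu"))
net.eval()
```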
Spektral uses the Keras save/load system implemented in newer versions of TensorFlow, based on the special `.keras` file format. Additionally, the `.py` file with the class implementation of the graph neural network is required. Examples can be found here.
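Here is a minimal sketch of loading a model saved in the `.keras` format; the file name and the `GNN` class imported from the accompanying `.py` file are hypothetical, and custom layers must be registered via `custom_objects`:

```python
import tensorflow as tf
from gnn_model import GNN  # hypothetical class from the required .py file

# Register the custom model class so Keras can deserialize it.
model = tf.keras.models.load_model(
    "<model_name>.keras",
    custom_objects={"GNN": GNN},
)
```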
The RKNN launcher supports models in the `.rknn` format. To obtain models in this format, RKNN Toolkit 2 must be used. It supports conversion from various frameworks (e.g. ONNX, TensorFlow Lite, PyTorch).
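Here is a minimal sketch of converting an ONNX-model to the `.rknn` format with the RKNN Toolkit 2 Python API; the target platform and file names are assumptions:

```python
from rknn.api import RKNN

rknn = RKNN()
# The target platform is an assumption; pick the one matching your device.
rknn.config(target_platform="rk3588")
rknn.load_onnx(model="<model_name>.onnx")
rknn.build(do_quantization=False)
rknn.export_rknn("<model_name>.rknn")
rknn.release()
```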
To infer ncnn-models it is required to install the ncnn Python package:

```bash
pip install ncnn
```
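Here is a minimal sketch of running an ncnn-model through the ncnn Python bindings; the file names, blob names, and input shape are assumptions taken from a typical ncnn classification model:

```python
import numpy as np
import ncnn

# Load the network definition and weights (file names are assumptions).
net = ncnn.Net()
net.load_param("<model_name>.param")
net.load_model("<model_name>.bin")

# Feed a zero-filled CHW tensor; blob names depend on the model.
ex = net.create_extractor()
ex.input("data", ncnn.Mat(np.zeros((3, 224, 224), dtype=np.float32)))
ret, out = ex.extract("output")
```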
You can download models directly from the PaddleClas repository. The list of models is available here. To download and unpack a model, please use the following commands:

```bash
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/<model_name>.tar
tar -xf <model_name>.tar
```
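Here is a minimal sketch of opening the unpacked model with the Paddle Inference Python API; the file names inside the archive are assumptions:

```python
from paddle.inference import Config, create_predictor

# File names are assumptions about the unpacked archive layout.
config = Config("<model_name>/inference.pdmodel",
                "<model_name>/inference.pdiparams")
config.disable_gpu()
predictor = create_predictor(config)
```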