# Model Preparing
To prepare models and data for benchmarking, please follow the instructions below.
## OpenVINO

- Create the `<working_dir>` directory which will contain models and datasets.

  ```bash
  mkdir <working_dir>
  ```
- Download models to the `<working_dir>` directory using the OpenVINO Model Downloader tool:

  ```bash
  omz_downloader --all --output_dir <working_dir> --cache_dir <cache_dir>
  ```
- Convert models to the `<working_dir>` directory using the OpenVINO Model Converter tool (a quick check of a converted model is sketched after this list):

  ```bash
  omz_converter --output_dir <working_dir> --download_dir <working_dir>
  ```
- (Optional) Convert models to the INT8 precision:
  - Prepare configuration files in accordance with `src/configs/quantization_configuration_file_template.xml`. Please use the GUI application (`src/config_maker`).
  - Quantize models to the INT8 precision using the script `src/quantization/quantization.py` in accordance with `src/quantization/README.md`.

    ```bash
    python3 ~/dl-benchmark/src/quantization/quantization.py -c <config_path>
    ```
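After conversion you can sanity-check an IR with the OpenVINO Runtime Python API. This is a minimal sketch, not part of the benchmarking scripts; the model path, the `CPU` device, and the zero-filled input are placeholder assumptions:

```python
import numpy as np
from openvino.runtime import Core

# Placeholder path to a converted IR under <working_dir>; adjust to your model.
MODEL_XML = "public/<model_name>/FP32/<model_name>.xml"

core = Core()
model = core.read_model(MODEL_XML)
compiled = core.compile_model(model, "CPU")  # device name is an assumption

# Zero-filled dummy input matching the first input's (static) shape.
data = np.zeros(tuple(compiled.inputs[0].shape), dtype=np.float32)

request = compiled.create_infer_request()
results = request.infer([data])  # maps output ports to ndarrays
print(next(iter(results.values())).shape)
```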
[TBD]
## TensorFlow

We used OMZ models for performance analysis. To download these models, please use the OpenVINO Model Downloader tool described earlier and download TensorFlow models by name:

```bash
omz_downloader --name <model_name> --output_dir <working_dir> --cache_dir <cache_dir>
```
Models stored in the pb or meta format can be inferred directly (a sketch of loading a frozen graph follows below).
Models stored in the h5 format should be converted to the pb format using `omz_converter`:

```bash
omz_converter --name <model_name> --output_dir <working_dir> \
    --download_dir <working_dir>
```
For several models it is required to export `PYTHONPATH` before model conversion:

```bash
export PYTHONPATH=`pwd`:`pwd`/<model_name>/models/research/slim
```
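As noted above, a frozen graph in the pb format can be inferred directly. Below is a minimal sketch using the TF1 compatibility API; the file name, the tensor names `input:0`/`output:0`, and the input shape are placeholder assumptions that depend on the model:

```python
import numpy as np
import tensorflow as tf

# Load a TF1-style frozen graph (placeholder file name).
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("<model_name>.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.compat.v1.import_graph_def(graph_def, name="")

# Run inference; tensor names and the input shape are model-specific.
with tf.compat.v1.Session(graph=graph) as sess:
    data = np.zeros((1, 224, 224, 3), dtype=np.float32)
    out = sess.run("output:0", feed_dict={"input:0": data})
    print(out.shape)
```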
`omz_converter` also supports exporting PyTorch models to the ONNX format.
For more information, see Exporting a PyTorch Model to ONNX Format.
## ONNX

- Create the `<working_dir>` directory which will contain models and datasets.

  ```bash
  mkdir <working_dir>
  ```
- Download models to the `<working_dir>` directory using the OpenVINO Model Downloader tool:

  ```bash
  omz_downloader --all --output_dir <working_dir> \
      --cache_dir <cache_dir>
  ```

  or

  ```bash
  omz_downloader --name <model_name> --output_dir <working_dir> \
      --cache_dir <cache_dir>
  ```
- Convert models to the `<working_dir>` directory using the OpenVINO Model Converter tool:

  ```bash
  omz_converter --output_dir <working_dir> --download_dir <working_dir>
  ```

  `output_dir` will contain the models converted to the ONNX format.
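To verify that an exported ONNX model loads and runs, you can use ONNX Runtime. A minimal sketch; the model path, the float32 zero-filled input, and substituting 1 for dynamic dimensions are placeholder assumptions:

```python
import numpy as np
import onnxruntime as ort

# Placeholder path to a converted model under <working_dir>.
session = ort.InferenceSession("public/<model_name>/<model_name>.onnx",
                               providers=["CPUExecutionProvider"])

inp = session.get_inputs()[0]
# Dynamic dimensions are reported as strings; substitute 1 for the sketch.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
data = np.zeros(shape, dtype=np.float32)  # assumes a float32 input

outputs = session.run(None, {inp.name: data})
print(outputs[0].shape)
```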
## TensorFlow Lite

We used TensorFlow models from the OpenVINO™ Toolkit Open Model Zoo
and converted them into the tflite format using the internal
converter (`src/model_converters/tflite_converter.py`):

```bash
python tflite_converter.py --model-path <model_path> \
                           --input-names <inputs> \
                           --output-names <outputs> \
                           --source-framework tf
```
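To confirm that a converted model is valid, you can load it with the TensorFlow Lite interpreter. A minimal sketch; the file name and the zero-filled input are placeholder assumptions:

```python
import numpy as np
import tensorflow as tf

# Placeholder path to a converted model.
interpreter = tf.lite.Interpreter(model_path="<model_name>.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input matching the declared input shape and dtype.
data = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], data)
interpreter.invoke()

print(interpreter.get_tensor(output_details[0]["index"]).shape)
```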
We also inferred several models from TensorFlow Hub.
## MXNet

To infer MXNet models, it is required to install the GluonCV Python package:

```bash
pip install gluoncv[full]
```
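Once GluonCV is installed, models can be pulled from its model zoo. A minimal sketch; the model name `resnet50_v1` and the dummy NCHW input are placeholder assumptions:

```python
import mxnet as mx
from gluoncv import model_zoo

# Placeholder model name; other zoo models load the same way.
net = model_zoo.get_model("resnet50_v1", pretrained=True)

# Dummy NCHW input; real inference would apply the matching preprocessing.
data = mx.nd.zeros((1, 3, 224, 224))
out = net(data)
print(out.shape)
```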
We used TensorFlow, Caffe and ONNX models from the OpenVINO Toolkit Open Model Zoo.