Merge pull request #292 from microsoft/master
Merge master
SparkSnail authored Apr 12, 2021
2 parents e3fab14 + 08986c6 commit ad26f40
Showing 273 changed files with 5,481 additions and 2,846 deletions.
1 change: 0 additions & 1 deletion .gitignore
@@ -11,7 +11,6 @@
/ts/nni_manager/metrics.json
/ts/nni_manager/trial_jobs.json


# Logs
logs
*.log
146 changes: 73 additions & 73 deletions README.md

Large diffs are not rendered by default.

10 changes: 5 additions & 5 deletions docs/en_US/CommunitySharings/ModelCompressionComparison.rst
@@ -13,7 +13,7 @@ The experiments are performed with the following pruners/datasets/models:


*
Models: :githublink:`VGG16, ResNet18, ResNet50 <examples/model_compress/models/cifar10>`
Models: :githublink:`VGG16, ResNet18, ResNet50 <examples/model_compress/pruning/models/cifar10>`

*
Datasets: CIFAR-10
@@ -96,14 +96,14 @@ Implementation Details
This avoids potential issues of counting them on masked models.

*
The experiment code can be found :githublink:`here <examples/model_compress/auto_pruners_torch.py>`.
The experiment code can be found :githublink:`here <examples/model_compress/pruning/auto_pruners_torch.py>`.

Experiment Result Rendering
^^^^^^^^^^^^^^^^^^^^^^^^^^^


*
If you follow the practice in the :githublink:`example <examples/model_compress/auto_pruners_torch.py>`\ , for every single pruning experiment, the experiment result will be saved in JSON format as follows:
If you follow the practice in the :githublink:`example <examples/model_compress/pruning/auto_pruners_torch.py>`\ , for every single pruning experiment, the experiment result will be saved in JSON format as follows:

.. code-block:: json
@@ -114,8 +114,8 @@ Experiment Result Rendering
}
*
The experiment results are saved :githublink:`here <examples/model_compress/comparison_of_pruners>`.
You can refer to :githublink:`analyze <examples/model_compress/comparison_of_pruners/analyze.py>` to plot new performance comparison figures.
The experiment results are saved :githublink:`here <examples/model_compress/pruning/comparison_of_pruners>`.
You can refer to :githublink:`analyze <examples/model_compress/pruning/comparison_of_pruners/analyze.py>` to plot new performance comparison figures.

Contribution
------------
14 changes: 14 additions & 0 deletions docs/en_US/Compression/CompressionReference.rst
@@ -103,6 +103,20 @@ Quantizers
.. autoclass:: nni.algorithms.compression.pytorch.quantization.quantizers.BNNQuantizer
:members:

Model Speedup
-------------

Quantization Speedup
^^^^^^^^^^^^^^^^^^^^

.. autoclass:: nni.compression.pytorch.quantization_speedup.backend.BaseModelSpeedup
:members:

.. autoclass:: nni.compression.pytorch.quantization_speedup.integrated_tensorrt.ModelSpeedupTensorRT
:members:

.. autoclass:: nni.compression.pytorch.quantization_speedup.calibrator.Calibrator
:members:


Compression Utilities
8 changes: 5 additions & 3 deletions docs/en_US/Compression/Overview.rst
@@ -14,7 +14,9 @@ NNI provides a model compression toolkit to help user compress and speed up their
* Provide friendly and easy-to-use compression utilities for users to dive into the compression process and results.
* Concise interface for users to customize their own compression algorithms.

*Note that the interface and APIs are unified for both PyTorch and TensorFlow, currently only PyTorch version has been supported, TensorFlow version will be supported in future.*
.. note::
    NNI compression algorithms are not meant to really compress the model, while the NNI speedup tool can truly compress the model and reduce latency. To obtain a truly compact model, users should conduct `model speedup <./ModelSpeedup.rst>`__. The interface and APIs are unified for both PyTorch and TensorFlow; currently only the PyTorch version is supported, and the TensorFlow version will be supported in the future.


Supported Algorithms
--------------------
@@ -24,7 +26,7 @@ The algorithms include pruning algorithms and quantization algorithms.
Pruning Algorithms
^^^^^^^^^^^^^^^^^^

Pruning algorithms compress the original network by removing redundant weights or channels of layers, which can reduce model complexity and address the over-fitting issue.
Pruning algorithms compress the original network by removing redundant weights or channels of layers, which can reduce model complexity and address the over-fitting issue.

.. list-table::
:header-rows: 1
@@ -90,7 +92,7 @@ Quantization algorithms compress the original network by reducing the number of
Model Speedup
-------------

The final goal of model compression is to reduce inference latency and model size. However, existing model compression algorithms mainly use simulation to check the performance (e.g., accuracy) of compressed model, for example, using masks for pruning algorithms, and storing quantized values still in float32 for quantization algorithms. Given the output masks and quantization bits produced by those algorithms, NNI can really speed up the model. The detailed tutorial of Model Speedup can be found `here <./ModelSpeedup.rst>`__.
The final goal of model compression is to reduce inference latency and model size. However, existing model compression algorithms mainly use simulation to check the performance (e.g., accuracy) of compressed model, for example, using masks for pruning algorithms, and storing quantized values still in float32 for quantization algorithms. Given the output masks and quantization bits produced by those algorithms, NNI can really speed up the model. The detailed tutorial of Masked Model Speedup can be found `here <./ModelSpeedup.rst>`__. The detailed tutorial of Mixed Precision Quantization Model Speedup can be found `here <./QuantizationSpeedup.rst>`__.
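
As a rough illustration of the mask-based path, a speedup call might look like the sketch below. It assumes the ``ModelSpeedup`` utility described in the linked tutorial, an import path taken from the NNI PyTorch package, and a hypothetical ``mask.pth`` file exported by a pruner; treat the exact names and signature as assumptions rather than the definitive API.

.. code-block:: python

   import torch
   from torchvision.models import resnet18
   from nni.compression.pytorch import ModelSpeedup  # assumed import path

   model = resnet18()
   dummy_input = torch.rand(1, 3, 224, 224)
   # 'mask.pth' is a hypothetical masks file produced by a pruner's export_model()
   ModelSpeedup(model, dummy_input, 'mask.pth').speedup_model()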

Compression Utilities
---------------------
137 changes: 137 additions & 0 deletions docs/en_US/Compression/QuantizationSpeedup.rst
@@ -0,0 +1,137 @@
Speed up Mixed Precision Quantization Model (experimental)
==========================================================


Introduction
------------

Deep learning networks are computationally intensive and memory intensive, which increases the
difficulty of deploying deep neural network models. Quantization is a fundamental technique
widely used to reduce the memory footprint and speed up the inference process. Many frameworks
have begun to support quantization, but few of them support mixed precision quantization and
deliver real speedup. Frameworks like `HAQ: Hardware-Aware Automated Quantization with Mixed Precision <https://arxiv.org/pdf/1811.08886.pdf>`__ only support
simulated mixed precision quantization, which does not speed up the inference process. To get real
speedup from mixed precision quantization and help people get real feedback from hardware, we
design a general framework with a simple interface that allows NNI quantization algorithms to
connect to different DL model optimization backends (e.g., TensorRT, NNFusion). This gives users
an end-to-end experience: after quantizing their model with a quantization algorithm, the
quantized model can be directly sped up with the connected optimization backend. NNI connects to
TensorRT at this stage, and will support more backends in the future.


Design and Implementation
-------------------------

To support speeding up mixed precision quantization, we divide the framework into two parts: frontend and backend.
The frontend could be a popular training framework such as PyTorch or TensorFlow, while the backend could be an
inference framework for different hardware, such as TensorRT. At present, we support PyTorch as the frontend and
TensorRT as the backend. To convert a PyTorch model to a TensorRT engine, we leverage ONNX as the intermediate
graph representation: we first convert the PyTorch model to an ONNX model, and then TensorRT parses the ONNX
model to generate the inference engine.
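
For intuition, the PyTorch-to-ONNX step alone can be sketched with plain ``torch.onnx.export`` and no NNI involvement; the model, dummy input, and file name below are illustrative placeholders.

.. code-block:: python

   import torch
   import torchvision

   # any PyTorch model works here; resnet18 is just a placeholder
   model = torchvision.models.resnet18().eval()
   dummy_input = torch.randn(1, 3, 224, 224)

   # export an ONNX graph that a backend such as TensorRT can parse
   torch.onnx.export(model, dummy_input, "resnet18.onnx", opset_version=11)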


Quantization aware training combines the NNI quantization algorithm 'QAT' and the NNI quantization speedup tool.
Users should set a config to train the quantized model using the QAT algorithm (please refer to `NNI Quantization Algorithms <https://nni.readthedocs.io/en/stable/Compression/Quantizer.html>`__\ ).
After quantization aware training, users get a new config with calibration parameters and a model with quantized weights. By passing the new config and model to the quantization speedup tool, users get a real mixed precision speedup engine for inference, as shown in the Usage section below.


After getting the mixed precision engine, users can run inference with their input data.


Note


* Users can also do post-training quantization leveraging TensorRT directly (a calibration dataset needs to be provided).
* Not all op types are supported right now. At present, NNI supports Conv, Linear, Relu and MaxPool. More op types will be supported in the following release.


Prerequisite
------------
CUDA version >= 11.0

TensorRT version >= 7.2
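
A quick way to check both prerequisites from Python is sketched below; it assumes the TensorRT Python bindings and a CUDA-enabled PyTorch build are installed.

.. code-block:: python

   import torch
   import tensorrt

   # CUDA version the installed PyTorch build was compiled against
   print("CUDA:", torch.version.cuda)        # expect >= 11.0
   # version of the TensorRT Python bindings
   print("TensorRT:", tensorrt.__version__)  # expect >= 7.2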

Usage
-----
Quantization aware training:

.. code-block:: python

   # import paths follow the API reference linked below; adjust to your NNI version if needed
   from nni.algorithms.compression.pytorch.quantization import QAT_Quantizer
   from nni.compression.pytorch.quantization_speedup import ModelSpeedupTensorRT

   # model, optimizer, model_path, calibration_path, input_shape, batch_size
   # and data are assumed to be defined by the surrounding training script

   # arrange bit config for QAT algorithm
   configure_list = [{
       'quant_types': ['weight', 'output'],
       'quant_bits': {'weight': 8, 'output': 8},
       'op_names': ['conv1']
   }, {
       'quant_types': ['output'],
       'quant_bits': {'output': 8},
       'op_names': ['relu1']
   }]

   # wrap the model with the QAT quantizer and run quantization aware training
   quantizer = QAT_Quantizer(model, configure_list, optimizer)
   quantizer.compress()
   # ... train the wrapped model here ...

   # export the quantized weights and the calibration parameters
   calibration_config = quantizer.export_model(model_path, calibration_path)

   # build the TensorRT inference engine from the calibration config
   engine = ModelSpeedupTensorRT(model, input_shape, config=calibration_config, batchsize=batch_size)
   engine.compress()

   # data should be a pytorch tensor
   output, time = engine.inference(data)

Note that NNI also supports post-training quantization directly; please refer to the complete examples for details.


For complete examples, please refer to :githublink:`the code <examples/model_compress/quantization/mixed_precision_speedup_mnist.py>`.


For more parameters of the class ``ModelSpeedupTensorRT``, you can refer to the `Model Compression API Reference <https://nni.readthedocs.io/en/stable/Compression/CompressionReference.html#quantization-speedup>`__.


MNIST test
^^^^^^^^^^^^^^^^^^^

On one GTX2080 GPU, with input tensor ``torch.randn(128, 1, 28, 28)``:

.. list-table::
:header-rows: 1
:widths: auto

* - quantization strategy
- Latency
- accuracy
* - all in 32bit
- 0.001199961
- 96%
* - mixed precision(average bit 20.4)
- 0.000753688
- 96%
* - all in 8bit
- 0.000229869
- 93.7%


CIFAR-10 ResNet18 test (train one epoch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


On one GTX2080 GPU, with input tensor ``torch.randn(128, 3, 32, 32)``:


.. list-table::
:header-rows: 1
:widths: auto

* - quantization strategy
- Latency
- accuracy
* - all in 32bit
- 0.003286268
- 54.21%
* - mixed precision(average bit 11.55)
- 0.001358022
- 54.78%
* - all in 8bit
- 0.000859139
- 52.81%
5 changes: 2 additions & 3 deletions docs/en_US/Compression/QuickStart.rst
@@ -110,12 +110,11 @@ Step2. Choose a quantizer and compress the model
Step3. Export compression result
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can export the quantized model directly by using ``torch.save`` api and the quantized model can be loaded by ``torch.load`` without any extra modification.
After training and calibration, you can export the model weights to a file, and the generated calibration parameters to a file as well. Exporting an ONNX model is also supported.

.. code-block:: python
# Save quantized model which is generated by using NNI QAT algorithm
torch.save(model.state_dict(), "quantized_model.pth")
calibration_config = quantizer.export_model(model_path, calibration_path, onnx_path, input_shape, device)

Please refer to the :githublink:`mnist example <examples/model_compress/quantization/QAT_torch_quantizer.py>` for example code.

88 changes: 48 additions & 40 deletions docs/en_US/Compression/Tutorial.rst
@@ -17,30 +17,37 @@ The ``dict``\ s in the ``list`` are applied one by one, that is, the configuration

There are different keys in a ``dict``. Some of them are common keys supported by all the compression algorithms:

* **op_types**\ : This is to specify what types of operations to be compressed. 'default' means following the algorithm's default setting.
* **op_types**\ : This is to specify what types of operations to be compressed. 'default' means following the algorithm's default setting. All supported module types are defined in :githublink:`default_layers.py <nni/compression/pytorch/default_layers.py>` for PyTorch.
* **op_names**\ : This is to specify which operations to compress by name. If this field is omitted, operations will not be filtered by it.
* **exclude**\ : Default is False. If this field is True, it means the operations with specified types and names will be excluded from the compression.

Some other keys are often specific to a certain algorithm, users can refer to `pruning algorithms <./Pruner.rst>`__ and `quantization algorithms <./Quantizer.rst>`__ for the keys allowed by each algorithm.

A simple example of configuration is shown below:
To prune all ``Conv2d`` layers with the sparsity of 0.6, the configuration can be written as:

.. code-block:: python
[
{
'sparsity': 0.8,
'op_types': ['default']
},
{
'sparsity': 0.6,
'op_names': ['op_name1', 'op_name2']
},
{
'exclude': True,
'op_names': ['op_name3']
}
]
[{
'sparsity': 0.6,
'op_types': ['Conv2d']
}]
To control the sparsity of specific layers, the configuration can be written as:

.. code-block:: python
[{
'sparsity': 0.8,
'op_types': ['default']
},
{
'sparsity': 0.6,
'op_names': ['op_name1', 'op_name2']
},
{
'exclude': True,
'op_names': ['op_name3']
}]

It means that operations matched by the algorithm's default setting are compressed with sparsity 0.8, ``op_name1`` and ``op_name2`` use sparsity 0.6, and ``op_name3`` is not compressed.
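
As a hedged sketch of how such a list is consumed, the snippet below passes a ``config_list`` to ``LevelPruner``; the pruner choice, import path, and toy model are illustrative assumptions rather than part of this tutorial.

.. code-block:: python

   import torch.nn as nn
   from nni.algorithms.compression.pytorch.pruning import LevelPruner  # assumed import path

   # toy model: two prunable Conv2d layers
   model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3))
   config_list = [{
       'sparsity': 0.6,
       'op_types': ['Conv2d']
   }]
   pruner = LevelPruner(model, config_list)
   model = pruner.compress()  # model is now wrapped with pruning masks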

@@ -62,44 +69,45 @@ bits length of quantization, key is the quantization type, value is the quantization
.. code-block:: bash
{
quant_bits: {
'weight': 8,
'output': 4,
},
quant_bits: {
'weight': 8,
'output': 4,
},
}

When the value is of int type, all quantization types share the same bit length, e.g.:

.. code-block:: bash
{
quant_bits: 8, # weight or output quantization are all 8 bits
quant_bits: 8, # weight or output quantization are all 8 bits
}

The following example shows a more complete ``config_list``\ ; it uses ``op_names`` (or ``op_types``\ ) to specify the target layers along with the quantization bits for those layers.

.. code-block:: bash
config_list = [{
'quant_types': ['weight'],
'quant_bits': 8,
'op_names': ['conv1']
}, {
'quant_types': ['weight'],
'quant_bits': 4,
'quant_start_step': 0,
'op_names': ['conv2']
}, {
'quant_types': ['weight'],
'quant_bits': 3,
'op_names': ['fc1']
},
{
'quant_types': ['weight'],
'quant_bits': 2,
'op_names': ['fc2']
}
]
'quant_types': ['weight'],
'quant_bits': 8,
'op_names': ['conv1']
},
{
'quant_types': ['weight'],
'quant_bits': 4,
'quant_start_step': 0,
'op_names': ['conv2']
},
{
'quant_types': ['weight'],
'quant_bits': 3,
'op_names': ['fc1']
},
{
'quant_types': ['weight'],
'quant_bits': 2,
'op_names': ['fc2']
}]

In this example, ``op_names`` specifies the layer names, and the four layers will be quantized with different ``quant_bits``.
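
To connect the configuration to an algorithm, a ``config_list`` of this shape is passed to a quantizer, mirroring the QAT usage shown in the QuantizationSpeedup document above; the toy network, optimizer, and module name below are placeholders.

.. code-block:: python

   import torch
   import torch.nn as nn
   from nni.algorithms.compression.pytorch.quantization import QAT_Quantizer  # assumed import path

   # toy network; module names in nn.Sequential are '0', '1', ...
   model = nn.Sequential(nn.Conv2d(1, 8, 3), nn.ReLU(), nn.Flatten(), nn.Linear(8 * 26 * 26, 10))
   optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

   config_list = [{
       'quant_types': ['weight'],
       'quant_bits': 8,
       'op_names': ['0']  # quantize only the Conv2d layer
   }]
   quantizer = QAT_Quantizer(model, config_list, optimizer)
   quantizer.compress()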

1 change: 1 addition & 0 deletions docs/en_US/Compression/quantization.rst
@@ -15,3 +15,4 @@ create your own quantizer using NNI model compression interface.
:maxdepth: 2

Quantizers <Quantizer>
Quantization Speedup <QuantizationSpeedup>
4 changes: 2 additions & 2 deletions docs/en_US/FeatureEngineering/Overview.rst
@@ -21,8 +21,8 @@ How to use?

.. code-block:: python
from nni.feature_engineering.gradient_selector import FeatureGradientSelector
# from nni.feature_engineering.gbdt_selector import GBDTSelector
from nni.algorithms.feature_engineering.gradient_selector import FeatureGradientSelector
# from nni.algorithms.feature_engineering.gbdt_selector import GBDTSelector
# load data
...
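
The rest of this snippet is collapsed in the diff view; a self-contained sketch of the new import path in use might look like the following, with synthetic data standing in for a real dataset and the selector's parameters treated as assumptions.

.. code-block:: python

   import numpy as np
   from nni.algorithms.feature_engineering.gradient_selector import FeatureGradientSelector

   # synthetic data: 1000 samples, 20 features, binary labels
   X = np.random.randn(1000, 20)
   y = np.random.randint(0, 2, size=1000)

   selector = FeatureGradientSelector(n_features=5)  # n_features is an assumed parameter
   selector.fit(X, y)
   print(selector.get_selected_features())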