NNI provides an easy-to-use toolkit to help users design and use compression algorithms. It supports TensorFlow and PyTorch with a unified interface. To compress a model, users only need to add several lines to their code. Several popular model compression algorithms are built into NNI. Users can further use NNI's auto-tuning power to find the best compressed model, which is detailed in Auto Model Compression. Users can also easily customize new compression algorithms using NNI's interface; refer to the tutorial here.
We have provided two naive compression algorithms and three popular ones for users, including two pruning algorithms and three quantization algorithms:
| Name | Brief Introduction of Algorithm |
| --- | --- |
| Level Pruner | Prunes each weight to the specified sparsity based on the absolute values of the weights |
| AGP Pruner | Automated gradual pruning (To prune, or not to prune: exploring the efficacy of pruning for model compression) Reference Paper |
| Naive Quantizer | Quantizes weights to 8 bits by default |
| QAT Quantizer | Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference. Reference Paper |
| DoReFa Quantizer | DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients. Reference Paper |
We use a simple example to show how to modify your trial code in order to apply the compression algorithms. Say you want to prune all weights to 80% sparsity with Level Pruner; you can add the following three lines into your code before training your model (here is the complete code).
TensorFlow code

```python
import tensorflow as tf
from nni.compression.tensorflow import LevelPruner

config_list = [{ 'sparsity': 0.8, 'op_types': 'default' }]
pruner = LevelPruner(config_list)
pruner(tf.get_default_graph())
```
PyTorch code

```python
from nni.compression.torch import LevelPruner

config_list = [{ 'sparsity': 0.8, 'op_types': 'default' }]
pruner = LevelPruner(config_list)
pruner(model)
```
You can use other compression algorithms in the `nni.compression` package. The algorithms are implemented in both PyTorch and TensorFlow, under `nni.compression.torch` and `nni.compression.tensorflow` respectively. You can refer to Pruner and Quantizer for detailed descriptions of the supported algorithms.
The function call `pruner(model)` receives the user-defined model (in TensorFlow the model can be obtained with `tf.get_default_graph()`, while in PyTorch the model is the defined model object) and modifies it by inserting masks. Then when you run the model, the masks take effect. The masks can be adjusted at runtime by the algorithms.
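For instance, here is a minimal PyTorch sketch (the two-layer model is hypothetical, chosen only for illustration) showing that the model is used exactly as before once the masks are inserted:

```python
import torch
import torch.nn as nn
from nni.compression.torch import LevelPruner

# a hypothetical two-layer model, used only for illustration
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

config_list = [{ 'sparsity': 0.8, 'op_types': 'default' }]
pruner = LevelPruner(config_list)
pruner(model)  # masks are inserted into the model in place

# the model is called exactly as before; the masks take effect on each forward pass
output = model(torch.randn(1, 784))
```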
When instantiating a compression algorithm, a `config_list` is passed in. We describe how to write this config below.
When compressing a model, users may want to specify the sparsity ratio, specify different ratios for different types of operations, exclude certain types of operations, or compress only certain types of operations. For users to express these kinds of requirements, we define a configuration specification. It can be seen as a Python `list` object, where each element is a `dict` object. In each `dict`, there are some keys commonly supported by NNI compression:
- op_types: This is to specify the types of operations to compress. 'default' means following the algorithm's default setting.
- op_names: This is to specify the operations to compress by name. If this field is omitted, operations will not be filtered by it.
- exclude: Default is False. If this field is True, the operations with the specified types and names will be excluded from the compression.
There are also other keys in the `dict`, but they are specific to each compression algorithm, for example the `sparsity` key used by the pruners above.
The `dict`s in the `list` are applied one by one; that is, the configurations in a latter `dict` overwrite the configurations in former ones for the operations that are within the scope of both of them.
A simple example of configuration is shown below:
```python
[
    {
        'sparsity': 0.8,
        'op_types': 'default'
    },
    {
        'sparsity': 0.6,
        'op_names': ['op_name1', 'op_name2']
    },
    {
        'exclude': True,
        'op_names': ['op_name3']
    }
]
```
It means: follow the algorithm's default setting for compressed operations with sparsity 0.8, but use sparsity 0.6 for `op_name1` and `op_name2`, and do not compress `op_name3`.
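To make the overwrite semantics concrete, here is a rough sketch of how the effective config for a single operation could be resolved. This only illustrates the semantics described above; the helper `resolve_config` is hypothetical and is not NNI's actual implementation:

```python
def resolve_config(config_list, op_type, op_name):
    """Return the effective config dict for one operation,
    or None if the operation should not be compressed."""
    effective = None
    for config in config_list:
        op_types = config.get('op_types')
        op_names = config.get('op_names')
        type_match = op_types is None or op_types == 'default' or op_type in op_types
        name_match = op_names is None or op_name in op_names
        if type_match and name_match:
            # a latter matching dict overwrites former ones
            effective = None if config.get('exclude') else config
    return effective

# with the config_list above:
# resolve_config(config_list, 'Conv2D', 'op_name1') -> {'sparsity': 0.6, ...}
# resolve_config(config_list, 'Conv2D', 'op_name3') -> None (excluded)
```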
Some compression algorithms use epochs to control the progress of compression (e.g. AGP), and some algorithms need to do something after every minibatch. Therefore, we provide another two APIs for users to invoke. One is `update_epoch`, which you can use as follows:
TensorFlow code

```python
pruner.update_epoch(epoch, sess)
```
PyTorch code

```python
pruner.update_epoch(epoch)
```
The other is `step`, which can be called with `pruner.step()` after each minibatch. Note that not all algorithms need these two APIs; for those that do not, calling them is allowed but has no effect.
[TODO] The last API is for users to export the compressed model. You will get a compressed model when you finish the training using this API. It also exports another file storing the values of the masks.
To simplify writing a new compression algorithm, we designed programming interfaces that are simple yet flexible. There are interfaces for pruners and quantizers respectively.
If you want to write a new pruning algorithm, you can write a class that inherits `nni.compression.tensorflow.Pruner` or `nni.compression.torch.Pruner` depending on which framework you use. Then, override the member functions with the logic of your algorithm.
```python
# This is writing a pruner in TensorFlow.
# For writing a pruner in PyTorch, you can simply replace
# nni.compression.tensorflow.Pruner with
# nni.compression.torch.Pruner
class YourPruner(nni.compression.tensorflow.Pruner):
    def __init__(self, config_list):
        # it is suggested to use the NNI-defined spec for config
        super().__init__(config_list)

    def bind_model(self, model):
        # this func can be used to remember the model or its weights
        # in member variables, for getting their values during training
        pass

    def calc_mask(self, weight, config, **kwargs):
        # weight is the target weight tensor
        # config is the selected dict object in config_list for this layer
        # kwargs contains op, op_type, and op_name
        # design your mask and return it
        return your_mask

    # note: in the PyTorch version, there is no sess in the input arguments
    def update_epoch(self, epoch_num, sess):
        pass

    # note: in the PyTorch version, there is no sess in the input arguments
    def step(self, sess):
        # can do some processing based on the model or weights bound
        # in the func bind_model
        pass
```
For the simplest algorithm, you only need to override `calc_mask`. It receives each layer's weight and the selected configuration, as well as op information. You generate the mask for this weight in this function and return it. Then NNI applies the mask for you.
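As a concrete illustration, a minimal level-pruner-style `calc_mask` in PyTorch could look like the sketch below; this mirrors the Level Pruner described earlier but is not its actual implementation:

```python
import torch
import nni.compression.torch

class MyLevelPruner(nni.compression.torch.Pruner):
    def calc_mask(self, weight, config, **kwargs):
        # zero out the smallest-magnitude weights until the
        # requested sparsity is reached
        k = int(weight.numel() * config['sparsity'])
        if k == 0:
            return torch.ones_like(weight)
        # threshold = k-th smallest absolute value
        threshold = weight.abs().view(-1).kthvalue(k)[0]
        return (weight.abs() > threshold).type_as(weight)
```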
Some algorithms generate masks based on training progress, i.e., the epoch number. We provide `update_epoch` so that the pruner can be aware of the training progress.
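For example, a pruner could record the epoch in `update_epoch` and ramp the sparsity up over time. The sketch below reuses `MyLevelPruner` from above, uses a simple linear schedule, and assumes a hypothetical `end_epoch` config key (AGP's actual schedule is cubic; see the reference paper):

```python
class MyGradualPruner(MyLevelPruner):
    def __init__(self, config_list):
        super().__init__(config_list)
        self.current_epoch = 0

    def update_epoch(self, epoch_num):
        # remember training progress for use in calc_mask
        self.current_epoch = epoch_num

    def calc_mask(self, weight, config, **kwargs):
        # scale the target sparsity by the fraction of training completed
        progress = min(self.current_epoch / config['end_epoch'], 1.0)
        scaled = dict(config, sparsity=config['sparsity'] * progress)
        return super().calc_mask(weight, scaled, **kwargs)
```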
Some algorithms may want global information for generating masks, for example, all weights of the model (for statistical information) or the model optimizer's information. NNI supports this requirement through `bind_model`. `bind_model` receives the complete model, so it can record any information it cares about (e.g., references to weights). Then `step` can process or update the information according to the algorithm. You can refer to the source code of the built-in algorithms for example implementations.
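The sketch below shows the pattern, assuming a pruner that prunes against a single magnitude threshold computed over all weights; this is an illustrative algorithm, not one of NNI's built-ins, and reuses the imports from the sketches above:

```python
class MyGlobalPruner(nni.compression.torch.Pruner):
    def __init__(self, config_list):
        super().__init__(config_list)
        self.model = None
        self.global_threshold = 0.0

    def bind_model(self, model):
        # keep a reference so step() can inspect all weights at once
        self.model = model

    def step(self):
        # recompute a global magnitude threshold over every parameter
        all_weights = torch.cat(
            [p.data.abs().view(-1) for p in self.model.parameters()])
        k = max(int(all_weights.numel() * 0.8), 1)  # illustrative global sparsity
        self.global_threshold = all_weights.kthvalue(k)[0].item()

    def calc_mask(self, weight, config, **kwargs):
        return (weight.abs() > self.global_threshold).type_as(weight)
```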
The interface for customizing a quantization algorithm is similar to that of the pruning algorithms. The only difference is that `calc_mask` is replaced with `quantize_weight`. `quantize_weight` directly returns the quantized weights rather than a mask, because for quantization the quantized weights cannot be obtained by applying a mask.
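For example, a symmetric 8-bit `quantize_weight` in PyTorch could look like the sketch below; this is in the spirit of the Naive Quantizer listed earlier, not its actual implementation, and assumes the imports from the sketches above:

```python
class My8BitQuantizer(nni.compression.torch.Quantizer):
    def quantize_weight(self, weight, config, **kwargs):
        # map weights onto 255 integer levels in [-127, 127] and back
        scale = weight.abs().max() / 127
        if scale == 0:
            return weight
        return torch.round(weight / scale) * scale
```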
```python
# This is writing a quantizer in TensorFlow.
# For writing a quantizer in PyTorch, you can simply replace
# nni.compression.tensorflow.Quantizer with
# nni.compression.torch.Quantizer
class YourQuantizer(nni.compression.tensorflow.Quantizer):
    def __init__(self, config_list):
        # it is suggested to use the NNI-defined spec for config
        super().__init__(config_list)

    def bind_model(self, model):
        # this func can be used to remember the model or its weights
        # in member variables, for getting their values during training
        pass

    def quantize_weight(self, weight, config, **kwargs):
        # weight is the target weight tensor
        # config is the selected dict object in config_list for this layer
        # kwargs contains op, op_type, and op_name
        # design your quantizer and return the new weight
        return new_weight

    # note: in the PyTorch version, there is no sess in the input arguments
    def update_epoch(self, epoch_num, sess):
        pass

    # note: in the PyTorch version, there is no sess in the input arguments
    def step(self, sess):
        # can do some processing based on the model or weights bound
        # in the func bind_model
        pass

    # you can also design your own methods
    def your_method(self, your_input):
        # your code
        pass
```
[TODO] Another member function, `quantize_layer_output`, will be added, as some quantization algorithms also quantize layers' outputs.
[TODO] ...