
Refactor model compression examples #3326

Merged: 21 commits, Feb 4, 2021
139 changes: 43 additions & 96 deletions docs/en_US/Compression/AutoPruningUsingTuners.rst
@@ -6,116 +6,63 @@ It's convenient to implement auto model pruning with NNI compression and NNI tuners
First, model compression with NNI
---------------------------------

- You can easily compress a model with NNI compression. Take pruning for example, you can prune a pretrained model with LevelPruner like this
+ You can easily compress a model with NNI compression. Taking pruning as an example, you can prune a pretrained model with L2FilterPruner like this:

.. code-block:: python

-    from nni.algorithms.compression.pytorch.pruning import LevelPruner
-    config_list = [{ 'sparsity': 0.8, 'op_types': ['default'] }]
-    pruner = LevelPruner(model, config_list)
+    from nni.algorithms.compression.pytorch.pruning import L2FilterPruner
+    config_list = [{ 'sparsity': 0.5, 'op_types': ['Conv2d'] }]
+    pruner = L2FilterPruner(model, config_list)
     pruner.compress()

- The 'default' op_type stands for the module types defined in :githublink:`default_layers.py <nni/compression/pytorch/default_layers.py>` for pytorch.
+ The 'Conv2d' op_type is one of the module types defined in :githublink:`default_layers.py <nni/compression/pytorch/default_layers.py>` for pytorch.

- Therefore ``{ 'sparsity': 0.8, 'op_types': ['default'] }``\ means that **all layers with specified op_types will be compressed with the same 0.8 sparsity**. When ``pruner.compress()`` called, the model is compressed with masks and after that you can normally fine tune this model and **pruned weights won't be updated** which have been masked.
+ Therefore ``{ 'sparsity': 0.5, 'op_types': ['Conv2d'] }``\ means that **all layers with the specified op_types will be compressed with the same 0.5 sparsity**. When ``pruner.compress()`` is called, the model is compressed with masks; after that you can fine tune the model normally, and the **masked, pruned weights won't be updated**.
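
After ``pruner.compress()``, fine-tuning is just an ordinary training loop; a minimal sketch, assuming a standard PyTorch setup where ``train_loader`` is a placeholder for your own data pipeline:

.. code-block:: python

    import torch
    import torch.nn.functional as F

    # Minimal fine-tuning sketch; the pruner's masks keep the pruned
    # weights inactive while the remaining weights are updated.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    model.train()
    for epoch in range(3):
        for data, target in train_loader:
            optimizer.zero_grad()
            loss = F.cross_entropy(model(data), target)
            loss.backward()
            optimizer.step()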

Then, make this automatic
-------------------------

- The previous example manually choosed LevelPruner and pruned all layers with the same sparsity, this is obviously sub-optimal because different layers may have different redundancy. Layer sparsity should be carefully tuned to achieve least model performance degradation and this can be done with NNI tuners.

- The first thing we need to do is to design a search space, here we use a nested search space which contains choosing pruning algorithm and optimizing layer sparsity.

- .. code-block:: json

-    {
-        "prune_method": {
-            "_type": "choice",
-            "_value": [
-                {
-                    "_name": "agp",
-                    "conv0_sparsity": { "_type": "uniform", "_value": [0.1, 0.9] },
-                    "conv1_sparsity": { "_type": "uniform", "_value": [0.1, 0.9] }
-                },
-                {
-                    "_name": "level",
-                    "conv0_sparsity": { "_type": "uniform", "_value": [0.1, 0.9] },
-                    "conv1_sparsity": { "_type": "uniform", "_value": [0.01, 0.9] }
-                }
-            ]
-        }
-    }

- Then we need to modify our codes for few lines
+ The previous example manually chose L2FilterPruner and pruned with a specified sparsity. Different sparsities and different pruners may have different effects on different models, and this process can be automated with NNI tuners.
Reviewer (Contributor): "choosed" -> "chose"


+ First, modify our code in a few lines:

.. code-block:: python

-    import nni
-    from nni.algorithms.compression.pytorch.pruning import *
-    params = nni.get_parameters()
-    conv0_sparsity = params['prune_method']['conv0_sparsity']
-    conv1_sparsity = params['prune_method']['conv1_sparsity']
-    # these raw sparsities should be scaled if you need the total sparsity constrained
-    config_list_level = [{ 'sparsity': conv0_sparsity, 'op_name': 'conv0' },
-                         { 'sparsity': conv1_sparsity, 'op_name': 'conv1' }]
-    config_list_agp = [{ 'initial_sparsity': 0, 'final_sparsity': conv0_sparsity,
-                         'start_epoch': 0, 'end_epoch': 3,
-                         'frequency': 1, 'op_name': 'conv0' },
-                       { 'initial_sparsity': 0, 'final_sparsity': conv1_sparsity,
-                         'start_epoch': 0, 'end_epoch': 3,
-                         'frequency': 1, 'op_name': 'conv1' }]
-    PRUNERS = {'level': LevelPruner(model, config_list_level), 'agp': AGPPruner(model, config_list_agp)}
-    pruner = PRUNERS[params['prune_method']['_name']]
-    pruner.compress()
-    ... # fine tuning
-    acc = evaluate(model) # evaluation
-    nni.report_final_result(acc)
+    import nni
+    from nni.algorithms.compression.pytorch.pruning import *

+    params = nni.get_parameters()
+    sparsity = params['sparsity']
+    pruner_name = params['pruner']
+    model_name = params['model']

+    model, pruner = get_model_pruner(model_name, pruner_name, sparsity)
+    pruner.compress()

+    # after testing, report the final accuracy to the tuner
+    nni.report_final_result(acc)
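
``get_model_pruner`` is a helper from the example script referenced in the config below. Purely as a hypothetical sketch of its shape (``build_model`` is a placeholder, and the real helper also handles pruner-specific details, e.g. SlimPruner prunes ``BatchNorm2d`` rather than ``Conv2d``), it might look like this:

.. code-block:: python

    from nni.algorithms.compression.pytorch.pruning import L2FilterPruner, FPGMPruner

    # Hypothetical sketch, not the exact helper from the example script.
    def get_model_pruner(model_name, pruner_name, sparsity):
        model = build_model(model_name)  # placeholder for your model constructor
        config_list = [{'sparsity': sparsity, 'op_types': ['Conv2d']}]
        pruner_cls = {'l2filter': L2FilterPruner, 'fpgm': FPGMPruner}[pruner_name]
        return model, pruner_cls(model, config_list)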
Reviewer (Contributor): would be better to add simple code to show how is acc created
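
For context, ``acc`` would typically come from an ordinary evaluation pass after fine-tuning; a minimal sketch, assuming a standard PyTorch test loop where ``test_loader`` is a placeholder for your own data:

.. code-block:: python

    import torch

    # Minimal evaluation sketch computing top-1 accuracy.
    def evaluate(model, test_loader):
        model.eval()
        correct, total = 0, 0
        with torch.no_grad():
            for data, target in test_loader:
                pred = model(data).argmax(dim=1)
                correct += (pred == target).sum().item()
                total += target.size(0)
        return correct / total

    acc = evaluate(model, test_loader)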


- Last, define our task and automatically tuning pruning methods with layers sparsity
+ Then, define a ``config`` file in YAML to automatically tune the model, pruning algorithm and sparsity.
Reviewer (Contributor): sparisty -> sparsity


.. code-block:: yaml

-    authorName: default
-    experimentName: Auto_Compression
-    trialConcurrency: 2
-    maxExecDuration: 100h
-    maxTrialNum: 500
-    #choice: local, remote, pai
-    trainingServicePlatform: local
-    #choice: true, false
-    useAnnotation: False
-    searchSpacePath: search_space.json
-    tuner:
-      #choice: TPE, Random, Anneal...
-      builtinTunerName: TPE
-      classArgs:
-        #choice: maximize, minimize
-        optimize_mode: maximize
-    trial:
-      command: bash run_prune.sh
-      codeDir: .
-      gpuNum: 1
+    searchSpace:
+      sparsity:
+        _type: choice
+        _value: [0.25, 0.5, 0.75]
+      pruner:
+        _type: choice
+        _value: ['slim', 'l2filter', 'fpgm', 'apoz']
+      model:
+        _type: choice
+        _value: ['vgg16', 'vgg19']
+    trainingService:
+      platform: local
+    trialCodeDirectory: .
+    trialCommand: python3 basic_pruners_torch.py --nni
+    trialConcurrency: 1
+    trialGpuNumber: 0
+    tuner:
+      name: grid
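
With this search space, the grid tuner enumerates every combination of sparsity, pruner and model (3 × 4 × 2 = 24 trials), and each trial receives one combination through ``nni.get_parameters()``, for example:

.. code-block:: python

    # One possible parameter set handed to a single trial (illustrative values):
    params = {'sparsity': 0.5, 'pruner': 'l2filter', 'model': 'vgg16'}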
Reviewer (Contributor): the indent is strange

Reviewer (Contributor): better to tell users how to start this experiment

Author (Contributor): Fix it


The full example can be found :githublink:`here <examples/model_compress/pruning/config.yml>`.
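
Once the config file is saved (e.g. as ``config.yml``), the experiment can be started with NNI's ``nnictl`` command-line tool; a typical invocation (the port is an assumption) is:

.. code-block:: bash

    nnictl create --config config.yml --port 8080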