This repository has been archived by the owner on Sep 18, 2024. It is now read-only.

Compression doc structure refactor #2676

Merged
merged 116 commits, Jul 31, 2020
Changes from all commits · 116 commits
80165e9
init sapruner
suiguoxin Apr 21, 2020
3fcebef
separate sapruners from other one-shot pruners
suiguoxin Apr 22, 2020
b4be2d0
update
suiguoxin Apr 22, 2020
805c32c
fix model params issue
suiguoxin Apr 23, 2020
4c33432
make the process runnable
suiguoxin Apr 23, 2020
489f7b6
show evaluation result in example
suiguoxin Apr 23, 2020
00dbddf
sort the sparsities and scale it
suiguoxin Apr 23, 2020
e1f9654
fix rescale issue
suiguoxin Apr 23, 2020
4b5ea0d
fix scale issue; add pruning history
suiguoxin Apr 24, 2020
6120b70
record the actual total sparsity
suiguoxin Apr 24, 2020
a6114f7
fix sparsity 0/1 problem
suiguoxin Apr 26, 2020
2e928ae
revert useless modif
suiguoxin Apr 26, 2020
546ca73
revert useless modif
suiguoxin Apr 26, 2020
1dc4713
fix 0 pruning weights problem
suiguoxin Apr 27, 2020
d1a5646
save pruning history in csv file
suiguoxin Apr 28, 2020
75a53da
fix typo
suiguoxin Apr 28, 2020
e8900f8
remove check perm in Makefile
suiguoxin Apr 28, 2020
9c5ba41
use os path
suiguoxin Apr 29, 2020
9a60501
save config list in json format
suiguoxin Apr 29, 2020
951c60f
update analyze py; update docker
suiguoxin Apr 30, 2020
c784790
update
suiguoxin Apr 30, 2020
836e74b
update analyze
suiguoxin May 4, 2020
70aca26
update log info in compressor
suiguoxin May 4, 2020
efa0637
init NetAdapt Pruner
suiguoxin May 4, 2020
8695564
refine examples
suiguoxin May 6, 2020
8fdad96
Merge remote-tracking branch 'msft/master' into sapruner
suiguoxin May 6, 2020
db3074e
update
suiguoxin May 7, 2020
78ee01a
fine tune
suiguoxin May 7, 2020
3e40c4a
update
suiguoxin May 7, 2020
2560050
fix quote issue
suiguoxin May 7, 2020
d6e4101
add code for imagenet integrity
suiguoxin May 8, 2020
65f8e2b
update
suiguoxin May 8, 2020
d27ac7d
use datasets.ImageNet
suiguoxin May 8, 2020
f47260f
update
suiguoxin May 8, 2020
358921c
update
suiguoxin May 9, 2020
f50e947
add channel pruning in SAPruner; refine example
suiguoxin May 11, 2020
ea07c00
update net_adapt pruner; add dependency constraint in sapruner(beta)
suiguoxin May 11, 2020
7d73050
update
suiguoxin May 12, 2020
220e4a3
update
suiguoxin May 12, 2020
e692eb1
update
suiguoxin May 12, 2020
fc389d0
fix zero division problem
suiguoxin May 12, 2020
a69da67
fix typo
suiguoxin May 12, 2020
e0ab4bc
update
suiguoxin May 12, 2020
7724104
fix naive issue of NetAdaptPruner
suiguoxin May 12, 2020
f9f4a61
fix data issue for no-dependency modules
suiguoxin May 13, 2020
93698ac
add cifar10 vgg16 example
suiguoxin May 14, 2020
7d7f36d
update
suiguoxin May 14, 2020
9fc1029
update
suiguoxin May 14, 2020
9d506b1
fix folder creation issue; change lr for vgg exp
suiguoxin May 15, 2020
6ca5b27
update
suiguoxin May 15, 2020
fe9c1bf
add save model arg
suiguoxin May 15, 2020
1ec68a4
fix model copy issue
suiguoxin May 15, 2020
c99e4a3
init related weights calc
suiguoxin May 15, 2020
b6ce773
update analyze file
suiguoxin May 15, 2020
559c631
NetAdaptPruner: use fine-tuned weights after each iteration; fix modu…
suiguoxin May 18, 2020
2bd5a80
Merge remote-tracking branch 'msft/master' into sapruner
suiguoxin May 18, 2020
5ebea45
consider channel/filter cross pruning
suiguoxin May 18, 2020
f74324c
NetAdapt: consider previous op when calc total sparsity
suiguoxin May 18, 2020
27ad5f7
update
suiguoxin May 18, 2020
7f607ce
use customized vgg
suiguoxin May 19, 2020
6137373
add performances comparison plt
suiguoxin May 19, 2020
b9222c7
fix netadaptPruner mask copy issue
suiguoxin May 19, 2020
71e3651
add resnet18 example
suiguoxin May 19, 2020
045f114
fix example issue
suiguoxin May 19, 2020
e7b0410
Merge remote-tracking branch 'msft/master' into sapruner
suiguoxin May 19, 2020
98c5cb4
update experiment data
suiguoxin May 20, 2020
c220f84
fix bool arg parsing issue
suiguoxin May 20, 2020
5a1728e
update
suiguoxin May 20, 2020
b1a4058
init ADMMPruner
suiguoxin May 21, 2020
b36a170
ADMMPruner: update
suiguoxin May 21, 2020
fd6f3a6
ADMMPruner: finish v1.0
suiguoxin May 22, 2020
0b8840f
ADMMPruner: refine
suiguoxin May 22, 2020
7f1c319
update
suiguoxin May 22, 2020
87b090c
AutoCompress init
suiguoxin May 25, 2020
6c82d6c
AutoCompress: update
suiguoxin May 25, 2020
efd8f10
AutoCompressPruner: fix issues:
suiguoxin May 26, 2020
85a4483
add test for auto pruners
suiguoxin May 26, 2020
180a709
add doc for auto pruners
suiguoxin May 26, 2020
e87122c
fix link in md
suiguoxin May 26, 2020
955a6ee
remove irrelevant files
suiguoxin May 26, 2020
51e004e
Clean code
suiguoxin May 26, 2020
4eeb65e
code clean
suiguoxin May 26, 2020
f8ebc19
fix pylint issue
suiguoxin May 26, 2020
e241708
fix pylint issue
suiguoxin May 26, 2020
0edddeb
rename admm & autoCompress param
suiguoxin May 26, 2020
c93e0eb
use abs link in doc
suiguoxin May 26, 2020
e88e4d7
merge from master & resolve conflict
suiguoxin May 28, 2020
67c41d5
reorder import to fix import issue: autocompress relies on speedup
suiguoxin May 28, 2020
c057307
refine doc
suiguoxin Jun 4, 2020
7f3de4e
NetAdaptPruner: decay pruning step
suiguoxin Jun 11, 2020
55e705e
take changes from testing branch
suiguoxin Jun 29, 2020
e1775b3
merge from master
suiguoxin Jun 29, 2020
840213d
refine
suiguoxin Jun 29, 2020
d4b80bc
fix typo
suiguoxin Jun 29, 2020
c9fffe0
ADMMPruner: check base_algo together with config schema
suiguoxin Jun 29, 2020
87f3232
fix broken link
suiguoxin Jun 29, 2020
16b1c95
doc refine
suiguoxin Jun 29, 2020
6bff198
ADMM:refine
suiguoxin Jun 29, 2020
d86fad4
refine doc
suiguoxin Jun 30, 2020
c29f758
resolve conflict
suiguoxin Jun 30, 2020
32d14d9
refine doc
suiguoxin Jun 30, 2020
5950bec
refine doc
ultmaster Jun 29, 2020
8a11b45
resolve conflict
suiguoxin Jun 30, 2020
be782bc
refine doc
suiguoxin Jun 30, 2020
d449e6a
refine doc
suiguoxin Jun 30, 2020
cb6376b
refine doc
suiguoxin Jun 30, 2020
cee3fdd
refine doc
suiguoxin Jun 30, 2020
bec0dbe
update
suiguoxin Jul 10, 2020
1b0c36d
update
suiguoxin Jul 10, 2020
7d45142
update
suiguoxin Jul 10, 2020
642d4a7
refactor AGP doc
suiguoxin Jul 10, 2020
15a2b0d
update
suiguoxin Jul 17, 2020
d93ffc0
fix optimizer issue
suiguoxin Jul 17, 2020
239f736
fix comments: typo, rename AGP_Pruner
suiguoxin Jul 27, 2020
5a20055
fix torch.nn.Module issue; refine SA docstring
suiguoxin Jul 31, 2020
c171e9d
fix typo
suiguoxin Jul 31, 2020
2 changes: 1 addition & 1 deletion docs/en_US/Compressor/AutoCompression.md
@@ -84,7 +84,7 @@ config_list_agp = [{'initial_sparsity': 0, 'final_sparsity': conv0_sparsity,
{'initial_sparsity': 0, 'final_sparsity': conv1_sparsity,
'start_epoch': 0, 'end_epoch': 3,
'frequency': 1,'op_name': 'conv1' },]
- PRUNERS = {'level':LevelPruner(model, config_list_level), 'agp':AGP_Pruner(model, config_list_agp)}
+ PRUNERS = {'level':LevelPruner(model, config_list_level), 'agp':AGPPruner(model, config_list_agp)}
pruner = PRUNERS[params['prune_method']['_name']]
pruner.compress()
... # fine tuning
316 changes: 110 additions & 206 deletions docs/en_US/Compressor/Pruner.md

Large diffs are not rendered by default.

Binary file modified docs/img/agp_pruner.png
2 changes: 1 addition & 1 deletion examples/model_compress/README.md
@@ -22,7 +22,7 @@ configure_list = [{
'frequency': 1,
'op_types': ['default']
}]
- pruner = AGP_Pruner(configure_list)
+ pruner = AGPPruner(configure_list)
```

When ```pruner(model)``` is called, your model is injected with masks as embedded operations. For example, where a layer takes a weight as input, we insert an operation between the weight and the layer; this operation takes the weight as input and outputs a new weight with the mask applied. Thus, the masks are applied whenever the computation goes through the operations, and you can fine-tune your model **without** any modifications, as the sketch below illustrates.
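Conceptually, the injected operation behaves like the following minimal PyTorch sketch. It illustrates the masking idea only and is not NNI's actual implementation; the class name and buffer name are made up for the example.

```python
import torch

class MaskedLinear(torch.nn.Linear):
    """Illustrative layer whose forward pass sees `weight * mask` instead of `weight`."""

    def __init__(self, in_features, out_features):
        super().__init__(in_features, out_features)
        # Binary mask with the same shape as the weight; zero entries are pruned.
        self.register_buffer('weight_mask', torch.ones_like(self.weight))

    def forward(self, x):
        # The mask is applied on every forward pass, so gradients flow through
        # the surviving weights and fine-tuning works without code changes.
        return torch.nn.functional.linear(x, self.weight * self.weight_mask, self.bias)
```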
4 changes: 2 additions & 2 deletions examples/model_compress/model_prune_torch.py
@@ -10,7 +10,7 @@
from models.cifar10.vgg import VGG
import nni
from nni.compression.torch import LevelPruner, SlimPruner, FPGMPruner, L1FilterPruner, \
-    L2FilterPruner, AGP_Pruner, ActivationMeanRankFilterPruner, ActivationAPoZRankFilterPruner
+    L2FilterPruner, AGPPruner, ActivationMeanRankFilterPruner, ActivationAPoZRankFilterPruner

prune_config = {
'level': {
@@ -25,7 +25,7 @@
'agp': {
'dataset_name': 'mnist',
'model_name': 'naive',
- 'pruner_class': AGP_Pruner,
+ 'pruner_class': AGPPruner,
'config_list': [{
'initial_sparsity': 0.,
'final_sparsity': 0.8,
66 changes: 33 additions & 33 deletions src/sdk/pynni/nni/compression/tensorflow/builtin_pruners.py
@@ -6,17 +6,23 @@
import tensorflow as tf
from .compressor import Pruner

- __all__ = ['LevelPruner', 'AGP_Pruner', 'FPGMPruner']
+ __all__ = ['LevelPruner', 'AGPPruner', 'FPGMPruner']

_logger = logging.getLogger(__name__)


class LevelPruner(Pruner):
"""
Parameters
----------
model : tensorflow model
Model to be pruned
config_list : list
Supported keys:
- sparsity : The sparsity to which the specified operations are compressed.
- op_types : Operation types to prune.
"""
def __init__(self, model, config_list):
"""
config_list: supported keys:
- sparsity
"""
super().__init__(model, config_list)
self.mask_list = {}
self.if_init_list = {}
@@ -34,24 +40,22 @@ def calc_mask(self, layer, config):
return mask


class AGP_Pruner(Pruner):
"""An automated gradual pruning algorithm that prunes the smallest magnitude
weights to achieve a preset level of network sparsity.
Michael Zhu and Suyog Gupta, "To prune, or not to prune: exploring the
efficacy of pruning for model compression", 2017 NIPS Workshop on Machine
Learning of Phones and other Consumer Devices,
https://arxiv.org/pdf/1710.01878.pdf
class AGPPruner(Pruner):
"""
Parameters
----------
model : tensorflow model
Model to be pruned.
config_list : list
Supported keys:
- initial_sparsity: The sparsity at which the compressor starts compressing.
- final_sparsity: The sparsity at which the compressor finishes compressing.
- start_epoch: The epoch at which the compressor starts compressing; defaults to epoch 0.
- end_epoch: The epoch at which the compressor finishes compressing.
- frequency: Compress once every *frequency* epochs; default frequency=1.
"""

def __init__(self, model, config_list):
"""
config_list: supported keys:
- initial_sparsity
- final_sparsity: you should make sure initial_sparsity <= final_sparsity
- start_epoch: start epoch numer begin update mask
- end_epoch: end epoch number stop update mask
- frequency: if you want update every 2 epoch, you can set it 2
"""
super().__init__(model, config_list)
self.mask_list = {}
self.if_init_list = {}
@@ -102,23 +106,19 @@ def update_epoch(self, epoch, sess):
for k in self.if_init_list:
self.if_init_list[k] = True


class FPGMPruner(Pruner):
"""
A filter pruner via geometric median.
"Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration",
https://arxiv.org/pdf/1811.00250.pdf
Parameters
----------
model : tensorflow model
Model to be pruned
config_list : list
Supported keys:
- sparsity : percentage of convolutional filters to be pruned.
- op_types : Only Conv2d is supported in FPGM Pruner.
"""

def __init__(self, model, config_list):
"""
Parameters
----------
model : pytorch model
the model user wants to compress
config_list: list
support key for each list item:
- sparsity: percentage of convolutional filters to be pruned.
"""
super().__init__(model, config_list)
self.mask_dict = {}
self.assign_handler = []
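For orientation, a minimal usage sketch of the pruners defined in this file. We assume a built TensorFlow `model`, that `LevelPruner` is importable from `nni.compression.tensorflow`, and that it follows the same `compress()` pattern as the PyTorch pruners; treat it as an illustration, not verified API.

```python
from nni.compression.tensorflow import LevelPruner

model = build_model()  # hypothetical helper returning a TensorFlow model

# Prune 50% of the weights in all default op types.
config_list = [{'sparsity': 0.5, 'op_types': ['default']}]
pruner = LevelPruner(model, config_list)
pruner.compress()
```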
90 changes: 41 additions & 49 deletions src/sdk/pynni/nni/compression/torch/pruning/admm_pruner.py
@@ -15,58 +15,50 @@

class ADMMPruner(OneshotPruner):
"""
- This is a Pytorch implementation of ADMM Pruner algorithm.
+ A PyTorch implementation of the ADMM Pruner algorithm.

Parameters
----------
model : torch.nn.Module
Model to be pruned.
config_list : list
List on pruning configs.
trainer : function
Function used for the first subproblem.
Users should write this function as a normal function to train the PyTorch model
and include `model, optimizer, criterion, epoch, callback` as function arguments.
Here `callback` acts as an L2 regularizer, as presented in formula (7) of the original paper.
The logic of `callback` is implemented inside the Pruner,
users are just required to insert `callback()` between `loss.backward()` and `optimizer.step()`.
Example::

def trainer(model, criterion, optimizer, epoch, callback):
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
train_loader = ...
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data)
loss = criterion(output, target)
loss.backward()
# callback should be inserted between loss.backward() and optimizer.step()
if callback:
callback()
optimizer.step()
num_iterations : int
Total number of iterations.
training_epochs : int
Training epochs of the first subproblem.
row : float
Penalty parameter for ADMM training.
base_algo : str
Base pruning algorithm. `level`, `l1` or `l2`, by default `l1`. Given the sparsity distribution among the ops,
the assigned `base_algo` is used to decide which filters/channels/weights to prune.

Alternating Direction Method of Multipliers (ADMM) is a mathematical optimization technique
that decomposes the original nonconvex problem into two subproblems which can be solved iteratively.
In the weight pruning problem, these two subproblems are solved via 1) gradient descent and 2) Euclidean projection, respectively.
This solution framework applies to both non-structured and various structured pruning schemes.

For more details, please refer to the paper: https://arxiv.org/abs/1804.03294.
"""

def __init__(self, model, config_list, trainer, num_iterations=30, training_epochs=5, row=1e-4, base_algo='l1'):
"""
Parameters
----------
model : torch.nn.module
Model to be pruned
config_list : list
List on pruning configs
trainer : function
Function used for the first subproblem.
Users should write this function as a normal function to train the Pytorch model
and include `model, optimizer, criterion, epoch, callback` as function arguments.
Here `callback` acts as an L2 regulizer as presented in the formula (7) of the original paper.
The logic of `callback` is implemented inside the Pruner,
users are just required to insert `callback()` between `loss.backward()` and `optimizer.step()`.
Example::
```
>>> def trainer(model, criterion, optimizer, epoch, callback):
>>> device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
>>> train_loader = ...
>>> model.train()
>>> for batch_idx, (data, target) in enumerate(train_loader):
>>> data, target = data.to(device), target.to(device)
>>> optimizer.zero_grad()
>>> output = model(data)
>>> loss = criterion(output, target)
>>> loss.backward()
>>> # callback should be inserted between loss.backward() and optimizer.step()
>>> if callback:
>>> callback()
>>> optimizer.step()
```
num_iterations : int
Total number of iterations.
training_epochs : int
Training epochs of the first subproblem.
row : float
Penalty parameters for ADMM training.
base_algo : str
Base pruning algorithm. `level`, `l1` or `l2`, by default `l1`. Given the sparsity distribution among the ops,
the assigned `base_algo` is used to decide which filters/channels/weights to prune.
"""
self._base_algo = base_algo

super().__init__(model, config_list)
@@ -83,7 +75,7 @@ def validate_config(self, model, config_list):
"""
Parameters
----------
- model : torch.nn.module
+ model : torch.nn.Module
Model to be pruned
config_list : list
List on pruning configs
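For readers skimming the diff, the decomposition the new docstring refers to can be written out explicitly. This is our transcription of the standard ADMM weight-pruning formulation from the cited paper (arXiv:1804.03294): f is the training loss, g is the indicator function of the sparsity constraint set S, and the penalty ρ corresponds to the `row` argument above.

```latex
% Constrained reformulation of the pruning problem
\min_{W,\,Z}\; f(W) + g(Z) \qquad \text{s.t. } W = Z

% ADMM alternates three updates at iteration k:
W^{k+1} = \arg\min_{W}\; f(W) + \tfrac{\rho}{2}\,\lVert W - Z^{k} + U^{k} \rVert_F^{2}
    % subproblem 1: solved by gradient descent; the quadratic term is the trainer's `callback`
Z^{k+1} = \Pi_{S}\!\left(W^{k+1} + U^{k}\right)
    % subproblem 2: Euclidean projection onto S, i.e. keep the largest-magnitude weights
U^{k+1} = U^{k} + W^{k+1} - Z^{k+1}
    % dual variable update
```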
49 changes: 27 additions & 22 deletions src/sdk/pynni/nni/compression/torch/pruning/agp.py
@@ -1,41 +1,46 @@
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.

"""
An automated gradual pruning algorithm that prunes the smallest magnitude
weights to achieve a preset level of network sparsity.
Michael Zhu and Suyog Gupta, "To prune, or not to prune: exploring the
efficacy of pruning for model compression", 2017 NIPS Workshop on Machine
Learning of Phones and other Consumer Devices.
"""

import logging
import torch
from schema import And, Optional
from .constants import MASKER_DICT
from ..utils.config_validation import CompressorSchema
from ..compressor import Pruner

- __all__ = ['AGP_Pruner']
+ __all__ = ['AGPPruner']

logger = logging.getLogger('torch pruner')

- class AGP_Pruner(Pruner):
+ class AGPPruner(Pruner):
"""
An automated gradual pruning algorithm that prunes the smallest magnitude
weights to achieve a preset level of network sparsity.
Michael Zhu and Suyog Gupta, "To prune, or not to prune: exploring the
efficacy of pruning for model compression", 2017 NIPS Workshop on Machine
Learning of Phones and other Consumer Devices,
https://arxiv.org/pdf/1710.01878.pdf
Parameters
----------
model : torch.nn.Module
Model to be pruned.
config_list : list
Supported keys:
- initial_sparsity: The sparsity at which the compressor starts compressing.
- final_sparsity: The sparsity at which the compressor finishes compressing.
- start_epoch: The epoch at which the compressor starts compressing; defaults to epoch 0.
- end_epoch: The epoch at which the compressor finishes compressing.
- frequency: Compress once every *frequency* epochs; default frequency=1.
optimizer: torch.optim.Optimizer
Optimizer used to train model.
pruning_algorithm: str
Algorithm used to prune the model;
choose from `['level', 'slim', 'l1', 'l2', 'fpgm', 'taylorfo', 'apoz', 'mean_activation']`, by default `level`.
"""

def __init__(self, model, config_list, optimizer, pruning_algorithm='level'):
"""
Parameters
----------
model : torch.nn.module
Model to be pruned
config_list : list
List on pruning configs
optimizer: torch.optim.Optimizer
Optimizer used to train model
pruning_algorithm: str
algorithms being used to prune model
"""

super().__init__(model, config_list, optimizer)
assert isinstance(optimizer, torch.optim.Optimizer), "AGP pruner is an iterative pruner, please pass the model's optimizer to it"
self.masker = MASKER_DICT[pruning_algorithm](model, self)
@@ -47,7 +52,7 @@ def validate_config(self, model, config_list):
"""
Parameters
----------
- model : torch.nn.module
+ model : torch.nn.Module
Model to be pruned
config_list : list
List on pruning configs
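For context on the keys documented above: the gradual schedule that `initial_sparsity`, `final_sparsity`, `start_epoch`, `end_epoch`, and `frequency` parameterize is the cubic ramp from the Zhu & Gupta paper, transcribed here with s_i/s_f the initial/final sparsity, t_0 the starting step, Δt the pruning frequency, and n the number of pruning steps:

```latex
s_t = s_f + \left(s_i - s_f\right)\left(1 - \frac{t - t_0}{n\,\Delta t}\right)^{3},
\qquad t \in \{t_0,\; t_0 + \Delta t,\; \ldots,\; t_0 + n\,\Delta t\}
```

And a minimal usage sketch for the renamed class. Here `model`, `optimizer`, and `train_one_epoch` are assumed to be defined by the user (hypothetical names, not part of this diff); the config keys mirror the example in examples/model_compress above.

```python
from nni.compression.torch import AGPPruner

config_list = [{
    'initial_sparsity': 0.0,
    'final_sparsity': 0.8,
    'start_epoch': 0,
    'end_epoch': 10,
    'frequency': 1,
    'op_types': ['default'],
}]
pruner = AGPPruner(model, config_list, optimizer, pruning_algorithm='level')
model = pruner.compress()
for epoch in range(10):
    train_one_epoch(model, optimizer)  # user-defined training step
    pruner.update_epoch(epoch)         # advance the AGP sparsity schedule
```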
@@ -14,7 +14,7 @@ def apply_compression_results(model, masks_file, map_location=None):

Parameters
----------
- model : torch.nn.module
+ model : torch.nn.Module
The model to be compressed
masks_file : str
The path of the mask file
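A short usage sketch of this helper, with illustrative names: we assume the function is exported at package level and that 'masks.pth' was previously produced by a pruner's export step; the stand-in model below is only a placeholder for whatever architecture the masks were exported from.

```python
import torch
from nni.compression.torch import apply_compression_results

# Stand-in model; it must match the architecture the masks were exported from.
model = torch.nn.Sequential(
    torch.nn.Linear(784, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
)
# Replays the exported masks onto the model's weights.
apply_compression_results(model, 'masks.pth', map_location=torch.device('cpu'))
```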