
Commit

update
suiguoxin committed Jul 17, 2020
1 parent 642d4a7 commit 15a2b0d
Showing 8 changed files with 67 additions and 71 deletions.
45 changes: 27 additions & 18 deletions docs/en_US/Compressor/Pruner.md
@@ -11,9 +11,9 @@ We provide several pruning algorithms that support fine-grained weight pruning a
* [FPGM Pruner](#fpgm-pruner)
* [L1Filter Pruner](#l1filter-pruner)
* [L2Filter Pruner](#l2filter-pruner)
* [APoZ Rank Pruner](#activationapozrankfilterpruner)
* [Activation Mean Rank Pruner](#activationmeanrankfilterpruner)
* [Taylor FO On Weight Pruner](#taylorfoweightfilterpruner)
* [Activation APoZ Rank Filter Pruner](#activationapozrankfilter-pruner)
* [Activation Mean Rank Filter Pruner](#activationmeanrankfilter-pruner)
* [Taylor FO On Weight Pruner](#taylorfoweightfilter-pruner)

**Pruning Schedule**
* [AGP Pruner](#agp-pruner)
@@ -51,16 +51,16 @@ pruner.compress()

#### User configuration for Level Pruner

##### Tensorflow
##### PyTorch

```eval_rst
.. autoclass:: nni.compression.tensorflow.LevelPruner
.. autoclass:: nni.compression.torch.LevelPruner
```

##### PyTorch
##### Tensorflow

```eval_rst
.. autoclass:: nni.compression.torch.LevelPruner
.. autoclass:: nni.compression.tensorflow.LevelPruner
```
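
For quick reference, a minimal PyTorch usage sketch; `model` is assumed to be an existing `torch.nn.Module`, and the config values mirror the ones used in this commit's tests:

```python
from nni.compression.torch import LevelPruner

# prune 80% of the weights in every supported layer type
config_list = [{'sparsity': 0.8, 'op_types': ['default']}]
pruner = LevelPruner(model, config_list)
pruner.compress()
```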


@@ -140,17 +140,15 @@ pruner.compress()

#### User configuration for FPGM Pruner

##### Tensorflow
```eval_rst
.. autoclass:: nni.compression.tensorflow.FPGMPruner
```

##### PyTorch
```eval_rst
.. autoclass:: nni.compression.torch.FPGMPruner
```
***

##### Tensorflow
```eval_rst
.. autoclass:: nni.compression.tensorflow.FPGMPruner
```
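
As a quick sketch (PyTorch; `model` is assumed to exist and the sparsity value is illustrative), the constructor in this commit no longer takes an optimizer:

```python
from nni.compression.torch import FPGMPruner

# FPGM only supports Conv2d layers
config_list = [{'sparsity': 0.6, 'op_types': ['Conv2d']}]
pruner = FPGMPruner(model, config_list)
pruner.compress()
```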

## L1Filter Pruner

@@ -311,7 +309,6 @@ pruner = TaylorFOWeightFilterPruner(model, config_list, statistics_batch_num=1)
pruner.compress()
```

You can view [example](https://github.com/microsoft/nni/blob/master/examples/model_compress/model_prune_torch.py) for more information.

#### User configuration for TaylorFOWeightFilterPruner

@@ -323,13 +320,16 @@ You can view [example](https://github.com/microsoft/nni/blob/master/examples/mod


## AGP Pruner

This is an iterative pruner. In [To prune, or not to prune: exploring the efficacy of pruning for model compression](https://arxiv.org/abs/1710.01878), authors Michael Zhu and Suyog Gupta provide an algorithm to prune the weights gradually.

>We introduce a new automated gradual pruning algorithm in which the sparsity is increased from an initial sparsity value si (usually 0) to a final sparsity value sf over a span of n pruning steps, starting at training step t0 and with pruning frequency ∆t:
![](../../img/agp_pruner.png)
>The binary weight masks are updated every ∆t steps as the network is trained to gradually increase the sparsity of the network while allowing the network training steps to recover from any pruning-induced loss in accuracy. In our experience, varying the pruning frequency ∆t between 100 and 1000 training steps had a negligible impact on the final model quality. Once the model achieves the target sparsity sf , the weight masks are no longer updated. The intuition behind this sparsity function in equation

>The binary weight masks are updated every ∆t steps as the network is trained to gradually increase the sparsity of the network while allowing the network training steps to recover from any pruning-induced loss in accuracy. In our experience, varying the pruning frequency ∆t between 100 and 1000 training steps had a negligible impact on the final model quality. Once the model achieves the target sparsity sf , the weight masks are no longer updated. The intuition behind this sparsity function in equation (1).
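
For readers who cannot load the figure, the gradual sparsity schedule (equation (1) of the paper) can be written as:

```latex
s_t = s_f + (s_i - s_f)\left(1 - \frac{t - t_0}{n\,\Delta t}\right)^{3}
\quad \text{for } t \in \{t_0,\ t_0 + \Delta t,\ \ldots,\ t_0 + n\Delta t\}
```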
### Usage

You can prune all weights from 0% to 80% sparsity over 10 epochs with the code below.

PyTorch code
@@ -372,7 +372,7 @@ PyTorch code
```python
pruner.update_epoch(epoch)
```
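A minimal sketch of where this call typically sits in a training loop; the `train`/`evaluate` helpers and the epoch count are placeholders, not part of the NNI API:

```python
for epoch in range(10):
    train(model, train_loader, optimizer)   # one epoch of ordinary training
    evaluate(model, val_loader)
    pruner.update_epoch(epoch)              # let AGP recompute the pruning masks
```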
You can view example for more information.
You can view [example](https://github.com/microsoft/nni/blob/master/examples/model_compress/model_prune_torch.py) for more information.

#### User configuration for AGP Pruner

@@ -382,6 +382,12 @@ You can view example for more information.
.. autoclass:: nni.compression.torch.AGP_Pruner
```

##### Tensorflow

```eval_rst
.. autoclass:: nni.compression.tensorflow.AGP_Pruner
```
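
For reference, a hedged PyTorch config sketch built from the keys documented in the TensorFlow docstring changed later in this commit; the numbers are illustrative, and the trailing `optimizer` argument follows the updated call in `test_pruners.py`:

```python
from nni.compression.torch import AGP_Pruner

config_list = [{
    'initial_sparsity': 0.0,   # must not exceed final_sparsity
    'final_sparsity': 0.8,
    'start_epoch': 0,
    'end_epoch': 10,
    'frequency': 1,            # update the masks once per epoch
    'op_types': ['default']
}]
pruner = AGP_Pruner(model, config_list, optimizer)
pruner.compress()
```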

***

## NetAdapt Pruner
@@ -569,8 +575,11 @@ The above configuration means that there are 5 times of iterative pruning. As th

#### User configuration for LotteryTicketPruner

* **prune_iterations:** The number of rounds of iterative pruning, i.e., how many prune/retrain cycles are run.
* **sparsity:** The final sparsity when the compression is done.
##### PyTorch

```eval_rst
.. autoclass:: nni.compression.torch.LotteryTicketPruner
```
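
A hedged usage sketch assembled from the constructor documented in `lottery_ticket.py` below; the sparsity, iteration count, and `op_types` value are illustrative, and `model`/`optimizer` are assumed to exist:

```python
from nni.compression.torch import LotteryTicketPruner

# prune_iterations and sparsity are the two supported config keys
config_list = [{'prune_iterations': 5, 'sparsity': 0.8, 'op_types': ['default']}]
pruner = LotteryTicketPruner(model, config_list, optimizer, reset_weights=True)
pruner.compress()
```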

### Reproduced Experiment

Binary file modified docs/img/agp_pruner.png
26 changes: 12 additions & 14 deletions src/sdk/pynni/nni/compression/tensorflow/builtin_pruners.py
@@ -41,23 +41,21 @@ def calc_mask(self, layer, config):


class AGP_Pruner(Pruner):
"""An automated gradual pruning algorithm that prunes the smallest magnitude
weights to achieve a preset level of network sparsity.
Michael Zhu and Suyog Gupta, "To prune, or not to prune: exploring the
efficacy of pruning for model compression", 2017 NIPS Workshop on Machine
Learning of Phones and other Consumer Devices,
https://arxiv.org/pdf/1710.01878.pdf
"""
Parameters
----------
model : tensorflow model
Model to be pruned.
config_list : list
Supported keys:
- initial_sparsity: This is to specify the sparsity when the compressor starts to compress.
- final_sparsity: This is to specify the sparsity when the compressor finishes compressing.
- start_epoch: This is to specify the epoch number when the compressor starts to compress; it defaults to epoch 0.
- end_epoch: This is to specify the epoch number when the compressor finishes compressing.
- frequency: This is to specify that the compressor compresses once every *frequency* epochs; default frequency=1.
"""

def __init__(self, model, config_list):
"""
config_list: supported keys:
- initial_sparsity
- final_sparsity: you should make sure initial_sparsity <= final_sparsity
- start_epoch: start epoch number to begin updating the mask
- end_epoch: end epoch number to stop updating the mask
- frequency: if you want to update every 2 epochs, set it to 2
"""
super().__init__(model, config_list)
self.mask_list = {}
self.if_init_list = {}
39 changes: 14 additions & 25 deletions src/sdk/pynni/nni/compression/torch/pruning/lottery_ticket.py
@@ -13,33 +13,22 @@

class LotteryTicketPruner(Pruner):
"""
This is a PyTorch implementation of the paper "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks",
following the NNI model compression interface.
1. Randomly initialize a neural network f(x;theta_0) (where theta_0 follows D_{theta}).
2. Train the network for j iterations, arriving at parameters theta_j.
3. Prune p% of the parameters in theta_j, creating a mask m.
4. Reset the remaining parameters to their values in theta_0, creating the winning ticket f(x;m*theta_0).
5. Repeat steps 2, 3, and 4.
Parameters
----------
model : pytorch model
The model to be pruned
config_list : list
Supported keys:
- prune_iterations : The number of rounds for the iterative pruning.
- sparsity : The final sparsity when the compression is done.
optimizer : pytorch optimizer
The optimizer for the model
lr_scheduler : pytorch lr scheduler
The lr scheduler for the model if used
reset_weights : bool
Whether to reset the weights and optimizer at the beginning of each round.
"""

def __init__(self, model, config_list, optimizer=None, lr_scheduler=None, reset_weights=True):
"""
Parameters
----------
model : pytorch model
The model to be pruned
config_list : list
Supported keys:
- prune_iterations : The number of rounds for the iterative pruning.
- sparsity : The final sparsity when the compression is done.
optimizer : pytorch optimizer
The optimizer for the model
lr_scheduler : pytorch lr scheduler
The lr scheduler for the model if used
reset_weights : bool
Whether to reset the weights and optimizer at the beginning of each round.
"""
# save init weights and optimizer
self.reset_weights = reset_weights
if self.reset_weights:
@@ -55,7 +55,7 @@ def short_term_fine_tuner(model, epoch=3):
function to evaluate the masked model.
This function should include `model` as the only parameter and return a scalar value.
Example::
def evaluator(model):
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
val_loader = ...
12 changes: 6 additions & 6 deletions src/sdk/pynni/nni/compression/torch/pruning/one_shot.py
@@ -176,8 +176,8 @@ class FPGMPruner(_StructuredFilterPruner):
- sparsity : This is to specify the sparsity to which the operations are to be compressed.
- op_types : Only Conv2d is supported in FPGM Pruner.
"""
def __init__(self, model, config_list, optimizer=None):
super().__init__(model, config_list, pruning_algorithm='fpgm', optimizer=optimizer)
def __init__(self, model, config_list):
super().__init__(model, config_list, pruning_algorithm='fpgm')

class TaylorFOWeightFilterPruner(_StructuredFilterPruner):
"""
@@ -204,8 +204,8 @@ class ActivationAPoZRankFilterPruner(_StructuredFilterPruner):
- sparsity : How much percentage of convolutional filters are to be pruned.
- op_types : Only Conv2d is supported in ActivationAPoZRankFilterPruner.
"""
def __init__(self, model, config_list, optimizer=None, activation='relu', statistics_batch_num=1):
super().__init__(model, config_list, pruning_algorithm='apoz', optimizer=optimizer, \
def __init__(self, model, config_list, activation='relu', statistics_batch_num=1):
super().__init__(model, config_list, pruning_algorithm='apoz', optimizer=None, \
activation=activation, statistics_batch_num=statistics_batch_num)

class ActivationMeanRankFilterPruner(_StructuredFilterPruner):
@@ -219,6 +219,6 @@ class ActivationMeanRankFilterPruner(_StructuredFilterPruner):
- sparsity : How much percentage of convolutional filters are to be pruned.
- op_types : Only Conv2d is supported in ActivationMeanRankFilterPruner.
"""
def __init__(self, model, config_list, optimizer=None, activation='relu', statistics_batch_num=1):
super().__init__(model, config_list, pruning_algorithm='mean_activation', optimizer=optimizer, \
def __init__(self, model, config_list, activation='relu', statistics_batch_num=1):
super().__init__(model, config_list, pruning_algorithm='mean_activation', optimizer=None, \
activation=activation, statistics_batch_num=statistics_batch_num)
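
For illustration, a hedged sketch of constructing these pruners with the signatures shown in this hunk (the optimizer argument is gone); the sparsity value and `model` are placeholders:

```python
from nni.compression.torch import FPGMPruner, ActivationAPoZRankFilterPruner

config_list = [{'sparsity': 0.5, 'op_types': ['Conv2d']}]

# neither constructor takes an optimizer argument after this commit
fpgm = FPGMPruner(model, config_list)
apoz = ActivationAPoZRankFilterPruner(model, config_list, activation='relu', statistics_batch_num=1)
```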
10 changes: 4 additions & 6 deletions src/sdk/pynni/tests/test_compressor.py
@@ -88,9 +88,8 @@ def test_torch_quantizer_modules_detection(self):

def test_torch_level_pruner(self):
model = TorchModel()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.5)
configure_list = [{'sparsity': 0.8, 'op_types': ['default']}]
torch_compressor.LevelPruner(model, configure_list, optimizer).compress()
torch_compressor.LevelPruner(model, configure_list).compress()

@tf2
def test_tf_level_pruner(self):
@@ -129,7 +128,7 @@ def test_torch_fpgm_pruner(self):

model = TorchModel()
config_list = [{'sparsity': 0.6, 'op_types': ['Conv2d']}, {'sparsity': 0.2, 'op_types': ['Conv2d']}]
pruner = torch_compressor.FPGMPruner(model, config_list, torch.optim.SGD(model.parameters(), lr=0.01))
pruner = torch_compressor.FPGMPruner(model, config_list)

model.conv2.module.weight.data = torch.tensor(w).float()
masks = pruner.calc_mask(model.conv2)
@@ -315,7 +314,7 @@ def test_torch_QAT_quantizer(self):
def test_torch_pruner_validation(self):
# test bad configuration
pruner_classes = [torch_compressor.__dict__[x] for x in \
['LevelPruner', 'SlimPruner', 'FPGMPruner', 'L1FilterPruner', 'L2FilterPruner', 'AGP_Pruner', \
['LevelPruner', 'SlimPruner', 'FPGMPruner', 'L1FilterPruner', 'L2FilterPruner', \
'ActivationMeanRankFilterPruner', 'ActivationAPoZRankFilterPruner']]

bad_configs = [
@@ -337,11 +336,10 @@ def test_torch_pruner_validation(self):
]
]
model = TorchModel()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
for pruner_class in pruner_classes:
for config_list in bad_configs:
try:
pruner_class(model, config_list, optimizer)
pruner_class(model, config_list)
print(config_list)
assert False, 'Validation error should be raised for bad configuration'
except schema.SchemaError:
4 changes: 3 additions & 1 deletion src/sdk/pynni/tests/test_pruners.py
@@ -192,7 +192,9 @@ def pruners_test(pruner_names=['level', 'agp', 'slim', 'fpgm', 'l1', 'l2', 'tayl
pruner = prune_config[pruner_name]['pruner_class'](model, config_list, trainer=prune_config[pruner_name]['trainer'])
elif pruner_name == 'autocompress':
pruner = prune_config[pruner_name]['pruner_class'](model, config_list, trainer=prune_config[pruner_name]['trainer'], evaluator=prune_config[pruner_name]['evaluator'], dummy_input=x)
else:
elif pruner_name in ['level', 'slim', 'fpgm', 'l1', 'l2', 'mean_activation', 'apoz']:
pruner = prune_config[pruner_name]['pruner_class'](model, config_list)
else: # 'agp', 'taylorfo'
pruner = prune_config[pruner_name]['pruner_class'](model, config_list, optimizer)
pruner.compress()

