
Compression doc structure refactor #2676

Merged: 116 commits merged into microsoft:master from doc-refactor on Jul 31, 2020

Conversation

@suiguoxin (Member) commented Jul 10, 2020

  • Reuse class docstrings in "Read the Docs", as in NAS
  • Remove the redundant parameter `optimizer` from the Pruners (`level`, `slim`, `fpgm`, `l1`, `l2`) and update the related unit tests accordingly (see the sketch after this list)
  • Fix some typos
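A minimal sketch of what the `optimizer` removal means for callers, assuming the `nni.compression.torch` API of this era (the model and sparsity values are illustrative):

```python
import torch
from nni.compression.torch import LevelPruner

model = torch.nn.Sequential(torch.nn.Linear(10, 10))
# One rule: prune 50% of the weights in all supported layer types.
config_list = [{'sparsity': 0.5, 'op_types': ['default']}]

# Before this PR the signature required an optimizer even though
# LevelPruner never uses gradients:
#   pruner = LevelPruner(model, config_list, optimizer)
pruner = LevelPruner(model, config_list)  # after this PR
pruner.compress()
```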

@suiguoxin force-pushed the doc-refactor branch 2 times, most recently from bc039ef to 3f0eb81 on July 17, 2020 06:27
@suiguoxin marked this pull request as ready for review July 17, 2020 06:28
@scarlett2018 mentioned this pull request Jul 17, 2020
@ultmaster requested a review from chicm-ms July 17, 2020 09:28

- **sparsity:** The percentage of convolutional filters to be pruned.
- **op_types:** Currently only Conv2d is supported in TaylorFOWeightFilterPruner.
#### User configuration for TaylorFOWeightFilterPruner
Contributor:

TaylorFOWeightFilterPruner -> TaylorFOWeightFilter Pruner

Member Author (@suiguoxin):

fixed
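For reference, a minimal `config_list` matching the two keys documented in the hunk above (a sketch; the sparsity value is illustrative):

```python
config_list = [{
    'sparsity': 0.5,          # prune 50% of the convolutional filters
    'op_types': ['Conv2d'],   # only Conv2d is supported by this pruner
}]
```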

@@ -223,16 +246,18 @@ pruner.compress()

Note: ActivationAPoZRankFilterPruner is used to prune convolutional layers within deep neural networks, therefore the `op_types` field supports only convolutional layers.

- You can view example for more information
+ You can view [example](https://github.com/microsoft/nni/blob/master/examples/model_compress/model_prune_torch.py) for more information.

### User configuration for ActivationAPoZRankFilterPruner
Contributor:

ActivationAPoZRankFilterPruner -> ActivationAPoZRankFilter Pruner

Member Author (@suiguoxin):

fixed

##### PyTorch

```eval_rst
.. autoclass:: nni.compression.torch.AGP_Pruner
```
Contributor:

Could you help change all `AGP_Pruner` to `AGPPruner`? This pruner's name is different from all the others...

Member Author (@suiguoxin):

fixed

##### PyTorch

```eval_rst
.. autoclass:: nni.compression.torch.ADMMPruner
```
Contributor:

This is not `ADMMPruner`.

Member Author (@suiguoxin):

fixed

optimizer : torch.optim.Optimizer
    Optimizer used to train the model.
pruning_algorithm : str
    Algorithm used to prune the model.
Contributor:

Better to list all the options here.

Member Author (@suiguoxin):

fixed
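The fix presumably enumerates the accepted values; a sketch of such a docstring entry, using only the base pruner names from this PR's description (assumed wording, not necessarily the merged text):

```python
# Sketch of the docstring entry after listing the options:
#
# pruning_algorithm : str
#     Algorithm used to prune the model. One of
#     'level', 'slim', 'fpgm', 'l1', 'l2'.
```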

Supported keys:
- sparsity : The target sparsity to compress the operations to.
- op_types : Only BatchNorm2d is supported in Slim Pruner.
"""
Contributor:

This docstring is put above `__init__`. @chicm-ms @liuzhe-lz, do you think this is a good practice?

Member Author (@suiguoxin):

This is to keep consistent with the NAS classes and to generate the correct docstring on Read the Docs.

Contributor:

I think this is good practice since `__init__` is the constructor of the class. I checked PyTorch and TensorFlow; they put the class docstring before `__init__`, including the parameters of `__init__`. I was not aware of this before. For example: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/keras/engine/base_layer.py#L114

Contributor:

got it, thanks.
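To make the convention concrete, a minimal sketch (not the actual NNI source) of a pruner class that documents its constructor parameters in the class docstring, which Sphinx's `autoclass` picks up directly:

```python
class SlimPruner:
    """
    Prune scaling factors of BatchNorm layers to induce channel sparsity.

    Parameters
    ----------
    model : torch.nn.Module
        Model to be pruned.
    config_list : list
        Supported keys:
            - sparsity : The target sparsity to compress the operations to.
            - op_types : Only BatchNorm2d is supported in Slim Pruner.
    """

    def __init__(self, model, config_list):
        # Parameters are documented once, in the class docstring above,
        # rather than in a separate __init__ docstring.
        self.model = model
        self.config_list = config_list
```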

"""
Parameters
----------
model : torch.nn.module
Contributor:

`torch.nn.module` should be `torch.nn.Module`.

Base pruning algorithm: `level`, `l1`, or `l2`; `l1` by default. Given the sparsity distribution among the ops,
the assigned `base_algo` is used to decide which filters/channels/weights to prune.
start_temperature : float
    Simulated Annealing related parameter.
Contributor:

Suggest providing more description for `start_temperature`, `stop_temperature`, and `cool_down_rate`.
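To make the reviewer's point concrete, a sketch of how the three parameters typically interact in simulated annealing (illustrative numbers, not NNI's implementation):

```python
# Temperature decays geometrically by cool_down_rate each iteration;
# annealing stops once it falls below stop_temperature.
start_temperature = 100.0  # initial temperature; higher means more exploration
stop_temperature = 20.0    # termination threshold
cool_down_rate = 0.9       # multiplicative decay per iteration

temperature = start_temperature
rounds = 0
while temperature > stop_temperature:
    temperature *= cool_down_rate
    rounds += 1

print(rounds)  # 16 annealing rounds with these values
```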

@ultmaster ultmaster merged commit 41312de into microsoft:master Jul 31, 2020
@suiguoxin suiguoxin deleted the doc-refactor branch August 1, 2020 06:54
- sparsity : The target sparsity to compress the operations to.
- op_types : Operation types to prune.
"""
def __init__(self, model, config_list):
Contributor:

`optimizer=None` should not be removed. The example does not work without this parameter.
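A minimal sketch of the backward-compatible signature the comment asks for (hypothetical, not the merged code): keep `optimizer` as an optional keyword so existing callers keep working.

```python
def __init__(self, model, config_list, optimizer=None):
    # Hypothetical: `optimizer` stays as an optional argument, so the
    # existing example that passes it continues to run.
    self.bound_model = model
    self.config_list = config_list
    self.optimizer = optimizer
```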

5 participants