This repository has been archived by the owner on Sep 18, 2024. It is now read-only.

Refactor code hierarchy part 3: Unit test (#3037)
liuzhe-lz authored Oct 30, 2020
1 parent 80b6cb3 commit bc0f8f3
Showing 116 changed files with 813 additions and 580 deletions.
4 changes: 4 additions & 0 deletions .gitignore
@@ -5,6 +5,10 @@
 /test/model_path/
 /test/temp.json
 /test/ut/sdk/*.pth
+/test/ut/tools/annotation/_generated/
+/ts/nni_manager/exp_profile.json
+/ts/nni_manager/metrics.json
+/ts/nni_manager/trial_jobs.json
 
 
 # Logs
53 changes: 0 additions & 53 deletions archive-ut/nni_annotation/test_annotation.py

This file was deleted.

1 change: 0 additions & 1 deletion archive-ut/nni_cmd/tests/mock/nnictl_metadata/.experiment

This file was deleted.


123 changes: 0 additions & 123 deletions archive-ut/nni_trial_tool/test/test_file_channel.py

This file was deleted.

86 changes: 0 additions & 86 deletions archive-ut/nni_trial_tool/test/test_hdfsClientUtility.py

This file was deleted.

6 changes: 3 additions & 3 deletions docs/en_US/Compression/AutoPruningUsingTuners.md
@@ -7,13 +7,13 @@ It's convenient to implement auto model pruning with NNI compression and NNI tuners
You can easily compress a model with NNI compression. Take pruning for example, you can prune a pretrained model with LevelPruner like this

```python
-from nni.compression.torch import LevelPruner
+from nni.algorithms.compression.pytorch.pruning import LevelPruner
config_list = [{ 'sparsity': 0.8, 'op_types': ['default'] }]
pruner = LevelPruner(model, config_list)
pruner.compress()
```

-The 'default' op_type stands for the module types defined in [default_layers.py](https://github.com/microsoft/nni/blob/v1.9/src/sdk/pynni/nni/compression/torch/default_layers.py) for pytorch.
+The 'default' op_type stands for the module types defined in [default_layers.py](https://github.com/microsoft/nni/blob/v1.9/src/sdk/pynni/nni/compression/pytorch/default_layers.py) for pytorch.

Therefore ```{ 'sparsity': 0.8, 'op_types': ['default'] }``` means that **all layers with the specified op_types will be compressed with the same 0.8 sparsity**. When ```pruner.compress()``` is called, the model is compressed with masks; after that you can fine-tune the model normally, and the **pruned weights, which have been masked, won't be updated**.
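The level-pruning idea above (keep the largest-magnitude weights, mask out the rest until the target sparsity is reached) can be sketched in plain Python, independent of NNI and PyTorch. All names here are illustrative, not NNI's API:

```python
def level_prune_mask(weights, sparsity):
    """Return a 0/1 mask keeping the largest-magnitude weights.

    Illustrative sketch of level (magnitude) pruning; not NNI's implementation.
    """
    n_prune = int(len(weights) * sparsity)
    # Indices of weights sorted by absolute magnitude, smallest first
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    mask = [1] * len(weights)
    for i in order[:n_prune]:
        mask[i] = 0
    return mask

weights = [0.9, -0.05, 0.4, 0.01, -0.7]
mask = level_prune_mask(weights, sparsity=0.8)
pruned = [w * m for w, m in zip(weights, mask)]
```

With sparsity 0.8 on five weights, four are masked and only the largest-magnitude one survives; NNI applies the same idea per layer, on tensors, via the mask mechanism.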

@@ -71,7 +71,7 @@ Then we need to modify our code in a few lines

```python
import nni
-from nni.compression.torch import *
+from nni.algorithms.compression.pytorch.pruning import *
params = nni.get_parameters()
conv0_sparsity = params['prune_method']['conv0_sparsity']
conv1_sparsity = params['prune_method']['conv1_sparsity']
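The tuner-driven loop this section describes (the tuner proposes per-layer sparsities, the trial prunes and reports a score) can be sketched end-to-end without NNI, using plain random search. The search space, `evaluate` objective, and parameter names below are toy stand-ins, not NNI's API:

```python
import random

def evaluate(conv0_sparsity, conv1_sparsity):
    """Stand-in for prune + fine-tune + eval; a real trial would report accuracy to NNI."""
    # Toy objective: reward higher overall sparsity, penalize over-pruning conv0
    return conv0_sparsity * 0.5 + conv1_sparsity - max(0.0, conv0_sparsity - 0.7)

random.seed(0)
best = None
for _ in range(20):
    # The tuner's role: sample candidate sparsities from the search space
    params = {"conv0_sparsity": random.choice([0.3, 0.5, 0.7, 0.9]),
              "conv1_sparsity": random.choice([0.3, 0.5, 0.7, 0.9])}
    score = evaluate(**params)
    if best is None or score > best[0]:
        best = (score, params)
```

NNI replaces the `random.choice` lines with a real tuner (TPE, SMAC, etc.) and distributes the trials, but the control flow is the same.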
16 changes: 8 additions & 8 deletions docs/en_US/Compression/CompressionReference.md
@@ -7,34 +7,34 @@
## Sensitivity Utilities

```eval_rst
-.. autoclass:: nni.compression.torch.utils.sensitivity_analysis.SensitivityAnalysis
+.. autoclass:: nni.compression.pytorch.utils.sensitivity_analysis.SensitivityAnalysis
   :members:
```

## Topology Utilities

```eval_rst
-.. autoclass:: nni.compression.torch.utils.shape_dependency.ChannelDependency
+.. autoclass:: nni.compression.pytorch.utils.shape_dependency.ChannelDependency
   :members:
-.. autoclass:: nni.compression.torch.utils.shape_dependency.GroupDependency
+.. autoclass:: nni.compression.pytorch.utils.shape_dependency.GroupDependency
   :members:
-.. autoclass:: nni.compression.torch.utils.mask_conflict.CatMaskPadding
+.. autoclass:: nni.compression.pytorch.utils.mask_conflict.CatMaskPadding
   :members:
-.. autoclass:: nni.compression.torch.utils.mask_conflict.GroupMaskConflict
+.. autoclass:: nni.compression.pytorch.utils.mask_conflict.GroupMaskConflict
   :members:
-.. autoclass:: nni.compression.torch.utils.mask_conflict.ChannelMaskConflict
+.. autoclass:: nni.compression.pytorch.utils.mask_conflict.ChannelMaskConflict
   :members:
:members:
```

## Model FLOPs/Parameters Counter

```eval_rst
-.. autofunction:: nni.compression.torch.utils.counter.count_flops_params
+.. autofunction:: nni.compression.pytorch.utils.counter.count_flops_params
```
10 changes: 5 additions & 5 deletions docs/en_US/Compression/CompressionUtils.md
@@ -13,7 +13,7 @@ First, we provide a sensitivity analysis tool (**SensitivityAnalysis**) for user

The following codes show the basic usage of the SensitivityAnalysis.
```python
-from nni.compression.torch.utils.sensitivity_analysis import SensitivityAnalysis
+from nni.compression.pytorch.utils.sensitivity_analysis import SensitivityAnalysis

def val(model):
model.eval()
@@ -88,7 +88,7 @@ If the layers have channel dependency are assigned with different sparsities (he

#### Usage
```python
-from nni.compression.torch.utils.shape_dependency import ChannelDependency
+from nni.compression.pytorch.utils.shape_dependency import ChannelDependency
data = torch.ones(1, 3, 224, 224).cuda()
channel_depen = ChannelDependency(net, data)
channel_depen.export('dependency.csv')
@@ -116,7 +116,7 @@ Set 12,layer4.1.conv1
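The dependency sets in the exported CSV (layers whose outputs are combined, e.g. by residual additions, and must therefore keep the same channels) can be modeled with a simple union-find over layer names. A stdlib-only sketch, not NNI's implementation; the layer names and pairs are made up:

```python
def dependency_sets(pairs):
    """Group layer names into channel-dependency sets (union-find sketch)."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # Each pair says: these two layers' output channels must be pruned together
    for a, b in pairs:
        parent[find(a)] = find(b)

    sets = {}
    for name in parent:
        sets.setdefault(find(name), set()).add(name)
    return sorted(map(sorted, sets.values()))

# e.g. residual additions tie these convolutions together
pairs = [("layer1.0.conv2", "layer1.1.conv2"),
         ("layer1.1.conv2", "conv1"),
         ("layer4.0.conv2", "layer4.1.conv2")]
sets = dependency_sets(pairs)
```

NNI builds these pairs automatically by tracing the network graph with the dummy input; here they are given by hand to keep the sketch self-contained.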
When the masks of different layers in a model conflict (for example, different sparsities are assigned to layers that have channel dependency), we can fix the conflict with MaskConflict. Specifically, MaskConflict loads the masks exported by the pruners (L1FilterPruner, etc.), checks whether there is a mask conflict, and if so sets the conflicting masks to the same value.

```python
-from nni.compression.torch.utils.mask_conflict import fix_mask_conflict
+from nni.compression.pytorch.utils.mask_conflict import fix_mask_conflict
fixed_mask = fix_mask_conflict('./resnet18_mask', net, data)
```
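The "set conflicting masks to the same value" step can be pictured with a toy sketch: within one dependency set, every layer's channel mask is replaced by a shared merged mask. This sketch merges by element-wise OR (a channel survives if any layer keeps it), which is one reasonable choice; the real `fix_mask_conflict` may merge differently:

```python
def unify_masks(masks):
    """Give every layer in one dependency set the same channel mask.

    Sketch only: merges by element-wise OR; not fix_mask_conflict's actual logic.
    """
    n_channels = len(next(iter(masks.values())))
    # A channel is kept in the merged mask if any layer in the set keeps it
    merged = [int(any(m[i] for m in masks.values())) for i in range(n_channels)]
    return {name: list(merged) for name in masks}

# Two layers with channel dependency were pruned to different masks
masks = {"layer4.0.conv2": [1, 0, 1, 0],
         "layer4.1.conv2": [1, 1, 0, 0]}
fixed = unify_masks(masks)
```

After the fix both layers share the mask `[1, 1, 1, 0]`, so channel pruning can actually be applied when the model is sped up.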

@@ -125,10 +125,10 @@ We provide a model counter for calculating the model FLOPs and parameters. This

### Usage
```python
-from nni.compression.torch.utils.counter import count_flops_params
+from nni.compression.pytorch.utils.counter import count_flops_params
# Given input size (1, 1, 28, 28)
flops, params = count_flops_params(model, (1, 1, 28, 28))
# Format output size to M (i.e., 10^6)
print(f'FLOPs: {flops/1e6:.3f}M, Params: {params/1e6:.3f}M')
```
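For intuition, the counts for a single Conv2d layer can be computed by hand: params = C_out·C_in·k·k (+ C_out if biased), and FLOPs is roughly that many operations at every output spatial position. A stdlib sketch under those standard formulas (counting one multiply-accumulate as one FLOP; counters differ on this convention, and the example layer shape is made up):

```python
def conv2d_flops_params(c_in, c_out, k, h_out, w_out, bias=True):
    """FLOPs and parameter count for one Conv2d layer (one MAC counted as one FLOP)."""
    params = c_out * c_in * k * k + (c_out if bias else 0)
    # Every output position reuses all the weights once (plus one bias add each)
    flops = params * h_out * w_out
    return flops, params

# First conv of an MNIST-style model: 1 -> 32 channels, 3x3 kernel, 26x26 output
flops, params = conv2d_flops_params(1, 32, 3, 26, 26)
print(f'FLOPs: {flops/1e6:.3f}M, Params: {params/1e6:.6f}M')
```

`count_flops_params` walks the whole model and sums such per-layer counts (given the input size so it can infer each layer's output shape), rather than requiring the arithmetic by hand.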
