pull code #52

Merged: 4 commits, Nov 26, 2019
6 changes: 3 additions & 3 deletions README.md
@@ -71,7 +71,7 @@ The tool dispatches and runs trial jobs generated by tuning algorithms to search
<li><b>Examples</b></li>
<ul>
<li><a href="examples/trials/mnist-pytorch">MNIST-pytorch</li></a>
<li><a href="examples/trials/mnist">MNIST-tensorflow</li></a>
<li><a href="examples/trials/mnist-tfv1">MNIST-tensorflow</li></a>
<li><a href="examples/trials/mnist-keras">MNIST-keras</li></a>
<li><a href="docs/en_US/TrialExample/GbdtExample.md">Auto-gbdt</a></li>
<li><a href="docs/en_US/TrialExample/Cifar10Examples.md">Cifar10-pytorch</li></a>
@@ -245,15 +245,15 @@ Linux and MacOS
* Run the MNIST example.

```bash
- nnictl create --config nni/examples/trials/mnist/config.yml
+ nnictl create --config nni/examples/trials/mnist-tfv1/config.yml
```

Windows

* Run the MNIST example.

```bash
- nnictl create --config nni\examples\trials\mnist\config_windows.yml
+ nnictl create --config nni\examples\trials\mnist-tfv1\config_windows.yml
```

* Wait for the message `INFO: Successfully started experiment!` in the command line. This message indicates that your experiment has been successfully started. You can explore the experiment using the `Web UI url`.
4 changes: 3 additions & 1 deletion README_zh_CN.md
@@ -342,13 +342,15 @@ You can use these commands to get more information about the experiment
* Run [Neural Architecture Search](examples/trials/nas_cifar10/README_zh_CN.md) in NNI
* [Automatic Feature Engineering with NNI](examples/trials/auto-feature-engineering/README_zh_CN.md)
* [Hyperparameter tuning for matrix factorization](https://github.com/microsoft/recommenders/blob/master/notebooks/04_model_select_and_optimize/nni_surprise_svd.ipynb) with NNI
+ * [scikit-nni](https://github.com/ksachdeva/scikit-nni): hyperparameter search for scikit-learn, developed using NNI.
* ### **Relevant Articles**

* [Hyperparameter Optimization Comparison](docs/zh_CN/CommunitySharings/HpoComparision.md)
* [Neural Architecture Search Comparison](docs/zh_CN/CommunitySharings/NasComparision.md)
* [Parallelizing a Sequential Algorithm: TPE](docs/zh_CN/CommunitySharings/ParallelizingTpeSearch.md)
* [Automatically tuning SVD with NNI](docs/zh_CN/CommunitySharings/RecommendersSvd.md)
* [Automatically tuning SPTAG with NNI](docs/zh_CN/CommunitySharings/SptagAutoTune.md)
+ * [Find thy hyper-parameters for scikit-learn pipelines using NNI](https://towardsdatascience.com/find-thy-hyper-parameters-for-scikit-learn-pipelines-using-microsoft-nni-f1015b1224c1)
* **Blog** - [Comparison of AutoML tools (Advisor, NNI and Google Vizier)](http://gaocegege.com/Blog/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0/katib-new#%E6%80%BB%E7%BB%93%E4%B8%8E%E5%88%86%E6%9E%90) by [@gaocegege](https://github.com/gaocegege) - the "Summary and Analysis" section on the design and implementation of kubeflow/katib

## **Feedback**
@@ -359,4 +361,4 @@ You can use these commands to get more information about the experiment

## **License**

- The codebase follows the [MIT License](LICENSE)
+ The codebase follows the [MIT License](LICENSE)
18 changes: 13 additions & 5 deletions docs/en_US/TrialExample/MnistExamples.md
@@ -2,7 +2,8 @@

CNN MNIST classifier for deep learning is similar to `hello world` for programming languages. Thus, we use MNIST as an example to introduce different features of NNI. The examples are listed below:

- - [MNIST with NNI API](#mnist)
+ - [MNIST with NNI API (TensorFlow v1.x)](#mnist-tfv1)
+ - [MNIST with NNI API (TensorFlow v2.x)](#mnist-tfv2)
- [MNIST with NNI annotation](#mnist-annotation)
- [MNIST in keras](#mnist-keras)
- [MNIST -- tuning with batch tuner](#mnist-batch)
@@ -11,12 +12,19 @@ CNN MNIST classifier for deep learning is similar to `hello world` for programming languages
- [distributed MNIST (tensorflow) using kubeflow](#mnist-kubeflow-tf)
- [distributed MNIST (pytorch) using kubeflow](#mnist-kubeflow-pytorch)

<a name="mnist"></a>
**MNIST with NNI API**
<a name="mnist-tfv1"></a>
**MNIST with NNI API (TensorFlow v1.x)**

This is a simple network which has two convolutional layers, two pooling layers and a fully connected layer. We tune hyperparameters such as dropout rate, convolution size, hidden size, etc. It can be tuned with most NNI built-in tuners, such as TPE, SMAC, and Random. We also provide an example YAML file which enables the assessor.

- `code directory: examples/trials/mnist/`
+ `code directory: examples/trials/mnist-tfv1/`
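
Whichever framework version is used, a trial follows the same NNI API pattern. A minimal sketch, with the network construction and training elided behind a placeholder `main` (not the example's real code):

```python
import nni

def main(params):
    # build the network with the given hyperparameters, train it,
    # and return the test accuracy (details elided)
    return 0.0  # placeholder

if __name__ == '__main__':
    params = nni.get_next_parameter()   # hyperparameters proposed by the tuner
    test_acc = main(params)
    nni.report_final_result(test_acc)   # the metric the tuner optimizes
```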

<a name="mnist-tfv2"></a>
**MNIST with NNI API (TensorFlow v2.x)**

Same network to the example above, but written in TensorFlow v2.x Keras API.

`code directory: examples/trials/mnist-tfv2/`
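
For a Keras-style trial, metrics can be reported through a callback. A hypothetical sketch, not necessarily the code in `examples/trials/mnist-tfv2` (the callback name and the `val_accuracy` key are assumptions):

```python
import nni
import tensorflow as tf

class ReportToNNI(tf.keras.callbacks.Callback):
    """Report validation accuracy to NNI per epoch, and the best value at the end."""
    def __init__(self):
        super().__init__()
        self.best = 0.0

    def on_epoch_end(self, epoch, logs=None):
        acc = (logs or {}).get('val_accuracy', 0.0)
        self.best = max(self.best, acc)
        nni.report_intermediate_result(acc)

    def on_train_end(self, logs=None):
        nni.report_final_result(self.best)
```

The callback would then be passed to `model.fit(..., callbacks=[ReportToNNI()])`.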

<a name="mnist-annotation"></a>
**MNIST with NNI annotation**
@@ -65,4 +73,4 @@ This example is to show how to run distributed training on kubeflow through NNI.

Similar to the previous example; the difference is that this example is implemented in PyTorch and therefore uses the Kubeflow PyTorch operator.

- `code directory: examples/trials/mnist-distributed-pytorch/`
+ `code directory: examples/trials/mnist-distributed-pytorch/`
2 changes: 1 addition & 1 deletion docs/en_US/Tutorial/QuickStart.md
@@ -100,7 +100,7 @@ If you want to use NNI to automatically train your model and find the optimal hyperparameters
with tf.Session() as sess:
mnist_network.train(sess, mnist)
test_acc = mnist_network.evaluate(mnist)
- + nni.report_final_result(acc)
+ + nni.report_final_result(test_acc)

if __name__ == '__main__':
- params = {'data_dir': '/tmp/tensorflow/mnist/input_data', 'dropout_rate': 0.5, 'channel_1_num': 32, 'channel_2_num': 64,
2 changes: 1 addition & 1 deletion docs/zh_CN/AdvancedFeature/MultiPhase.md
@@ -79,7 +79,7 @@ trial_end

### Tuners that support multi-phase experiments:

- [TPE](../Tuner/HyperoptTuner.md), [Random](../Tuner/HyperoptTuner.md), [Anneal](../Tuner/HyperoptTuner.md), [Evolution](../Tuner/EvolutionTuner.md), [SMAC](../Tuner/SmacTuner.md), [NetworkMorphism](../Tuner/NetworkmorphismTuner.md), [MetisTuner](../Tuner/MetisTuner.md), [BOHB](../Tuner/BohbAdvisor.md), [Hyperband](../Tuner/HyperbandAdvisor.md), [ENAS Tuner](https://github.com/countif/enas_nni/blob/master/nni/examples/tuners/enas/nni_controller_ptb.py).
+ [TPE](../Tuner/HyperoptTuner.md), [Random](../Tuner/HyperoptTuner.md), [Anneal](../Tuner/HyperoptTuner.md), [Evolution](../Tuner/EvolutionTuner.md), [SMAC](../Tuner/SmacTuner.md), [NetworkMorphism](../Tuner/NetworkmorphismTuner.md), [MetisTuner](../Tuner/MetisTuner.md), [BOHB](../Tuner/BohbAdvisor.md), [Hyperband](../Tuner/HyperbandAdvisor.md).

### Training services that support multi-phase experiments:

10 changes: 10 additions & 0 deletions docs/zh_CN/CommunitySharings/TuningSystems.md
@@ -0,0 +1,10 @@
# Automatically Tune Systems with NNI

As computer systems and networking get increasingly complicated, optimizing them manually with explicit rules and heuristics becomes harder and harder, and sometimes impossible. Below are two examples of tuning systems with NNI; you can follow them to tune your own systems.

* [Tuning RocksDB with NNI](../TrialExample/RocksdbExamples.md)
* [Tuning the parameters of SPTAG (Space Partition Tree And Graph) with NNI](SptagAutoTune.md)

Please refer to the following [paper](https://dl.acm.org/citation.cfm?id=3352031) for details:

Liang, Chieh-Jan Mike, et al. "The Case for Learning-and-System Co-design." ACM SIGOPS Operating Systems Review 53.1 (2019): 68-74.
10 changes: 5 additions & 5 deletions docs/zh_CN/Compressor/AutoCompression.md
@@ -9,13 +9,13 @@
```python
from nni.compression.torch import LevelPruner
config_list = [{ 'sparsity': 0.8, 'op_types': ['default'] }]
- pruner = LevelPruner(config_list)
- pruner(model)
+ pruner = LevelPruner(model, config_list)
+ pruner.compress()
```

The op_type 'default' refers to the module types defined in [default_layers.py](https://github.com/microsoft/nni/blob/master/src/sdk/pynni/nni/compression/torch/default_layers.py).

- Therefore `{ 'sparsity': 0.8, 'op_types': ['default'] }` means that **all layers with the specified op_types will be compressed to a sparsity of 0.8**. When `pruner(model)` is called, the model is compressed with masks. You can then fine-tune the model; **the pruned weights will not be updated** during fine-tuning.
+ Therefore `{ 'sparsity': 0.8, 'op_types': ['default'] }` means that **all layers with the specified op_types will be compressed to a sparsity of 0.8**. When `pruner.compress()` is called, the model is compressed with masks. You can then fine-tune the model; **the pruned weights will not be updated** during fine-tuning.
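
To make the fine-tuning step concrete, here is a minimal sketch against the new API shown in this diff; the model and training loop are placeholders for illustration, not the repository's actual example:

```python
import torch
from nni.compression.torch import LevelPruner

# A stand-in model; the real examples compress convolutional networks.
model = torch.nn.Sequential(torch.nn.Linear(784, 256), torch.nn.ReLU(), torch.nn.Linear(256, 10))

config_list = [{'sparsity': 0.8, 'op_types': ['default']}]
pruner = LevelPruner(model, config_list)
pruner.compress()  # masks are applied to the layers matched by config_list

# Fine-tune as usual; the masked (pruned) weights stay at zero.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
# for data, target in train_loader:  # substitute a real DataLoader here
#     optimizer.zero_grad()
#     loss = torch.nn.functional.cross_entropy(model(data), target)
#     loss.backward()
#     optimizer.step()
```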

## 然后,进行自动化

@@ -84,9 +84,9 @@ config_list_agp = [{'initial_sparsity': 0, 'final_sparsity': conv0_sparsity,
{'initial_sparsity': 0, 'final_sparsity': conv1_sparsity,
'start_epoch': 0, 'end_epoch': 3,
'frequency': 1,'op_name': 'conv1' },]
- PRUNERS = {'level':LevelPruner(config_list_level),'agp':AGP_Pruner(config_list_agp)}
+ PRUNERS = {'level':LevelPruner(model, config_list_level),'agp':AGP_Pruner(model, config_list_agp)}
pruner = PRUNERS[params['prune_method']['_name']]
- pruner(model)
+ pruner.compress()
... # fine tuning
acc = evaluate(model) # evaluation
nni.report_final_result(acc)
51 changes: 51 additions & 0 deletions docs/zh_CN/Compressor/L1FilterPruner.md
@@ -0,0 +1,51 @@
L1FilterPruner in NNI Compressor
===

## 1. Introduction

L1FilterPruner is a general pruning algorithm for pruning filters in convolutional layers.

It was proposed in ['PRUNING FILTERS FOR EFFICIENT CONVNETS'](https://arxiv.org/abs/1608.08710) by Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet and Hans Peter Graf.

![](../../img/l1filter_pruner.png)

> L1Filter Pruner prunes filters in **convolutional layers**
>
> The procedure of pruning m filters from the i-th convolutional layer is as follows:
>
> 1. For each filter ![](http://latex.codecogs.com/gif.latex?F_{i,j}), calculate the sum of its absolute kernel weights ![](http://latex.codecogs.com/gif.latex?s_j=\sum_{l=1}^{n_i}\sum|K_l|)
> 2. Sort the filters by ![](http://latex.codecogs.com/gif.latex?s_j).
> 3. Prune the ![](http://latex.codecogs.com/gif.latex?m) filters with the smallest sum values, together with their corresponding feature maps. The kernels in the next convolutional layer that correspond to the pruned feature maps are also removed.
> 4. Create new kernel matrices for the ![](http://latex.codecogs.com/gif.latex?i)-th and ![](http://latex.codecogs.com/gif.latex?i+1)-th layers, and copy the remaining kernel weights into the new model.
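
To make steps 1-3 concrete, here is a minimal PyTorch sketch of the ranking step. It is illustrative rather than NNI's internal implementation, and it assumes a standard Conv2d weight tensor of shape `(out_channels, in_channels, kH, kW)`:

```python
import torch

def l1_filter_indices_to_prune(conv_weight, m):
    # s_j: sum of absolute kernel weights for each output filter F_{i,j}
    s = conv_weight.abs().sum(dim=(1, 2, 3))
    # indices of the m filters with the smallest sums (steps 2 and 3)
    return torch.argsort(s)[:m]

# Example: rank the filters of a hypothetical 16-filter conv layer, prune 4
weight = torch.randn(16, 3, 3, 3)
print(l1_filter_indices_to_prune(weight, 4))
```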

## 2. Usage

PyTorch code

```python
from nni.compression.torch import L1FilterPruner
config_list = [{ 'sparsity': 0.8, 'op_types': ['Conv2d'], 'op_names': ['conv1', 'conv2'] }]
pruner = L1FilterPruner(model, config_list)
pruner.compress()
```

#### User configuration for L1Filter Pruner

- **sparsity:** the sparsity to compress the specified layers to.
- **op_types:** only Conv2d is supported in L1Filter Pruner.

## 3. Experiment

We implemented one of the experiments in ['PRUNING FILTERS FOR EFFICIENT CONVNETS'](https://arxiv.org/abs/1608.08710), namely **VGG-16-pruned-A**, which prunes **VGG-16** on the CIFAR-10 dataset and removes about $64\%$ of the parameters. Our experiment results are as follows:

| Model           | Error rate (paper/ours) | Parameters | Pruned |
| --------------- | ----------------------- | ---------- | ------ |
| VGG-16          | 6.75/6.49               | 1.5x10^7   |        |
| VGG-16-pruned-A | 6.60/6.47               | 5.4x10^6   | 64.0%  |

The experiment code is available at [examples/model_compress](https://github.com/microsoft/nni/tree/master/examples/model_compress/)





23 changes: 23 additions & 0 deletions docs/zh_CN/Compressor/LotteryTicketHypothesis.md
@@ -0,0 +1,23 @@
The Lottery Ticket Hypothesis
===

## Introduction

[The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks](https://arxiv.org/abs/1803.03635) is mainly a measurement and analysis paper, and it offers very interesting insights. To support this algorithm on NNI, we mainly implement the training approach for finding the *winning tickets*.

In the paper, the authors use the following process, called *iterative* pruning:
> 1. Randomly initialize a neural network f(x;theta_0) (where theta_0 follows D_{theta}).
> 2. Train the network for j iterations, arriving at parameters theta_j.
> 3. Prune p% of the parameters in theta_j, creating a mask m.
> 4. Reset the remaining parameters to their values in theta_0, creating the winning ticket f(x;m*theta_0).
> 5. Repeat steps 2, 3, and 4.

If the configured final sparsity is P (e.g., 0.8) and there are n pruning iterations, each iteration prunes 1-(1-P)^(1/n) of the weights that remain from the previous round.
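
As a quick sanity check of this schedule, a small illustrative sketch (not NNI's implementation) showing that the per-round rate reaches the final sparsity P:

```python
def iterative_prune_schedule(P, n):
    """Prune 1-(1-P)**(1/n) of the remaining weights in each of n rounds."""
    rate = 1 - (1 - P) ** (1 / n)
    remaining = 1.0
    for i in range(1, n + 1):
        remaining *= 1 - rate
        print(f"round {i}: total sparsity = {1 - remaining:.4f}")

iterative_prune_schedule(0.8, 10)  # the last round prints total sparsity = 0.8000
```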

## Reproducing Results

We reproduced the experiment on MNIST using the same configuration as the paper. The implementation code is [here](https://github.com/microsoft/nni/tree/master/examples/model_compress/lottery_torch_mnist_fc.py). In this experiment we pruned 10 times, and after each pruning we trained for 50 epochs.

![](../../img/lottery_ticket_mnist_fc.png)

The figure above shows the result for the fully connected network. `round0-sparsity-0.0` is the performance without pruning. Consistent with the paper, pruning around 80% of the weights also yields performance comparable to no pruning, and convergence is even faster. Pruning too much (e.g., more than 94%) lowers accuracy and slows convergence slightly. Slightly different from our reproduction, the trend in the paper's data is more pronounced.