merge master #289

Merged: 8 commits, Feb 25, 2021
16 changes: 8 additions & 8 deletions CONTRIBUTING.md
@@ -31,7 +31,7 @@ If you cannot find an existing issue that describes your bug or feature, create
# Writing good bug reports or feature requests
File a single issue per problem and feature request. Do not enumerate multiple bugs or feature requests in the same issue.

-Provide as many information as you think might relevant to the context (thinking the issue is assigning to you, what kinds of info you will need to debug it!!!). To give you a general idea about what kinds of info are useful for developers to dig out the issue, we had provided issue template for you.
+Provide as much information as you think might be relevant to the context (imagine the issue is assigned to you: what kinds of information would you need to debug it?). To give you a general idea of what kinds of information are useful for developers digging into an issue, we have provided an issue template for you.

Once you have submitted an issue, be sure to follow it for questions and discussions.

@@ -58,11 +58,11 @@ contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments
After getting familiar with the contribution agreements, you are ready to create your first PR =). Follow the NNI developer tutorials below to get started:

* We recommend that new contributors start with simple issues: ['good first issue'](https://github.com/Microsoft/nni/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) or ['help-wanted'](https://github.com/microsoft/nni/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22).
-* [NNI developer environment installation tutorial](docs/en_US/Tutorial/SetupNniDeveloperEnvironment.md)
-* [How to debug](docs/en_US/Tutorial/HowToDebug.md)
-* If you have any questions on usage, review [FAQ](https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/FAQ.md) first, if there are no relevant issues and answers to your question, try contact NNI dev team and users in [Gitter](https://gitter.im/Microsoft/nni?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) or [File an issue](https://github.com/microsoft/nni/issues/new/choose) on GitHub.
-* [Customize your own Tuner](docs/en_US/Tuner/CustomizeTuner.md)
-* [Implement customized TrainingService](docs/en_US/TrainingService/HowToImplementTrainingService.md)
-* [Implement a new NAS trainer on NNI](docs/en_US/NAS/Advanced.md)
-* [Customize your own Advisor](docs/en_US/Tuner/CustomizeAdvisor.md)
+* [NNI developer environment installation tutorial](docs/en_US/Tutorial/SetupNniDeveloperEnvironment.rst)
+* [How to debug](docs/en_US/Tutorial/HowToDebug.rst)
+* If you have any questions about usage, review the [FAQ](https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/FAQ.rst) first; if no existing issue or answer addresses your question, try contacting the NNI dev team and users in [Gitter](https://gitter.im/Microsoft/nni?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) or [file an issue](https://github.com/microsoft/nni/issues/new/choose) on GitHub.
+* [Customize your own Tuner](docs/en_US/Tuner/CustomizeTuner.rst)
+* [Implement customized TrainingService](docs/en_US/TrainingService/HowToImplementTrainingService.rst)
+* [Implement a new NAS trainer on NNI](docs/en_US/NAS/Advanced.rst)
+* [Customize your own Advisor](docs/en_US/Tuner/CustomizeAdvisor.rst)

1 change: 1 addition & 0 deletions dependencies/required.txt
@@ -13,6 +13,7 @@ scikit-learn >= 0.23.2
websockets
filelock
prettytable
+ipython
dataclasses ; python_version < "3.7"
numpy < 1.19.4 ; sys_platform == "win32"
numpy < 1.20 ; sys_platform != "win32" and python_version < "3.7"
9 changes: 5 additions & 4 deletions docs/en_US/CommunitySharings/community_sharings.rst
@@ -28,12 +28,13 @@

Relevant Articles
=================
+* `Cost-effective Hyper-parameter Tuning using AdaptDL with NNI - Feb 23, 2021 <https://medium.com/casl-project/cost-effective-hyper-parameter-tuning-using-adaptdl-with-nni-e55642888761>`__
+* `(in Chinese) A summary of NNI new capabilities in NNI 2.0 - Jan 21, 2021 <https://www.msra.cn/zh-cn/news/features/nni-2>`__
+* `(in Chinese) A summary of NNI new capabilities in 2019 - Dec 26, 2019 <https://mp.weixin.qq.com/s/7_KRT-rRojQbNuJzkjFMuA>`__
+* `Find thy hyper-parameters for scikit-learn pipelines using Microsoft NNI - Nov 6, 2019 <https://towardsdatascience.com/find-thy-hyper-parameters-for-scikit-learn-pipelines-using-microsoft-nni-f1015b1224c1>`__
+* `(in Chinese) AutoML tools (Advisor, NNI and Google Vizier) comparison - Aug 05, 2019 <http://gaocegege.com/Blog/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0/katib-new#%E6%80%BB%E7%BB%93%E4%B8%8E%E5%88%86%E6%9E%90>`__
* `Hyper Parameter Optimization Comparison <./HpoComparison.rst>`__
* `Neural Architecture Search Comparison <./NasComparison.rst>`__
* `Parallelizing a Sequential Algorithm TPE <./ParallelizingTpeSearch.rst>`__
* `Automatically tuning SVD with NNI <./RecommendersSvd.rst>`__
-* `Automatically tuning SPTAG with NNI <./SptagAutoTune.rst>`__
-* `Find thy hyper-parameters for scikit-learn pipelines using Microsoft NNI <https://towardsdatascience.com/find-thy-hyper-parameters-for-scikit-learn-pipelines-using-microsoft-nni-f1015b1224c1>`__
-* `(in Chinese) AutoML tools (Advisor, NNI and Google Vizier) comparison <http://gaocegege.com/Blog/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0/katib-new#%E6%80%BB%E7%BB%93%E4%B8%8E%E5%88%86%E6%9E%90>`__
-* `(in Chinese) A summary of NNI new capabilities in 2019 <https://mp.weixin.qq.com/s/7_KRT-rRojQbNuJzkjFMuA>`__

115 changes: 110 additions & 5 deletions docs/en_US/Compression/CompressionReference.rst
@@ -1,16 +1,121 @@
-Python API Reference of Compression Utilities
-=============================================
+Model Compression API Reference
+===============================

.. contents::

-Sensitivity Utilities
+Compressors
+-----------

+Compressor
+^^^^^^^^^^

+.. autoclass:: nni.compression.pytorch.compressor.Compressor
+    :members:

+.. autoclass:: nni.compression.pytorch.compressor.Pruner
+    :members:

+.. autoclass:: nni.compression.pytorch.compressor.Quantizer
+    :members:


+Module Wrapper
+^^^^^^^^^^^^^^

+.. autoclass:: nni.compression.pytorch.compressor.PrunerModuleWrapper
+    :members:


+.. autoclass:: nni.compression.pytorch.compressor.QuantizerModuleWrapper
+    :members:

+Weight Masker
+^^^^^^^^^^^^^
+.. autoclass:: nni.algorithms.compression.pytorch.pruning.weight_masker.WeightMasker
+    :members:

+.. autoclass:: nni.algorithms.compression.pytorch.pruning.structured_pruning.StructuredWeightMasker
+    :members:


+Pruners
+^^^^^^^
+.. autoclass:: nni.algorithms.compression.pytorch.pruning.sensitivity_pruner.SensitivityPruner
+    :members:

+.. autoclass:: nni.algorithms.compression.pytorch.pruning.one_shot.OneshotPruner
+    :members:

+.. autoclass:: nni.algorithms.compression.pytorch.pruning.one_shot.LevelPruner
+    :members:

+.. autoclass:: nni.algorithms.compression.pytorch.pruning.one_shot.SlimPruner
+    :members:

+.. autoclass:: nni.algorithms.compression.pytorch.pruning.one_shot.L1FilterPruner
+    :members:

+.. autoclass:: nni.algorithms.compression.pytorch.pruning.one_shot.L2FilterPruner
+    :members:

+.. autoclass:: nni.algorithms.compression.pytorch.pruning.one_shot.FPGMPruner
+    :members:

+.. autoclass:: nni.algorithms.compression.pytorch.pruning.one_shot.TaylorFOWeightFilterPruner
+    :members:

+.. autoclass:: nni.algorithms.compression.pytorch.pruning.one_shot.ActivationAPoZRankFilterPruner
+    :members:

+.. autoclass:: nni.algorithms.compression.pytorch.pruning.one_shot.ActivationMeanRankFilterPruner
+    :members:

+.. autoclass:: nni.algorithms.compression.pytorch.pruning.lottery_ticket.LotteryTicketPruner
+    :members:

+.. autoclass:: nni.algorithms.compression.pytorch.pruning.agp.AGPPruner
+    :members:

+.. autoclass:: nni.algorithms.compression.pytorch.pruning.admm_pruner.ADMMPruner
+    :members:

+.. autoclass:: nni.algorithms.compression.pytorch.pruning.auto_compress_pruner.AutoCompressPruner
+    :members:

+.. autoclass:: nni.algorithms.compression.pytorch.pruning.net_adapt_pruner.NetAdaptPruner
+    :members:

+.. autoclass:: nni.algorithms.compression.pytorch.pruning.simulated_annealing_pruner.SimulatedAnnealingPruner
+    :members:


+Quantizers
+^^^^^^^^^^
+.. autoclass:: nni.algorithms.compression.pytorch.quantization.quantizers.NaiveQuantizer
+    :members:

+.. autoclass:: nni.algorithms.compression.pytorch.quantization.quantizers.QAT_Quantizer
+    :members:

+.. autoclass:: nni.algorithms.compression.pytorch.quantization.quantizers.DoReFaQuantizer
+    :members:

+.. autoclass:: nni.algorithms.compression.pytorch.quantization.quantizers.BNNQuantizer
+    :members:



+Compression Utilities
+---------------------

+Sensitivity Utilities
+^^^^^^^^^^^^^^^^^^^^^

.. autoclass:: nni.compression.pytorch.utils.sensitivity_analysis.SensitivityAnalysis
    :members:

Topology Utilities
-------------------
+^^^^^^^^^^^^^^^^^^

.. autoclass:: nni.compression.pytorch.utils.shape_dependency.ChannelDependency
    :members:
@@ -28,6 +133,6 @@ Topology Utilities
    :members:

Model FLOPs/Parameters Counter
-------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. autofunction:: nni.compression.pytorch.utils.counter.count_flops_params
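As context for the API reference above: the pruners and quantizers it documents share a config-list driven, compress-then-export workflow. Below is a minimal sketch of that workflow using `LevelPruner`; the toy model, the sparsity value, and the file names are placeholders, and exact signatures can vary slightly between NNI releases.

```python
import torch.nn as nn
from nni.algorithms.compression.pytorch.pruning import LevelPruner

# A toy model standing in for the network to be compressed.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Each config entry selects operations and assigns them a target sparsity.
config_list = [{
    'sparsity': 0.5,          # prune 50% of the weights
    'op_types': ['default'],  # apply to all layer types the pruner supports
}]

# The pruner wraps the selected modules so masks are applied on forward passes.
pruner = LevelPruner(model, config_list)
model = pruner.compress()

# ... fine-tune the masked model here ...

# Export the pruned weights and the binary masks for later speedup.
pruner.export_model(model_path='pruned_model.pth', mask_path='mask.pth')
```

Quantizers such as `QAT_Quantizer` follow the same pattern, with `quant_types` and `quant_bits` entries in the config list in place of `sparsity`.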
12 changes: 4 additions & 8 deletions docs/en_US/Compression/Overview.rst
@@ -87,11 +87,6 @@ Quantization algorithms compress the original network by reducing the number of
- Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1. `Reference Paper <https://arxiv.org/abs/1602.02830>`__


-Automatic Model Compression
----------------------------

-Given targeted compression ratio, it is pretty hard to obtain the best compressed ratio in a one shot manner. An automatic model compression algorithm usually need to explore the compression space by compressing different layers with different sparsities. NNI provides such algorithms to free users from specifying sparsity of each layer in a model. Moreover, users could leverage NNI's auto tuning power to automatically compress a model. Detailed document can be found `here <./AutoPruningUsingTuners.rst>`__.

Model Speedup
-------------

@@ -102,10 +97,11 @@ Compression Utilities

Compression utilities include some useful tools for users to understand and analyze the model they want to compress. For example, users can check the sensitivity of each layer to pruning, and can easily calculate the FLOPs and parameter size of a model. Please refer to `here <./CompressionUtils.rst>`__ for a complete list of compression utilities.

-Customize Your Own Compression Algorithms
------------------------------------------
+Advanced Usage
+--------------

+NNI model compression provides a simple interface for users to customize new compression algorithms. The design philosophy of the interface is to let users focus on the compression logic while hiding framework-specific implementation details. Users can learn more about the compression framework and customize new compression algorithms (pruning or quantization algorithms) on top of it; they can also leverage NNI's auto-tuning power to automatically compress a model. Please refer to `here <./advanced.rst>`__ for more details.

-NNI model compression leaves simple interface for users to customize a new compression algorithm. The design philosophy of the interface is making users focus on the compression logic while hiding framework specific implementation details from users. The detailed tutorial for customizing a new compression algorithm (pruning algorithm or quantization algorithm) can be found `here <./Framework.rst>`__.

Reference and Feedback
----------------------
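As a concrete illustration of the `count_flops_params` utility mentioned in the Compression Utilities paragraph of the Overview diff above, here is a minimal sketch. It assumes the three-value return signature; older releases returned only the FLOPs and parameter counts.

```python
import torchvision.models as models
from nni.compression.pytorch.utils.counter import count_flops_params

model = models.resnet18()

# Trace the model with a dummy input shape and accumulate per-layer
# FLOPs and parameter counts.
flops, params, results = count_flops_params(model, (1, 3, 224, 224))
print(f'FLOPs: {flops}, parameters: {params}')
```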
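Finally, the "Advanced Usage" paragraph describes customizing compression algorithms on top of the `WeightMasker` interface documented in the API reference above. The sketch below shows the general shape of such a customization; `MagnitudeMasker` is a hypothetical example, and the stand-in wrapper object only mimics the `module` attribute that NNI's real module wrapper exposes.

```python
import torch
import torch.nn as nn
from types import SimpleNamespace
from nni.algorithms.compression.pytorch.pruning.weight_masker import WeightMasker

class MagnitudeMasker(WeightMasker):
    """Hypothetical masker: zero out the smallest-magnitude weights."""

    def calc_mask(self, sparsity, wrapper, wrapper_idx=None):
        weight = wrapper.module.weight.data
        num_prune = int(weight.numel() * sparsity)
        if num_prune == 0:
            return {'weight_mask': torch.ones_like(weight)}
        # Threshold at the num_prune-th smallest absolute weight value.
        threshold = torch.topk(weight.abs().view(-1), num_prune, largest=False)[0].max()
        return {'weight_mask': torch.gt(weight.abs(), threshold).type_as(weight)}

# Quick smoke test against a stand-in for NNI's module wrapper.
wrapper = SimpleNamespace(module=nn.Linear(8, 4))
masker = MagnitudeMasker(model=None, pruner=None)
mask = masker.calc_mask(sparsity=0.5, wrapper=wrapper)
print(mask['weight_mask'].mean())  # roughly half the entries survive
```

A masker like this is then wired into a pruner following the framework tutorial linked in the diff, which keeps the pruning schedule and the masking logic decoupled.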