

Merge pull request #296 from microsoft/master
merge master
SparkSnail authored May 19, 2021
2 parents 5453841 + 03ff374 commit 09f977e
Showing 49 changed files with 883 additions and 582 deletions.
2 changes: 1 addition & 1 deletion dependencies/recommended.txt
Original file line number Diff line number Diff line change
@@ -6,7 +6,7 @@ torch == 1.6.0+cpu ; sys_platform != "darwin"
torch == 1.6.0 ; sys_platform == "darwin"
torchvision == 0.7.0+cpu ; sys_platform != "darwin"
torchvision == 0.7.0 ; sys_platform == "darwin"
pytorch-lightning >= 1.1.1, < 1.2
pytorch-lightning >= 1.1.1
onnx
peewee
graphviz
2 changes: 2 additions & 0 deletions docs/en_US/Compression/Overview.rst
@@ -87,6 +87,8 @@ Quantization algorithms compress the original network by reducing the number of
- DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients. `Reference Paper <https://arxiv.org/abs/1606.06160>`__
* - `BNN Quantizer <../Compression/Quantizer.rst#bnn-quantizer>`__
- Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1. `Reference Paper <https://arxiv.org/abs/1602.02830>`__
* - `LSQ Quantizer <../Compression/Quantizer.rst#lsq-quantizer>`__
- Learned step size quantization. `Reference Paper <https://arxiv.org/pdf/1902.08153.pdf>`__


Model Speedup
56 changes: 56 additions & 0 deletions docs/en_US/Compression/Quantizer.rst
@@ -8,6 +8,7 @@ Index of supported quantization algorithms
* `QAT Quantizer <#qat-quantizer>`__
* `DoReFa Quantizer <#dorefa-quantizer>`__
* `BNN Quantizer <#bnn-quantizer>`__
* `LSQ Quantizer <#lsq-quantizer>`__

Naive Quantizer
---------------
@@ -86,6 +87,61 @@ note

batch normalization folding is currently not supported.

----

LSQ Quantizer
-------------

In `Learned Step Size Quantization <https://arxiv.org/pdf/1902.08153.pdf>`__\ , Esser et al. propose an algorithm that learns the quantization step sizes (scales) jointly with the other network parameters via gradient descent.

..
The authors introduce a novel means to estimate and scale the task loss gradient at each weight and activation layer’s quantizer step size, such that it can be learned in conjunction with other network parameters.
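
As a minimal sketch, the quantize-dequantize forward step from the paper can be written as below (NumPy, with illustrative names; the learned step-size gradient and the straight-through estimator are omitted, since NNI's quantizer handles those internally):

```python
import numpy as np

def lsq_forward(v, s, num_bits=8, signed=True):
    """Quantize-dequantize tensor v with step size s (LSQ forward pass)."""
    if signed:
        # Integer grid for signed data, e.g. [-128, 127] for 8 bits
        q_n, q_p = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    else:
        q_n, q_p = 0, 2 ** num_bits - 1
    v_bar = np.clip(v / s, q_n, q_p)  # scale by the step size, clip to the grid
    return np.round(v_bar) * s        # round to the grid, then rescale
```

During training, ``s`` itself receives a gradient through the rounding operation via a straight-through estimator, which is what allows the step size to be learned alongside the weights.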


Usage
^^^^^
You can add the code below before your training code. Three things must be done:


1. Configure which layers to quantize and which tensors (input/output/weight) of those layers to quantize.
2. Construct the LSQ quantizer.
3. Call the ``compress`` API.


PyTorch code

.. code-block:: python

   from nni.algorithms.compression.pytorch.quantization import LsqQuantizer

   model = Mnist()
   configure_list = [{
       'quant_types': ['weight', 'input'],
       'quant_bits': {
           'weight': 8,
           'input': 8,
       },
       'op_names': ['conv1']
   }, {
       'quant_types': ['output'],
       'quant_bits': {'output': 8},
       'op_names': ['relu1']
   }]
   quantizer = LsqQuantizer(model, configure_list, optimizer)
   quantizer.compress()
See :githublink:`examples/model_compress/quantization/LSQ_torch_quantizer.py <examples/model_compress/quantization/LSQ_torch_quantizer.py>` for a complete example.

User configuration for LSQ Quantizer
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Common configuration needed by compression algorithms can be found in the `specification of config_list <./QuickStart.rst>`__.

Configuration needed by this algorithm:


----

DoReFa Quantizer
10 changes: 8 additions & 2 deletions docs/en_US/NAS/retiarii/retiarii_index.rst
@@ -2,7 +2,13 @@
Retiarii Overview
#################

`Retiarii <https://www.usenix.org/system/files/osdi20-zhang_quanlu.pdf>`__ is a new framework to support neural architecture search and hyper-parameter tuning. It allows users to express various search space with high flexibility, to reuse many SOTA search algorithms, and to leverage system level optimizations to speed up the search process. This framework provides the following new user experiences.
`Retiarii <https://www.usenix.org/system/files/osdi20-zhang_quanlu.pdf>`__ is a deep learning framework that supports exploratory training on a space of neural network models, rather than on a single neural network model.

Exploratory training with Retiarii allows users to express various search spaces for **Neural Architecture Search** and **Hyper-Parameter Tuning** with high flexibility.

Like NNI's previous NAS and HPO support, the new framework lets users reuse SOTA search algorithms and leverage system-level optimizations to speed up the search process.

Follow the instructions below to start your journey with Retiarii.

.. toctree::
:maxdepth: 2
@@ -12,4 +18,4 @@ Retiarii Overview
One-shot NAS <OneshotTrainer>
Advanced Tutorial <Advanced>
Customize a New Strategy <WriteStrategy>
Retiarii APIs <ApiReference>
Retiarii APIs <ApiReference>
51 changes: 18 additions & 33 deletions docs/en_US/TrainingService/HybridMode.rst
@@ -15,40 +15,25 @@ Use ``examples/trials/mnist-tfv1`` as an example. The NNI config YAML file's con

.. code-block:: yaml
authorName: default
experimentName: example_mnist
experimentName: MNIST
searchSpaceFile: search_space.json
trialCommand: python3 mnist.py
trialCodeDirectory: .
trialConcurrency: 2
maxExecDuration: 1h
maxTrialNum: 10
trainingServicePlatform: hybrid
searchSpacePath: search_space.json
#choice: true, false
useAnnotation: false
trialGpuNumber: 0
maxExperimentDuration: 24h
maxTrialNumber: 100
tuner:
builtinTunerName: TPE
name: TPE
classArgs:
#choice: maximize, minimize
optimize_mode: maximize
trial:
command: python3 mnist.py
codeDir: .
gpuNum: 1
hybridConfig:
trainingServicePlatforms:
- local
- remote
remoteConfig:
reuse: true
machineList:
- ip: 10.1.1.1
username: bob
passwd: bob123
Configurations for hybrid mode:

hybridConfig:

* trainingServicePlatforms. Required key. This field specifies the platforms used in hybrid mode, with the values given in YAML list format. NNI supports setting ``local``, ``remote``, ``aml``, and ``pai`` in this field.


.. Note:: When a platform is set in trainingServicePlatforms, users should also set the corresponding configuration for that platform. For example, if ``remote`` is one of the platforms, the ``machineList`` and ``remoteConfig`` configurations should also be set.
trainingService:
- platform: remote
machineList:
- host: 127.0.0.1
user: bob
password: bob
- platform: local
To use hybrid training services, users should set the training service configurations as a list in the ``trainingService`` field.
Currently, hybrid mode supports the ``local``, ``remote``, ``pai``, and ``aml`` training services.
92 changes: 0 additions & 92 deletions docs/en_US/Tutorial/Nnictl.rst
@@ -28,7 +28,6 @@ nnictl support commands:
* `nnictl config <#config>`__
* `nnictl log <#log>`__
* `nnictl webui <#webui>`__
* `nnictl tensorboard <#tensorboard>`__
* `nnictl algo <#algo>`__
* `nnictl ss_gen <#ss_gen>`__
* `nnictl --version <#version>`__
@@ -1311,97 +1310,6 @@ Manage webui
- Experiment ID


:raw-html:`<a name="tensorboard"></a>`

Manage tensorboard
^^^^^^^^^^^^^^^^^^


*
**nnictl tensorboard start**


*
Description

Start the tensorboard process.

*
Usage

.. code-block:: bash

   nnictl tensorboard start
*
Options

.. list-table::
:header-rows: 1
:widths: auto

* - Name, shorthand
- Required
- Default
- Description
* - id
- False
-
- ID of the experiment you want to set
* - --trial_id, -T
- False
-
- ID of the trial
* - --port
- False
- 6006
- The port of the tensorboard process



*
Detail


#. NNICTL supports the tensorboard function on local and remote platforms for the moment; other platforms will be supported later.
#. If you want to use tensorboard, you need to write your tensorboard log data to the path in the environment variable [NNI_OUTPUT_DIR].
#. In local mode, nnictl will set --logdir=[NNI_OUTPUT_DIR] directly and start a tensorboard process.
#. In remote mode, nnictl will first create an ssh client to copy log data from the remote machine to a local temp directory, and then start a tensorboard process on your local machine. Note that nnictl copies the log data only once, when you run the command; to see later tensorboard results, you should execute the nnictl tensorboard command again.
#. If there is only one trial job, you don't need to set the trial id. If multiple trial jobs are running, you should set the trial id, or use [nnictl tensorboard start --trial_id all] to map --logdir to all trial log paths.


*
**nnictl tensorboard stop**


*
Description

Stop all of the tensorboard processes.

*
Usage

.. code-block:: bash

   nnictl tensorboard stop
*
Options

.. list-table::
:header-rows: 1
:widths: auto

* - Name, shorthand
- Required
- Default
- Description
* - id
- False
-
- ID of the experiment you want to set


:raw-html:`<a name="algo"></a>`

4 changes: 1 addition & 3 deletions docs/en_US/builtin_tuner.rst
@@ -10,9 +10,7 @@ Tuner receives metrics from `Trial` to evaluate the performance of a specific pa
:maxdepth: 1

Overview <Tuner/BuiltinTuner>
TPE <Tuner/HyperoptTuner>
Random Search <Tuner/HyperoptTuner>
Anneal <Tuner/HyperoptTuner>
TPE / Random Search / Anneal <Tuner/HyperoptTuner>
Naive Evolution <Tuner/EvolutionTuner>
SMAC <Tuner/SmacTuner>
Metis Tuner <Tuner/MetisTuner>
2 changes: 1 addition & 1 deletion docs/en_US/conf.py
@@ -201,4 +201,4 @@

# -- Extension configuration -------------------------------------------------
def setup(app):
app.add_stylesheet('css/custom.css')
app.add_css_file('css/custom.css')
2 changes: 1 addition & 1 deletion docs/requirements.txt
@@ -1,4 +1,4 @@
sphinx>=3.3.1
sphinx>=4.0
sphinx-argparse
sphinx-rtd-theme
sphinxcontrib-websupport
