merge master #296

Merged · 22 commits · May 19, 2021

Commits
3a1416c
Pin yargs<17.0 (#3608)
ultmaster May 8, 2021
9b845fc
Fix Hybrid mode doc (#3600)
SparkSnail May 10, 2021
75d73d3
Bump lodash from 4.17.20 to 4.17.21 in /ts/nni_manager (#3618)
dependabot[bot] May 10, 2021
78d37a3
Bump hosted-git-info from 2.7.1 to 2.8.9 in /ts/nni_manager (#3616)
dependabot[bot] May 10, 2021
2ca227f
Support sphinx v4 (#3622)
ultmaster May 10, 2021
b7f374c
delete tensorboard on nnictl (#3613)
SparkSnail May 10, 2021
051ed9e
Unpin `pytorch_lightning<1.2` (#3598)
ultmaster May 10, 2021
246450b
Bump hosted-git-info from 2.8.8 to 2.8.9 in /ts/webui (#3619)
dependabot[bot] May 11, 2021
48c9c97
Bump url-parse from 1.4.7 to 1.5.1 in /ts/webui (#3617)
dependabot[bot] May 11, 2021
0513330
fix retiarii config experiment_working_directory (#3607)
J-shang May 11, 2021
6098314
Support aml pipeline (#3477)
SparkSnail May 11, 2021
290558c
Improve NNI manager logging (#3624)
liuzhe-lz May 12, 2021
aef9028
Bump ssri from 6.0.1 to 6.0.2 in /ts/webui (#3601)
dependabot[bot] May 12, 2021
e878e72
Bump ssri from 6.0.1 to 6.0.2 in /ts/nni_manager (#3602)
dependabot[bot] May 12, 2021
85cff74
Fix on log and metric issue on Windows when using pytorch with multit…
Ivanfangsc May 12, 2021
dddf0b9
Update retiarii_index.rst (#3639)
scarlett2018 May 14, 2021
7add1c6
modify the comments of supporting resnet in channel_pruning_env.py (#…
ichejun May 17, 2021
e5fb9c5
Make selected trials consistent across auto-refresh in detail table (…
Lijiaoa May 17, 2021
761732a
fix syntax error on windows (#3634)
acured May 17, 2021
af929fd
Add LSQ quantizer (#3503)
chenbohua3 May 18, 2021
797b963
[webui] add basename in router (#3625)
Lijiaoa May 19, 2021
03ff374
fix the bug in line no.336 in mask_conflict.py (#3629)
Davidxswang May 19, 2021
2 changes: 1 addition & 1 deletion dependencies/recommended.txt
@@ -6,7 +6,7 @@ torch == 1.6.0+cpu ; sys_platform != "darwin"
torch == 1.6.0 ; sys_platform == "darwin"
torchvision == 0.7.0+cpu ; sys_platform != "darwin"
torchvision == 0.7.0 ; sys_platform == "darwin"
-pytorch-lightning >= 1.1.1, < 1.2
+pytorch-lightning >= 1.1.1
onnx
peewee
graphviz
2 changes: 2 additions & 0 deletions docs/en_US/Compression/Overview.rst
@@ -87,6 +87,8 @@ Quantization algorithms compress the original network by reducing the number of
     - DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients. `Reference Paper <https://arxiv.org/abs/1606.06160>`__
   * - `BNN Quantizer <../Compression/Quantizer.rst#bnn-quantizer>`__
     - Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1. `Reference Paper <https://arxiv.org/abs/1602.02830>`__
   * - `LSQ Quantizer <../Compression/Quantizer.rst#lsq-quantizer>`__
     - Learned step size quantization. `Reference Paper <https://arxiv.org/pdf/1902.08153.pdf>`__


Model Speedup
56 changes: 56 additions & 0 deletions docs/en_US/Compression/Quantizer.rst
@@ -8,6 +8,7 @@ Index of supported quantization algorithms
* `QAT Quantizer <#qat-quantizer>`__
* `DoReFa Quantizer <#dorefa-quantizer>`__
* `BNN Quantizer <#bnn-quantizer>`__
* `LSQ Quantizer <#lsq-quantizer>`__

Naive Quantizer
---------------
@@ -86,6 +87,61 @@ note

batch normalization folding is currently not supported.

----

LSQ Quantizer
-------------

In `LEARNED STEP SIZE QUANTIZATION <https://arxiv.org/pdf/1902.08153.pdf>`__\ , Steven K. Esser et al. present an algorithm to train the quantization step sizes (scales) with gradients.

..
The authors introduce a novel means to estimate and scale the task loss gradient at each weight and activation layer’s quantizer step size, such that it can be learned in conjunction with other network parameters.


Usage
^^^^^
Add the code below before your training code. Three things must be done:


1. Configure which layers to quantize and which tensors (input/output/weight) of those layers to quantize.
2. Construct the LSQ quantizer.
3. Call the ``compress`` API.


PyTorch code

.. code-block:: python

   from nni.algorithms.compression.pytorch.quantization import LsqQuantizer

   model = Mnist()
   configure_list = [{
       'quant_types': ['weight', 'input'],
       'quant_bits': {
           'weight': 8,
           'input': 8,
       },
       'op_names': ['conv1']
   }, {
       'quant_types': ['output'],
       'quant_bits': {'output': 8},
       'op_names': ['relu1']
   }]
   quantizer = LsqQuantizer(model, configure_list, optimizer)
   quantizer.compress()

See the full example for more information: :githublink:`examples/model_compress/quantization/LSQ_torch_quantizer.py <examples/model_compress/quantization/LSQ_torch_quantizer.py>`

User configuration for LSQ Quantizer
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Common configuration needed by compression algorithms can be found in the `specification of config_list <./QuickStart.rst>`__.

Configuration needed by this algorithm:


----

DoReFa Quantizer
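A usage note on the LSQ section added above: the quantizer turns each step size into an ordinary learnable parameter, so after ``compress()`` the model is trained with a regular PyTorch loop and no extra calls. The sketch below continues the snippet above under stated assumptions: the random tensors stand in for real MNIST data, and the model is assumed to output log-probabilities, as the example Mnist network does.

.. code-block:: python

   import torch
   import torch.nn.functional as F
   from torch.utils.data import DataLoader, TensorDataset

   # Placeholder data shaped like MNIST; substitute a real dataset.
   dataset = TensorDataset(torch.randn(64, 1, 28, 28),
                           torch.randint(0, 10, (64,)))
   loader = DataLoader(dataset, batch_size=32)

   for epoch in range(2):
       for x, y in loader:
           optimizer.zero_grad()
           loss = F.nll_loss(model(x), y)
           loss.backward()
           # This step updates the learned quantization step sizes
           # together with the network weights, which is the core of LSQ.
           optimizer.step()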
10 changes: 8 additions & 2 deletions docs/en_US/NAS/retiarii/retiarii_index.rst
@@ -2,7 +2,13 @@
Retiarii Overview
#################

-`Retiarii <https://www.usenix.org/system/files/osdi20-zhang_quanlu.pdf>`__ is a new framework to support neural architecture search and hyper-parameter tuning. It allows users to express various search space with high flexibility, to reuse many SOTA search algorithms, and to leverage system level optimizations to speed up the search process. This framework provides the following new user experiences.
+`Retiarii <https://www.usenix.org/system/files/osdi20-zhang_quanlu.pdf>`__ is a deep learning framework that supports exploratory training on a neural network model space, rather than on a single neural network model.
+
+Exploratory training with Retiarii allows users to express various search spaces for **Neural Architecture Search** and **Hyper-Parameter Tuning** with high flexibility.
+
+Like the previous NAS and HPO support, the new framework lets users reuse SOTA search algorithms and leverage system-level optimizations to speed up the search process.
+
+Follow the instructions below to start your journey with Retiarii.

.. toctree::
   :maxdepth: 2
@@ -12,4 +18,4 @@ Retiarii Overview
   One-shot NAS <OneshotTrainer>
   Advanced Tutorial <Advanced>
   Customize a New Strategy <WriteStrategy>
   Retiarii APIs <ApiReference>
51 changes: 18 additions & 33 deletions docs/en_US/TrainingService/HybridMode.rst
@@ -15,40 +15,25 @@ Use ``examples/trials/mnist-tfv1`` as an example. The NNI config YAML file's con

.. code-block:: yaml
-   authorName: default
-   experimentName: example_mnist
+   experimentName: MNIST
+   searchSpaceFile: search_space.json
+   trialCommand: python3 mnist.py
+   trialCodeDirectory: .
    trialConcurrency: 2
-   maxExecDuration: 1h
-   maxTrialNum: 10
-   trainingServicePlatform: hybrid
-   searchSpacePath: search_space.json
-   #choice: true, false
-   useAnnotation: false
+   trialGpuNumber: 0
+   maxExperimentDuration: 24h
+   maxTrialNumber: 100
    tuner:
-     builtinTunerName: TPE
+     name: TPE
      classArgs:
-       #choice: maximize, minimize
        optimize_mode: maximize
-   trial:
-     command: python3 mnist.py
-     codeDir: .
-     gpuNum: 1
-   hybridConfig:
-     trainingServicePlatforms:
-       - local
-       - remote
-   remoteConfig:
-     reuse: true
-   machineList:
-     - ip: 10.1.1.1
-       username: bob
-       passwd: bob123
-
-Configurations for hybrid mode:
-
-hybridConfig:
-
-* trainingServicePlatforms. required key. This field specify the platforms used in hybrid mode, the values using yaml list format. NNI support setting ``local``, ``remote``, ``aml``, ``pai`` in this field.
-
-.. Note:: If setting a platform in trainingServicePlatforms mode, users should also set the corresponding configuration for the platform. For example, if set ``remote`` as one of the platform, should also set ``machineList`` and ``remoteConfig`` configuration.
+   trainingService:
+     - platform: remote
+       machineList:
+         - host: 127.0.0.1
+           user: bob
+           password: bob
+     - platform: local
+
+To use hybrid training services, set the training service configurations as a list in the ``trainingService`` field.
+Currently, hybrid mode supports the ``local``, ``remote``, ``pai``, and ``aml`` training services.
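To sketch how each list entry carries its own platform-specific settings, the hypothetical configuration below gives the ``remote`` entry two machines, one authenticated by password and one by SSH key file. The hosts and credentials are placeholders, and the exact field names should be checked against the experiment configuration reference for your NNI version.

.. code-block:: yaml

   trainingService:
     - platform: remote
       machineList:
         - host: 10.1.1.1
           user: bob
           password: bob
         - host: 10.1.1.2
           user: alice
           sshKeyFile: ~/.ssh/id_rsa
     - platform: local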
92 changes: 0 additions & 92 deletions docs/en_US/Tutorial/Nnictl.rst
@@ -28,7 +28,6 @@ nnictl support commands:
* `nnictl config <#config>`__
* `nnictl log <#log>`__
* `nnictl webui <#webui>`__
-* `nnictl tensorboard <#tensorboard>`__
* `nnictl algo <#algo>`__
* `nnictl ss_gen <#ss_gen>`__
* `nnictl --version <#version>`__
@@ -1311,97 +1310,6 @@ Manage webui
- Experiment ID


:raw-html:`<a name="tensorboard"></a>`

Manage tensorboard
^^^^^^^^^^^^^^^^^^


*
**nnictl tensorboard start**


*
Description

Start the tensorboard process.

*
Usage

.. code-block:: bash

nnictl tensorboard start

*
Options

.. list-table::
:header-rows: 1
:widths: auto

* - Name, shorthand
- Required
- Default
- Description
* - id
- False
-
- ID of the experiment you want to set
* - --trial_id, -T
- False
-
- ID of the trial
* - --port
- False
- 6006
- The port of the tensorboard process



*
Detail


#. NNICTL support tensorboard function in local and remote platform for the moment, other platforms will be supported later.
#. If you want to use tensorboard, you need to write your tensorboard log data to environment variable [NNI_OUTPUT_DIR] path.
#. In local mode, nnictl will set --logdir=[NNI_OUTPUT_DIR] directly and start a tensorboard process.
#. In remote mode, nnictl will create a ssh client to copy log data from remote machine to local temp directory firstly, and then start a tensorboard process in your local machine. You need to notice that nnictl only copy the log data one time when you use the command, if you want to see the later result of tensorboard, you should execute nnictl tensorboard command again.
#. If there is only one trial job, you don't need to set trial id. If there are multiple trial jobs running, you should set the trial id, or you could use [nnictl tensorboard start --trial_id all] to map --logdir to all trial log paths.


*
**nnictl tensorboard stop**


*
Description

Stop all of the tensorboard process.

*
Usage

.. code-block:: bash

nnictl tensorboard stop

*
Options

.. list-table::
:header-rows: 1
:widths: auto

* - Name, shorthand
- Required
- Default
- Description
* - id
- False
-
- ID of the experiment you want to set


:raw-html:`<a name="algo"></a>`

4 changes: 1 addition & 3 deletions docs/en_US/builtin_tuner.rst
@@ -10,9 +10,7 @@ Tuner receives metrics from `Trial` to evaluate the performance of a specific pa
   :maxdepth: 1

   Overview <Tuner/BuiltinTuner>
-   TPE <Tuner/HyperoptTuner>
-   Random Search <Tuner/HyperoptTuner>
-   Anneal <Tuner/HyperoptTuner>
+   TPE / Random Search / Anneal <Tuner/HyperoptTuner>
   Naive Evolution <Tuner/EvolutionTuner>
   SMAC <Tuner/SmacTuner>
   Metis Tuner <Tuner/MetisTuner>
Expand Down
2 changes: 1 addition & 1 deletion docs/en_US/conf.py
@@ -201,4 +201,4 @@

# -- Extension configuration -------------------------------------------------
def setup(app):
-    app.add_stylesheet('css/custom.css')
+    app.add_css_file('css/custom.css')
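Background for this one-line change: ``add_stylesheet`` was a deprecated alias that Sphinx 4 removed, while its replacement ``add_css_file`` has existed since Sphinx 1.8, so the rename is required once ``docs/requirements.txt`` moves to ``sphinx>=4.0``. A project that still had to support pre-1.8 Sphinx could guard the call; this is a sketch, not part of the PR:

.. code-block:: python

   def setup(app):
       # add_css_file() exists since Sphinx 1.8; add_stylesheet() was
       # removed in Sphinx 4, so prefer the new API when available.
       if hasattr(app, 'add_css_file'):
           app.add_css_file('css/custom.css')
       else:
           app.add_stylesheet('css/custom.css')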
2 changes: 1 addition & 1 deletion docs/requirements.txt
@@ -1,4 +1,4 @@
-sphinx>=3.3.1
+sphinx>=4.0
sphinx-argparse
sphinx-rtd-theme
sphinxcontrib-websupport