Merge pull request #25 from microsoft/master
pull code
chicm-ms authored Jul 10, 2019
2 parents ea5f58f + eb5afd7 commit c0ffc18
Showing 23 changed files with 242 additions and 154 deletions.
2 changes: 1 addition & 1 deletion docs/en_US/BohbAdvisor.md
@@ -51,7 +51,7 @@ nnictl package install --name=BOHB

To use BOHB, you should add the following spec in your experiment's YAML config file:

-```yml
+```yaml
advisor:
  builtinAdvisorName: BOHB
  classArgs:
4 changes: 2 additions & 2 deletions docs/en_US/BuiltinTuner.md
@@ -263,7 +263,7 @@ advisor:

**Installation**

-NetworkMorphism requires [PyTorch](https://pytorch.org/get-started/locally), so users should install it first.
+NetworkMorphism requires [PyTorch](https://pytorch.org/get-started/locally) and [Keras](https://keras.io/#installation), so users should install them first. The corresponding requirements file is [here](https://github.com/microsoft/nni/blob/master/examples/trials/network_morphism/requirements.txt).

**Suggested scenario**

@@ -356,7 +356,7 @@ Similar to Hyperband, it is suggested when you have limited computation resource

**Usage example**

-```yml
+```yaml
advisor:
  builtinAdvisorName: BOHB
  classArgs:
2 changes: 1 addition & 1 deletion docs/en_US/CommunitySharings/HpoComparision.md
@@ -34,7 +34,7 @@ is running in docker?: no

### Problem Description

-A nonconvex problem on the hyper-parameter search of the [AutoGBDT](../gbdt_example.md) example.
+A nonconvex problem on the hyper-parameter search of the [AutoGBDT](../GbdtExample.md) example.

### Search Space

8 changes: 4 additions & 4 deletions docs/en_US/GeneralNasInterfaces.md
@@ -14,7 +14,7 @@ To facilitate NAS innovations (e.g., design/implement new NAS models, compare di

### Example: choose an operator for a layer

-When designing the following model, there might be several choices in the fourth layer that may make this model perform good. In the script of this model, we can use annotation for the fourth layer as shown in the figure. In this annotation, there are five fields in total:
+When designing the following model, there might be several choices in the fourth layer that may make this model perform well. In the script of this model, we can use annotation for the fourth layer as shown in the figure. In this annotation, there are five fields in total:

![](../img/example_layerchoice.png)

Expand Down Expand Up @@ -50,7 +50,7 @@ To illustrate the convenience of the programming interface, we use the interface

After finishing the trial code through the annotation above, users have implicitly specified the search space of neural architectures in the code. Based on the code, NNI will automatically generate a search space file which could be fed into tuning algorithms. This search space file uses the following JSON format.

-```json
+```javascript
{
  "mutable_1": {
    "layer_1": {
@@ -67,7 +67,7 @@

Accordingly, a specified neural architecture (generated by a tuning algorithm) is expressed as follows:

-```json
+```javascript
{
  "mutable_1": {
    "layer_1": {
@@ -111,7 +111,7 @@ Example of weight sharing on NNI.

One-Shot NAS is a popular approach to finding a good neural architecture within a limited time and resource budget. Basically, it builds a full graph based on the search space and uses gradient descent to eventually find the best subgraph. There are different training approaches, such as [training subgraphs (per mini-batch)][1], [training the full graph through dropout][6], and [training with architecture weights (regularization)][3]. Here we focus on the first approach, i.e., training subgraphs (ENAS).

-With the same annotated trial code, users could choose One-Shot NAS as the execution mode on NNI. Specifically, the compiled trial code builds the full graph (rather than the subgraph demonstrated above); it receives a chosen architecture, trains this architecture on the full graph for a mini-batch, then requests another chosen architecture. This is supported by [NNI multi-phase](./multiPhase.md). We support this training approach because training a subgraph is very fast, while building the graph every time a subgraph is trained induces too much overhead.
+With the same annotated trial code, users could choose One-Shot NAS as the execution mode on NNI. Specifically, the compiled trial code builds the full graph (rather than the subgraph demonstrated above); it receives a chosen architecture, trains this architecture on the full graph for a mini-batch, then requests another chosen architecture. This is supported by [NNI multi-phase](./MultiPhase.md). We support this training approach because training a subgraph is very fast, while building the graph every time a subgraph is trained induces too much overhead.

![](../img/one-shot_training.png)

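For readers of this hunk: a minimal Python sketch of how a trial might dispatch on such a chosen-architecture dict. The `chosen_layer`/`chosen_inputs` keys follow the JSON shown above; the candidate op names are illustrative assumptions, not taken from this commit.

```python
# Sketch: apply a tuner-chosen layer spec from the chosen-architecture JSON.
# The op names below are assumptions for illustration; the real candidates
# come from the annotated trial code and the generated search space file.
chosen_arch = {
    "mutable_1": {
        "layer_1": {"chosen_layer": "conv2d_3x3", "chosen_inputs": ["image"]}
    }
}

def apply_layer(spec, tensors):
    """Look up the chosen operator by name and apply it to the chosen inputs."""
    ops = {
        "conv2d_3x3": lambda xs: f"conv2d_3x3({', '.join(xs)})",
        "maxpool_2x2": lambda xs: f"maxpool_2x2({', '.join(xs)})",
        "identity": lambda xs: xs[0],
    }
    inputs = [tensors[name] for name in spec["chosen_inputs"]]
    return ops[spec["chosen_layer"]](inputs)

print(apply_layer(chosen_arch["mutable_1"]["layer_1"], {"image": "image"}))
# -> conv2d_3x3(image)
```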
2 changes: 1 addition & 1 deletion docs/en_US/HowToImplementTrainingService.md
@@ -7,7 +7,7 @@ TrainingService is a module related to platform management and job schedule in N
## System architecture
![](../img/NNIDesign.jpg)

-The brief system architecture of NNI is shown in the picture. NNIManager is the core management module of the system, in charge of calling TrainingService to manage trial jobs and of the communication between different modules. Dispatcher is a message processing center responsible for message dispatch. TrainingService is a module to manage trial jobs; it communicates with the NNIManager module, and has different instances for different training platforms. For the time being, NNI supports the local platform, [remote platform](RemoteMachineMode.md), [PAI platform](PaiMode.md), [Kubeflow platform](KubeflowMode.md) and [FrameworkController platform](FrameworkController.md).
+The brief system architecture of NNI is shown in the picture. NNIManager is the core management module of the system, in charge of calling TrainingService to manage trial jobs and of the communication between different modules. Dispatcher is a message processing center responsible for message dispatch. TrainingService is a module to manage trial jobs; it communicates with the NNIManager module, and has different instances for different training platforms. For the time being, NNI supports the local platform, [remote platform](RemoteMachineMode.md), [PAI platform](PaiMode.md), [Kubeflow platform](KubeflowMode.md) and [FrameworkController platform](FrameworkControllerMode.md).

In this document, we introduce the brief design of TrainingService. If users want to add a new TrainingService instance, they just need to implement a child class of TrainingService; they don't need to understand the code details of NNIManager, Dispatcher, or other modules.

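As a language-neutral sketch of the "child class" idea described in this hunk (NNI's real TrainingService is a TypeScript class in `src/nni_manager`; the method names below are illustrative assumptions, not the actual interface):

```python
from abc import ABC, abstractmethod

class TrainingService(ABC):
    """Illustrative stand-in for the real (TypeScript) interface."""

    @abstractmethod
    def submit_trial_job(self, trial_config):
        """Start one trial job on the target platform."""

    @abstractmethod
    def cancel_trial_job(self, trial_job_id):
        """Stop a running trial job."""

class MyPlatformTrainingService(TrainingService):
    # A new platform only needs to fill in these hooks; it never touches
    # NNIManager or Dispatcher internals.
    def submit_trial_job(self, trial_config):
        print(f"submit {trial_config} to my platform")

    def cancel_trial_job(self, trial_job_id):
        print(f"cancel {trial_job_id}")
```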
2 changes: 1 addition & 1 deletion docs/en_US/Nnictl.md
@@ -463,7 +463,7 @@ Debug mode will disable version check function in Trialkeeper.

Currently, the following tuners and advisors support importing data:

-```yml
+```yaml
builtinTunerName: TPE, Anneal, GridSearch, MetisTuner
builtinAdvisorName: BOHB
```
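For context on the hunk above: a sketch of preparing previously tried results for `nnictl experiment import`. The record shape (`parameter`/`value` pairs) is an assumption for illustration and should be checked against the NNI docs; it is not taken from this diff.

```python
import json

# Each record pairs a tried hyperparameter set with the metric it achieved.
records = [
    {"parameter": {"lr": 0.1, "batch_size": 32}, "value": 0.93},
    {"parameter": {"lr": 0.01, "batch_size": 64}, "value": 0.89},
]
with open("imported_data.json", "w") as f:
    json.dump(records, f)
```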
2 changes: 1 addition & 1 deletion docs/en_US/Overview.md
@@ -51,7 +51,7 @@ More details about how to run an experiment, please refer to [Get Started](Quick
* [How to adapt your trial code on NNI?](Trials.md)
* [What are tuners supported by NNI?](BuiltinTuner.md)
* [How to customize your own tuner?](CustomizeTuner.md)
-* [What are assessors supported by NNI?](BuiltinAssessors.md)
+* [What are assessors supported by NNI?](BuiltinAssessor.md)
* [How to customize your own assessor?](CustomizeAssessor.md)
* [How to run an experiment on local?](LocalMode.md)
* [How to run an experiment on multiple machines?](RemoteMachineMode.md)
4 changes: 2 additions & 2 deletions docs/en_US/QuickStart.md
@@ -53,7 +53,7 @@ The above code can only try one set of parameters at a time, if we want to tune

NNI is designed to help users do tuning jobs; the NNI working process is presented below:

-```pseudo
+```
input: search space, trial code, config file
output: one optimal hyperparameter configuration
@@ -240,7 +240,7 @@ Below is the status of all the trials. Specifically:
## Related Topic

* [Try different Tuners](BuiltinTuner.md)
-* [Try different Assessors](BuiltinAssessors.md)
+* [Try different Assessors](BuiltinAssessor.md)
* [How to use command line tool nnictl](Nnictl.md)
* [How to write a trial](Trials.md)
* [How to run an experiment on local (with multiple GPUs)?](LocalMode.md)
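The pseudocode in the hunk above describes NNI's tune loop. The same loop as a runnable, self-contained Python sketch — random search stands in for a real tuner, and the toy objective is an assumption for illustration:

```python
import random

def run_trial(params):
    """Stand-in for user trial code: return the metric for one parameter set."""
    return -(params["lr"] - 0.1) ** 2  # toy objective, best at lr == 0.1

def generate_parameters():
    """Stand-in for a tuner proposing the next configuration."""
    return {"lr": random.uniform(0.001, 1.0)}

best_params, best_metric = None, float("-inf")
for _ in range(20):  # the 'max trial number' / 'max duration' budget
    params = generate_parameters()
    metric = run_trial(params)
    if metric > best_metric:  # a real tuner learns from results;
        best_params, best_metric = params, metric  # random search keeps the best
print("optimal hyperparameter configuration:", best_params)
```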
10 changes: 5 additions & 5 deletions docs/en_US/Release.md
@@ -12,12 +12,12 @@
* Added multiphase capability for the following builtin tuners:
* TPE, Random Search, Anneal, Naïve Evolution, SMAC, Network Morphism, Metis Tuner.

-For details, please refer to [Write a tuner that leverages multi-phase](./MultiPhase.md#write-a-tuner-that-leverages-multi-phase)
+For details, please refer to [Write a tuner that leverages multi-phase](./MultiPhase.md)

* Web Portal
-* Enable trial comparison in the Web Portal. For details, refer to [View trials status](WebUI.md#view-trials-status)
-* Allow users to adjust the rendering interval of the Web Portal. For details, refer to [View Summary Page](WebUI.md#view-summary-page)
-* Show intermediate results in a friendlier way. For details, refer to [View trials status](WebUI.md#view-trials-status)
+* Enable trial comparison in the Web Portal. For details, refer to [View trials status](WebUI.md)
+* Allow users to adjust the rendering interval of the Web Portal. For details, refer to [View Summary Page](WebUI.md)
+* Show intermediate results in a friendlier way. For details, refer to [View trials status](WebUI.md)
* [Commandline Interface](Nnictl.md)
* `nnictl experiment delete`: delete one or all experiments, including logs, results, environment information and cache. Use it to delete useless experiment results or to save disk space.
* `nnictl platform clean`: use it to clean up disk space on a target platform. The provided YAML file includes the information of the target platform, and it follows the same schema as the NNI configuration file.
@@ -68,7 +68,7 @@

### Major Features

-* [Support NNI on Windows](./WindowsLocalMode.md)
+* [Support NNI on Windows](./NniOnWindows.md)
* NNI running on Windows for local mode
* [New advisor: BOHB](./BohbAdvisor.md)
* Support a new advisor BOHB, a robust and efficient hyperparameter tuning algorithm that combines the advantages of Bayesian optimization and Hyperband
2 changes: 1 addition & 1 deletion docs/en_US/Trials.md
@@ -44,7 +44,7 @@ RECEIVED_PARAMS = nni.get_next_parameter()
nni.report_intermediate_result(metrics)
```

-`metrics` can be any Python object. If users use an NNI built-in tuner/assessor, `metrics` can only have two formats: 1) a number, e.g., float or int; 2) a dict object that has a key named `default` whose value is a number. This `metrics` is reported to the [assessor](BuiltinAssessors.md). Usually, `metrics` is a periodically evaluated loss or accuracy.
+`metrics` can be any Python object. If users use an NNI built-in tuner/assessor, `metrics` can only have two formats: 1) a number, e.g., float or int; 2) a dict object that has a key named `default` whose value is a number. This `metrics` is reported to the [assessor](BuiltinAssessor.md). Usually, `metrics` is a periodically evaluated loss or accuracy.

- Report performance of the configuration

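A minimal sketch of the two accepted `metrics` formats described in this hunk. The `nni` calls are the documented API; the training function is a dummy stand-in:

```python
import nni

def train_one_epoch(params, epoch):
    """Dummy stand-in for real training; returns a fake accuracy."""
    return min(0.5 + 0.05 * epoch, 0.99)

params = nni.get_next_parameter()
for epoch in range(10):
    acc = train_one_epoch(params, epoch)
    # Format 1: a bare number
    nni.report_intermediate_result(acc)
    # Format 2: a dict with a 'default' key whose value is a number
    # nni.report_intermediate_result({"default": acc, "loss": 1.0 - acc})
nni.report_final_result(acc)
```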
4 changes: 2 additions & 2 deletions docs/en_US/community_sharings.rst
@@ -8,5 +8,5 @@ In addition to the official tutorials and examples, we encourage community contri
   :maxdepth: 2

   NNI Practice Sharing<nni_practice_sharing>
-   Neural Architecture Search Comparison<CommunitySharings/NasComparison>
-   Hyper-parameter Tuning Algorithm Comparison<CommunitySharings/HpoComparison>
+   Neural Architecture Search Comparison<./CommunitySharings/NasComparison>
+   Hyper-parameter Tuning Algorithm Comparison<./CommunitySharings/HpoComparison>
2 changes: 1 addition & 1 deletion docs/requirements.txt
@@ -9,4 +9,4 @@ json_tricks
numpy
scipy
coverage
-sklearn
+sklearn
1 change: 1 addition & 0 deletions docs/src/configspace
Submodule configspace added at f389e1
5 changes: 5 additions & 0 deletions docs/src/pip-delete-this-directory.txt
@@ -0,0 +1,5 @@
+This file is placed here by pip to indicate the source was put
+here by pip.
+
+Once this package is successfully installed this source code will be
+deleted (unless you remove this file).
2 changes: 1 addition & 1 deletion examples/trials/cifar10_pytorch/utils.py
@@ -6,8 +6,8 @@
import os
import sys
import time
import math

import torch
import torch.nn as nn
import torch.nn.init as init

2 changes: 1 addition & 1 deletion examples/trials/kaggle-tgs-salt/predict.py
@@ -133,7 +133,7 @@ def generate_preds(outputs, target_size, pad_mode, threshold=0.5):
        if pad_mode == 'resize':
            cropped = resize_image(output, target_size=target_size)
        else:
-            cropped = crop_image_softmax(output, target_size=target_size)
+            cropped = crop_image(output, target_size=target_size)
        pred = binarize(cropped, threshold)
        preds.append(pred)

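For readers of this hunk: `binarize` is not shown in the diff. Assuming it thresholds a float mask into a 0/1 mask (an assumption, not the repo's actual code), a sketch would be:

```python
import numpy as np

def binarize(image, threshold=0.5):
    # Assumed behavior: pixels above the threshold become 1, others 0.
    return (image > threshold).astype(np.uint8)

pred = binarize(np.array([[0.2, 0.7], [0.9, 0.4]]), threshold=0.5)
print(pred)  # [[0 1] [1 0]]
```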
4 changes: 4 additions & 0 deletions src/nni_manager/package.json
@@ -54,6 +54,10 @@
"tslint-microsoft-contrib": "^6.0.0",
"typescript": "^3.2.2"
},
"resolutions": {
"mem": "^4.0.0",
"handlebars": "^4.1.0"
},
"engines": {
"node": ">=10.0.0"
},