
merge master #246

Merged: 8 commits, May 12, 2020
87 changes: 0 additions & 87 deletions docs/en_US/AdvancedFeature/MultiPhase.md

This file was deleted.

17 changes: 14 additions & 3 deletions docs/en_US/NAS/NasGuide.md
@@ -156,12 +156,23 @@ model = Net()
apply_fixed_architecture(model, "model_dir/final_architecture.json")
```

- The JSON is simply a mapping from mutable keys to one-hot or multi-hot representation of choices. For example
+ The JSON is simply a mapping from mutable keys to choices. Choices can be expressed as:

+ * A string: select the candidate with the corresponding name.
+ * A number: select the candidate with the corresponding index.
+ * A list of strings: select the candidates with the corresponding names.
+ * A list of numbers: select the candidates with the corresponding indices.
+ * A list of boolean values: a multi-hot array.

+ For example,

```json
{
"LayerChoice1": [false, true, false, false],
"InputChoice2": [true, true, false]
"LayerChoice1": "conv5x5",
"LayerChoice2": 6,
"InputChoice3": ["layer1", "layer3"],
"InputChoice4": [1, 2],
"InputChoice5": [false, true, false, false, true]
}
```
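As a rough illustration of how such a file is produced and consumed, here is a minimal sketch. `Net` follows the model used earlier in the guide, the mutable keys are placeholders that must match the `LayerChoice`/`InputChoice` keys in your own model, and the import path is the one assumed from the NNI 1.x NAS examples:

```python
import json

from nni.nas.pytorch.fixed import apply_fixed_architecture

model = Net()  # the searched model defined earlier in this guide

# Any of the choice formats listed above can be mixed in one file.
chosen_arch = {
    "LayerChoice1": "conv5x5",             # by candidate name
    "LayerChoice2": 6,                     # by candidate index
    "InputChoice3": ["layer1", "layer3"],  # by a list of names
}
with open("model_dir/final_architecture.json", "w") as f:
    json.dump(chosen_arch, f)

# Fix the model to the chosen architecture before retraining or evaluation.
apply_fixed_architecture(model, "model_dir/final_architecture.json")
```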

7 changes: 2 additions & 5 deletions docs/en_US/Release.md
@@ -206,7 +206,7 @@

* Documentation
- Update the docs structure -Issue #1231
- - [Multi phase document improvement](AdvancedFeature/MultiPhase.md) -Issue #1233 -PR #1242
+ - (deprecated) Multi phase document improvement -Issue #1233 -PR #1242
+ Add configuration example
- [WebUI description improvement](Tutorial/WebUI.md) -PR #1419

@@ -234,12 +234,10 @@
* Add `enas-mode` and `oneshot-mode` for NAS interface: [PR #1201](https://github.com/microsoft/nni/pull/1201#issue-291094510)
* [Gaussian Process Tuner with Matern kernel](Tuner/GPTuner.md)

- * Multiphase experiment supports
+ * (deprecated) Multiphase experiment supports
* Added new training service support for multiphase experiment: PAI mode supports multiphase experiment since v0.9.
* Added multiphase capability for the following builtin tuners:
* TPE, Random Search, Anneal, Naïve Evolution, SMAC, Network Morphism, Metis Tuner.

- For details, please refer to [Write a tuner that leverages multi-phase](AdvancedFeature/MultiPhase.md)

* Web Portal
* Enable trial comparison in Web Portal. For details, refer to [View trials status](Tutorial/WebUI.md)
@@ -549,4 +547,3 @@ Initial release of Neural Network Intelligence (NNI).
* Support CI by providing out-of-box integration with [travis-ci](https://github.com/travis-ci) on ubuntu
* Others
* Support simple GPU job scheduling

3 changes: 2 additions & 1 deletion docs/en_US/TrainingService/FrameworkControllerMode.md
@@ -26,7 +26,7 @@ NNI supports running experiment using [FrameworkController](https://github.com/M

## Setup FrameworkController

- Follow the [guideline](https://github.com/Microsoft/frameworkcontroller/tree/master/example/run) to set up FrameworkController in the Kubernetes cluster, NNI supports FrameworkController by the stateful set mode.
+ Follow the [guideline](https://github.com/Microsoft/frameworkcontroller/tree/master/example/run) to set up FrameworkController in the Kubernetes cluster; NNI supports FrameworkController in stateful set mode. If your cluster enforces authorization, you need to create a service account with the required permissions for FrameworkController, and then pass the name of that service account in the NNI experiment config ([reference](https://github.com/Microsoft/frameworkcontroller/tree/master/example/run#run-by-kubernetes-statefulset)).

## Design

@@ -83,6 +83,7 @@ If you use Azure Kubernetes Service, you should set `frameworkcontrollerConfig`
```yaml
frameworkcontrollerConfig:
  storage: azureStorage
+ serviceAccountName: {your_frameworkcontroller_service_account_name}
  keyVault:
    vaultName: {your_vault_name}
    name: {your_secret_name}
4 changes: 0 additions & 4 deletions docs/en_US/Tuner/BuiltinTuner.md
@@ -68,10 +68,6 @@ tuner:

Random search is suggested when each trial does not take very long (e.g., each trial can be completed very quickly, or early stopped by the assessor), and you have enough computational resources. It's also useful if you want to uniformly explore the search space. Random Search can be considered a baseline search algorithm. [Detailed Description](./HyperoptTuner.md)

- **classArgs Requirements:**
-
- * **optimize_mode** (*maximize or minimize, optional, default = maximize*) - If 'maximize', the tuner will try to maximize metrics. If 'minimize', the tuner will try to minimize metrics.

**Example Configuration:**

```yaml
4 changes: 2 additions & 2 deletions docs/en_US/Tuner/CustomizeTuner.md
@@ -51,7 +51,7 @@ class CustomizedTuner(Tuner):
...
```

- `receive_trial_result` will receive the `parameter_id, parameters, value` as parameters input. Also, Tuner will receive the `value` object are exactly same value that Trial send. If `multiPhase` is set to `true` in the experiment configuration file, an additional `trial_job_id` parameter is passed to `receive_trial_result` and `generate_parameters` through the `**kwargs` parameter.
+ `receive_trial_result` receives `parameter_id`, `parameters`, and `value` as input. The `value` object the tuner receives is exactly the same value that the trial sends.

The `your_parameters` returned from the `generate_parameters` function will be packaged as a JSON object by the NNI SDK. The NNI SDK will unpack the JSON object so the trial receives exactly the same `your_parameters` from the tuner.
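To make the data flow concrete, here is a minimal sketch of a tuner that cycles through a fixed list of learning rates; the method signatures follow the `Tuner` base class discussed above, while the parameter logic itself is a deliberately trivial stand-in:

```python
from nni.tuner import Tuner

class CustomizedTuner(Tuner):
    def __init__(self):
        self._candidates = [0.1, 0.01, 0.001]
        self._count = 0

    def update_search_space(self, search_space):
        # Called with the content of search_space.json; a real tuner would
        # build its internal model of the search space here.
        self._search_space = search_space

    def generate_parameters(self, parameter_id, **kwargs):
        # The returned dict is packaged as JSON by the NNI SDK and handed to
        # the trial via nni.get_next_parameter().
        params = {"lr": self._candidates[self._count % len(self._candidates)]}
        self._count += 1
        return params

    def receive_trial_result(self, parameter_id, parameters, value, **kwargs):
        # `value` is exactly the value the trial reported with
        # nni.report_final_result(); use it to update the tuner's state.
        print(f"trial {parameter_id} ran with {parameters} and reported {value}")
```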

@@ -109,4 +109,4 @@ More detail example you could see:

### Write a more advanced automl algorithm

The methods above are usually enough to write a general tuner. However, users may also want more capabilities, such as intermediate results and trials' state (i.e., the methods in an assessor), in order to build a more powerful AutoML algorithm. Therefore, we have another concept called `advisor`, which directly inherits from `MsgDispatcherBase` in [`src/sdk/pynni/nni/msg_dispatcher_base.py`](https://github.com/Microsoft/nni/tree/master/src/sdk/pynni/nni/msg_dispatcher_base.py). Please refer to [here](CustomizeAdvisor.md) for how to write a customized advisor.
13 changes: 0 additions & 13 deletions docs/en_US/Tutorial/ExperimentConfig.md
@@ -17,7 +17,6 @@ This document describes the rules to write the config file, and provides some ex
+ [trainingServicePlatform](#trainingserviceplatform)
+ [searchSpacePath](#searchspacepath)
+ [useAnnotation](#useannotation)
- + [multiPhase](#multiphase)
+ [multiThread](#multithread)
+ [nniManagerIp](#nnimanagerip)
+ [logDir](#logdir)
@@ -94,8 +93,6 @@ searchSpacePath:
#choice: true, false, default: false
useAnnotation:
- #choice: true, false, default: false
- multiPhase:
#choice: true, false, default: false
multiThread:
tuner:
#choice: TPE, Random, Anneal, Evolution
@@ -130,8 +127,6 @@ searchSpacePath:
#choice: true, false, default: false
useAnnotation:
- #choice: true, false, default: false
- multiPhase:
#choice: true, false, default: false
multiThread:
tuner:
#choice: TPE, Random, Anneal, Evolution
@@ -171,8 +166,6 @@ trainingServicePlatform:
#choice: true, false, default: false
useAnnotation:
- #choice: true, false, default: false
- multiPhase:
#choice: true, false, default: false
multiThread:
tuner:
#choice: TPE, Random, Anneal, Evolution
@@ -283,12 +276,6 @@ Use annotation to analysis trial code and generate search space.

Note: if __useAnnotation__ is true, the searchSpacePath field should be removed.

- ### multiPhase

- Optional. Bool. Default: false.

- Enable [multi-phase experiment](../AdvancedFeature/MultiPhase.md).

### multiThread

Optional. Bool. Default: false.
6 changes: 2 additions & 4 deletions docs/en_US/Tutorial/SetupNniDeveloperEnvironment.md
@@ -67,10 +67,8 @@ It doesn't need to redeploy, but the nnictl may need to be restarted.

#### TypeScript

- * If `src/nni_manager` will be changed, run `yarn watch` continually under this folder. It will rebuild code instantly.
- * If `src/webui` or `src/nasui` is changed, use **step 3** to rebuild code.
-
- The nnictl may need to be restarted.
+ * If `src/nni_manager` is changed, run `yarn watch` continually under this folder. It will rebuild code instantly. The nnictl may need to be restarted to reload the NNI manager.
+ * If `src/webui` or `src/nasui` is changed, run `yarn start` under the corresponding folder. The web UI will refresh automatically if code is changed.


---
1 change: 0 additions & 1 deletion docs/en_US/hpo_advanced.rst
@@ -4,7 +4,6 @@ Advanced Features
.. toctree::
   :maxdepth: 2

-  Enable Multi-phase <AdvancedFeature/MultiPhase>
   Write a New Tuner <Tuner/CustomizeTuner>
   Write a New Assessor <Assessor/CustomizeAssessor>
   Write a New Advisor <Tuner/CustomizeAdvisor>
88 changes: 0 additions & 88 deletions docs/zh_CN/AdvancedFeature/MultiPhase.md

This file was deleted.

4 changes: 3 additions & 1 deletion examples/nas/spos/network.py
@@ -147,8 +147,10 @@ def _initialize_weights(self):

def load_and_parse_state_dict(filepath="./data/checkpoint-150000.pth.tar"):
    checkpoint = torch.load(filepath, map_location=torch.device("cpu"))
+   if "state_dict" in checkpoint:
+       checkpoint = checkpoint["state_dict"]
    result = dict()
-   for k, v in checkpoint["state_dict"].items():
+   for k, v in checkpoint.items():
        if k.startswith("module."):
            k = k[len("module."):]
        result[k] = v
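With this change the function accepts checkpoints both with and without a top-level `state_dict` wrapper, and strips any `module.` prefix left by `DataParallel`. For context, the parsed weights are typically loaded straight into the supernet; a usage sketch, assuming `ShuffleNetV2OneShot` is the network class defined in this file:

```python
# Hypothetical usage; the class name is an assumption based on this SPOS example.
model = ShuffleNetV2OneShot()
model.load_state_dict(load_and_parse_state_dict("./data/checkpoint-150000.pth.tar"))
```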