This repository has been archived by the owner on Sep 18, 2024. It is now read-only.

Add BOHB Advisor #910

Merged · 64 commits · Apr 12, 2019

Commits
a9ffae9
init
PurityFan Feb 20, 2019
a024c59
update
PurityFan Feb 22, 2019
7ba3b91
add kernel
PurityFan Feb 26, 2019
2e58fea
update
PurityFan Feb 27, 2019
67725b4
Merge remote-tracking branch 'upstream/master' into bohb
PurityFan Feb 27, 2019
6a709f0
refactor
PurityFan Feb 27, 2019
0c895a4
update
PurityFan Feb 28, 2019
8b33bd5
update
PurityFan Mar 1, 2019
092d95c
update
PurityFan Mar 1, 2019
564272d
update
PurityFan Mar 1, 2019
9d27fc0
update
PurityFan Mar 1, 2019
9334dae
update
PurityFan Mar 2, 2019
d461f1c
update
PurityFan Mar 3, 2019
51b9b82
test
PurityFan Mar 3, 2019
a8223a2
update
PurityFan Mar 3, 2019
d520437
update
PurityFan Mar 3, 2019
d057ef9
unittest_done
PurityFan Mar 4, 2019
6b8c501
init version
PurityFan Mar 4, 2019
c40a84c
checkpoint
PurityFan Mar 4, 2019
68d4393
debug
PurityFan Mar 5, 2019
490fc16
work version
PurityFan Mar 5, 2019
89fc21d
distinguish value and loss
PurityFan Mar 5, 2019
2c9794b
stable v0.1 of BOHB
PurityFan Mar 10, 2019
1d64982
add document for bohb
PurityFan Mar 10, 2019
0236a06
fix-pic
PurityFan Mar 10, 2019
3cb75f9
fix-bug
PurityFan Mar 11, 2019
c19375c
unify example
PurityFan Mar 11, 2019
6199884
fix bug
PurityFan Mar 11, 2019
1bcb0c2
Merge branch 'master' of https://github.com/Microsoft/nni into bohb
PurityFan Mar 13, 2019
11f5f7b
stable version0.2
PurityFan Mar 20, 2019
46570cf
checkpoint
PurityFan Mar 20, 2019
570c14b
set hyperband back to s=s_max after the iteration
PurityFan Mar 24, 2019
5f5cff8
add change
PurityFan Mar 24, 2019
38730bc
remove unused change
PurityFan Mar 24, 2019
631dda2
update
PurityFan Mar 25, 2019
9bc6c98
update doc
PurityFan Mar 25, 2019
fd1062e
update
PurityFan Mar 25, 2019
ca8b3b2
modify png
PurityFan Mar 25, 2019
db0ca7c
update README.md
PurityFan Mar 25, 2019
28737f3
pass pylint
PurityFan Mar 25, 2019
c387310
fix typo
PurityFan Mar 25, 2019
29f52c9
remove change of hyperband
PurityFan Mar 25, 2019
326467f
update with the comments
PurityFan Mar 25, 2019
778e9f3
turn loss metrics to generate
PurityFan Mar 25, 2019
f2d71ad
update by comments
PurityFan Mar 26, 2019
9ed1304
fix bug in usage example
PurityFan Mar 26, 2019
a31f2ff
update for leelaylay's comments
PurityFan Mar 26, 2019
ca03ab3
change function of r n
PurityFan Mar 26, 2019
259c831
update doc to notifty float type input
PurityFan Mar 26, 2019
cb3dbfe
update doc
PurityFan Mar 26, 2019
f206582
fix bug
PurityFan Mar 26, 2019
5c483ee
update
PurityFan Mar 28, 2019
df195b0
update by hui's comments
PurityFan Mar 31, 2019
a3d1baf
Merge branch 'master' of https://github.com/Microsoft/nni into bohb
PurityFan Apr 2, 2019
8a1a537
Merge branch 'master' of https://github.com/Microsoft/nni into bohb
PurityFan Apr 3, 2019
c38202d
update by quanlu's comments
PurityFan Apr 3, 2019
9474d97
update picture size
PurityFan Apr 3, 2019
8d132d5
delete return True
PurityFan Apr 3, 2019
15eaece
solve conflicts
PurityFan Apr 10, 2019
aab6b8b
use extract_scalar_reward by common function
PurityFan Apr 10, 2019
6a86b96
update doc
PurityFan Apr 10, 2019
96a35d4
update doc of Builtin_Tuner
PurityFan Apr 11, 2019
faaab1b
add the prompt of BOHB nnictl install
PurityFan Apr 11, 2019
e477f8f
add statsmodels dependency
PurityFan Apr 11, 2019
3 changes: 2 additions & 1 deletion README.md
@@ -65,7 +65,8 @@ The tool dispatches and runs trial jobs generated by tuning algorithms to search
<li><a href="docs/en_US/Builtin_Tuner.md#NetworkMorphism">Network Morphism</a></li>
<li><a href="examples/tuners/enas_nni/README.md">ENAS</a></li>
<li><a href="docs/en_US/Builtin_Tuner.md#NetworkMorphism#MetisTuner">Metis Tuner</a></li>
</ul>
<li><a href="docs/en_US/Builtin_Tuner.md#BOHB">BOHB</a></li>
</ul>
<a href="docs/en_US/Builtin_Assessors.md#assessor">Assessor</a>
<ul>
<li><a href="docs/en_US/Builtin_Assessors.md#Medianstop">Median Stop</a></li>
48 changes: 48 additions & 0 deletions docs/en_US/Builtin_Tuner.md
@@ -18,6 +18,7 @@ Currently we support the following algorithms:
|[__Hyperband__](#Hyperband)|Hyperband tries to use limited resources to explore as many configurations as possible and find the promising ones to produce the final result. The basic idea is to generate many configurations and run them on a small trial budget to find the promising ones, then further train those promising candidates to select the best among them. [Reference Paper](https://arxiv.org/pdf/1603.06560.pdf)|
|[__Network Morphism__](#NetworkMorphism)|Network Morphism provides functions to automatically search for architecture of deep learning models. Every child network inherits the knowledge from its parent network and morphs into diverse types of networks, including changes of depth, width, and skip-connection. Next, it estimates the value of a child network using the historic architecture and metric pairs. Then it selects the most promising one to train. [Reference Paper](https://arxiv.org/abs/1806.10282)|
|[__Metis Tuner__](#MetisTuner)|Metis offers the following benefits when it comes to tuning parameters: While most tools only predict the optimal configuration, Metis gives you two outputs: (a) current prediction of optimal configuration, and (b) suggestion for the next trial. No more guesswork. While most tools assume training datasets do not have noisy data, Metis actually tells you if you need to re-sample a particular hyper-parameter. [Reference Paper](https://www.microsoft.com/en-us/research/publication/metis-robustly-tuning-tail-latencies-cloud-systems/)|
|[__BOHB__](#BOHB)|BOHB is a follow-up work of Hyperband. It targets the weakness of Hyperband that new configurations are generated randomly without leveraging finished trials. In the name BOHB, HB stands for Hyperband and BO for Bayesian Optimization. BOHB leverages finished trials by building multiple TPE models; a proportion of new configurations is generated through these models. [Reference Paper](https://arxiv.org/abs/1807.01774)|

<br>

@@ -317,3 +318,50 @@ tuner:
classArgs:
optimize_mode: maximize
```

<br>

<a name="BOHB"></a>

![](https://placehold.it/15/1589F0/000000?text=+) `BOHB Advisor`

> Builtin Advisor Name: **BOHB**

**Installation**

The BOHB advisor requires the [ConfigSpace](https://github.com/automl/ConfigSpace) package, which needs to be installed with the following command before first use.

```bash
nnictl package install --name=BOHB
```

**Suggested scenario**

Similar to Hyperband, BOHB is suggested when you have limited computational resources but a relatively large search space. It performs well in scenarios where an intermediate result (e.g., accuracy) can reflect, to some extent, how good the final result will be. In such cases, it may converge to a better configuration than Hyperband thanks to its use of Bayesian optimization.

**Requirements of classArgs**

* **optimize_mode** (*maximize or minimize, optional, default = maximize*) - If 'maximize', the tuner targets maximizing the metric; if 'minimize', it targets minimizing the metric.
* **min_budget** (*int, optional, default = 1*) - The smallest budget assigned to a trial job (the budget could be the number of mini-batches or epochs). Must be positive.
* **max_budget** (*int, optional, default = 3*) - The largest budget assigned to a trial job (the budget could be the number of mini-batches or epochs). Must be larger than min_budget.
* **eta** (*int, optional, default = 3*) - In each iteration, a complete run of successive halving is executed: after evaluating each configuration on the same budget, only a fraction of 1/eta of them 'advances' to the next round. Must be greater than or equal to 2.
* **min_points_in_model** (*int, optional, default = None*) - Number of observations required to start building a KDE. The default 'None' means dim + 1, where dim is the number of hyperparameters in the search space. When the number of completed trials at a budget is equal to or larger than `max{dim+1, min_points_in_model}`, BOHB builds a KDE model for this budget and uses it to guide configuration selection. Must be positive.
* **top_n_percent** (*int, optional, default = 15*) - Percentage (between 1 and 99) of observations that are considered good. Good and bad points are used to build the KDE models. For example, if you have 100 observed trials and top_n_percent is 15, the top 15 points are used to build the good-point model "l(x)" and the remaining 85 points to build the bad-point model "g(x)".
* **num_samples** (*int, optional, default = 64*) - Number of samples used to optimize EI. BOHB samples "num_samples" points, compares their l(x)/g(x) values, and returns the one with the maximum l(x)/g(x) value as the next configuration if optimize_mode is 'maximize'; otherwise it returns the one with the smallest value.
* **random_fraction** (*float, optional, default = 0.33*) - Fraction of configurations that are sampled purely at random from the prior, without the model.
* **bandwidth_factor** (*float, optional, default = 3.0*) - To encourage diversity, the points proposed to optimize EI are sampled from a 'widened' KDE whose bandwidth is multiplied by this factor. It is suggested to keep the default value if you are not familiar with KDEs.
* **min_bandwidth** (*float, optional, default = 0.001*) - To keep diversity even when all (good) samples have the same value for one of the parameters, a minimum bandwidth (default: 1e-3) is used instead of zero. It is suggested to keep the default value if you are not familiar with KDEs.

*Please note that float-type parameters currently only support decimal representation: you have to use 0.333 instead of 1/3 and 0.001 instead of 1e-3.*

**Usage example**

```yml
advisor:
builtinAdvisorName: BOHB
classArgs:
optimize_mode: maximize
min_budget: 1
max_budget: 27
eta: 3
```
101 changes: 101 additions & 0 deletions docs/en_US/bohbAdvisor.md
@@ -0,0 +1,101 @@
BOHB Advisor on NNI
===

## 1. Introduction
BOHB is a robust and efficient hyperparameter tuning algorithm proposed in the [reference paper](https://arxiv.org/abs/1807.01774). BO is the abbreviation of Bayesian optimization and HB is the abbreviation of Hyperband.

BOHB relies on HB (Hyperband) to determine how many configurations to evaluate with which budget, but it **replaces the random selection of configurations at the beginning of each HB iteration with a model-based search (Bayesian optimization)**. Once the desired number of configurations for the iteration is reached, the standard successive halving procedure is carried out using these configurations. We keep track of the performance of all function evaluations g(x, b) of configurations x on all budgets b, to use as a basis for our models in later iterations.

Below we introduce the BOHB process in two parts:

### HB (Hyperband)

We follow Hyperband's way of choosing budgets and continue to use successive halving; for more details, refer to [Hyperband in NNI](hyperbandAdvisor.md) and the [Hyperband reference paper](https://arxiv.org/abs/1603.06560). This procedure is summarized by the pseudocode below.

![](../img/bohb_1.png)
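As a rough illustration of how these budgets come about, the snippet below computes the standard Hyperband bracket schedule — a sketch of the textbook formulas, not NNI's exact code — for the parameters used in the usage example later in this document (min_budget = 1, max_budget = 27, eta = 3):

```python
import math

min_budget, max_budget, eta = 1, 27, 3  # values from the usage example below

# largest bracket index s_max = floor(log_eta(max_budget / min_budget));
# the epsilon guards against floating-point rounding
s_max = int(math.floor(math.log(max_budget / min_budget, eta) + 1e-9))

for s in range(s_max, -1, -1):
    # initial number of configurations and starting budget for bracket s
    n = int(math.ceil((s_max + 1) / (s + 1) * eta ** s))
    r = max_budget / eta ** s
    print(f"bracket s={s}:")
    for i in range(s + 1):
        n_i = int(n / eta ** i)  # configurations surviving to round i
        r_i = r * eta ** i       # budget given to each of them
        print(f"  round {i}: {n_i} configs, budget {r_i:g} each")
```

For s = 3 this prints the familiar ladder of 27 → 9 → 3 → 1 configurations at budgets 1 → 3 → 9 → 27.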

### BO (Bayesian Optimization)

The BO part of BOHB closely resembles TPE, with one major difference: we opted for a single multidimensional KDE, rather than the hierarchy of one-dimensional KDEs used in TPE, in order to better handle interaction effects in the input space.

The Tree-structured Parzen Estimator (TPE) uses a KDE (kernel density estimator) to model the densities.

![](../img/bohb_2.png)

To fit useful KDEs, we require a minimum number of data points Nmin; this is set to d + 1 for our experiments, where d is the number of hyperparameters. To build a model as early as possible, we do not wait until Nb = |Db|, the number of observations for budget b, is large enough to satisfy q · Nb ≥ Nmin. Instead, after initializing with Nmin + 2 random configurations, we choose the

![](../img/bohb_3.png)

best and worst configurations, respectively, to model the two densities.
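In case the image above does not render, the two quantities it defines are, following the reference paper (with q = top_n_percent / 100 and N_b the number of observations for budget b):

```latex
N_{b,l} = \max\big(N_{\min},\; q \cdot N_b\big), \qquad
N_{b,g} = \max\big(N_{\min},\; N_b - N_{b,l}\big)
```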

Note that we also sample a constant fraction of the configurations, named the **random fraction**, uniformly at random.

## 2. Workflow

![](../img/bohb_6.jpg)

This image shows the workflow of BOHB. Here we set max_budget = 9, min_budget = 1, and eta = 3, leaving the other parameters at their defaults. In this case s_max = 2, so we continuously run the cycle {s=2, s=1, s=0, s=2, s=1, s=0, ...}. In each stage of successive halving (an orange box), we pick the top 1/eta configurations and run them again with a larger budget, repeating the successive-halving stages until the end of the iteration. At the same time, we collect the configuration, budget, and final metric of each trial, and use them to build multidimensional KDE models keyed by budget.
The multidimensional KDEs are used to guide the selection of configurations for the next iteration.

The sampling procedure (using the multidimensional KDE to guide selection) is summarized by the pseudocode below.

![](../img/bohb_4.png)
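The sketch below is a minimal, runnable rendition of this model-based sampling step for purely continuous hyperparameters, built on the `statsmodels` KDE that this PR adds as a dependency. The toy data and the unit-interval parameter ranges are illustrative assumptions; this is not the advisor's actual code.

```python
import numpy as np
from statsmodels.nonparametric.kernel_density import KDEMultivariate

rng = np.random.RandomState(0)

# toy history: 40 observed configurations of 2 hyperparameters, with losses
X = rng.rand(40, 2)
losses = rng.rand(40)

top_n_percent, num_samples = 15, 64
random_fraction, bandwidth_factor, min_bandwidth = 0.33, 3.0, 1e-3

n_good = max(3, (len(X) * top_n_percent) // 100)
order = np.argsort(losses)                 # smaller loss is better
good, bad = X[order[:n_good]], X[order[n_good:]]

# fit the two multidimensional KDEs: l(x) on good points, g(x) on bad points
l = KDEMultivariate(data=good, var_type="cc", bw="normal_reference")
g = KDEMultivariate(data=bad, var_type="cc", bw="normal_reference")

if rng.rand() < random_fraction:
    best = rng.rand(2)                     # purely random configuration
else:
    bw = np.maximum(l.bw, min_bandwidth)
    # sample around good points from a 'widened' KDE, then maximize l(x)/g(x)
    centers = good[rng.randint(len(good), size=num_samples)]
    cands = np.clip(rng.normal(centers, bw * bandwidth_factor), 0.0, 1.0)
    ratio = l.pdf(cands) / np.maximum(g.pdf(cands), 1e-32)
    best = cands[int(np.argmax(ratio))]

print("next configuration to evaluate:", best)
```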

## 3. Usage

The BOHB advisor requires the [ConfigSpace](https://github.com/automl/ConfigSpace) package, which needs to be installed with the following command before first use.

```bash
nnictl package install --name=BOHB
```

To use BOHB, you should add the following spec to your experiment's YAML config file:

```yml
advisor:
builtinAdvisorName: BOHB
classArgs:
optimize_mode: maximize
min_budget: 1
max_budget: 27
eta: 3
min_points_in_model: 7
top_n_percent: 15
num_samples: 64
random_fraction: 0.33
bandwidth_factor: 3.0
min_bandwidth: 0.001
```

**Requirements of classArgs**

* **optimize_mode** (*maximize or minimize, optional, default = maximize*) - If 'maximize', the tuner targets maximizing the metric; if 'minimize', it targets minimizing the metric.
* **min_budget** (*int, optional, default = 1*) - The smallest budget assigned to a trial job (the budget could be the number of mini-batches or epochs). Must be positive.
* **max_budget** (*int, optional, default = 3*) - The largest budget assigned to a trial job (the budget could be the number of mini-batches or epochs). Must be larger than min_budget.
* **eta** (*int, optional, default = 3*) - In each iteration, a complete run of successive halving is executed: after evaluating each configuration on the same budget, only a fraction of 1/eta of them 'advances' to the next round. Must be greater than or equal to 2.
* **min_points_in_model** (*int, optional, default = None*) - Number of observations required to start building a KDE. The default 'None' means dim + 1, where dim is the number of hyperparameters in the search space. When the number of completed trials at a budget is equal to or larger than `max{dim+1, min_points_in_model}`, BOHB builds a KDE model for this budget and uses it to guide configuration selection. Must be positive.
* **top_n_percent** (*int, optional, default = 15*) - Percentage (between 1 and 99) of observations that are considered good. Good and bad points are used to build the KDE models. For example, if you have 100 observed trials and top_n_percent is 15, the top 15 points are used to build the good-point model "l(x)" and the remaining 85 points to build the bad-point model "g(x)".
* **num_samples** (*int, optional, default = 64*) - Number of samples used to optimize EI. BOHB samples "num_samples" points, compares their l(x)/g(x) values, and returns the one with the maximum l(x)/g(x) value as the next configuration if optimize_mode is 'maximize'; otherwise it returns the one with the smallest value.
* **random_fraction** (*float, optional, default = 0.33*) - Fraction of configurations that are sampled purely at random from the prior, without the model.
* **bandwidth_factor** (*float, optional, default = 3.0*) - To encourage diversity, the points proposed to optimize EI are sampled from a 'widened' KDE whose bandwidth is multiplied by this factor. It is suggested to keep the default value if you are not familiar with KDEs.
* **min_bandwidth** (*float, optional, default = 0.001*) - To keep diversity even when all (good) samples have the same value for one of the parameters, a minimum bandwidth (default: 1e-3) is used instead of zero. It is suggested to keep the default value if you are not familiar with KDEs.

*Please note that float-type parameters currently only support decimal representation: you have to use 0.333 instead of 1/3 and 0.001 instead of 1e-3.*

## 4. File Structure
The advisor consists of many files, functions, and classes. Here we only give the most important files a brief introduction:

* `bohb_advisor.py` — the definition of BOHB. It handles the interaction with the dispatcher, including generating new trials and processing results, and also contains the implementation of the HB (Hyperband) part.
* `config_generator.py` — the implementation of the BO (Bayesian optimization) part. The function *get_config* generates a new configuration based on BO, and the function *new_result* updates the model with a new result, as the toy sketch below illustrates.
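To make this division of labor concrete, the toy sketch below mimics how the two parts might interact; `ToyConfigGenerator` and its call signatures are illustrative assumptions, not the real classes in these files.

```python
import random

class ToyConfigGenerator:
    """A hypothetical stand-in for the BO part in config_generator.py."""
    def __init__(self):
        self.history = []                     # (config, budget, loss) tuples

    def get_config(self, budget):
        # the real BO step samples from the KDE ratio l(x)/g(x);
        # here we simply return a random configuration
        return {"lr": random.uniform(1e-4, 1e-1)}

    def new_result(self, config, budget, loss):
        self.history.append((config, budget, loss))  # data for future KDEs

cg = ToyConfigGenerator()
for budget in (1, 3, 9):                      # one successive-halving ladder
    config = cg.get_config(budget)
    loss = random.random() / budget           # stand-in for a trial's metric
    cg.new_result(config, budget, loss)
print(len(cg.history), "observations recorded")
```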

## 5. Experiment

### MNIST with BOHB

code implementation: [examples/trials/mnist-advisor](https://github.com/Microsoft/nni/tree/master/examples/trials/)

We used BOHB to build a CNN on the MNIST dataset. The following are our final experimental results:

![](../img/bohb_5.png)

More experimental results can be found in the [reference paper](https://arxiv.org/abs/1807.01774). We can see that BOHB makes good use of previous results and strikes a balanced trade-off between exploration and exploitation.
3 changes: 2 additions & 1 deletion docs/en_US/builtinTuner.rst
@@ -14,4 +14,5 @@ Builtin-Tuners
Grid Search<gridsearchTuner>
Hyperband<hyperbandAdvisor>
Network Morphism<networkmorphismTuner>
Metis Tuner<metisTuner>
Metis Tuner<metisTuner>
BOHB<bohbAdvisor>
3 changes: 3 additions & 0 deletions docs/en_US/sdk_reference.rst
@@ -49,4 +49,7 @@ Assessor
Advisor
------------------------
.. autoclass:: nni.hyperband_advisor.hyperband_advisor.Hyperband
:members:

.. autoclass:: nni.bohb_advisor.bohb_advisor.BOHB
:members:
Binary file added docs/img/bohb_1.png
Binary file added docs/img/bohb_2.png
Binary file added docs/img/bohb_3.png
Binary file added docs/img/bohb_4.png
Binary file added docs/img/bohb_5.png
Binary file added docs/img/bohb_6.jpg
22 changes: 22 additions & 0 deletions examples/trials/mnist-advisor/config_bohb.yml
@@ -0,0 +1,22 @@
authorName: default
experimentName: example_mnist_bohb
trialConcurrency: 1
maxExecDuration: 10h
maxTrialNum: 1000
#choice: local, remote, pai
trainingServicePlatform: local
searchSpacePath: search_space.json
#choice: true, false
useAnnotation: false
advisor:
#choice: Hyperband, BOHB
builtinAdvisorName: BOHB
classArgs:
max_budget: 27
min_budget: 1
eta: 3
optimize_mode: maximize
trial:
command: python3 mnist.py
codeDir: .
gpuNum: 0
2 changes: 1 addition & 1 deletion examples/trials/mnist-advisor/config_hyperband.yml
@@ -1,5 +1,5 @@
authorName: default
experimentName: example_mnist
experimentName: example_mnist_hyperband
trialConcurrency: 2
maxExecDuration: 100h
maxTrialNum: 10000
2 changes: 1 addition & 1 deletion src/nni_manager/rest_server/restValidationSchemas.ts
@@ -144,7 +144,7 @@ export namespace ValidationSchemas {
versionCheck: joi.boolean(),
logCollection: joi.string(),
advisor: joi.object({
builtinAdvisorName: joi.string().valid('Hyperband'),
builtinAdvisorName: joi.string().valid('Hyperband', 'BOHB'),
codeDir: joi.string(),
classFileName: joi.string(),
className: joi.string(),
Empty file.