This repository has been archived by the owner on Sep 18, 2024. It is now read-only.

Nested search space refinement #1048

Merged
merged 49 commits on May 16, 2019
Changes from 45 commits
Commits (49)
17ac8cb
add different tuner config files for config_test
Feb 19, 2019
b6f2c56
change MetisTuner config test due to no lightgbm python module in int…
Feb 20, 2019
b3afb3a
install smac package in azure-pipelines
Feb 20, 2019
1b850df
SMAC need swig to be installed
Feb 20, 2019
130dec7
Try to install swig from source code
Feb 20, 2019
1504034
remove SMAC test because the dependency can not be installed
Feb 21, 2019
1d55da4
Merge branch 'master' of https://github.com/Microsoft/nni
Feb 21, 2019
ee26306
use sudo to install the swig
Feb 21, 2019
fdca615
sleep 10s to make sure the port has been released
Feb 22, 2019
eae6265
remove tuner test for networkmorphism because it uses more than 30s t…
Feb 22, 2019
6cdf264
word "down" to "done"
Feb 22, 2019
55b5e3c
add config test for Curvefitting assessor
Feb 22, 2019
94430b8
change file name
Feb 22, 2019
ac00024
Merge branch 'master' of https://github.com/Microsoft/nni
Feb 25, 2019
a43b1ba
Fix data type not match bug
Mar 4, 2019
d87ae8c
Optimize MetisTunner
Mar 4, 2019
48e5df3
Merge branch 'master' of https://github.com/Microsoft/nni
Mar 4, 2019
74cdcf8
pretty the code
Mar 4, 2019
7c999c3
Follow the review comment
Mar 5, 2019
dce983e
add exploration probability
Mar 5, 2019
30bfa10
Avoid None type object generating
Mar 5, 2019
680ea9a
Merge branch 'master' of https://github.com/Microsoft/nni
Mar 5, 2019
5508b6c
Merge branch 'master' of https://github.com/Microsoft/nni
Mar 11, 2019
65a0874
Merge branch 'master' of https://github.com/Microsoft/nni
Mar 13, 2019
55c410f
fix nnictl log trial bug
Mar 21, 2019
e1cb4d1
Merge branch 'master' of https://github.com/Microsoft/nni
Mar 21, 2019
504c364
rollback chinese doc
Mar 22, 2019
96ead5c
Merge branch 'master' of https://github.com/Microsoft/nni
Mar 25, 2019
1f27261
add argument 'experiment' to parser_log_trial and parser_trial_kill
Mar 25, 2019
82d129a
update doc
Mar 26, 2019
53f2aa2
Merge branch 'master' of https://github.com/Microsoft/nni
Apr 2, 2019
8419f26
add NASComparison for ResearchBlog
Apr 2, 2019
c4303a4
Fix format of table
Apr 2, 2019
07a235b
update doc and add index to toctree
Apr 2, 2019
80adbc9
Update NASComparison.md
Apr 2, 2019
937aa2d
Update NASComparison.md
Apr 2, 2019
6e7d847
Move ResearchBlog to bottom
Apr 2, 2019
a63846b
Follow the review comments
Apr 2, 2019
79a01ac
change the file structure
May 5, 2019
d0b57ae
add utils
May 5, 2019
c45bfa0
Merge branch 'master' of https://github.com/Microsoft/nni into search…
May 5, 2019
9e99891
slight change
May 5, 2019
3ceb3c0
Remove unrelated files
May 5, 2019
257f78a
add doc in SearchSpaceSpec.md and add config test for nested search s…
May 7, 2019
5b86260
add unittest
May 8, 2019
1767cb8
add unittest for hyperopt_tuner
May 13, 2019
b9bfaf6
update as comment
May 13, 2019
77aa604
Update SearchSpaceSepc doc
May 13, 2019
46ae82d
Delete unnecessary space change and correct a mistake
May 13, 2019
16 changes: 14 additions & 2 deletions docs/en_US/SearchSpaceSpec.md
@@ -6,7 +6,7 @@ In NNI, tuner will sample parameters/architecture according to the search space,

To define a search space, users should define the name of the variable, the type of the sampling strategy, and its parameters.

* A example of search space definition as follow:
* An example of a search space definition is as follows:

```yaml
{
@@ -26,9 +26,19 @@ Take the first line as an example. `dropout_rate` is defined as a variable whose
All types of sampling strategies and their parameters are listed here:

* {"_type":"choice","_value":options}
* Which means the variable value is one of the options, which should be a list. The elements of options can themselves be [nested] stochastic expressions. In this case, the stochastic choices that only appear in some of the options become conditional parameters.

* Which means the variable value is one of the options, which should be a list. The elements of options can themselves be **nested** stochastic expressions. In this case, the stochastic choices that only appear in some of the options become conditional parameters.

* A simple [example](../../examples/trials/mnist-cascading-search-space/search_space.json) of a **nested** search space definition. Each element of the options list must be a dictionary with a `_name` key plus the key/value pairs for that option's own parameters. Here is a [sample](../../examples/trials/mnist-cascading-search-space/sample.json) that users can receive from NNI with a **nested** search space definition. The tuners that support this feature are listed below; a short Python sketch of how a trial consumes such nested parameters is shown after this file's diff:

- Random Search
- TPE
- Anneal
- Evolution

* {"_type":"randint","_value":[upper]}

* Which means the variable value is a random integer in the range [0, upper). The semantics of this distribution is that there is no more correlation in the loss function between nearby integer values than between more distant integer values. This is an appropriate distribution for describing random seeds, for example. If the loss function is probably more correlated for nearby integer values, then you should use one of the "quantized" continuous distributions, such as quniform, qloguniform, qnormal or qlognormal. Note that if you want to change the lower bound, you can use `quniform` for now.

* {"_type":"uniform","_value":[low, high]}
@@ -48,13 +58,15 @@ All types of sampling strategies and their parameters are listed here:
* Suitable for a discrete variable with respect to which the objective is "smooth" and gets smoother with the size of the value, but which should be bounded both above and below.

* {"_type":"normal","_value":[mu, sigma]}

* Which means the variable value is a real value that's normally-distributed with mean mu and standard deviation sigma. When optimizing, this is an unconstrained variable.

* {"_type":"qnormal","_value":[mu, sigma, q]}
* Which means the variable value is a value like round(normal(mu, sigma) / q) * q
* Suitable for a discrete variable that probably takes a value around mu, but is fundamentally unbounded.

* {"_type":"lognormal","_value":[mu, sigma]}

* Which means the variable value is a value drawn according to exp(normal(mu, sigma)) so that the logarithm of the return value is normally distributed. When optimizing, this variable is constrained to be positive.

* {"_type":"qlognormal","_value":[mu, sigma, q]}
26 changes: 17 additions & 9 deletions examples/trials/mnist-cascading-search-space/mnist.py
@@ -131,21 +131,29 @@ def main(params):

nni.report_final_result(test_acc)

def generate_defualt_params():
params = {'data_dir': '/tmp/tensorflow/mnist/input_data',
'batch_num': 1000,
'batch_size': 200}
return params

def get_params():
''' Get parameters from command line '''
parser = argparse.ArgumentParser()
parser.add_argument("--data_dir", type=str, default='/tmp/tensorflow/mnist/input_data', help="data directory")
parser.add_argument("--batch_num", type=int, default=1000)
parser.add_argument("--batch_size", type=int, default=200)
args, _ = parser.parse_known_args()
return args

def parse_init_json(data):
params = {}
for key in data:
value = data[key]
if value == 'Empty':
layer_name = value["_name"]
if layer_name == 'Empty':
# Empty Layer
params[key] = ['Empty']
elif layer_name == 'Conv':
# Conv layer
params[key] = [layer_name, value['kernel_size'], value['kernel_size']]
else:
params[key] = [value[0], value[1], value[1]]
# Pooling Layer
params[key] = [layer_name, value['pooling_size'], value['pooling_size']]
return params


@@ -157,7 +165,7 @@ def parse_init_json(data):

RCV_PARAMS = parse_init_json(data)
logger.debug(RCV_PARAMS)
params = generate_defualt_params()
params = vars(get_params())
params.update(RCV_PARAMS)
print(RCV_PARAMS)

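As a quick sanity check of the parsing change above, here is a self-contained sketch (an illustration, not code or output recorded in the PR) showing what the rewritten `parse_init_json` yields for a nested configuration shaped like the sample.json in the next diff.

```python
# Standalone sketch: the nested {"_name": ..., "kernel_size"/"pooling_size": ...}
# dictionaries are flattened into the [name, size, size] lists that the
# model-building code expects.
def parse_init_json(data):
    # Same branching as the version added in mnist.py above.
    params = {}
    for key, value in data.items():
        layer_name = value["_name"]
        if layer_name == "Empty":
            params[key] = ["Empty"]
        elif layer_name == "Conv":
            params[key] = [layer_name, value["kernel_size"], value["kernel_size"]]
        else:
            params[key] = [layer_name, value["pooling_size"], value["pooling_size"]]
    return params

sample = {
    "layer0": {"_name": "Avg_pool", "pooling_size": 3},
    "layer1": {"_name": "Conv", "kernel_size": 2},
    "layer2": {"_name": "Empty"},
}
print(parse_init_json(sample))
# Expected: {'layer0': ['Avg_pool', 3, 3], 'layer1': ['Conv', 2, 2], 'layer2': ['Empty']}
```
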
27 changes: 16 additions & 11 deletions examples/trials/mnist-cascading-search-space/sample.json
@@ -1,12 +1,17 @@
{
"layer2": "Empty",
"layer8": ["Conv", 2],
"layer3": ["Avg_pool", 5],
"layer0": ["Max_pool", 5],
"layer1": ["Conv", 2],
"layer6": ["Max_pool", 3],
"layer7": ["Max_pool", 5],
"layer9": ["Conv", 2],
"layer4": ["Avg_pool", 3],
"layer5": ["Avg_pool", 5]
}
"layer0": {
"_name": "Avg_pool",
"pooling_size": 3
},
"layer1": {
"_name": "Conv",
"kernel_size": 2
},
"layer2": {
"_name": "Empty"
},
"layer3": {
"_name": "Conv",
"kernel_size": 5
}
}
172 changes: 112 additions & 60 deletions examples/trials/mnist-cascading-search-space/search_space.json
@@ -1,62 +1,114 @@
{
"layer0":{"_type":"choice","_value":[
"Empty",
["Conv", {"_type":"choice","_value":[2,3,5]}],
["Max_pool", {"_type":"choice","_value":[2,3,5]}],
["Avg_pool", {"_type":"choice","_value":[2,3,5]}]
]},
"layer1":{"_type":"choice","_value":[
"Empty",
["Conv", {"_type":"choice","_value":[2,3,5]}],
["Max_pool", {"_type":"choice","_value":[2,3,5]}],
["Avg_pool", {"_type":"choice","_value":[2,3,5]}]
]},
"layer2":{"_type":"choice","_value":[
"Empty",
["Conv", {"_type":"choice","_value":[2,3,5]}],
["Max_pool", {"_type":"choice","_value":[2,3,5]}],
["Avg_pool", {"_type":"choice","_value":[2,3,5]}]
]},
"layer3":{"_type":"choice","_value":[
"Empty",
["Conv", {"_type":"choice","_value":[2,3,5]}],
["Max_pool", {"_type":"choice","_value":[2,3,5]}],
["Avg_pool", {"_type":"choice","_value":[2,3,5]}]
]},
"layer4":{"_type":"choice","_value":[
"Empty",
["Conv", {"_type":"choice","_value":[2,3,5]}],
["Max_pool", {"_type":"choice","_value":[2,3,5]}],
["Avg_pool", {"_type":"choice","_value":[2,3,5]}]
]},
"layer5":{"_type":"choice","_value":[
"Empty",
["Conv", {"_type":"choice","_value":[2,3,5]}],
["Max_pool", {"_type":"choice","_value":[2,3,5]}],
["Avg_pool", {"_type":"choice","_value":[2,3,5]}]
]},
"layer6":{"_type":"choice","_value":[
"Empty",
["Conv", {"_type":"choice","_value":[2,3,5]}],
["Max_pool", {"_type":"choice","_value":[2,3,5]}],
["Avg_pool", {"_type":"choice","_value":[2,3,5]}]
]},
"layer7":{"_type":"choice","_value":[
"Empty",
["Conv", {"_type":"choice","_value":[2,3,5]}],
["Max_pool", {"_type":"choice","_value":[2,3,5]}],
["Avg_pool", {"_type":"choice","_value":[2,3,5]}]
]},
"layer8":{"_type":"choice","_value":[
"Empty",
["Conv", {"_type":"choice","_value":[2,3,5]}],
["Max_pool", {"_type":"choice","_value":[2,3,5]}],
["Avg_pool", {"_type":"choice","_value":[2,3,5]}]
]},
"layer9":{"_type":"choice","_value":[
"Empty",
["Conv", {"_type":"choice","_value":[2,3,5]}],
["Max_pool", {"_type":"choice","_value":[2,3,5]}],
["Avg_pool", {"_type":"choice","_value":[2,3,5]}]
]}
"layer0": {
"_type": "choice",
"_value": [{
"_name": "Empty"
},
{
"_name": "Conv",
"kernel_size": {
"_type": "choice",
"_value": [1, 2, 3, 5]
}
},
{
"_name": "Max_pool",
"pooling_size": {
"_type": "choice",
"_value": [2, 3, 5]
}
},
{
"_name": "Avg_pool",
"pooling_size": {
"_type": "choice",
"_value": [2, 3, 5]
}
}
]
},
"layer1": {
"_type": "choice",
"_value": [{
"_name": "Empty"
},
{
"_name": "Conv",
"kernel_size": {
"_type": "choice",
"_value": [1, 2, 3, 5]
}
},
{
"_name": "Max_pool",
"pooling_size": {
"_type": "choice",
"_value": [2, 3, 5]
}
},
{
"_name": "Avg_pool",
"pooling_size": {
"_type": "choice",
"_value": [2, 3, 5]
}
}
]
},
"layer2": {
"_type": "choice",
"_value": [{
"_name": "Empty"
},
{
"_name": "Conv",
"kernel_size": {
"_type": "choice",
"_value": [1, 2, 3, 5]
}
},
{
"_name": "Max_pool",
"pooling_size": {
"_type": "choice",
"_value": [2, 3, 5]
}
},
{
"_name": "Avg_pool",
"pooling_size": {
"_type": "choice",
"_value": [2, 3, 5]
}
}
]
},
"layer3": {
"_type": "choice",
"_value": [{
"_name": "Empty"
},
{
"_name": "Conv",
"kernel_size": {
"_type": "choice",
"_value": [1, 2, 3, 5]
}
},
{
"_name": "Max_pool",
"pooling_size": {
"_type": "choice",
"_value": [2, 3, 5]
}
},
{
"_name": "Avg_pool",
"pooling_size": {
"_type": "choice",
"_value": [2, 3, 5]
}
}
]
}
}
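
Since the rewritten search_space.json above repeats the same nested choice for every layer, here is an optional sketch (a convenience script, not part of the PR; the output filename and layer count are arbitrary) that generates an equivalent file programmatically.

```python
import json

# Build the nested choice once and repeat it for each layer, mirroring the
# hand-written search_space.json above (which defines layer0..layer3).
LAYER_CHOICE = [
    {"_name": "Empty"},
    {"_name": "Conv", "kernel_size": {"_type": "choice", "_value": [1, 2, 3, 5]}},
    {"_name": "Max_pool", "pooling_size": {"_type": "choice", "_value": [2, 3, 5]}},
    {"_name": "Avg_pool", "pooling_size": {"_type": "choice", "_value": [2, 3, 5]}},
]

search_space = {
    "layer{}".format(i): {"_type": "choice", "_value": LAYER_CHOICE}
    for i in range(4)
}

with open("search_space_generated.json", "w") as fp:
    json.dump(search_space, fp, indent=2)
```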
9 changes: 1 addition & 8 deletions src/sdk/pynni/nni/bohb_advisor/bohb_advisor.py
@@ -21,7 +21,6 @@
bohb_advisor.py
'''

from enum import Enum, unique
import sys
import math
import logging
@@ -32,7 +31,7 @@

from nni.protocol import CommandType, send
from nni.msg_dispatcher_base import MsgDispatcherBase
from nni.utils import extract_scalar_reward
from nni.utils import OptimizeMode, extract_scalar_reward

from .config_generator import CG_BOHB

@@ -42,12 +41,6 @@
_KEY = 'TRIAL_BUDGET'
_epsilon = 1e-6

@unique
class OptimizeMode(Enum):
"""Optimize Mode class"""
Minimize = 'minimize'
Maximize = 'maximize'


def create_parameter_id():
"""Create an id
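
The diff above removes the locally defined OptimizeMode enum and imports the shared one from nni.utils instead. The sketch below illustrates the usual pattern such an enum supports (normalizing metrics so the optimizer can always maximize); it is not code taken from bohb_advisor.py, and the helper name is hypothetical.

```python
from nni.utils import OptimizeMode

def to_maximized(metric, optimize_mode):
    """Hypothetical helper: flip the sign of minimize-mode metrics so that
    larger is always better for the optimizer."""
    if optimize_mode is OptimizeMode.Minimize:
        return -metric
    return metric

# e.g. to_maximized(0.25, OptimizeMode.Minimize) -> -0.25
```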