This repository has been archived by the owner on Sep 18, 2024. It is now read-only.

Update table style by ouyang's design #2711

Merged · 8 commits · Jul 22, 2020
2 changes: 1 addition & 1 deletion docs/en_US/Tutorial/ExperimentConfig.md
Original file line number Diff line number Diff line change
@@ -293,7 +293,7 @@ Note: run `ifconfig` on NNI manager's machine to check if eth0 device exists. If

### logDir

Optional. Path to a directory. Default: `<user home directory>/nni/experiment`.
Optional. Path to a directory. Default: `<user home directory>/nni-experiments`.

Configures the directory to store logs and data of the experiment.

2 changes: 1 addition & 1 deletion docs/en_US/Tutorial/FAQ.md
@@ -34,7 +34,7 @@ Unable to open the WebUI may have the following reasons:

* `http://127.0.0.1`, `http://172.17.0.1` and `http://10.0.0.15` are referred to localhost, if you start your experiment on the server or remote machine. You can replace the IP to your server IP to view the WebUI, like `http://[your_server_ip]:8080`
* If you still can't see the WebUI after you use the server IP, you can check the proxy and the firewall of your machine. Or use the browser on the machine where you start your NNI experiment.
* Another reason may be your experiment is failed and NNI may fail to get the experiment information. You can check the log of NNIManager in the following directory: `~/nni/experiment/[your_experiment_id]` `/log/nnimanager.log`
* Another reason may be that your experiment failed, in which case NNI may be unable to get the experiment information. You can check the log of NNIManager at the following path: `~/nni-experiments/[your_experiment_id]/log/nnimanager.log`

### Restful server start failed

176 changes: 88 additions & 88 deletions docs/en_US/Tutorial/HowToDebug.md
@@ -1,89 +1,89 @@
**How to Debug in NNI**
===

## Overview

Three parts of NNI produce logs: nnimanager, dispatcher, and trial. This section introduces them briefly; for more information, please refer to [Overview](../Overview.md).

- **NNI controller**: NNI controller (nnictl) is the NNI command-line tool used to manage experiments (e.g., start an experiment).
- **nnimanager**: nnimanager is the core of NNI; its log is important when the whole experiment fails (e.g., the webUI does not start or the training service fails).
- **Dispatcher**: Dispatcher calls the methods of **Tuner** and **Assessor**. Logs of the dispatcher are related to the tuner or assessor code.
  - **Tuner**: Tuner is an AutoML algorithm that generates a new configuration for the next try. A new trial will run with this configuration.
  - **Assessor**: Assessor analyzes a trial's intermediate results (e.g., accuracy periodically evaluated on a test dataset) to tell whether the trial can be early stopped.
- **Trial**: Trial code is the code you write to run your experiment; a trial is an individual attempt at applying a new configuration (e.g., a set of hyperparameter values, a specific neural architecture).

## Where is the log

There are three kinds of logs in NNI. When creating a new experiment, you can set the log level to debug by adding `--debug`. You can also set a more detailed log level in your configuration file with the
`logLevel` keyword. Available log levels are: `trace`, `debug`, `info`, `warning`, `error`, `fatal`.

### NNI controller

All possible errors that happen when launching an NNI experiment can be found here.

You can use `nnictl log stderr` to view error information. For more options, please refer to [NNICTL](Nnictl.md).


### Experiment Root Directory
Every experiment has a root directory, which is shown in the top-right corner of the webUI. If the webUI fails, you can assemble the path yourself by replacing `experiment_id` with your actual experiment ID in `~/nni-experiments/experiment_id/`. The `experiment_id` is shown when you run `nnictl create ...` to create a new experiment.

> For flexibility, we also offer a `logDir` option in your configuration, which specifies the directory to store all experiments (defaults to `~/nni-experiments`). Please refer to [Configuration](ExperimentConfig.md) for more details.

Under that directory, there is another directory named `log`, where `nnimanager.log` and `dispatcher.log` are placed.
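
With the default `logDir`, the manager and dispatcher log paths can be assembled as sketched below (the experiment ID is hypothetical; use the one shown by `nnictl create`):

```python
from pathlib import Path

# Hypothetical experiment ID for illustration only.
experiment_id = "AbCdEfGh"

# Default experiment root; the `logDir` config option can move it elsewhere.
experiment_root = Path.home() / "nni-experiments" / experiment_id

nnimanager_log = experiment_root / "log" / "nnimanager.log"
dispatcher_log = experiment_root / "log" / "dispatcher.log"

print(nnimanager_log)
print(dispatcher_log)
```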

### Trial Root Directory

Usually in the webUI, you can click the `+` to the left of every trial to expand it and see that trial's log path.

There is also a directory named `trials` under the experiment root directory, which stores all the trials.
Every trial has a unique ID as its directory name. In that directory, a file named `stderr` records the trial's errors, and another named `trial.log` records the trial's log.
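
A small helper like the following sketch can scan the `trials` directory for trials whose `stderr` is non-empty (the experiment ID is hypothetical):

```python
from pathlib import Path

def trials_with_errors(experiment_root: Path) -> list:
    """Return the IDs of trials whose stderr file is non-empty."""
    trials_dir = experiment_root / "trials"
    failed = []
    if trials_dir.is_dir():
        for trial in sorted(trials_dir.iterdir()):
            stderr_file = trial / "stderr"
            # A non-empty stderr usually means the trial code raised an error.
            if stderr_file.is_file() and stderr_file.stat().st_size > 0:
                failed.append(trial.name)
    return failed

# Hypothetical experiment ID; replace with your own.
print(trials_with_errors(Path.home() / "nni-experiments" / "AbCdEfGh"))
```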

## Different kinds of errors

There are different kinds of errors, but they can be divided into three categories based on severity. When NNI fails, check each part sequentially.

Generally, if the webUI starts successfully, the `Status` field in the `Overview` tab serves as a possible indicator of what kind of error happened. Otherwise, you should check manually.

### **NNI** Fails

This is the most serious kind of error. When it happens, the whole experiment fails and no trials run. Usually it is related to an installation problem.

When this happens, check `nnictl`'s error output file `stderr` (i.e., `nnictl log stderr`) and then the `nnimanager` log for errors.


### **Dispatcher** Fails

Usually, for new users of NNI, a dispatcher failure means that the tuner failed. You can check the dispatcher's log to see what happened. For built-in tuners, a common error is an invalid search space (an unsupported search-space type, or an inconsistency between the initialization args in the configuration file and the actual tuner's \_\_init\_\_ function args).

Take the latter situation as an example. If you write a customized tuner whose \_\_init\_\_ function has an argument called `optimize_mode` that you do not provide in your configuration file, NNI will fail to run your tuner, so the experiment fails. You can see errors in the webUI like:

![](../../img/dispatcher_error.jpg)

Here we can see it is a dispatcher error. So we can check dispatcher's log, which might look like:

```
[2019-02-19 19:36:45] DEBUG (nni.main/MainThread) START
[2019-02-19 19:36:47] ERROR (nni.main/MainThread) __init__() missing 1 required positional arguments: 'optimize_mode'
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/nni/__main__.py", line 202, in <module>
main()
File "/usr/lib/python3.7/site-packages/nni/__main__.py", line 164, in main
args.tuner_args)
File "/usr/lib/python3.7/site-packages/nni/__main__.py", line 81, in create_customized_class_instance
instance = class_constructor(**class_args)
TypeError: __init__() missing 1 required positional arguments: 'optimize_mode'.
```
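
The mismatch can be reproduced outside NNI in a few lines (the tuner class here is a stand-in for illustration, not NNI code):

```python
# Stand-in for a customized tuner whose __init__ requires `optimize_mode`.
class MyCustomTuner:
    def __init__(self, optimize_mode):
        self.optimize_mode = optimize_mode

# NNI instantiates the tuner with the args taken from the configuration
# file; here `optimize_mode` is missing, as in the scenario above.
class_args = {}

try:
    tuner = MyCustomTuner(**class_args)
except TypeError as err:
    # The same TypeError that appears in dispatcher.log.
    error_message = str(err)
    print(error_message)
```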

### **Trial** Fails

In this situation, NNI can still run and create new trials.

It means your trial code (which is run by NNI) fails. This kind of error is strongly related to your trial code, so check the trial's log to fix any errors shown there.

A common example is running the MNIST example without installing TensorFlow. In that case there is an `ImportError` (TensorFlow is imported by the trial code but not installed), so every trial fails.

![](../../img/trial_error.jpg)

As shown, every trial has a log path, where you can find the trial's log and stderr.

In addition to experiment-level debugging, NNI also provides the capability to debug a single trial without starting the entire experiment. Refer to [standalone mode](../TrialExample/Trials#standalone-mode-for-debugging) for more information about debugging a single trial.
51 changes: 29 additions & 22 deletions docs/en_US/_templates/index.html
@@ -79,25 +79,25 @@ <h1 class="title">NNI capabilities in a glance</h1>
</tr>
<!-- Frameworks & Libraries -->
<tr>
<td>Frameworks & Libraries</td>
<td class="inline">
<ul>
<td class="align-td">Frameworks & Libraries</td>
<td class="inline core-td">
<ul class="core">
<li><b>Supported Frameworks</b></li>
<li>PyTorch</li>
<li>Keras</li>
<li>TensorFlow</li>
<li>MXNet</li>
<li>Caffe2</li>
<a href="{{ pathto('SupportedFramework_Library') }}">More...</a><br />
<li><a href="{{ pathto('SupportedFramework_Library') }}">More...</a><br /></li>
</ul>
<ul>
<ul class="core">
<li><b>Supported Libraries</b></li>
<li>Scikit-learn</li>
<li>XGBoost</li>
<li>LightGBM</li>
<a href="{{ pathto('SupportedFramework_Library') }}">More...</a><br />
</ul>
<ul>
<ul class="core">
<li><b>Examples</b></li>
<li><a href="https://github.com/microsoft/nni/tree/master/examples/trials/mnist-pytorch">MNIST-pytorch
</li>
@@ -111,7 +111,8 @@ <h1 class="title">NNI capabilities in a glance</h1>
<li><a href="{{ pathto('TrialExample/Cifar10Examples') }}">Cifar10-pytorch</a></li>
<li><a href="{{ pathto('TrialExample/SklearnExamples') }}">Scikit-learn</a></li>
<li><a href="{{ pathto('TrialExample/EfficientNet') }}">EfficientNet</a></li>
<a href="{{ pathto('SupportedFramework_Library') }}">More...</a><br />
<li><a href="{{ pathto('TrialExample/OpEvoExamples') }}">Kernel Tuning</a></li>
<li><a href="{{ pathto('SupportedFramework_Library') }}">More...</a><br /></li>
</ul>
</td>
<td>
@@ -125,8 +126,8 @@ <h1 class="title">NNI capabilities in a glance</h1>
<!-- algorithms -->
<tr>
<td class="title">Algorithms</td>
<td class="inline">
<ul>
<td class="inline core-td">
<ul class="core">
<li><a href="{{ pathto('Tuner/BuiltinTuner') }}"><b>Hyperparameter Tuning</b></a></li>
<li>Exhaustive search</li>
<li><a href="{{ pathto('Tuner/BuiltinTuner') }}">Random Search</a></li>
@@ -136,6 +137,7 @@ <h1 class="title">NNI capabilities in a glance</h1>
<li><a href="{{ pathto('Tuner/BuiltinTuner') }}">Naïve Evolution</a></li>
<li><a href="{{ pathto('Tuner/BuiltinTuner') }}">Anneal</a></li>
<li><a href="{{ pathto('Tuner/BuiltinTuner') }}">Hyperband</a></li>
<li><a href="{{ pathto('Tuner/BuiltinTuner') }}">PBT</a></li>
<li>Bayesian optimization</li>
<li><a href="{{ pathto('Tuner/BuiltinTuner') }}">BOHB</a></li>
<li><a href="{{ pathto('Tuner/BuiltinTuner') }}">TPE</a></li>
@@ -145,7 +147,7 @@ <h1 class="title">NNI capabilities in a glance</h1>
<li>RL Based</li>
<li><a href="{{ pathto('Tuner/BuiltinTuner') }}">PPO Tuner</a> </li>
</ul>
<ul>
<ul class="core">
<li><a href="{{ pathto('NAS/Overview') }}"><b>Neural Architecture Search</b></a></li>
<li><a href="{{ pathto('NAS/ENAS') }}">ENAS</a></li>
<li><a href="{{ pathto('NAS/DARTS') }}">DARTS</a></li>
@@ -156,17 +158,21 @@ <h1 class="title">NNI capabilities in a glance</h1>
<li><a href="{{ pathto('Tuner/NetworkmorphismTuner') }}">Network Morphism</a> </li>
<li><a href="{{ pathto('NAS/TextNAS') }}">TextNAS</a> </li>
</ul>
<ul>
<ul class="core">
<li><a href="{{ pathto('Compressor/Overview') }}"><b>Model Compression</b></a></li>
<li><b>Pruning</b></li>
<li>Pruning</li>
<li><a href="{{ pathto('Compressor/Pruner') }}">AGP Pruner</a></li>
<li><a href="{{ pathto('Compressor/Pruner') }}">Slim Pruner</a></li>
<li><a href="{{ pathto('Compressor/Pruner') }}">FPGM Pruner</a></li>
<li><b>Quantization</b></li>
<li><a href="{{ pathto('Compressor/Pruner') }}">NetAdapt Pruner</a></li>
<li><a href="{{ pathto('Compressor/Pruner') }}">SimulatedAnnealing Pruner</a></li>
<li><a href="{{ pathto('Compressor/Pruner') }}">ADMM Pruner</a></li>
<li><a href="{{ pathto('Compressor/Pruner') }}">AutoCompress Pruner</a></li>
<li>Quantization</li>
<li><a href="{{ pathto('Compressor/Quantizer') }}">QAT Quantizer</a></li>
<li><a href="{{ pathto('Compressor/Quantizer') }}">DoReFa Quantizer</a></li>
</ul>
<ul>
<ul class="core">
<li><a href="{{ pathto('FeatureEngineering/Overview') }}"><b>Feature Engineering (Beta)</b></a>
</li>
<li><a href="{{ pathto('FeatureEngineering/GradientFeatureSelector') }}">GradientFeatureSelector</a>
@@ -181,18 +187,19 @@ <h1 class="title">NNI capabilities in a glance</h1>
<ul>
<li><a href="{{ pathto('Tuner/CustomizeTuner') }}">CustomizeTuner</a></li>
<li><a href="{{ pathto('Assessor/CustomizeAssessor') }}">CustomizeAssessor</a></li>
<!-- <li><a href="{{ pathto('Tutorial/InstallCustomizedAlgos') }}">Install Customized Algorithms as -->
<!-- Builtin Tuners/Assessors/Advisors</a></li> -->
<li><a href="{{ pathto('Tutorial/InstallCustomizedAlgos') }}">Install Customized Algorithms as
Builtin Tuners/Assessors/Advisors</a></li>
</ul>
</td>
</tr>
<!-- training Services -->
<tr>
<td>Training Services</td>
<td>
<ul><b>Local Machine</b></ul>
<ul><b>Remote Servers</b></ul>
<ul>
<td class="align-td">Training Services</td>
<td class="core-td">
<ul class="core"><li><a href="{{ pathto('TrainingService/LocalMode') }}"><b>Local Machine</b></a></li></ul>
<ul class="core"><li><a href="{{ pathto('TrainingService/RemoteMachineMode') }}"><b>Remote Servers</b></a></li></ul>
<ul class="core"><li><a href="{{ pathto('TrainingService/AMLMode') }}"><b>AML(Azure Machine Learning)</b></a></li></ul>
<ul class="core">
<li><b>Kubernetes based services</b></li>
<li><a href="{{ pathto('TrainingService/PaiMode') }}">OpenPAI</a></li>
<li><a href="{{ pathto('TrainingService/KubeflowMode') }}">Kubeflow</a></li>
@@ -255,7 +262,7 @@ <h2 class="second-title">Verify installation</h2>
<ul>
<li>
<p>Download the examples via clone the source code.</p>
<div class="command">git clone -b v{{ release }} https://github.com/Microsoft/nni.git</div>
<div class="command">git clone -b {{ release }} https://github.com/Microsoft/nni.git</div>
</li>
<li>
<p>Run the MNIST example.</p>