diff --git a/docs/en_US/ExperimentConfig.md b/docs/en_US/ExperimentConfig.md
index 39dcae37da..4fb73518b6 100644
--- a/docs/en_US/ExperimentConfig.md
+++ b/docs/en_US/ExperimentConfig.md
@@ -4,10 +4,10 @@ A config file is needed when create an experiment, the path of the config file i
The config file is written in YAML format, and need to be written correctly.
This document describes the rule to write config file, and will provide some examples and templates.
-- [Experiment config reference](#experiment-config-reference)
- - [Template](#template)
- - [Configuration spec](#configuration-spec)
- - [Examples](#examples)
+- [Experiment config reference](#Experiment-config-reference)
+ - [Template](#Template)
+ - [Configuration spec](#Configuration-spec)
+ - [Examples](#Examples)
## Template
@@ -128,12 +128,14 @@ machineList:
* Description
__authorName__ is the name of the author who create the experiment.
- TBD: add default value
+
+ TBD: add default value
* __experimentName__
* Description
__experimentName__ is the name of the experiment created.
+
TBD: add default value
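+
+    For example (an illustrative value only):
+
+    ```yaml
+    experimentName: example_mnist
+    ```
+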
* __trialConcurrency__
@@ -153,7 +155,7 @@ machineList:
* __versionCheck__
* Description
- NNI will check the version of nniManager process and the version of trialKeeper in remote, pai and kubernetes platform. If you want to disable version check, you could set versionCheck be false.
+    NNI will check the version of the nniManager process and the version of trialKeeper on the remote, PAI and Kubernetes platforms. If you want to disable the version check, you could set versionCheck to false.
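+
+    For example, a minimal sketch that disables the check in the experiment config:
+
+    ```yaml
+    versionCheck: false
+    ```
+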
* __debug__
* Description
diff --git a/docs/en_US/FrameworkControllerMode.md b/docs/en_US/FrameworkControllerMode.md
index a889ae663e..041245d510 100644
--- a/docs/en_US/FrameworkControllerMode.md
+++ b/docs/en_US/FrameworkControllerMode.md
@@ -1,36 +1,42 @@
-**Run an Experiment on FrameworkController**
+# Run an Experiment on FrameworkController
+
-===
-NNI supports running experiment using [FrameworkController](https://github.com/Microsoft/frameworkcontroller), called frameworkcontroller mode. FrameworkController is built to orchestrate all kinds of applications on Kubernetes, you don't need to install kubeflow for specific deeplearning framework like tf-operator or pytorch-operator. Now you can use frameworkcontroller as the training service to run NNI experiment.
+NNI supports running an experiment using [FrameworkController](https://github.com/Microsoft/frameworkcontroller), called frameworkcontroller mode. FrameworkController is built to orchestrate all kinds of applications on Kubernetes, so you don't need to install Kubeflow or a framework-specific operator like tf-operator or pytorch-operator. You can now use FrameworkController as the training service to run an NNI experiment.
## Prerequisite for on-premises Kubernetes Service
+
1. A **Kubernetes** cluster using Kubernetes 1.8 or later. Follow this [guideline](https://kubernetes.io/docs/setup/) to set up Kubernetes
-2. Prepare a **kubeconfig** file, which will be used by NNI to interact with your kubernetes API server. By default, NNI manager will use $(HOME)/.kube/config as kubeconfig file's path. You can also specify other kubeconfig files by setting the **KUBECONFIG** environment variable. Refer this [guideline]( https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig) to learn more about kubeconfig.
+2. Prepare a **kubeconfig** file, which will be used by NNI to interact with your Kubernetes API server. By default, NNI manager uses $(HOME)/.kube/config as the kubeconfig file's path. You can also specify another kubeconfig file by setting the **KUBECONFIG** environment variable. Refer to this [guideline](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig) to learn more about kubeconfig.
3. If your NNI trial job needs GPU resource, you should follow this [guideline](https://github.com/NVIDIA/k8s-device-plugin) to configure **Nvidia device plugin for Kubernetes**.
4. Prepare a **NFS server** and export a general purpose mount (we recommend to map your NFS server path in `root_squash option`, otherwise permission issue may raise when NNI copies files to NFS. Refer this [page](https://linux.die.net/man/5/exports) to learn what root_squash option is), or **Azure File Storage**.
5. Install **NFS client** on the machine where you install NNI and run nnictl to create experiment. Run this command to install NFSv4 client:
- ```
+
+ ```bash
apt-get install nfs-common
```
6. Install **NNI**, follow the install guide [here](QuickStart.md).
## Prerequisite for Azure Kubernetes Service
-1. NNI support kubeflow based on Azure Kubernetes Service, follow the [guideline](https://azure.microsoft.com/en-us/services/kubernetes-service/) to set up Azure Kubernetes Service.
+
+1. NNI supports Kubeflow based on Azure Kubernetes Service; follow the [guideline](https://azure.microsoft.com/en-us/services/kubernetes-service/) to set up Azure Kubernetes Service.
2. Install [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) and __kubectl__. Use `az login` to set azure account, and connect kubectl client to AKS, refer this [guideline](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough#connect-to-the-cluster).
3. Follow the [guideline](https://docs.microsoft.com/en-us/azure/storage/common/storage-quickstart-create-account?tabs=portal) to create azure file storage account. If you use Azure Kubernetes Service, NNI need Azure Storage Service to store code files and the output files.
4. To access Azure storage service, NNI need the access key of the storage account, and NNI uses [Azure Key Vault](https://azure.microsoft.com/en-us/services/key-vault/) Service to protect your private key. Set up Azure Key Vault Service, add a secret to Key Vault to store the access key of Azure storage account. Follow this [guideline](https://docs.microsoft.com/en-us/azure/key-vault/quick-create-cli) to store the access key.
+## Set up FrameworkController
-## Set up FrameworkController
-Follow the [guideline](https://github.com/Microsoft/frameworkcontroller/tree/master/example/run) to set up frameworkcontroller in the kubernetes cluster, NNI supports frameworkcontroller by the statefulset mode.
+Follow the [guideline](https://github.com/Microsoft/frameworkcontroller/tree/master/example/run) to set up FrameworkController in the Kubernetes cluster. NNI supports FrameworkController in StatefulSet mode.
## Design
-Please refer the design of [kubeflow training service](./KubeflowMode.md), frameworkcontroller training service pipeline is similar.
+
+Please refer to the design of the [Kubeflow training service](./KubeflowMode.md); the FrameworkController training service pipeline is similar.
## Example
-The frameworkcontroller config file format is:
-```
+The FrameworkController config file format is:
+
+```yaml
authorName: default
experimentName: example_mnist
trialConcurrency: 1
@@ -71,8 +77,10 @@ frameworkcontrollerConfig:
server: {your_nfs_server}
path: {your_nfs_server_exported_path}
```
+
If you use Azure Kubernetes Service, you should set `frameworkcontrollerConfig` in your config YAML file as follows:
-```
+
+```yaml
frameworkcontrollerConfig:
storage: azureStorage
keyVault:
@@ -82,22 +90,27 @@ frameworkcontrollerConfig:
accountName: {your_storage_account_name}
azureShare: {your_azure_share_name}
```
+
Note: You should explicitly set `trainingServicePlatform: frameworkcontroller` in NNI config YAML file if you want to start experiment in frameworkcontrollerConfig mode.
-The trial's config format for NNI frameworkcontroller mode is a simple version of frameworkcontroller's offical config, you could refer the [tensorflow example of frameworkcontroller](https://github.com/Microsoft/frameworkcontroller/blob/master/example/framework/scenario/tensorflow/cpu/tensorflowdistributedtrainingwithcpu.yaml) for deep understanding.
+The trial's config format for NNI frameworkcontroller mode is a simplified version of FrameworkController's official config. You can refer to the [TensorFlow example of FrameworkController](https://github.com/Microsoft/frameworkcontroller/blob/master/example/framework/scenario/tensorflow/cpu/tensorflowdistributedtrainingwithcpu.yaml) for a deeper understanding.
+
Trial configuration in frameworkcontroller mode have the following configuration keys:
-* taskRoles: you could set multiple task roles in config file, and each task role is a basic unit to process in kubernetes cluster.
- * name: the name of task role specified, like "worker", "ps", "master".
- * taskNum: the replica number of the task role.
- * command: the users' command to be used in the container.
- * gpuNum: the number of gpu device used in container.
- * cpuNum: the number of cpu device used in container.
- * memoryMB: the memory limitaion to be specified in container.
- * image: the docker image used to create pod and run the program.
- * frameworkAttemptCompletionPolicy: the policy to run framework, please refer the [user-manual](https://github.com/Microsoft/frameworkcontroller/blob/master/doc/user-manual.md#frameworkattemptcompletionpolicy) to get the specific information. Users could use the policy to control the pod, for example, if ps does not stop, only worker stops, this completionpolicy could helps stop ps.
+
+* taskRoles: you could set multiple task roles in the config file, and each task role is a basic unit to process in the Kubernetes cluster (see the sketch after this list).
+  * name: the name of the task role, like "worker", "ps", "master".
+  * taskNum: the replica number of the task role.
+  * command: the user's command to be run in the container.
+  * gpuNum: the number of GPU devices used in the container.
+  * cpuNum: the number of CPU devices used in the container.
+  * memoryMB: the memory limitation to be specified for the container.
+  * image: the Docker image used to create the pod and run the program.
+  * frameworkAttemptCompletionPolicy: the policy for completing the framework; please refer to the [user-manual](https://github.com/Microsoft/frameworkcontroller/blob/master/doc/user-manual.md#frameworkattemptcompletionpolicy) for specific information. Users could use the policy to control the pods: for example, if the worker stops but ps does not, the completion policy could help stop ps.
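+
+For illustration, a trial section with a single task role might look like the following sketch (values are placeholders, not a definitive template):
+
+```yaml
+trial:
+  codeDir: .
+  taskRoles:
+    - name: worker
+      taskNum: 1
+      command: python3 mnist.py
+      gpuNum: 1
+      cpuNum: 1
+      memoryMB: 8192
+      image: msranni/nni
+      frameworkAttemptCompletionPolicy:
+        minFailedTaskCount: 1
+        minSucceededTaskCount: 1
+```
+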
## How to run example
-After you prepare a config file, you could run your experiment by nnictl. The way to start an experiment on frameworkcontroller is similar to kubeflow, please refer the [document](./KubeflowMode.md) for more information.
+
+After you prepare a config file, you can run your experiment with nnictl. The way to start an experiment on FrameworkController is similar to Kubeflow; please refer to the [document](./KubeflowMode.md) for more information.
## version check
-NNI support version check feature in since version 0.6, [refer](PaiMode.md)
\ No newline at end of file
+
+NNI has supported the version check feature since version 0.6; for details, please [refer](PaiMode.md).
diff --git a/docs/en_US/HowToImplementTrainingService.md b/docs/en_US/HowToImplementTrainingService.md
index 37fd531101..3178ab79d3 100644
--- a/docs/en_US/HowToImplementTrainingService.md
+++ b/docs/en_US/HowToImplementTrainingService.md
@@ -2,12 +2,13 @@
===
## Overview
-TrainingService is a module related to platform management and job schedule in NNI. TrainingService is designed to be easily implemented, we define an abstract class TrainingService as the parent class of all kinds of TrainignService, users just need to inherit the parent class and complete their own clild class if they want to implement customized TrainingService.
+TrainingService is a module related to platform management and job scheduling in NNI. TrainingService is designed to be easily implemented: we define an abstract class TrainingService as the parent class of all kinds of TrainingService, and users just need to inherit the parent class and complete their own child class if they want to implement a customized TrainingService.
## System architecture

The brief system architecture of NNI is shown in the picture. NNIManager is the core management module of system, in charge of calling TrainingService to manage trial jobs and the communication between different modules. Dispatcher is a message processing center responsible for message dispatch. TrainingService is a module to manage trial jobs, it communicates with nniManager module, and has different instance according to different training platform. For the time being, NNI supports local platfrom, [remote platfrom](RemoteMachineMode.md), [PAI platfrom](PaiMode.md), [kubeflow platform](KubeflowMode.md) and [FrameworkController platfrom](FrameworkController.md).
+
In this document, we introduce the brief design of TrainingService. If users want to add a new TrainingService instance, they just need to complete a child class to implement TrainingService, don't need to understand the code detail of NNIManager, Dispatcher or other modules.
## Folder structure of code
@@ -63,6 +64,7 @@ abstract class TrainingService {
The parent class of TrainingService has a few abstract functions, users need to inherit the parent class and implement all of these abstract functions.
__setClusterMetadata(key: string, value: string)__
+
ClusterMetadata is the data related to platform details, for examples, the ClusterMetadata defined in remote machine server is:
```
export class RemoteMachineMeta {
@@ -91,9 +93,11 @@ export class RemoteMachineMeta {
The metadata includes the host address, the username or other configuration related to the platform. Users need to define their own metadata format, and set the metadata instance in this function. This function is called before the experiment is started to set the configuration of remote machines.
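+
+For instance, for the remote platform this metadata comes from the machineList section of the experiment config. An illustrative sketch (values are placeholders; keys follow the experiment config reference):
+
+```yaml
+machineList:
+  - ip: 10.10.10.10
+    port: 22
+    username: test
+    passwd: test
+```
+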
__getClusterMetadata(key: string)__
+
This function will return the metadata value according to the values, it could be left empty if users don't need to use it.
__submitTrialJob(form: JobApplicationForm)__
+
SubmitTrialJob is a function to submit new trial jobs, users should generate a job instance in TrialJobDetail type. TrialJobDetail is defined as follow:
```
interface TrialJobDetail {
@@ -113,37 +117,49 @@ interface TrialJobDetail {
According to different kinds of implementation, users could put the job detail into a job queue, and keep fetching the job from the queue and start preparing and running them. Or they could finish preparing and running process in this function, and return job detail after the submit work.
__cancelTrialJob(trialJobId: string, isEarlyStopped?: boolean)__
+
If this function is called, the trial started by the platform should be canceled. Different kind of platform has diffenent methods to calcel a running job, this function should be implemented according to specific platform.
__updateTrialJob(trialJobId: string, form: JobApplicationForm)__
+
This function is called to update the trial job's status, trial job's status should be detected according to different platform, and be updated to `RUNNING`, `SUCCEED`, `FAILED` etc.
__getTrialJob(trialJobId: string)__
+
This function returns a trialJob detail instance according to trialJobId.
__listTrialJobs()__
+
Users should put all of trial job detail information into a list, and return the list.
__addTrialJobMetricListener(listener: (metric: TrialJobMetric) => void)__
+
NNI will hold an EventEmitter to get job metrics, if there is new job metrics detected, the EventEmitter will be triggered. Users should start the EventEmitter in this function.
__removeTrialJobMetricListener(listener: (metric: TrialJobMetric) => void)__
+
Close the EventEmitter.
__run()__
+
The run() function is a main loop function in TrainingService, users could set a while loop to execute their logic code, and finish executing them when the experiment is stopped.
__cleanUp()__
+
This function is called to clean up the environment when a experiment is stopped. Users should do the platform-related cleaning operation in this function.
## TrialKeeper tool
NNI offers a TrialKeeper tool to help maintaining trial jobs. Users can find the source code in `nni/tools/nni_trial_tool`. If users want to run trial jobs in cloud platform, this tool will be a fine choice to help keeping trial running in the platform.
+
+The running architecture of TrialKeeper is shown as follows:
+

+
When users submit a trial job to cloud platform, they should wrap their trial command into TrialKeeper, and start a TrialKeeper process in cloud platform. Notice that TrialKeeper use restful server to communicate with TrainingService, users should start a restful server in local machine to receive metrics sent from TrialKeeper. The source code about restful server could be found in `nni/src/nni_manager/training_service/common/clusterJobRestServer.ts`.
## Reference
For more information about how to debug, please [refer](HowToDebug.md).
-The guide line of how to contribute, please [refer](Contributing.md).
+
+For guidelines on how to contribute, please [refer](Contributing.md).
diff --git a/docs/en_US/KubeflowMode.md b/docs/en_US/KubeflowMode.md
index 4fd9fe99c9..e1dc68dcea 100644
--- a/docs/en_US/KubeflowMode.md
+++ b/docs/en_US/KubeflowMode.md
@@ -1,11 +1,14 @@
-**Run an Experiment on Kubeflow**
+# Run an Experiment on Kubeflow
-===
-Now NNI supports running experiment on [Kubeflow](https://github.com/kubeflow/kubeflow), called kubeflow mode. Before starting to use NNI kubeflow mode, you should have a kubernetes cluster, either on-prem or [Azure Kubernetes Service(AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/), a Ubuntu machine on which [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) is setup to connect to your kubernetes cluster. If you are not familiar with kubernetes, [here](https://kubernetes.io/docs/tutorials/kubernetes-basics/) is a good start. In kubeflow mode, your trial program will run as kubeflow job in kubernetes cluster.
+
+Now NNI supports running experiments on [Kubeflow](https://github.com/kubeflow/kubeflow), called kubeflow mode. Before starting to use NNI kubeflow mode, you should have a Kubernetes cluster, either on-premises or [Azure Kubernetes Service(AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/), and an Ubuntu machine on which [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) is set up to connect to your Kubernetes cluster. If you are not familiar with Kubernetes, [here](https://kubernetes.io/docs/tutorials/kubernetes-basics/) is a good start. In kubeflow mode, your trial program will run as a Kubeflow job in your Kubernetes cluster.
## Prerequisite for on-premises Kubernetes Service
+
1. A **Kubernetes** cluster using Kubernetes 1.8 or later. Follow this [guideline](https://kubernetes.io/docs/setup/) to set up Kubernetes
-2. Download, set up, and deploy **Kubelow** to your Kubernetes cluster. Follow this [guideline](https://www.kubeflow.org/docs/started/getting-started/) to set up Kubeflow
-3. Prepare a **kubeconfig** file, which will be used by NNI to interact with your kubernetes API server. By default, NNI manager will use $(HOME)/.kube/config as kubeconfig file's path. You can also specify other kubeconfig files by setting the **KUBECONFIG** environment variable. Refer this [guideline]( https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig) to learn more about kubeconfig.
+2. Download, set up, and deploy **Kubeflow** to your Kubernetes cluster. Follow this [guideline](https://www.kubeflow.org/docs/started/getting-started/) to set up Kubeflow.
+3. Prepare a **kubeconfig** file, which will be used by NNI to interact with your Kubernetes API server. By default, NNI manager uses $(HOME)/.kube/config as the kubeconfig file's path. You can also specify another kubeconfig file by setting the **KUBECONFIG** environment variable. Refer to this [guideline](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig) to learn more about kubeconfig.
4. If your NNI trial job needs GPU resource, you should follow this [guideline](https://github.com/NVIDIA/k8s-device-plugin) to configure **Nvidia device plugin for Kubernetes**.
5. Prepare a **NFS server** and export a general purpose mount (we recommend to map your NFS server path in `root_squash option`, otherwise permission issue may raise when NNI copy files to NFS. Refer this [page](https://linux.die.net/man/5/exports) to learn what root_squash option is), or **Azure File Storage**.
6. Install **NFS client** on the machine where you install NNI and run nnictl to create experiment. Run this command to install NFSv4 client:
@@ -16,37 +19,47 @@ Now NNI supports running experiment on [Kubeflow](https://github.com/kubeflow/ku
7. Install **NNI**, follow the install guide [here](QuickStart.md).
## Prerequisite for Azure Kubernetes Service
-1. NNI support kubeflow based on Azure Kubernetes Service, follow the [guideline](https://azure.microsoft.com/en-us/services/kubernetes-service/) to set up Azure Kubernetes Service.
+
+1. NNI supports Kubeflow based on Azure Kubernetes Service; follow the [guideline](https://azure.microsoft.com/en-us/services/kubernetes-service/) to set up Azure Kubernetes Service.
2. Install [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) and __kubectl__. Use `az login` to set azure account, and connect kubectl client to AKS, refer this [guideline](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough#connect-to-the-cluster).
-3. Deploy kubeflow on Azure Kubernetes Service, follow the [guideline](https://www.kubeflow.org/docs/started/getting-started/).
+3. Deploy Kubeflow on Azure Kubernetes Service; follow the [guideline](https://www.kubeflow.org/docs/started/getting-started/).
4. Follow the [guideline](https://docs.microsoft.com/en-us/azure/storage/common/storage-quickstart-create-account?tabs=portal) to create azure file storage account. If you use Azure Kubernetes Service, NNI need Azure Storage Service to store code files and the output files.
5. To access Azure storage service, NNI need the access key of the storage account, and NNI use [Azure Key Vault](https://azure.microsoft.com/en-us/services/key-vault/) Service to protect your private key. Set up Azure Key Vault Service, add a secret to Key Vault to store the access key of Azure storage account. Follow this [guideline](https://docs.microsoft.com/en-us/azure/key-vault/quick-create-cli) to store the access key.
## Design
+

-Kubeflow training service instantiates a kubernetes rest client to interact with your K8s cluster's API server.
+Kubeflow training service instantiates a Kubernetes REST client to interact with your K8s cluster's API server.
-For each trial, we will upload all the files in your local codeDir path (configured in nni_config.yml) together with NNI generated files like parameter.cfg into a storage volumn. Right now we support two kinds of storage volumns: [nfs](https://en.wikipedia.org/wiki/Network_File_System) and [azure file storage](https://azure.microsoft.com/en-us/services/storage/files/), you should configure the storage volumn in NNI config YAML file. After files are prepared, Kubeflow training service will call K8S rest API to create kubeflow jobs ([tf-operator](https://github.com/kubeflow/tf-operator) job or [pytorch-operator](https://github.com/kubeflow/pytorch-operator) job) in K8S, and mount your storage volumn into the job's pod. Output files of kubeflow job, like stdout, stderr, trial.log or model files, will also be copied back to the storage volumn. NNI will show the storage volumn's URL for each trial in WebUI, to allow user browse the log files and job's output files.
+For each trial, we will upload all the files in your local codeDir path (configured in nni_config.yml), together with NNI generated files like parameter.cfg, into a storage volume. Right now we support two kinds of storage volumes: [NFS](https://en.wikipedia.org/wiki/Network_File_System) and [Azure file storage](https://azure.microsoft.com/en-us/services/storage/files/); you should configure the storage volume in the NNI config YAML file. After files are prepared, Kubeflow training service will call the K8S REST API to create Kubeflow jobs ([tf-operator](https://github.com/kubeflow/tf-operator) job or [pytorch-operator](https://github.com/kubeflow/pytorch-operator) job) in K8S, and mount your storage volume into the job's pod. Output files of the Kubeflow job, like stdout, stderr, trial.log or model files, will also be copied back to the storage volume. NNI will show the storage volume's URL for each trial in the WebUI, to allow users to browse the log files and the job's output files.
## Supported operator
-NNI only support tf-operator and pytorch-operator of kubeflow, other operators is not tested.
+
+NNI only supports tf-operator and pytorch-operator of Kubeflow; other operators are not tested.
Users could set operator type in config file.
The setting of tf-operator:
-```
+
+```yaml
kubeflowConfig:
operator: tf-operator
```
+
The setting of pytorch-operator:
-```
+
+```yaml
kubeflowConfig:
operator: pytorch-operator
```
+
+If you want to use tf-operator, you can set `ps` and `worker` in the trial config; if you want to use pytorch-operator, you can set `master` and `worker` in the trial config, as sketched below.
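+
+Schematically, the two layouts differ only in the role names (role bodies elided here; see the complete config examples in the "Run an experiment" section below):
+
+```yaml
+# tf-operator: ps + worker roles
+trial:
+  ps:
+    # replicas, command, gpuNum, cpuNum, memoryMB, image, ...
+  worker:
+    # ...
+---
+# pytorch-operator: master + worker roles
+trial:
+  master:
+    # ...
+  worker:
+    # ...
+```
+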
## Supported storage type
+
NNI support NFS and Azure Storage to store the code and output files, users could set storage type in config file and set the corresponding config.
+
The setting for NFS storage are as follows:
-```
+
+```yaml
kubeflowConfig:
storage: nfs
nfs:
@@ -55,8 +68,10 @@ kubeflowConfig:
# Your NFS server export path, like /var/nfs/nni
path: {your_nfs_server_export_path}
```
+
If you use Azure storage, you should set `kubeflowConfig` in your config YAML file as follows:
-```
+
+```yaml
kubeflowConfig:
storage: azureStorage
keyVault:
@@ -67,10 +82,11 @@ kubeflowConfig:
azureShare: {your_azure_share_name}
```
-
## Run an experiment
-Use `examples/trials/mnist` as an example. This is a tensorflow job, and use tf-operator of kubeflow. The NNI config YAML file's content is like:
-```
+
+Use `examples/trials/mnist` as an example. This is a TensorFlow job that uses tf-operator of Kubeflow. The NNI config YAML file's content is as follows:
+
+```yaml
authorName: default
experimentName: example_mnist
trialConcurrency: 2
@@ -122,7 +138,8 @@ kubeflowConfig:
Note: You should explicitly set `trainingServicePlatform: kubeflow` in NNI config YAML file if you want to start experiment in kubeflow mode.
If you want to run PyTorch jobs, you could set your config files as follow:
-```
+
+```yaml
authorName: default
experimentName: example_mnist_distributed_pytorch
trialConcurrency: 1
@@ -166,37 +183,41 @@ kubeflowConfig:
```
Trial configuration in kubeflow mode have the following configuration keys:
+
* codeDir
- * code directory, where you put training code and config files
+ * code directory, where you put training code and config files
* worker (required). This config section is used to configure tensorflow worker role
- * replicas
- * Required key. Should be positive number depends on how many replication your want to run for tensorflow worker role.
- * command
- * Required key. Command to launch your trial job, like ```python mnist.py```
- * memoryMB
- * Required key. Should be positive number based on your trial program's memory requirement
- * cpuNum
- * gpuNum
- * image
- * Required key. In kubeflow mode, your trial program will be scheduled by Kubernetes to run in [Pod](https://kubernetes.io/docs/concepts/workloads/pods/pod/). This key is used to specify the Docker image used to create the pod where your trail program will run.
- * We already build a docker image [msranni/nni](https://hub.docker.com/r/msranni/nni/) on [Docker Hub](https://hub.docker.com/). It contains NNI python packages, Node modules and javascript artifact files required to start experiment, and all of NNI dependencies. The docker file used to build this image can be found at [here](https://github.com/Microsoft/nni/tree/master/deployment/docker/Dockerfile). You can either use this image directly in your config file, or build your own image based on it.
- * apiVersion
- * Required key. The API version of your kubeflow.
-* ps (optional). This config section is used to configure tensorflow parameter server role.
-* master(optional). This config section is used to configure pytorch parameter server role.
+  * replicas
+    * Required key. Should be a positive number that depends on how many replicas you want to run for the TensorFlow worker role.
+  * command
+    * Required key. The command to launch your trial job, like ```python mnist.py```
+  * memoryMB
+    * Required key. Should be a positive number based on your trial program's memory requirement
+  * cpuNum
+  * gpuNum
+  * image
+    * Required key. In kubeflow mode, your trial program will be scheduled by Kubernetes to run in a [Pod](https://kubernetes.io/docs/concepts/workloads/pods/pod/). This key is used to specify the Docker image used to create the pod where your trial program will run.
+    * We have already built a Docker image [msranni/nni](https://hub.docker.com/r/msranni/nni/) on [Docker Hub](https://hub.docker.com/). It contains NNI Python packages, Node modules and JavaScript artifact files required to start an experiment, and all of NNI's dependencies. The Dockerfile used to build this image can be found [here](https://github.com/Microsoft/nni/tree/master/deployment/docker/Dockerfile). You can either use this image directly in your config file, or build your own image based on it.
+  * apiVersion
+    * Required key. The API version of your Kubeflow.
+* ps (optional). This config section is used to configure the TensorFlow parameter server role.
+* master (optional). This config section is used to configure the PyTorch master role.
Once complete to fill NNI experiment config file and save (for example, save as exp_kubeflow.yml), then run the following command
-```
+
+```bash
nnictl create --config exp_kubeflow.yml
```
+
to start the experiment in kubeflow mode. NNI will create Kubeflow tfjob or pytorchjob for each trial, and the job name format is something like `nni_exp_{experiment_id}_trial_{trial_id}`.
-You can see the kubeflow tfjob created by NNI in your Kubernetes dashboard.
+You can see the Kubeflow tfjob created by NNI in your Kubernetes dashboard.
Notice: In kubeflow mode, NNIManager will start a rest server and listen on a port which is your NNI WebUI's port plus 1. For example, if your WebUI port is `8080`, the rest server will listen on `8081`, to receive metrics from trial job running in Kubernetes. So you should `enable 8081` TCP port in your firewall rule to allow incoming traffic.
-Once a trial job is completed, you can goto NNI WebUI's overview page (like http://localhost:8080/oview) to check trial's information.
+Once a trial job is completed, you can go to NNI WebUI's overview page (like http://localhost:8080/oview) to check trial's information.
## version check
+
NNI support version check feature in since version 0.6, [refer](PaiMode.md)
-Any problems when using NNI in kubeflow mode, please create issues on [NNI Github repo](https://github.com/Microsoft/nni).
+If you have any problems when using NNI in Kubeflow mode, please create an issue on the [NNI GitHub repo](https://github.com/Microsoft/nni).
diff --git a/docs/en_US/Release.md b/docs/en_US/Release.md
index bdc84812e6..da491065d7 100644
--- a/docs/en_US/Release.md
+++ b/docs/en_US/Release.md
@@ -1,32 +1,36 @@
# ChangeLog
-# Release 0.8 - 6/4/2019
-## Major Features
-* [Support NNI on Windows for PAI/Remote mode]
- * NNI running on windows for remote mode
- * NNI running on windows for PAI mode
-* [Advanced features for using GPU]
- * Run multiple trial jobs on the same GPU for local and remote mode
- * Run trial jobs on the GPU running non-NNI jobs
-* [Kubeflow v1beta2 operator]
- * Support Kubeflow TFJob/PyTorchJob v1beta2
+## Release 0.8 - 6/4/2019
+
+### Major Features
+
+* Support NNI on Windows for OpenPAI/Remote mode
+  * NNI running on Windows for remote mode
+  * NNI running on Windows for OpenPAI mode
+* Advanced features for using GPU
+ * Run multiple trial jobs on the same GPU for local and remote mode
+ * Run trial jobs on the GPU running non-NNI jobs
+* Kubeflow v1beta2 operator
+ * Support Kubeflow TFJob/PyTorchJob v1beta2
* [General NAS programming interface](./GeneralNasInterfaces.md)
- * Provide NAS programming interface for users to easily express their neural architecture search space through NNI annotation
- * Provide a new command `nnictl trial codegen` for debugging the NAS code
- * Tutorial of NAS programming interface, example of NAS on mnist, customized random tuner for NAS
-* [Support resume tuner/advisor's state for experiment resume]
- * For experiment resume, tuner/advisor will be resumed by replaying finished trial data
-* [Web Portal]
- * Improve the design of copying trial's parameters
- * Support 'randint' type in hyper-parameter graph
- * Use should ComponentUpdate to avoid unnecessary render
-## Bug fix and other changes
-* [Bug fix that `nnictl update` has inconsistent command styles]
-* [Support import data for SMAC tuner]
-* [Bug fix that experiment state transition from ERROR back to RUNNING]
-* [Fix bug of table entries]
-* [Nested search space refinement]
-* [Refine 'randint' type and support lower bound]
+ * Provide NAS programming interface for users to easily express their neural architecture search space through NNI annotation
+ * Provide a new command `nnictl trial codegen` for debugging the NAS code
+ * Tutorial of NAS programming interface, example of NAS on MNIST, customized random tuner for NAS
+* Support resume tuner/advisor's state for experiment resume
+  * For experiment resume, tuner/advisor will be resumed by replaying finished trial data
+* Web Portal
+ * Improve the design of copying trial's parameters
+ * Support 'randint' type in hyper-parameter graph
+  * Use shouldComponentUpdate to avoid unnecessary render
+
+### Bug fix and other changes
+
+* Bug fix that `nnictl update` has inconsistent command styles
+* Support import data for SMAC tuner
+* Bug fix that experiment state transition from ERROR back to RUNNING
+* Fix bug of table entries
+* Nested search space refinement
+* Refine 'randint' type and support lower bound
* [Comparison of different hyper-parameter tuning algorithm](./CommunitySharings/HpoComparision.md)
* [Comparison of NAS algorithm](./CommunitySharings/NasComparision.md)
* [NNI practice on Recommenders](./CommunitySharings/NniPracticeSharing/RecommendersSvd.md)
@@ -56,7 +60,7 @@
* Unable to kill all python threads after nnictl stop in async dispatcher mode
* nnictl --version does not work with make dev-install
-* All trail jobs status stays on 'waiting' for long time on PAI platform
+* All trial jobs' status stays on 'waiting' for a long time on the OpenPAI platform
## Release 0.6 - 4/2/2019
@@ -73,7 +77,7 @@
### Bug fix
-* [Add shmMB config key for PAI](https://github.com/Microsoft/nni/issues/842)
+* [Add shmMB config key for OpenPAI](https://github.com/Microsoft/nni/issues/842)
* Fix the bug that doesn't show any result if metrics is dict
* Fix the number calculation issue for float types in hyperband
* Fix a bug in the search space conversion in SMAC tuner
diff --git a/docs/en_US/SklearnExamples.md b/docs/en_US/SklearnExamples.md
index 8fa1cc15df..b525c2f7b9 100644
--- a/docs/en_US/SklearnExamples.md
+++ b/docs/en_US/SklearnExamples.md
@@ -1,11 +1,13 @@
# Scikit-learn in NNI
-[Scikit-learn](https://github.com/scikit-learn/scikit-learn) is a pupular meachine learning tool for data mining and data analysis. It supports many kinds of meachine learning models like LinearRegression, LogisticRegression, DecisionTree, SVM etc. How to make the use of scikit-learn more efficiency is a valuable topic.
+[Scikit-learn](https://github.com/scikit-learn/scikit-learn) is a popular machine learning tool for data mining and data analysis. It supports many kinds of machine learning models like LinearRegression, LogisticRegression, DecisionTree, SVM etc. How to make the use of scikit-learn more efficient is a valuable topic.
+
NNI supports many kinds of tuning algorithms to search the best models and/or hyper-parameters for scikit-learn, and support many kinds of environments like local machine, remote servers and cloud.
## 1. How to run the example
-To start using NNI, you should install the nni package, and use the command line tool `nnictl` to start an experiment. For more information about installation and preparing for the environment, please [refer](QuickStart.md).
+To start using NNI, you should install the NNI package, and use the command line tool `nnictl` to start an experiment. For more information about installation and preparing the environment, please refer to [QuickStart](QuickStart.md).
+
After you installed NNI, you could enter the corresponding folder and start the experiment using following commands:
```bash
@@ -17,16 +19,18 @@ nnictl create --config ./config.yml
### 2.1 classification
This example uses the dataset of digits, which is made up of 1797 8x8 images, and each image is a hand-written digit, the goal is to classify these images into 10 classes.
+
In this example, we use SVC as the model, and choose some parameters of this model, including `"C", "keral", "degree", "gamma" and "coef0"`. For more information of these parameters, please [refer](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html).
### 2.2 regression
-This example uses the Boston Housing Dataset, this dataset consists of price of houses in various places in Boston and the information such as Crime (CRIM), areas of non-retail business in the town (INDUS), the age of people who own the house (AGE) etc to predict the house price of boston.
+This example uses the Boston Housing Dataset, which consists of house prices in various places in Boston, together with information such as crime rate (CRIM), areas of non-retail business in the town (INDUS), the age of people who own the house (AGE), etc., used to predict Boston house prices.
+
In this example, we tune different kinds of regression models including `"LinearRegression", "SVR", "KNeighborsRegressor", "DecisionTreeRegressor"` and some parameters like `"svr_kernel", "knr_weights"`. You could get more details about these models from [here](https://scikit-learn.org/stable/supervised_learning.html#supervised-learning).
-## 3. How to write sklearn code using nni
+## 3. How to write scikit-learn code using NNI
-It is easy to use nni in your sklearn code, there are only a few steps.
+It is easy to use NNI in your scikit-learn code; there are only a few steps.
* __step 1__
@@ -51,8 +55,10 @@ It is easy to use nni in your sklearn code, there are only a few steps.
Then you could read these values as a dict from your python code, please get into the step 2.
* __step 2__
+
At the beginning of your python code, you should `import nni` to insure the packages works normally.
- First, you should use `nni.get_next_parameter()` function to get your parameters given by nni. Then you could use these parameters to update your code.
+
+  First, you should use the `nni.get_next_parameter()` function to get the parameters given by NNI. Then you could use these parameters to update your code.
For example, if you define your search_space.json like following format:
```json
@@ -79,5 +85,7 @@ It is easy to use nni in your sklearn code, there are only a few steps.
Then you could use these variables to write your scikit-learn code.
* __step 3__
- After you finished your training, you could get your own score of the model, like your percision, recall or MSE etc. NNI needs your score to tuner algorithms and generate next group of parameters, please report the score back to NNI and start next trial job.
- You just need to use `nni.report_final_result(score)` to communitate with NNI after you process your scikit-learn code. Or if you have multiple scores in the steps of training, you could also report them back to NNI using `nni.report_intemediate_result(score)`. Note, you may not report intemediate result of your job, but you must report back your final result.
+
+  After you finish your training, you will get your model's score, e.g. precision, recall, or MSE. NNI needs your score for the tuner algorithms to generate the next group of parameters, so please report the score back to NNI, and then the next trial job will start.
+
+  You just need to use `nni.report_final_result(score)` to communicate with NNI after your scikit-learn code finishes. Or, if you have multiple scores during the steps of training, you could also report them back to NNI using `nni.report_intermediate_result(score)`. Note that reporting intermediate results of your job is optional, but you must report back your final result.
diff --git a/docs/en_US/Trials.md b/docs/en_US/Trials.md
index 017c48d222..614199f603 100644
--- a/docs/en_US/Trials.md
+++ b/docs/en_US/Trials.md
@@ -33,7 +33,9 @@ Refer to [SearchSpaceSpec.md](./SearchSpaceSpec.md) to learn more about search s
```python
RECEIVED_PARAMS = nni.get_next_parameter()
```
+
`RECEIVED_PARAMS` is an object, for example:
+
`{"conv_size": 2, "hidden_size": 124, "learning_rate": 0.0307, "dropout_rate": 0.2029}`.
- Report metric data periodically (optional)
@@ -41,6 +43,7 @@ RECEIVED_PARAMS = nni.get_next_parameter()
```python
nni.report_intermediate_result(metrics)
```
+
`metrics` could be any python object. If users use NNI built-in tuner/assessor, `metrics` can only have two formats: 1) a number e.g., float, int, 2) a dict object that has a key named `default` whose value is a number. This `metrics` is reported to [assessor](BuiltinAssessors.md). Usually, `metrics` could be periodically evaluated loss or accuracy.
- Report performance of the configuration
@@ -63,7 +66,6 @@ You can refer to [here](ExperimentConfig.md) for more information about how to s
*Please refer to [here](https://nni.readthedocs.io/en/latest/sdk_reference.html) for more APIs (e.g., `nni.get_sequence_id()`) provided by NNI.
-
## NNI Python Annotation
@@ -125,7 +127,6 @@ In the YAML configure file, you need to set *useAnnotation* to true to enable NN
useAnnotation: true
```
-
## Where are my trials?
### Local Mode
@@ -133,7 +134,8 @@ useAnnotation: true
In NNI, every trial has a dedicated directory for them to output their own data. In each trial, an environment variable called `NNI_OUTPUT_DIR` is exported. Under this directory, you could find each trial's code, data and other possible log. In addition, each trial's log (including stdout) will be re-directed to a file named `trial.log` under that directory.
If NNI Annotation is used, trial's converted code is in another temporary directory. You can check that in a file named `run.sh` under the directory indicated by `NNI_OUTPUT_DIR`. The second line (i.e., the `cd` command) of this file will change directory to the actual directory where code is located. Below is an example of `run.sh`:
-```shell
+
+```bash
#!/bin/bash
cd /tmp/user_name/nni/annotation/tmpzj0h72x6 #This is the actual directory
export NNI_PLATFORM=local
@@ -149,7 +151,7 @@ echo $? `date +%s%3N` >/home/user_name/nni/experiments/$experiment_id$/trials/$t
### Other Modes
-When runing trials on other platform like remote machine or PAI, the environment variable `NNI_OUTPUT_DIR` only refers to the output directory of the trial, while trial code and `run.sh` might not be there. However, the `trial.log` will be transmitted back to local machine in trial's directory, which defaults to `~/nni/experiments/$experiment_id$/trials/$trial_id$/`
+When running trials on other platforms like remote machine or OpenPAI, the environment variable `NNI_OUTPUT_DIR` only refers to the output directory of the trial, while the trial code and `run.sh` might not be there. However, the `trial.log` will be transmitted back to the local machine in the trial's directory, which defaults to `~/nni/experiments/$experiment_id$/trials/$trial_id$/`
For more information, please refer to [HowToDebug](HowToDebug.md)