diff --git a/README.md b/README.md
index a6e6254fa8..52f50182a7 100644
--- a/README.md
+++ b/README.md
@@ -1,47 +1,41 @@
# Neural Network Intelligence
+[](https://github.com/Microsoft/nni/blob/master/LICENSE)
[](https://msrasrg.visualstudio.com/NNIOpenSource/_build/latest?definitionId=6)
[](https://github.com/Microsoft/nni/issues?q=is%3Aissue+is%3Aopen)
[](https://github.com/Microsoft/nni/issues?q=is%3Aissue+is%3Aopen+label%3Abug)
[](https://github.com/Microsoft/nni/pulls?q=is%3Apr+is%3Aopen)
[](https://github.com/Microsoft/nni/releases)
-NNI (Neural Network Intelligence) is a toolkit to help users run automated machine learning experiments.
-The tool dispatches and runs trial jobs that generated by tuning algorithms to search the best neural architecture and/or hyper-parameters in different environments (e.g. local machine, remote servers and cloud).
+NNI (Neural Network Intelligence) is a toolkit to help users run automated machine learning (AutoML) experiments.
+The tool dispatches and runs trial jobs generated by tuning algorithms to search for the best neural architecture and/or hyper-parameters in different environments, such as a local machine, remote servers, and the cloud.
## **Who should consider using NNI**
-* You want to try different AutoML algorithms for your training code (model) at local
-* You want to run AutoML trial jobs in different environments to speed up search (e.g. remote servers and cloud)
-* As a researcher and data scientist, you want to implement your own AutoML algorithms and compare with other algorithms
-* As a ML platform owner, you want to support AutoML in your platform
+* Those who want to try different AutoML algorithms for their training code (model) on their local machine.
+* Those who want to run AutoML trial jobs in different environments (e.g. remote servers and the cloud) to speed up the search.
+* Researchers and data scientists who want to implement their own AutoML algorithms and compare them with other algorithms.
+* ML platform owners who want to support AutoML in their platforms.
## **Install & Verify**
-**Install through source code**
+**pip install**
* Currently we only support Linux; Ubuntu 16.04 or higher is tested and supported. Simply run the following `pip install` in an environment that has `python >= 3.5`, `git` and `wget`.
-```bash
- git clone -b v0.3 https://github.com/Microsoft/nni.git
- cd nni
- source install.sh
+```bash
+python3 -m pip install -v --user git+https://github.com/Microsoft/nni.git@v0.2
+source ~/.bashrc
```
-**Verify install**
+**Verify install**
* The following example is an experiment built on TensorFlow; make sure you have `TensorFlow` installed before running it.
-* And download the examples via clone the source code
-```bash
- cd ~
- git clone -b v0.3 https://github.com/Microsoft/nni.git
-```
-* Then, run the mnist example
```bash
nnictl create --config ~/nni/examples/trials/mnist/config.yml
```
-* In the command terminal, waiting for the message `Info: Start experiment success!` which indicates your experiment had been successfully started. You are able to explore the experiment using the `Web UI url`.
+* Wait for the message `Info: Start experiment success!` in the command line. This message indicates that your experiment has been successfully started. You can explore the experiment using the `Web UI url`.
```diff
Info: Checking experiment...
...
@@ -49,7 +43,7 @@ The tool dispatches and runs trial jobs that generated by tuning algorithms to s
Info: Checking web ui...
Info: Starting web ui...
Info: Starting web ui success!
-+ Info: Web UI url: http://127.0.0.1:8080 http://10.172.141.6:8080
++ Info: Web UI url: http://yourlocalhost:8080 http://youripaddress:8080
+ Info: Start experiment success! The experiment id is LrNK4hae, and the restful server post is 51188.
```
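For reference, the file passed to `--config` is a YAML experiment description. Below is a hypothetical minimal sketch of the local-machine case; the field names follow the bundled mnist example but may vary between NNI versions, so treat `examples/trials/mnist/config.yml` in the cloned repository as authoritative:

```yaml
authorName: your_name
experimentName: mnist_example
trialConcurrency: 1           # how many trials run in parallel
maxExecDuration: 1h
maxTrialNum: 10
trainingServicePlatform: local
searchSpacePath: ~/nni/examples/trials/mnist/search_space.json
useAnnotation: false
tuner:
  builtinTunerName: TPE
trial:
  command: python3 mnist.py   # command that launches one trial
  codeDir: ~/nni/examples/trials/mnist
  gpuNum: 0
```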
@@ -80,9 +74,15 @@ The tool dispatches and runs trial jobs that generated by tuning algorithms to s
* [Serve NNI as a capability of an ML Platform] - *coming soon*
## **Contribute**
-This project welcomes contributions and suggestions, we are constructing the contribution guidelines, stay tuned =).
+This project welcomes contributions and suggestions; we use [GitHub issues](https://github.com/Microsoft/nni/issues) for tracking requests and bugs.
+
+Issues with the **good first issue** label are simple, easy-to-start issues that we recommend new contributors begin with.
+
+To set up the environment for NNI development, refer to the instructions: [Set up NNI developer environment](docs/SetupNNIDeveloperEnvironment.md)
+
+Before you start coding, review and get familiar with the NNI code contribution guidelines: [Contributing](docs/CONTRIBUTING.md)
-We use [GitHub issues](https://github.com/Microsoft/nni/issues) for tracking requests and bugs.
+We are working on the instructions for [How to Debug](docs/HowToDebug.md); you are also welcome to contribute questions or suggestions in this area.
## **License**
The entire codebase is under the [MIT license](https://github.com/Microsoft/nni/blob/master/LICENSE)
diff --git a/docs/GetStarted.md b/docs/GetStarted.md
index f01ae914f6..e18fe4f4fc 100644
--- a/docs/GetStarted.md
+++ b/docs/GetStarted.md
@@ -36,7 +36,7 @@ An experiment is to run multiple trial jobs, each trial job tries a configuratio
This command will be filled in the yaml configure file below. Please refer to [here]() for how to write your own trial.
-**Prepare tuner**: NNI supports several popular automl algorithms, including Random Search, Tree of Parzen Estimators (TPE), Evolution algorithm etc. Users can write their own tuner (refer to [here](CustomizedTuner.md)), but for simplicity, here we choose a tuner provided by NNI as below:
+**Prepare tuner**: NNI supports several popular AutoML algorithms, including Random Search, Tree of Parzen Estimators (TPE), Evolution algorithm, etc. Users can write their own tuner (refer to [here](howto_2_CustomizedTuner.md)), but for simplicity, here we choose a tuner provided by NNI as below:
tuner:
builtinTunerName: TPE
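If the chosen builtin tuner takes arguments, they are typically passed in a `classArgs` block. A hypothetical example for TPE (the exact argument names may differ by version; check the tuner's documentation):

```yaml
tuner:
  builtinTunerName: TPE
  classArgs:
    optimize_mode: maximize   # whether higher or lower metrics are better
```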
diff --git a/docs/InstallNNI_Ubuntu.md b/docs/InstallNNI_Ubuntu.md
index bc5de3089a..fc3f64f798 100644
--- a/docs/InstallNNI_Ubuntu.md
+++ b/docs/InstallNNI_Ubuntu.md
@@ -9,20 +9,19 @@
wget
Python `pip` should also be correctly installed. You can check with `which pip` or `pip -V` on Linux.
-
- * Note: we don't support virtual environment in current releases.
* __Install NNI through pip__
- python3 -m pip install --user nni-pkg
+ pip3 install -v --user git+https://github.com/Microsoft/nni.git@v0.2
+ source ~/.bashrc
* __Install NNI through source code__
- git clone -b v0.3 https://github.com/Microsoft/nni.git
+ git clone -b v0.2 https://github.com/Microsoft/nni.git
cd nni
+ chmod +x install.sh
source install.sh
-
## Further reading
* [Overview](Overview.md)
* [Use command line tool nnictl](NNICTLDOC.md)
diff --git a/docs/HowToContribute.md b/docs/SetupNNIDeveloperEnvironment.md
similarity index 91%
rename from docs/HowToContribute.md
rename to docs/SetupNNIDeveloperEnvironment.md
index 34384df4e0..a9a9cb9d01 100644
--- a/docs/HowToContribute.md
+++ b/docs/SetupNNIDeveloperEnvironment.md
@@ -1,4 +1,4 @@
-**How to contribute**
+**Set up NNI developer environment**
===
## Best practices for debugging NNI source code
@@ -51,4 +51,4 @@ After you change some code, just use **step 4** to rebuild your code, then the c
---
Finally, we wish you a wonderful day.
-For more contribution guidelines on making PR's or issues to NNI source code, you can refer to our [CONTRIBUTING](./docs/CONTRIBUTING.md) document.
+For more contribution guidelines on making PRs or issues to NNI source code, you can refer to our [CONTRIBUTING](./CONTRIBUTING.md) document.
diff --git a/docs/howto_2_CustomizedTuner.md b/docs/howto_2_CustomizedTuner.md
index f086f37eed..5b8c65a04b 100644
--- a/docs/howto_2_CustomizedTuner.md
+++ b/docs/howto_2_CustomizedTuner.md
@@ -1,4 +1,4 @@
-# Customized Tuner for Experts
+# **How To** - Customize Your Own Tuner
*A tuner receives results from trials as a metric to evaluate the performance of a specific parameter/architecture configuration, and sends the next hyper-parameter or architecture configuration to a trial.*
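The loop described above can be sketched as a small, self-contained class. This is a hypothetical illustration, not NNI's actual base class: the real interface (class names, method signatures) is documented in the rest of this guide and may differ.

```python
import random

class RandomSearchTuner:
    """Sketch of a tuner: hands out configurations, records trial metrics."""

    def __init__(self, search_space, maximize=True):
        # search_space maps a parameter name to its candidate values,
        # e.g. {"lr": [0.1, 0.01], "layers": [2, 4]}
        self.search_space = search_space
        self.maximize = maximize
        self.results = {}  # parameter_id -> (params, metric)

    def generate_parameters(self, parameter_id):
        # Called when a new trial starts: return one configuration to try.
        params = {name: random.choice(choices)
                  for name, choices in self.search_space.items()}
        self.results[parameter_id] = (params, None)
        return params

    def receive_trial_result(self, parameter_id, metric):
        # Called when a trial finishes: record the metric for its configuration.
        params, _ = self.results[parameter_id]
        self.results[parameter_id] = (params, metric)

    def best_parameters(self):
        # Return the configuration with the best recorded metric so far.
        finished = [(p, m) for p, m in self.results.values() if m is not None]
        pick = max if self.maximize else min
        return pick(finished, key=lambda pm: pm[1])[0]
```

A real customized tuner plugs the same two callbacks (`generate_parameters` and `receive_trial_result`) into NNI's tuner base class, as shown in the steps below.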
diff --git a/examples/trials/ga_squad/README.md b/examples/trials/ga_squad/README.md
index 08024d07be..35b830e08b 100644
--- a/examples/trials/ga_squad/README.md
+++ b/examples/trials/ga_squad/README.md
@@ -90,11 +90,11 @@ The evolution-algorithm based architecture for question answering has two differ
The trial contains many different files, functions and classes. Here we give only a brief introduction to most of them:
-* `attention.py` contains an implementaion for attention mechanism in Tensorflow.
+* `attention.py` contains an implementation of the attention mechanism in TensorFlow.
* `data.py` contains functions for data preprocessing.
* `evaluate.py` contains the evaluation script.
* `graph.py` contains the definition of the computation graph.
-* `rnn.py` contains an implementaion for GRU in Tensorflow.
+* `rnn.py` contains an implementation of GRU in TensorFlow.
* `train_model.py` is a wrapper for the whole question answering model.
Among those files, `trial.py` and `graph_to_tf.py` are special.