From c3cd9fe7ffdb20d07f7562592774fe071b235de3 Mon Sep 17 00:00:00 2001
From: QuanluZhang
Date: Mon, 9 Mar 2020 13:33:48 +0800
Subject: [PATCH] fix index of readthedocs (#2129)

---
 docs/en_US/AdvancedFeature/MultiPhase.md | 2 ++
 docs/en_US/Compressor/Framework.md       | 2 ++
 docs/en_US/NAS/Advanced.md               | 2 +-
 docs/en_US/NAS/NasGuide.md               | 4 ++--
 docs/en_US/hpo_advanced.rst              | 2 ++
 5 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/docs/en_US/AdvancedFeature/MultiPhase.md b/docs/en_US/AdvancedFeature/MultiPhase.md
index 4cdb3a7a99..a2f62b700b 100644
--- a/docs/en_US/AdvancedFeature/MultiPhase.md
+++ b/docs/en_US/AdvancedFeature/MultiPhase.md
@@ -1,3 +1,5 @@
+# Multi-phase
+
 ## What is multi-phase experiment
 
 Typically each trial job gets a single configuration (e.g., hyperparameters) from tuner, tries this configuration and reports result, then exits. But sometimes a trial job may wants to request multiple configurations from tuner. We find this is a very compelling feature. For example:

diff --git a/docs/en_US/Compressor/Framework.md b/docs/en_US/Compressor/Framework.md
index 9922c019a8..9914e7968b 100644
--- a/docs/en_US/Compressor/Framework.md
+++ b/docs/en_US/Compressor/Framework.md
@@ -1,3 +1,5 @@
+# Design Doc
+
 ## Overview
 
 The model compression framework has two main components: `pruner` and `module wrapper`.

diff --git a/docs/en_US/NAS/Advanced.md b/docs/en_US/NAS/Advanced.md
index fb051f896b..7761f2133d 100644
--- a/docs/en_US/NAS/Advanced.md
+++ b/docs/en_US/NAS/Advanced.md
@@ -31,7 +31,7 @@ To demonstrate what mutators are for, we need to know how one-shot NAS normally works.
 
 Finally, mutators provide a method called `mutator.export()` that export a dict with architectures to the model. Note that currently this dict this a mapping from keys of mutables to tensors of selection. So in order to dump to json, users need to convert the tensors explicitly into python list.
 
-Meanwhile, NNI provides some useful tools so that users can implement trainers more easily. See [Trainers](./NasReference.md#trainers) for details.
+Meanwhile, NNI provides some useful tools so that users can implement trainers more easily. See [Trainers](./NasReference.md) for details.
 
 ## Implement New Mutators
 

diff --git a/docs/en_US/NAS/NasGuide.md b/docs/en_US/NAS/NasGuide.md
index e5d1d3461a..07147ea41b 100644
--- a/docs/en_US/NAS/NasGuide.md
+++ b/docs/en_US/NAS/NasGuide.md
@@ -71,7 +71,7 @@ Input choice can be thought of as a callable module that receives a list of tens
 
 `LayerChoice` and `InputChoice` are both **mutables**. Mutable means "changeable". As opposed to traditional deep learning layers/modules which have fixed operation type once defined, models with mutables are essentially a series of possible models.
 
-Users can specify a **key** for each mutable. By default NNI will assign one for you that is globally unique, but in case users want to share choices (for example, there are two `LayerChoice` with the same candidate operations, and you want them to have the same choice, i.e., if first one chooses the i-th op, the second one also chooses the i-th op), they can give them the same key. The key marks the identity for this choice, and will be used in dumped checkpoint. So if you want to increase the readability of your exported architecture, manually assigning keys to each mutable would be a good idea. For advanced usage on mutables, see [Mutables](./NasReference.md#mutables).
+Users can specify a **key** for each mutable. By default NNI will assign one for you that is globally unique, but in case users want to share choices (for example, there are two `LayerChoice` with the same candidate operations, and you want them to have the same choice, i.e., if first one chooses the i-th op, the second one also chooses the i-th op), they can give them the same key. The key marks the identity for this choice, and will be used in dumped checkpoint. So if you want to increase the readability of your exported architecture, manually assigning keys to each mutable would be a good idea. For advanced usage on mutables, see [Mutables](./NasReference.md).
 
 ## Use a Search Algorithm
 
@@ -163,7 +163,7 @@ The JSON is simply a mapping from mutable keys to one-hot or multi-hot represent
 }
 ```
 
-After applying, the model is then fixed and ready for a final training. The model works as a single model, although it might contain more parameters than expected. This comes with pros and cons. The good side is, you can directly load the checkpoint dumped from supernet during search phase and start retrain from there. However, this is also a model with redundant parameters, which may cause problems when trying to count the number of parameters in model. For deeper reasons and possible workaround, see [Trainers](./NasReference.md#retrain).
+After applying, the model is then fixed and ready for a final training. The model works as a single model, although it might contain more parameters than expected. This comes with pros and cons. The good side is, you can directly load the checkpoint dumped from supernet during search phase and start retrain from there. However, this is also a model with redundant parameters, which may cause problems when trying to count the number of parameters in model. For deeper reasons and possible workaround, see [Trainers](./NasReference.md).
 
 Also refer to [DARTS](./DARTS.md) for example code of retraining.
 

diff --git a/docs/en_US/hpo_advanced.rst b/docs/en_US/hpo_advanced.rst
index 0befd608fc..1e04a7c722 100644
--- a/docs/en_US/hpo_advanced.rst
+++ b/docs/en_US/hpo_advanced.rst
@@ -2,6 +2,8 @@ Advanced Features
 =================
 
 .. toctree::
+   :maxdepth: 2
+
    Enable Multi-phase
    Write a New Tuner
    Write a New Assessor