
fix index of readthedocs (#2129)
QuanluZhang authored Mar 9, 2020
1 parent 31afa42 commit c3cd9fe
Showing 5 changed files with 9 additions and 3 deletions.
docs/en_US/AdvancedFeature/MultiPhase.md (2 additions, 0 deletions)
@@ -1,3 +1,5 @@
+# Multi-phase
+
## What is multi-phase experiment

Typically, each trial job gets a single configuration (e.g., hyperparameters) from the tuner, tries this configuration, reports the result, and then exits. But sometimes a trial job may want to request multiple configurations from the tuner. We find this a very compelling feature. For example:
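
For context, the multi-phase idea can be sketched roughly as follows, assuming NNI's trial API (`nni.get_next_parameter`, `nni.report_final_result`); `train_and_evaluate` and the empty-result exhaustion check are illustrative assumptions, not part of the documented API:

```python
import nni


def train_and_evaluate(params):
    # hypothetical user function: train a model with `params` and return
    # its accuracy; a constant stands in for real training here
    return 0.9


# A multi-phase trial asks the tuner for several configurations within a
# single trial job instead of exiting after the first one.
for _ in range(3):
    params = nni.get_next_parameter()  # request the next configuration
    if not params:  # assumption: an empty result means the tuner is done
        break
    accuracy = train_and_evaluate(params)
    nni.report_final_result(accuracy)  # report the result for this configuration
```
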
docs/en_US/Compressor/Framework.md (2 additions, 0 deletions)
@@ -1,3 +1,5 @@
+# Design Doc
+
## Overview
The model compression framework has two main components: `pruner` and `module wrapper`.

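Conceptually, and only as an illustrative sketch rather than NNI's actual implementation, a `module wrapper` can be pictured as a module that holds a mask maintained by the `pruner` and applies it during the wrapped layer's forward pass:

```python
import torch


class ModuleWrapper(torch.nn.Module):
    # illustrative sketch of a module wrapper: it owns a pruning mask
    # (updated by the pruner) and applies it on every forward pass
    def __init__(self, module):
        super().__init__()
        self.module = module
        # ones mean "keep this weight", zeros mean "prune it"
        self.register_buffer('weight_mask', torch.ones_like(module.weight))

    def forward(self, x):
        # zero out pruned weights before the wrapped layer runs
        self.module.weight.data.mul_(self.weight_mask)
        return self.module(x)


wrapped = ModuleWrapper(torch.nn.Linear(8, 4))
print(wrapped(torch.randn(2, 8)).shape)  # torch.Size([2, 4])
```
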
docs/en_US/NAS/Advanced.md (1 addition, 1 deletion)
@@ -31,7 +31,7 @@ To demonstrate what mutators are for, we need to know how one-shot NAS normally

Finally, mutators provide a method called `mutator.export()` that exports a dict describing the architecture chosen for the model. Note that currently this dict is a mapping from mutable keys to selection tensors, so in order to dump it to JSON, users need to convert the tensors explicitly into Python lists.
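
As an illustrative sketch of that conversion (assuming `mutator` is an already-trained mutator object and the selections are PyTorch tensors):

```python
import json

# mutator.export() maps mutable keys to selection tensors, which are not
# JSON-serializable; convert each tensor to a plain Python list first
exported = mutator.export()
serializable = {key: tensor.cpu().tolist() for key, tensor in exported.items()}
with open('architecture.json', 'w') as f:
    json.dump(serializable, f, indent=2)
```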

-Meanwhile, NNI provides some useful tools so that users can implement trainers more easily. See [Trainers](./NasReference.md#trainers) for details.
+Meanwhile, NNI provides some useful tools so that users can implement trainers more easily. See [Trainers](./NasReference.md) for details.

## Implement New Mutators

docs/en_US/NAS/NasGuide.md (2 additions, 2 deletions)
@@ -71,7 +71,7 @@ Input choice can be thought of as a callable module that receives a list of tensors

`LayerChoice` and `InputChoice` are both **mutables**. Mutable means "changeable". As opposed to traditional deep learning layers/modules, which have a fixed operation type once defined, models with mutables are essentially a series of possible models.

-Users can specify a **key** for each mutable. By default, NNI will assign a globally unique one for you, but if users want to share choices (for example, two `LayerChoice` instances have the same candidate operations and you want them to make the same choice, i.e., if the first one chooses the i-th op, the second one also chooses the i-th op), they can give them the same key. The key marks the identity of this choice and will be used in the dumped checkpoint, so if you want to increase the readability of your exported architecture, manually assigning keys to each mutable is a good idea. For advanced usage of mutables, see [Mutables](./NasReference.md#mutables).
+Users can specify a **key** for each mutable. By default, NNI will assign a globally unique one for you, but if users want to share choices (for example, two `LayerChoice` instances have the same candidate operations and you want them to make the same choice, i.e., if the first one chooses the i-th op, the second one also chooses the i-th op), they can give them the same key. The key marks the identity of this choice and will be used in the dumped checkpoint, so if you want to increase the readability of your exported architecture, manually assigning keys to each mutable is a good idea. For advanced usage of mutables, see [Mutables](./NasReference.md).
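
As an illustration of key sharing, here is a sketch assuming NNI v1.x's `nni.nas.pytorch.mutables` import path:

```python
import torch.nn as nn
from nni.nas.pytorch.mutables import LayerChoice


def candidates():
    # the same candidate operations for both choices
    return [nn.Conv2d(16, 16, 3, padding=1), nn.Conv2d(16, 16, 5, padding=2)]


# Both mutables carry the key 'shared_conv', so the search algorithm must
# pick the same index for both: if the first selects the 3x3 conv, so does
# the second.
first = LayerChoice(candidates(), key='shared_conv')
second = LayerChoice(candidates(), key='shared_conv')
```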

## Use a Search Algorithm

@@ -163,7 +163,7 @@ The JSON is simply a mapping from mutable keys to one-hot or multi-hot representations
}
```

-After applying, the model is fixed and ready for final training. The model works as a single model, although it might contain more parameters than expected. This comes with pros and cons. On the good side, you can directly load the checkpoint dumped from the supernet during the search phase and start retraining from there. However, the model also has redundant parameters, which may cause problems when trying to count the number of parameters in the model. For deeper reasons and possible workarounds, see [Trainers](./NasReference.md#retrain).
+After applying, the model is fixed and ready for final training. The model works as a single model, although it might contain more parameters than expected. This comes with pros and cons. On the good side, you can directly load the checkpoint dumped from the supernet during the search phase and start retraining from there. However, the model also has redundant parameters, which may cause problems when trying to count the number of parameters in the model. For deeper reasons and possible workarounds, see [Trainers](./NasReference.md).
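
As a sketch of that retraining entry point, assuming NNI v1.x's `apply_fixed_architecture` helper (`Net` is a hypothetical supernet class):

```python
import torch
from nni.nas.pytorch.fixed import apply_fixed_architecture

model = Net()  # hypothetical: the same model definition used during search
apply_fixed_architecture(model, 'architecture.json')

# From here on, the model behaves as an ordinary fixed network and can be
# retrained with a standard training loop.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
```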

Also refer to [DARTS](./DARTS.md) for example code for retraining.

docs/en_US/hpo_advanced.rst (2 additions, 0 deletions)
@@ -2,6 +2,8 @@ Advanced Features
=================

.. toctree::
+   :maxdepth: 2
+
   Enable Multi-phase <AdvancedFeature/MultiPhase>
   Write a New Tuner <Tuner/CustomizeTuner>
   Write a New Assessor <Assessor/CustomizeAssessor>
Expand Down
