[Refactor]: Refactor the directory structure of docs (#146)
* fix links of docs

* [Docs]: Use shared menu from theme instead

* [Refactor]: Refactor the directory structure of docs

* [Fix]: Fix lint

* [Fix]: Fix __version__.py file link bug

Co-authored-by: fangyixiao18 <fangyx18@hotmail.com>
YuanLiuuuuuu and fangyixiao18 authored Dec 16, 2021
1 parent d6a0ce1 commit febe511
Showing 85 changed files with 268 additions and 442 deletions.
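Most of the additions and deletions below are mechanical link rewrites: targets such as `docs/install.md` move to `docs/en/install.md`, and `docs_zh-CN/...` moves to `docs/zh_cn/...`. As a hypothetical sketch of how such a sweep could be automated (this helper, its name, and the regex are illustrative assumptions, not part of the commit):

```python
import re

# Illustrative only: rewrite Markdown link targets such as (docs/install.md)
# to (docs/en/install.md), while leaving links that already point into a
# locale directory (docs/en/ or docs/zh_cn/) untouched.
_DOC_LINK = re.compile(r"\(docs/(?!en/|zh_cn/)([^)]+)\)")

def relocate_doc_links(markdown: str, locale: str = "en") -> str:
    """Move bare docs/<path> link targets under docs/<locale>/<path>."""
    return _DOC_LINK.sub(lambda m: f"(docs/{locale}/{m.group(1)})", markdown)
```

A real sweep for the Chinese README would also need a second pattern for the old `docs_zh-CN/` prefix.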
6 changes: 6 additions & 0 deletions .gitignore
@@ -124,3 +124,9 @@ benchmarks/detection/output

# Pytorch
*.pth


# readthedocs
docs/zh_cn/_build
src/
docs/en/_build
52 changes: 26 additions & 26 deletions README.md
@@ -46,14 +46,14 @@ This project is released under the [Apache 2.0 license](LICENSE).

MMSelfSup **v0.5.0** was released on 16/12/2021 with a major refactor.

- Please refer to [changelog.md](docs/changelog.md) for details and release history.
+ Please refer to [changelog.md](docs/en/changelog.md) for details and release history.

- Differences between MMSelfSup and OpenSelfSup codebases can be found in [compatibility.md](docs/compatibility.md).
+ Differences between MMSelfSup and OpenSelfSup codebases can be found in [compatibility.md](docs/en/compatibility.md).

## Model Zoo and Benchmark

### Model Zoo
- Please refer to [model_zoo.md](docs/model_zoo.md) for a comprehensive set of pre-trained models and benchmarks.
+ Please refer to [model_zoo.md](docs/en/model_zoo.md) for a comprehensive set of pre-trained models and benchmarks.

Supported algorithms:

@@ -74,36 +74,36 @@ More algorithms are in our plan.

### Benchmark

- | Benchmarks | Setting |
- | -------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
- | ImageNet Linear Classification (Multi-head) | [Goyal2019](http://openaccess.thecvf.com/content_ICCV_2019/papers/Goyal_Scaling_and_Benchmarking_Self-Supervised_Visual_Representation_Learning_ICCV_2019_paper.pdf) |
- | ImageNet Linear Classification (Last) | |
- | ImageNet Semi-Sup Classification | |
- | Places205 Linear Classification (Multi-head) | [Goyal2019](http://openaccess.thecvf.com/content_ICCV_2019/papers/Goyal_Scaling_and_Benchmarking_Self-Supervised_Visual_Representation_Learning_ICCV_2019_paper.pdf) |
- | iNaturalist2018 Linear Classification (Multi-head) | [Goyal2019](http://openaccess.thecvf.com/content_ICCV_2019/papers/Goyal_Scaling_and_Benchmarking_Self-Supervised_Visual_Representation_Learning_ICCV_2019_paper.pdf) |
- | PASCAL VOC07 SVM | [Goyal2019](http://openaccess.thecvf.com/content_ICCV_2019/papers/Goyal_Scaling_and_Benchmarking_Self-Supervised_Visual_Representation_Learning_ICCV_2019_paper.pdf) |
- | PASCAL VOC07 Low-shot SVM | [Goyal2019](http://openaccess.thecvf.com/content_ICCV_2019/papers/Goyal_Scaling_and_Benchmarking_Self-Supervised_Visual_Representation_Learning_ICCV_2019_paper.pdf) |
- | PASCAL VOC07+12 Object Detection | [MoCo](http://openaccess.thecvf.com/content_CVPR_2020/papers/He_Momentum_Contrast_for_Unsupervised_Visual_Representation_Learning_CVPR_2020_paper.pdf) |
- | COCO17 Object Detection | [MoCo](http://openaccess.thecvf.com/content_CVPR_2020/papers/He_Momentum_Contrast_for_Unsupervised_Visual_Representation_Learning_CVPR_2020_paper.pdf) |
- | Cityscapes Segmentation | [MMSeg](configs/benchmarks/mmsegmentation/cityscapes/fcn_r50-d8_769x769_40k_cityscapes.py) |
- | PASCAL VOC12 Aug Segmentation | [MMSeg](configs/benchmarks/mmsegmentation/voc12aug/fcn_r50-d8_512x512_20k_voc12aug.py) |
+ | Benchmarks | Setting |
+ | -------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+ | ImageNet Linear Classification (Multi-head) | [Goyal2019](http://openaccess.thecvf.com/content_ICCV_2019/papers/Goyal_Scaling_and_Benchmarking_Self-Supervised_Visual_Representation_Learning_ICCV_2019_paper.pdf) |
+ | ImageNet Linear Classification (Last) | |
+ | ImageNet Semi-Sup Classification | |
+ | Places205 Linear Classification (Multi-head) | [Goyal2019](http://openaccess.thecvf.com/content_ICCV_2019/papers/Goyal_Scaling_and_Benchmarking_Self-Supervised_Visual_Representation_Learning_ICCV_2019_paper.pdf) |
+ | iNaturalist2018 Linear Classification (Multi-head) | [Goyal2019](http://openaccess.thecvf.com/content_ICCV_2019/papers/Goyal_Scaling_and_Benchmarking_Self-Supervised_Visual_Representation_Learning_ICCV_2019_paper.pdf) |
+ | PASCAL VOC07 SVM | [Goyal2019](http://openaccess.thecvf.com/content_ICCV_2019/papers/Goyal_Scaling_and_Benchmarking_Self-Supervised_Visual_Representation_Learning_ICCV_2019_paper.pdf) |
+ | PASCAL VOC07 Low-shot SVM | [Goyal2019](http://openaccess.thecvf.com/content_ICCV_2019/papers/Goyal_Scaling_and_Benchmarking_Self-Supervised_Visual_Representation_Learning_ICCV_2019_paper.pdf) |
+ | PASCAL VOC07+12 Object Detection | [MoCo](http://openaccess.thecvf.com/content_CVPR_2020/papers/He_Momentum_Contrast_for_Unsupervised_Visual_Representation_Learning_CVPR_2020_paper.pdf) |
+ | COCO17 Object Detection | [MoCo](http://openaccess.thecvf.com/content_CVPR_2020/papers/He_Momentum_Contrast_for_Unsupervised_Visual_Representation_Learning_CVPR_2020_paper.pdf) |
+ | Cityscapes Segmentation | [MMSeg](configs/benchmarks/mmsegmentation/cityscapes/fcn_r50-d8_769x769_40k_cityscapes.py) |
+ | PASCAL VOC12 Aug Segmentation | [MMSeg](configs/benchmarks/mmsegmentation/voc12aug/fcn_r50-d8_512x512_20k_voc12aug.py) |

## Installation

- Please refer to [install.md](docs/install.md) for installation and [prepare_data.md](docs/prepare_data.md) for dataset preparation.
+ Please refer to [install.md](docs/en/install.md) for installation and [prepare_data.md](docs/en/prepare_data.md) for dataset preparation.

## Get Started

- Please see [getting_started.md](docs/getting_started.md) for the basic usage of MMSelfSup.
+ Please see [getting_started.md](docs/en/getting_started.md) for the basic usage of MMSelfSup.

We also provide tutorials for more details:
- - [config](docs/tutorials/0_config.md)
- - [add new dataset](docs/tutorials/1_new_dataset.md)
- - [data pipeline](docs/tutorials/2_data_pipeline.md)
- - [add new module](docs/tutorials/3_new_module.md)
- - [customize schedules](docs/tutorials/4_schedule.md)
- - [customize runtime](docs/tutorials/5_runtime.md)
- - [benchmarks](docs/tutorials/6_benchmarks.md)
+ - [config](docs/en/tutorials/0_config.md)
+ - [add new dataset](docs/en/tutorials/1_new_dataset.md)
+ - [data pipeline](docs/en/tutorials/2_data_pipeline.md)
+ - [add new module](docs/en/tutorials/3_new_module.md)
+ - [customize schedules](docs/en/tutorials/4_schedule.md)
+ - [customize runtime](docs/en/tutorials/5_runtime.md)
+ - [benchmarks](docs/en/tutorials/6_benchmarks.md)

## Citation

@@ -120,7 +120,7 @@ If you use this toolbox or benchmark in your research, please cite this project.

## Contributing

- We appreciate all contributions improving MMSelfSup. Please refer to [CONTRIBUTING.md](docs/community/CONTRIBUTING.md) for more details about the contributing guideline.
+ We appreciate all contributions improving MMSelfSup. Please refer to [CONTRIBUTING.md](docs/en/community/CONTRIBUTING.md) for more details about the contributing guideline.

## Acknowledgement

22 changes: 11 additions & 11 deletions README_zh-CN.md
@@ -44,7 +44,7 @@ MMSelfSup is an open-source self-supervised representation learning toolbox based on PyTorch

### Model Zoo

- Please refer to the [model zoo](docs/model_zoo.md) for our more comprehensive model benchmark results.
+ Please refer to the [model zoo](docs/zh_cn/model_zoo.md) for our more comprehensive model benchmark results.

Supported algorithms:

@@ -81,24 +81,24 @@ MMSelfSup is an open-source self-supervised representation learning toolbox based on PyTorch

## Installation

- Please refer to the [installation guide](docs_zh-CN/install.md) for installation and [data preparation](docs_zh-CN/prepare_data.md) for dataset preparation.
+ Please refer to the [installation guide](docs/zh_cn/install.md) for installation and [data preparation](docs/zh_cn/prepare_data.md) for dataset preparation.

## Get Started

- Please refer to the [getting started guide](docs_zh-CN/getting_started.md) for the basic usage of MMSelfSup.
+ Please refer to the [getting started guide](docs/zh_cn/getting_started.md) for the basic usage of MMSelfSup.

We also provide more comprehensive tutorials, including:
- - [config](docs_zh-CN/tutorials/0_config.md)
- - [add new dataset](docs_zh-CN/tutorials/1_new_dataset.md)
- - [data pipeline](docs_zh-CN/tutorials/2_data_pipeline.md)
- - [add new module](docs_zh-CN/tutorials/3_new_module.md)
- - [customize schedules](docs_zh-CN/tutorials/4_schedule.md)
- - [customize runtime](docs_zh-CN/tutorials/5_runtime.md)
- - [benchmarks](docs_zh-CN/tutorials/6_benchmarks.md)
+ - [config](docs/zh_cn/tutorials/0_config.md)
+ - [add new dataset](docs/zh_cn/tutorials/1_new_dataset.md)
+ - [data pipeline](docs/zh_cn/tutorials/2_data_pipeline.md)
+ - [add new module](docs/zh_cn/tutorials/3_new_module.md)
+ - [customize schedules](docs/zh_cn/tutorials/4_schedule.md)
+ - [customize runtime](docs/zh_cn/tutorials/5_runtime.md)
+ - [benchmarks](docs/zh_cn/tutorials/6_benchmarks.md)

## Contributing

- We warmly welcome any contribution that helps improve MMSelfSup. Please refer to the [contributing guide](docs_zh-CN/community/CONTRIBUTING.md) to learn how to contribute.
+ We warmly welcome any contribution that helps improve MMSelfSup. Please refer to the [contributing guide](docs/zh_cn/community/CONTRIBUTING.md) to learn how to contribute.

## Acknowledgement

2 changes: 1 addition & 1 deletion configs/selfsup/byol/README.md
@@ -26,7 +26,7 @@

## Models and Benchmarks

- **Back to [model_zoo.md](../../../docs/model_zoo.md) to download models.**
+ **Back to [model_zoo.md](../../../docs/en/model_zoo.md) to download models.**

On this page, we provide as many benchmarks as possible to evaluate our pre-trained models. Unless otherwise mentioned, all models were trained on the ImageNet1k dataset.

2 changes: 1 addition & 1 deletion configs/selfsup/deepcluster/README.md
@@ -26,7 +26,7 @@ Clustering is a class of unsupervised learning methods that has been extensively

## Models and Benchmarks

- **Back to [model_zoo.md](../../../docs/model_zoo.md) to download models.**
+ **Back to [model_zoo.md](../../../docs/en/model_zoo.md) to download models.**

On this page, we provide as many benchmarks as possible to evaluate our pre-trained models. Unless otherwise mentioned, all models were trained on the ImageNet1k dataset.

8 changes: 4 additions & 4 deletions configs/selfsup/densecl/README.md
@@ -26,7 +26,7 @@ To date, most existing self-supervised learning methods are designed and optimiz

## Models and Benchmarks

- **Back to [model_zoo.md](../../../docs/model_zoo.md) to download models.**
+ **Back to [model_zoo.md](../../../docs/en/model_zoo.md) to download models.**

On this page, we provide as many benchmarks as possible to evaluate our pre-trained models. Unless otherwise mentioned, all models were trained on the ImageNet1k dataset.

@@ -40,9 +40,9 @@ The **Best Layer** indicates from which layer the best results are obtained

Here, k=1 to 96 denotes the hyper-parameter of the Low-shot SVM.

- | Self-Supervised Config | Best Layer | SVM | k=1 | k=2 | k=4 | k=8 | k=16 | k=32 | k=64 | k=96 |
- | ---------------------------------------------------------------------- | ---------- | --- | --- | --- | --- | --- | ---- | ---- | ---- | ---- |
- | [resnet50_8xb32-coslr-200e](densecl_resnet50_8xb32-coslr-200e_in1k.py) | feature5 |82.5|42.68|50.64|61.74|68.17|72.99|76.07|79.19|80.55|
+ | Self-Supervised Config | Best Layer | SVM | k=1 | k=2 | k=4 | k=8 | k=16 | k=32 | k=64 | k=96 |
+ | ---------------------------------------------------------------------- | ---------- | ---- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
+ | [resnet50_8xb32-coslr-200e](densecl_resnet50_8xb32-coslr-200e_in1k.py) | feature5 | 82.5 | 42.68 | 50.64 | 61.74 | 68.17 | 72.99 | 76.07 | 79.19 | 80.55 |

#### ImageNet Linear Evaluation

2 changes: 1 addition & 1 deletion configs/selfsup/moco/README.md
@@ -50,7 +50,7 @@ Contrastive unsupervised learning has recently shown encouraging progress, e.g.,

## Models and Benchmarks

- **Back to [model_zoo.md](../../../docs/model_zoo.md) to download models.**
+ **Back to [model_zoo.md](../../../docs/en/model_zoo.md) to download models.**

On this page, we provide as many benchmarks as possible to evaluate our pre-trained models. Unless otherwise mentioned, all models were trained on the ImageNet1k dataset.

22 changes: 11 additions & 11 deletions configs/selfsup/npid/README.md
@@ -30,7 +30,7 @@ Our method is also remarkable for consistently improving test performance with m

## Models and Benchmarks

- **Back to [model_zoo.md](../../../docs/model_zoo.md) to download models.**
+ **Back to [model_zoo.md](../../../docs/en/model_zoo.md) to download models.**

On this page, we provide as many benchmarks as possible to evaluate our pre-trained models. Unless otherwise mentioned, all models were trained on the ImageNet1k dataset.

@@ -44,9 +44,9 @@ The **Best Layer** indicates from which layer the best results are obtained

Here, k=1 to 96 denotes the hyper-parameter of the Low-shot SVM.

- | Self-Supervised Config | Best Layer | SVM | k=1 | k=2 | k=4 | k=8 | k=16 | k=32 | k=64 | k=96 |
- | ------------------------------------------------------------------------------------------- | ---------- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
- | [resnet50_8xb32-steplr-200e](npid_resnet50_8xb32-steplr-200e_in1k.py) | feature5 | 76.75 | 26.96 | 35.37 | 44.48 | 53.89 | 60.39 | 66.41 | 71.48 | 73.39 |
+ | Self-Supervised Config | Best Layer | SVM | k=1 | k=2 | k=4 | k=8 | k=16 | k=32 | k=64 | k=96 |
+ | --------------------------------------------------------------------- | ---------- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
+ | [resnet50_8xb32-steplr-200e](npid_resnet50_8xb32-steplr-200e_in1k.py) | feature5 | 76.75 | 26.96 | 35.37 | 44.48 | 53.89 | 60.39 | 66.41 | 71.48 | 73.39 |

#### ImageNet Linear Evaluation

@@ -56,7 +56,7 @@ The **AvgPool** result is obtained from Linear Evaluation with GlobalAveragePool

| Self-Supervised Config | Feature1 | Feature2 | Feature3 | Feature4 | Feature5 | AvgPool |
| --------------------------------------------------------------------- | -------- | -------- | -------- | -------- | -------- | ------- |
- | [resnet50_8xb32-steplr-200e](npid_resnet50_8xb32-steplr-200e_in1k.py) | 14.68 | 31.98 | 42.85 | 56.95 | 58.41 | 58.16 |
+ | [resnet50_8xb32-steplr-200e](npid_resnet50_8xb32-steplr-200e_in1k.py) | 14.68 | 31.98 | 42.85 | 56.95 | 58.41 | 58.16 |

### Detection

@@ -66,9 +66,9 @@ The detection benchmarks include 2 downstream task datasets, **Pascal VOC 2007

Please refer to [faster_rcnn_r50_c4_mstrain_24k_voc0712.py](../../benchmarks/mmdetection/voc0712/faster_rcnn_r50_c4_mstrain_24k_voc0712.py) for details of config.

- | Self-Supervised Config | AP50 |
- | ------------------------------------------------------------------------------------------- | ---- |
- | [resnet50_8xb32-steplr-200e](npid_resnet50_8xb32-steplr-200e_in1k.py) |79.52 |
+ | Self-Supervised Config | AP50 |
+ | --------------------------------------------------------------------- | ----- |
+ | [resnet50_8xb32-steplr-200e](npid_resnet50_8xb32-steplr-200e_in1k.py) | 79.52 |

#### COCO2017

@@ -86,6 +86,6 @@ The segmentation benchmarks include 2 downstream task datasets, **Cityscapes**

Please refer to [fcn_r50-d8_512x512_20k_voc12aug.py](../../benchmarks/mmsegmentation/voc12aug/fcn_r50-d8_512x512_20k_voc12aug.py) for details of config.

- | Self-Supervised Config | mIOU |
- | ------------------------------------------------------------------------------------------- | ----- |
- | [resnet50_8xb32-steplr-200e](npid_resnet50_8xb32-steplr-200e_in1k.py) | 65.45 |
+ | Self-Supervised Config | mIOU |
+ | --------------------------------------------------------------------- | ----- |
+ | [resnet50_8xb32-steplr-200e](npid_resnet50_8xb32-steplr-200e_in1k.py) | 65.45 |
2 changes: 1 addition & 1 deletion configs/selfsup/odc/README.md
@@ -26,7 +26,7 @@ Joint clustering and feature learning methods have shown remarkable performance

## Models and Benchmarks

- **Back to [model_zoo.md](../../../docs/model_zoo.md) to download models.**
+ **Back to [model_zoo.md](../../../docs/en/model_zoo.md) to download models.**

On this page, we provide as many benchmarks as possible to evaluate our pre-trained models. Unless otherwise mentioned, all models were trained on the ImageNet1k dataset.

8 changes: 4 additions & 4 deletions configs/selfsup/relative_loc/README.md
@@ -26,7 +26,7 @@ This work explores the use of spatial context as a source of free and plentiful

## Models and Benchmarks

- **Back to [model_zoo.md](../../../docs/model_zoo.md) to download models.**
+ **Back to [model_zoo.md](../../../docs/en/model_zoo.md) to download models.**

On this page, we provide as many benchmarks as possible to evaluate our pre-trained models. Unless otherwise mentioned, all models were trained on the ImageNet1k dataset.

@@ -62,8 +62,8 @@ The detection benchmarks include 2 downstream task datasets, **Pascal VOC 2007

Please refer to [faster_rcnn_r50_c4_mstrain_24k_voc0712.py](../../benchmarks/mmdetection/voc0712/faster_rcnn_r50_c4_mstrain_24k_voc0712.py) for details of config.

- | Self-Supervised Config | AP50 |
- | --------------------------------------------------------------------------- | ---- |
+ | Self-Supervised Config | AP50 |
+ | --------------------------------------------------------------------------- | ----- |
| [resnet50_8xb64-steplr-70e](relative-loc_resnet50_8xb64-steplr-70e_in1k.py) | 79.70 |

#### COCO2017
@@ -72,7 +72,7 @@ Please refer to [mask_rcnn_r50_fpn_mstrain_1x_coco.py](../../benchmarks/mmdetect

| Self-Supervised Config | mAP(Box) | AP50(Box) | AP75(Box) | mAP(Mask) | AP50(Mask) | AP75(Mask) |
| --------------------------------------------------------------------------- | -------- | --------- | --------- | --------- | ---------- | ---------- |
- | [resnet50_8xb64-steplr-70e](relative-loc_resnet50_8xb64-steplr-70e_in1k.py) | 37.5 | 56.2 | 41.3 | 33.7 | 53.3 | 36.1 |
+ | [resnet50_8xb64-steplr-70e](relative-loc_resnet50_8xb64-steplr-70e_in1k.py) | 37.5 | 56.2 | 41.3 | 33.7 | 53.3 | 36.1 |

### Segmentation
