diff --git a/.gitignore b/.gitignore
index 0af1bda05..0c7c355e1 100644
--- a/.gitignore
+++ b/.gitignore
@@ -124,3 +124,9 @@ benchmarks/detection/output
 
 # Pytorch
 *.pth
+
+
+# readthedocs
+docs/zh_cn/_build
+src/
+docs/en/_build
diff --git a/README.md b/README.md
index 23eb91b79..8a3d88c10 100644
--- a/README.md
+++ b/README.md
@@ -46,14 +46,14 @@ This project is released under the [Apache 2.0 license](LICENSE).
 
 MMSelfSup **v0.5.0** was released with refactor in 16/12/2021.
 
-Please refer to [changelog.md](docs/changelog.md) for details and release history.
+Please refer to [changelog.md](docs/en/changelog.md) for details and release history.
 
-Differences between MMSelfSup and OpenSelfSup codebases can be found in [compatibility.md](docs/compatibility.md).
+Differences between MMSelfSup and OpenSelfSup codebases can be found in [compatibility.md](docs/en/compatibility.md).
 
 ## Model Zoo and Benchmark
 
 ### Model Zoo
-Please refer to [model_zoo.md](docs/model_zoo.md) for a comprehensive set of pre-trained models and benchmarks.
+Please refer to [model_zoo.md](docs/en/model_zoo.md) for a comprehensive set of pre-trained models and benchmarks.
 
 Supported algorithms:
 
@@ -74,36 +74,36 @@ More algorithms are in our plan.
 
 ### Benchmark
 
-  | Benchmarks                                   | Setting                                                                                                                                                              |
-  | -------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-  | ImageNet Linear Classification (Multi-head)  | [Goyal2019](http://openaccess.thecvf.com/content_ICCV_2019/papers/Goyal_Scaling_and_Benchmarking_Self-Supervised_Visual_Representation_Learning_ICCV_2019_paper.pdf) |
-  | ImageNet Linear Classification (Last)        |                                                                                                                                                                      |
-  | ImageNet Semi-Sup Classification             |                                                                                                                                                                      |
-  | Places205 Linear Classification (Multi-head) | [Goyal2019](http://openaccess.thecvf.com/content_ICCV_2019/papers/Goyal_Scaling_and_Benchmarking_Self-Supervised_Visual_Representation_Learning_ICCV_2019_paper.pdf) |
-  | iNaturalist2018 Linear Classification (Multi-head) | [Goyal2019](http://openaccess.thecvf.com/content_ICCV_2019/papers/Goyal_Scaling_and_Benchmarking_Self-Supervised_Visual_Representation_Learning_ICCV_2019_paper.pdf)               |
-  | PASCAL VOC07 SVM                             | [Goyal2019](http://openaccess.thecvf.com/content_ICCV_2019/papers/Goyal_Scaling_and_Benchmarking_Self-Supervised_Visual_Representation_Learning_ICCV_2019_paper.pdf) |
-  | PASCAL VOC07 Low-shot SVM                    | [Goyal2019](http://openaccess.thecvf.com/content_ICCV_2019/papers/Goyal_Scaling_and_Benchmarking_Self-Supervised_Visual_Representation_Learning_ICCV_2019_paper.pdf) |
-  | PASCAL VOC07+12 Object Detection             | [MoCo](http://openaccess.thecvf.com/content_CVPR_2020/papers/He_Momentum_Contrast_for_Unsupervised_Visual_Representation_Learning_CVPR_2020_paper.pdf)               |
-  | COCO17 Object Detection                      | [MoCo](http://openaccess.thecvf.com/content_CVPR_2020/papers/He_Momentum_Contrast_for_Unsupervised_Visual_Representation_Learning_CVPR_2020_paper.pdf)               |
-  | Cityscapes Segmentation                      | [MMSeg](configs/benchmarks/mmsegmentation/cityscapes/fcn_r50-d8_769x769_40k_cityscapes.py)                                                                           |
-  | PASCAL VOC12 Aug Segmentation                | [MMSeg](configs/benchmarks/mmsegmentation/voc12aug/fcn_r50-d8_512x512_20k_voc12aug.py)                                                                               |
+  | Benchmarks                                         | Setting                                                                                                                                                              |
+  | -------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+  | ImageNet Linear Classification (Multi-head)        | [Goyal2019](http://openaccess.thecvf.com/content_ICCV_2019/papers/Goyal_Scaling_and_Benchmarking_Self-Supervised_Visual_Representation_Learning_ICCV_2019_paper.pdf) |
+  | ImageNet Linear Classification (Last)              |                                                                                                                                                                      |
+  | ImageNet Semi-Sup Classification                   |                                                                                                                                                                      |
+  | Places205 Linear Classification (Multi-head)       | [Goyal2019](http://openaccess.thecvf.com/content_ICCV_2019/papers/Goyal_Scaling_and_Benchmarking_Self-Supervised_Visual_Representation_Learning_ICCV_2019_paper.pdf) |
+  | iNaturalist2018 Linear Classification (Multi-head) | [Goyal2019](http://openaccess.thecvf.com/content_ICCV_2019/papers/Goyal_Scaling_and_Benchmarking_Self-Supervised_Visual_Representation_Learning_ICCV_2019_paper.pdf) |
+  | PASCAL VOC07 SVM                                   | [Goyal2019](http://openaccess.thecvf.com/content_ICCV_2019/papers/Goyal_Scaling_and_Benchmarking_Self-Supervised_Visual_Representation_Learning_ICCV_2019_paper.pdf) |
+  | PASCAL VOC07 Low-shot SVM                          | [Goyal2019](http://openaccess.thecvf.com/content_ICCV_2019/papers/Goyal_Scaling_and_Benchmarking_Self-Supervised_Visual_Representation_Learning_ICCV_2019_paper.pdf) |
+  | PASCAL VOC07+12 Object Detection                   | [MoCo](http://openaccess.thecvf.com/content_CVPR_2020/papers/He_Momentum_Contrast_for_Unsupervised_Visual_Representation_Learning_CVPR_2020_paper.pdf)               |
+  | COCO17 Object Detection                            | [MoCo](http://openaccess.thecvf.com/content_CVPR_2020/papers/He_Momentum_Contrast_for_Unsupervised_Visual_Representation_Learning_CVPR_2020_paper.pdf)               |
+  | Cityscapes Segmentation                            | [MMSeg](configs/benchmarks/mmsegmentation/cityscapes/fcn_r50-d8_769x769_40k_cityscapes.py)                                                                           |
+  | PASCAL VOC12 Aug Segmentation                      | [MMSeg](configs/benchmarks/mmsegmentation/voc12aug/fcn_r50-d8_512x512_20k_voc12aug.py)                                                                               |
 
 ## Installation
 
-Please refer to [install.md](docs/install.md) for installation and [prepare_data.md](docs/prepare_data.md) for dataset preparation.
+Please refer to [install.md](docs/en/install.md) for installation and [prepare_data.md](docs/en/prepare_data.md) for dataset preparation.
 
 ## Get Started
 
-Please see [getting_started.md](docs/getting_started.md) for the basic usage of MMSelfSup.
+Please see [getting_started.md](docs/en/getting_started.md) for the basic usage of MMSelfSup.
 
 We also provides tutorials for more details:
-- [config](docs/tutorials/0_config.md)
-- [add new dataset](docs/tutorials/1_new_dataset.md)
-- [data pipeline](docs/tutorials/2_data_pipeline.md)
-- [add new module](docs/tutorials/3_new_module.md)
-- [customize schedules](docs/tutorials/4_schedule.md)
-- [customize runtime](docs/tutorials/5_runtime.md)
-- [benchmarks](docs/tutorials/6_benchmarks.md)
+- [config](docs/en/tutorials/0_config.md)
+- [add new dataset](docs/en/tutorials/1_new_dataset.md)
+- [data pipeline](docs/en/tutorials/2_data_pipeline.md)
+- [add new module](docs/en/tutorials/3_new_module.md)
+- [customize schedules](docs/en/tutorials/4_schedule.md)
+- [customize runtime](docs/en/tutorials/5_runtime.md)
+- [benchmarks](docs/en/tutorials/6_benchmarks.md)
 
 ## Citation
 
@@ -120,7 +120,7 @@ If you use this toolbox or benchmark in your research, please cite this project.
 
 ## Contributing
 
-We appreciate all contributions improving MMSelfSup. Please refer to [CONTRIBUTING.md](docs/community/CONTRIBUTING.md) for more details about the contributing guideline.
+We appreciate all contributions improving MMSelfSup. Please refer to [CONTRIBUTING.md](docs/en/community/CONTRIBUTING.md) for more details about the contributing guideline.
 
 ## Acknowledgement
 
diff --git a/README_zh-CN.md b/README_zh-CN.md
index 1c1b7f52a..0cff19eaa 100644
--- a/README_zh-CN.md
+++ b/README_zh-CN.md
@@ -44,7 +44,7 @@ MMSelfSup 是一个基于 PyTorch 实现的开源自监督表征学习工具箱
 
 ### 模型库
 
-请参考 [模型库](docs/model_zoo.md) 查看我们更加全面的模型基准结果。
+请参考 [模型库](docs/zh_cn/model_zoo.md) 查看我们更加全面的模型基准结果。
 
 目前已支持的算法:
 
@@ -81,24 +81,24 @@ MMSelfSup 是一个基于 PyTorch 实现的开源自监督表征学习工具箱
 
 ## 安装
 
-请参考 [安装文档](docs_zh-CN/install.md) 进行安装和参考 [准备数据](docs_zh-CN/prepare_data.md) 准备数据集。
+请参考 [安装文档](docs/zh_cn/install.md) 进行安装和参考 [准备数据](docs/zh_cn/prepare_data.md) 准备数据集。
 
 ## 快速入门
 
-请参考 [入门指南](docs_zh-CN/getting_started.md) 获取 MMSelfSup 的基本使用方法.
+请参考 [入门指南](docs/zh_cn/getting_started.md) 获取 MMSelfSup 的基本使用方法。
 
 我们也提供了更加全面的教程,包括:
-- [配置文件](docs_zh-CN/tutorials/0_config.md)
-- [添加数据集](docs_zh-CN/tutorials/1_new_dataset.md)
-- [数据处理流](docs_zh-CN/tutorials/2_data_pipeline.md)
-- [添加新模块](docs_zh-CN/tutorials/3_new_module.md)
-- [自定义流程](docs_zh-CN/tutorials/4_schedule.md)
-- [自定义运行](docs_zh-CN/tutorials/5_runtime.md)
-- [基准测试](docs_zh-CN/tutorials/6_benchmarks.md)
+- [配置文件](docs/zh_cn/tutorials/0_config.md)
+- [添加数据集](docs/zh_cn/tutorials/1_new_dataset.md)
+- [数据处理流](docs/zh_cn/tutorials/2_data_pipeline.md)
+- [添加新模块](docs/zh_cn/tutorials/3_new_module.md)
+- [自定义流程](docs/zh_cn/tutorials/4_schedule.md)
+- [自定义运行](docs/zh_cn/tutorials/5_runtime.md)
+- [基准测试](docs/zh_cn/tutorials/6_benchmarks.md)
 
 ## 参与贡献
 
-我们非常欢迎任何有助于提升 MMSelfSup 的贡献,请参考 [贡献指南](docs_zh-CN/community/CONTRIBUTING.md) 来了解如何参与贡献。
+我们非常欢迎任何有助于提升 MMSelfSup 的贡献,请参考 [贡献指南](docs/zh_cn/community/CONTRIBUTING.md) 来了解如何参与贡献。
 
 ## 致谢
 
diff --git a/configs/selfsup/byol/README.md b/configs/selfsup/byol/README.md
index 88ab0ea7e..c1a19b6f9 100644
--- a/configs/selfsup/byol/README.md
+++ b/configs/selfsup/byol/README.md
@@ -26,7 +26,7 @@
 
 ## Models and Benchmarks
 
-**Back to [model_zoo.md](../../../docs/model_zoo.md) to download models.**
+**Back to [model_zoo.md](../../../docs/en/model_zoo.md) to download models.**
 
 In this page, we provide benchmarks as much as possible to evaluate our pre-trained models. If not mentioned, all models were trained on ImageNet1k dataset.
 
diff --git a/configs/selfsup/deepcluster/README.md b/configs/selfsup/deepcluster/README.md
index f6704902d..2e7657ba4 100644
--- a/configs/selfsup/deepcluster/README.md
+++ b/configs/selfsup/deepcluster/README.md
@@ -26,7 +26,7 @@ Clustering is a class of unsupervised learning methods that has been extensively
 
 ## Models and Benchmarks
 
-**Back to [model_zoo.md](../../../docs/model_zoo.md) to download models.**
+**Back to [model_zoo.md](../../../docs/en/model_zoo.md) to download models.**
 
 In this page, we provide benchmarks as much as possible to evaluate our pre-trained models. If not mentioned, all models were trained on ImageNet1k dataset.
 
diff --git a/configs/selfsup/densecl/README.md b/configs/selfsup/densecl/README.md
index d099b01f5..b97feac37 100644
--- a/configs/selfsup/densecl/README.md
+++ b/configs/selfsup/densecl/README.md
@@ -26,7 +26,7 @@ To date, most existing self-supervised learning methods are designed and optimiz
 
 ## Models and Benchmarks
 
-**Back to [model_zoo.md](../../../docs/model_zoo.md) to download models.**
+**Back to [model_zoo.md](../../../docs/en/model_zoo.md) to download models.**
 
 In this page, we provide benchmarks as much as possible to evaluate our pre-trained models. If not mentioned, all models were trained on ImageNet1k dataset.
 
@@ -40,9 +40,9 @@ The **Best Layer** indicates that the best results are obtained from which layer
 
 Besides, k=1 to 96 indicates the hyper-parameter of Low-shot SVM.
 
-| Self-Supervised Config                                                 | Best Layer | SVM | k=1 | k=2 | k=4 | k=8 | k=16 | k=32 | k=64 | k=96 |
-| ---------------------------------------------------------------------- | ---------- | --- | --- | --- | --- | --- | ---- | ---- | ---- | ---- |
-| [resnet50_8xb32-coslr-200e](densecl_resnet50_8xb32-coslr-200e_in1k.py) | feature5   |82.5|42.68|50.64|61.74|68.17|72.99|76.07|79.19|80.55|
+| Self-Supervised Config                                                 | Best Layer | SVM  | k=1   | k=2   | k=4   | k=8   | k=16  | k=32  | k=64  | k=96  |
+| ---------------------------------------------------------------------- | ---------- | ---- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
+| [resnet50_8xb32-coslr-200e](densecl_resnet50_8xb32-coslr-200e_in1k.py) | feature5   | 82.5 | 42.68 | 50.64 | 61.74 | 68.17 | 72.99 | 76.07 | 79.19 | 80.55 |
 
 #### ImageNet Linear Evaluation
 
diff --git a/configs/selfsup/moco/README.md b/configs/selfsup/moco/README.md
index 63cf41a02..f25514f61 100644
--- a/configs/selfsup/moco/README.md
+++ b/configs/selfsup/moco/README.md
@@ -50,7 +50,7 @@ Contrastive unsupervised learning has recently shown encouraging progress, e.g.,
 
 ## Models and Benchmarks
 
-**Back to [model_zoo.md](../../../docs/model_zoo.md) to download models.**
+**Back to [model_zoo.md](../../../docs/en/model_zoo.md) to download models.**
 
 In this page, we provide benchmarks as much as possible to evaluate our pre-trained models. If not mentioned, all models were trained on ImageNet1k dataset.
 
diff --git a/configs/selfsup/npid/README.md b/configs/selfsup/npid/README.md
index 0caebac9a..19d6272ed 100644
--- a/configs/selfsup/npid/README.md
+++ b/configs/selfsup/npid/README.md
@@ -30,7 +30,7 @@ Our method is also remarkable for consistently improving test performance with m
 
 ## Models and Benchmarks
 
-**Back to [model_zoo.md](../../../docs/model_zoo.md) to download models.**
+**Back to [model_zoo.md](../../../docs/en/model_zoo.md) to download models.**
 
 In this page, we provide benchmarks as much as possible to evaluate our pre-trained models. If not mentioned, all models were trained on ImageNet1k dataset.
 
@@ -44,9 +44,9 @@ The **Best Layer** indicates that the best results are obtained from which layer
 
 Besides, k=1 to 96 indicates the hyper-parameter of Low-shot SVM.
 
-| Self-Supervised Config                                                                      | Best Layer | SVM   | k=1   | k=2   | k=4   | k=8   | k=16  | k=32  | k=64  | k=96  |
-| ------------------------------------------------------------------------------------------- | ---------- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
-| [resnet50_8xb32-steplr-200e](npid_resnet50_8xb32-steplr-200e_in1k.py)                       | feature5   | 76.75 | 26.96 | 35.37 | 44.48 | 53.89 | 60.39 | 66.41 | 71.48 | 73.39 |
+| Self-Supervised Config                                                | Best Layer | SVM   | k=1   | k=2   | k=4   | k=8   | k=16  | k=32  | k=64  | k=96  |
+| --------------------------------------------------------------------- | ---------- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
+| [resnet50_8xb32-steplr-200e](npid_resnet50_8xb32-steplr-200e_in1k.py) | feature5   | 76.75 | 26.96 | 35.37 | 44.48 | 53.89 | 60.39 | 66.41 | 71.48 | 73.39 |
 
 #### ImageNet Linear Evaluation
 
@@ -56,7 +56,7 @@ The **AvgPool** result is obtained from Linear Evaluation with GlobalAveragePool
 
 | Self-Supervised Config                                                | Feature1 | Feature2 | Feature3 | Feature4 | Feature5 | AvgPool |
 | --------------------------------------------------------------------- | -------- | -------- | -------- | -------- | -------- | ------- |
-| [resnet50_8xb32-steplr-200e](npid_resnet50_8xb32-steplr-200e_in1k.py) | 14.68    | 31.98    | 42.85    |  56.95   | 58.41    | 58.16   |
+| [resnet50_8xb32-steplr-200e](npid_resnet50_8xb32-steplr-200e_in1k.py) | 14.68    | 31.98    | 42.85    | 56.95    | 58.41    | 58.16   |
 
 ### Detection
 
@@ -66,9 +66,9 @@ The detection benchmarks includes 2 downstream task datasets, **Pascal VOC 2007
 
 Please refer to [faster_rcnn_r50_c4_mstrain_24k_voc0712.py](../../benchmarks/mmdetection/voc0712/faster_rcnn_r50_c4_mstrain_24k_voc0712.py) for details of config.
 
-| Self-Supervised Config                                                                      | AP50 |
-| ------------------------------------------------------------------------------------------- | ---- |
-| [resnet50_8xb32-steplr-200e](npid_resnet50_8xb32-steplr-200e_in1k.py)                       |79.52 |
+| Self-Supervised Config                                                | AP50  |
+| --------------------------------------------------------------------- | ----- |
+| [resnet50_8xb32-steplr-200e](npid_resnet50_8xb32-steplr-200e_in1k.py) | 79.52 |
 
 #### COCO2017
 
@@ -86,6 +86,6 @@ The segmentation benchmarks includes 2 downstream task datasets, **Cityscapes**
 
 Please refer to [fcn_r50-d8_512x512_20k_voc12aug.py](../../benchmarks/mmsegmentation/voc12aug/fcn_r50-d8_512x512_20k_voc12aug.py) for details of config.
 
-| Self-Supervised Config                                                                      | mIOU  |
-| ------------------------------------------------------------------------------------------- | ----- |
-| [resnet50_8xb32-steplr-200e](npid_resnet50_8xb32-steplr-200e_in1k.py)                       | 65.45 |
+| Self-Supervised Config                                                | mIOU  |
+| --------------------------------------------------------------------- | ----- |
+| [resnet50_8xb32-steplr-200e](npid_resnet50_8xb32-steplr-200e_in1k.py) | 65.45 |
diff --git a/configs/selfsup/odc/README.md b/configs/selfsup/odc/README.md
index 07b74f9dd..c4ddf2cbc 100644
--- a/configs/selfsup/odc/README.md
+++ b/configs/selfsup/odc/README.md
@@ -26,7 +26,7 @@ Joint clustering and feature learning methods have shown remarkable performance
 
 ## Models and Benchmarks
 
-**Back to [model_zoo.md](../../../docs/model_zoo.md) to download models.**
+**Back to [model_zoo.md](../../../docs/en/model_zoo.md) to download models.**
 
 In this page, we provide benchmarks as much as possible to evaluate our pre-trained models. If not mentioned, all models were trained on ImageNet1k dataset.
 
diff --git a/configs/selfsup/relative_loc/README.md b/configs/selfsup/relative_loc/README.md
index 39a3d16f6..9d8b58c6e 100644
--- a/configs/selfsup/relative_loc/README.md
+++ b/configs/selfsup/relative_loc/README.md
@@ -26,7 +26,7 @@ This work explores the use of spatial context as a source of free and plentiful
 
 ## Models and Benchmarks
 
-**Back to [model_zoo.md](../../../docs/model_zoo.md) to download models.**
+**Back to [model_zoo.md](../../../docs/en/model_zoo.md) to download models.**
 
 In this page, we provide benchmarks as much as possible to evaluate our pre-trained models. If not mentioned, all models were trained on ImageNet1k dataset.
 
@@ -62,8 +62,8 @@ The detection benchmarks includes 2 downstream task datasets, **Pascal VOC 2007
 
 Please refer to [faster_rcnn_r50_c4_mstrain_24k_voc0712.py](../../benchmarks/mmdetection/voc0712/faster_rcnn_r50_c4_mstrain_24k_voc0712.py) for details of config.
 
-| Self-Supervised Config                                                      | AP50 |
-| --------------------------------------------------------------------------- | ---- |
+| Self-Supervised Config                                                      | AP50  |
+| --------------------------------------------------------------------------- | ----- |
 | [resnet50_8xb64-steplr-70e](relative-loc_resnet50_8xb64-steplr-70e_in1k.py) | 79.70 |
 
 #### COCO2017
@@ -72,7 +72,7 @@ Please refer to [mask_rcnn_r50_fpn_mstrain_1x_coco.py](../../benchmarks/mmdetect
 
 | Self-Supervised Config                                                      | mAP(Box) | AP50(Box) | AP75(Box) | mAP(Mask) | AP50(Mask) | AP75(Mask) |
 | --------------------------------------------------------------------------- | -------- | --------- | --------- | --------- | ---------- | ---------- |
-| [resnet50_8xb64-steplr-70e](relative-loc_resnet50_8xb64-steplr-70e_in1k.py) | 37.5     | 56.2      | 41.3      | 33.7      | 53.3       |  36.1      |
+| [resnet50_8xb64-steplr-70e](relative-loc_resnet50_8xb64-steplr-70e_in1k.py) | 37.5     | 56.2      | 41.3      | 33.7      | 53.3       | 36.1       |
 
 ### Segmentation
 
diff --git a/configs/selfsup/rotation_pred/README.md b/configs/selfsup/rotation_pred/README.md
index 408aaf4b6..ba3b2fd17 100644
--- a/configs/selfsup/rotation_pred/README.md
+++ b/configs/selfsup/rotation_pred/README.md
@@ -26,7 +26,7 @@ Over the last years, deep convolutional neural networks (ConvNets) have transfor
 
 ## Models and Benchmarks
 
-**Back to [model_zoo.md](../../../docs/model_zoo.md) to download models.**
+**Back to [model_zoo.md](../../../docs/en/model_zoo.md) to download models.**
 
 In this page, we provide benchmarks as much as possible to evaluate our pre-trained models. If not mentioned, all models were trained on ImageNet1k dataset.
 
@@ -72,7 +72,7 @@ Please refer to [mask_rcnn_r50_fpn_mstrain_1x_coco.py](../../benchmarks/mmdetect
 
 | Self-Supervised Config                                                       | mAP(Box) | AP50(Box) | AP75(Box) | mAP(Mask) | AP50(Mask) | AP75(Mask) |
 | ---------------------------------------------------------------------------- | -------- | --------- | --------- | --------- | ---------- | ---------- |
-| [resnet50_8xb16-steplr-70e](rotation-pred_resnet50_8xb16-steplr-70e_in1k.py) |  37.9    | 56.5      | 41.5      | 34.2      |   53.9     |  36.7      |
+| [resnet50_8xb16-steplr-70e](rotation-pred_resnet50_8xb16-steplr-70e_in1k.py) | 37.9     | 56.5      | 41.5      | 34.2      | 53.9       | 36.7       |
 
 ### Segmentation
 
diff --git a/configs/selfsup/simclr/README.md b/configs/selfsup/simclr/README.md
index 25376477e..cfd09fdc5 100644
--- a/configs/selfsup/simclr/README.md
+++ b/configs/selfsup/simclr/README.md
@@ -26,7 +26,7 @@ This paper presents SimCLR: a simple framework for contrastive learning of visua
 
 ## Models and Benchmarks
 
-[Back to model_zoo.md](../../../docs/model_zoo.md)
+[Back to model_zoo.md](../../../docs/en/model_zoo.md)
 
 In this page, we provide benchmarks as much as possible to evaluate our pre-trained models. If not mentioned, all models were trained on ImageNet1k dataset.
 
@@ -72,7 +72,7 @@ Please refer to [mask_rcnn_r50_fpn_mstrain_1x_coco.py](../../benchmarks/mmdetect
 
 | Self-Supervised Config                                                | mAP(Box) | AP50(Box) | AP75(Box) | mAP(Mask) | AP50(Mask) | AP75(Mask) |
 | --------------------------------------------------------------------- | -------- | --------- | --------- | --------- | ---------- | ---------- |
-| [resnet50_8xb32-coslr-200e](simclr_resnet50_8xb32-coslr-200e_in1k.py) |  38.7    | 58.1      | 42.4      | 34.9      | 55.3       | 37.5       |
+| [resnet50_8xb32-coslr-200e](simclr_resnet50_8xb32-coslr-200e_in1k.py) | 38.7     | 58.1      | 42.4      | 34.9      | 55.3       | 37.5       |
 
 ### Segmentation
 
diff --git a/configs/selfsup/simsiam/README.md b/configs/selfsup/simsiam/README.md
index fde53fa7c..ea01158bb 100644
--- a/configs/selfsup/simsiam/README.md
+++ b/configs/selfsup/simsiam/README.md
@@ -26,7 +26,7 @@ Siamese networks have become a common structure in various recent models for uns
 
 ## Models and Benchmarks
 
-**Back to [model_zoo.md](../../../docs/model_zoo.md) to download models.**
+**Back to [model_zoo.md](../../../docs/en/model_zoo.md) to download models.**
 
 In this page, we provide benchmarks as much as possible to evaluate our pre-trained models. If not mentioned, all models were trained on ImageNet1k dataset.
 
diff --git a/configs/selfsup/swav/README.md b/configs/selfsup/swav/README.md
index f2794b868..01ef62a3d 100644
--- a/configs/selfsup/swav/README.md
+++ b/configs/selfsup/swav/README.md
@@ -26,7 +26,7 @@ Unsupervised image representations have significantly reduced the gap with super
 
 ## Models and Benchmarks
 
-**Back to [model_zoo.md](../../../docs/model_zoo.md) to download models.**
+**Back to [model_zoo.md](../../../docs/en/model_zoo.md) to download models.**
 
 In this page, we provide benchmarks as much as possible to evaluate our pre-trained models. If not mentioned, all models were trained on ImageNet1k dataset.
 
diff --git a/docs/conf.py b/docs/conf.py
deleted file mode 100644
index 63bd9707f..000000000
--- a/docs/conf.py
+++ /dev/null
@@ -1,203 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-# Configuration file for the Sphinx documentation builder.
-#
-# This file only contains a selection of the most common options. For a full
-# list see the documentation:
-# https://www.sphinx-doc.org/en/master/usage/configuration.html
-
-# -- Path setup --------------------------------------------------------------
-
-# If extensions (or modules to document with autodoc) are in another directory,
-# add these directories to sys.path here. If the directory is relative to the
-# documentation root, use os.path.abspath to make it absolute, like shown here.
-#
-import os
-import sys
-
-import pytorch_sphinx_theme
-
-sys.path.insert(0, os.path.abspath('..'))
-
-# -- Project information -----------------------------------------------------
-
-project = 'MMSelfSup'
-copyright = '2020-2021, OpenMMLab'
-author = 'MMSelfSup Authors'
-
-# The full version, including alpha/beta/rc tags
-version_file = '../mmselfsup/version.py'
-
-
-def get_version():
-    with open(version_file, 'r') as f:
-        exec(compile(f.read(), version_file, 'exec'))
-    return locals()['__version__']
-
-
-release = get_version()
-
-# -- General configuration ---------------------------------------------------
-
-# Add any Sphinx extension module names here, as strings. They can be
-# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
-# ones.
-extensions = [
-    'sphinx.ext.autodoc', 'sphinx.ext.napoleon', 'sphinx.ext.viewcode',
-    'sphinx_markdown_tables', 'sphinx_copybutton', 'myst_parser'
-]
-
-autodoc_mock_imports = ['json_tricks', 'mmselfsup.version']
-
-# Ignore >>> when copying code
-copybutton_prompt_text = r'>>> |\.\.\. '
-copybutton_prompt_is_regexp = True
-
-# Add any paths that contain templates here, relative to this directory.
-templates_path = ['_templates']
-
-# List of patterns, relative to source directory, that match files and
-# directories to ignore when looking for source files.
-# This pattern also affects html_static_path and html_extra_path.
-exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
-
-# -- Options for HTML output -------------------------------------------------
-source_suffix = {
-    '.rst': 'restructuredtext',
-    '.md': 'markdown',
-}
-
-# The theme to use for HTML and HTML Help pages.  See the documentation for
-# a list of builtin themes.
-#
-html_theme = 'pytorch_sphinx_theme'
-html_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]
-html_theme_options = {
-    'menu': [
-        {
-            'name':
-            'Tutorial',
-            'url':
-            'https://colab.research.google.com/github/'
-            'open-mmlab/mmpose/blob/master/demo/MMPose_Tutorial.ipynb'
-        },
-        {
-            'name': 'GitHub',
-            'url': 'https://github.com/open-mmlab/mmpose'
-        },
-        {
-            'name':
-            'Projects',
-            'children': [{
-                'name':
-                'MMCV',
-                'url':
-                'https://mmcv.readthedocs.io/en/latest/',
-                'description':
-                'Foundational library for computer vision'
-            }, {
-                'name':
-                'MMDetection',
-                'url':
-                'https://mmdetection.readthedocs.io/en/latest/',
-                'description':
-                'Object detection toolbox and benchmark'
-            }, {
-                'name':
-                'MMAction2',
-                'url':
-                'https://mmaction2.readthedocs.io/en/latest/',
-                'description':
-                'Action understanding toolbox and benchmark'
-            }, {
-                'name':
-                'MMClassification',
-                'url':
-                'https://mmclassification.readthedocs.io/en/latest/',
-                'description':
-                'Image classification toolbox and benchmark'
-            }, {
-                'name':
-                'MMSegmentation',
-                'url':
-                'https://mmsegmentation.readthedocs.io/en/latest/',
-                'description':
-                'Semantic segmentation toolbox and benchmark'
-            }, {
-                'name': 'MMDetection3D',
-                'url': 'https://mmdetection3d.readthedocs.io/en/latest/',
-                'description': 'General 3D object detection platform'
-            }, {
-                'name': 'MMEditing',
-                'url': 'https://mmediting.readthedocs.io/en/latest/',
-                'description': 'Image and video editing toolbox'
-            }, {
-                'name':
-                'MMOCR',
-                'url':
-                'https://mmocr.readthedocs.io/en/latest/',
-                'description':
-                'Text detection, recognition and understanding toolbox'
-            }, {
-                'name':
-                'MMTracking',
-                'url':
-                'https://mmtracking.readthedocs.io/en/latest/',
-                'description':
-                'Video perception toolbox and benchmark'
-            }, {
-                'name': 'MMGeneration',
-                'url': 'https://mmgeneration.readthedocs.io/en/latest/',
-                'description': 'Generative model toolbox'
-            }, {
-                'name': 'MMFlow',
-                'url': 'https://mmflow.readthedocs.io/en/latest/',
-                'description': 'Optical flow toolbox and benchmark'
-            }, {
-                'name':
-                'MMFewShot',
-                'url':
-                'https://mmfewshot.readthedocs.io/en/latest/',
-                'description':
-                'FewShot learning toolbox and benchmark'
-            }, {
-                'name':
-                'MMHuman3D',
-                'url':
-                'https://mmhuman3d.readthedocs.io/en/latest/',
-                'description':
-                '3D human parametric model toolbox and benchmark.'
-            }]
-        },
-        {
-            'name':
-            'OpenMMLab',
-            'children': [{
-                'name': 'Homepage',
-                'url': 'https://openmmlab.com/'
-            }, {
-                'name': 'GitHub',
-                'url': 'https://github.com/open-mmlab/'
-            }, {
-                'name': 'Twitter',
-                'url': 'https://twitter.com/OpenMMLab'
-            }, {
-                'name': 'Zhihu',
-                'url': 'https://zhihu.com/people/openmmlab'
-            }]
-        },
-    ]
-}
-
-# Add any paths that contain custom static files (such as style sheets) here,
-# relative to this directory. They are copied after the builtin static files,
-# so a file named "default.css" will overwrite the builtin "default.css".
-
-language = 'en'
-
-html_static_path = ['_static']
-html_css_files = ['css/readthedocs.css']
-
-# Enable ::: for my_st
-myst_enable_extensions = ['colon_fence']
-
-master_doc = 'index'
diff --git a/docs/Makefile b/docs/en/Makefile
similarity index 100%
rename from docs/Makefile
rename to docs/en/Makefile
diff --git a/docs/_static/css/readthedocs.css b/docs/en/_static/css/readthedocs.css
similarity index 100%
rename from docs/_static/css/readthedocs.css
rename to docs/en/_static/css/readthedocs.css
diff --git a/docs/_static/image/logo.png b/docs/en/_static/image/logo.png
similarity index 100%
rename from docs/_static/image/logo.png
rename to docs/en/_static/image/logo.png
diff --git a/docs/algorithms/byol.md b/docs/en/algorithms/byol.md
similarity index 100%
rename from docs/algorithms/byol.md
rename to docs/en/algorithms/byol.md
diff --git a/docs/algorithms/deep.md b/docs/en/algorithms/deep.md
similarity index 100%
rename from docs/algorithms/deep.md
rename to docs/en/algorithms/deep.md
diff --git a/docs/algorithms/dense.md b/docs/en/algorithms/dense.md
similarity index 100%
rename from docs/algorithms/dense.md
rename to docs/en/algorithms/dense.md
diff --git a/docs/algorithms/moco.md b/docs/en/algorithms/moco.md
similarity index 100%
rename from docs/algorithms/moco.md
rename to docs/en/algorithms/moco.md
diff --git a/docs/algorithms/npid.md b/docs/en/algorithms/npid.md
similarity index 100%
rename from docs/algorithms/npid.md
rename to docs/en/algorithms/npid.md
diff --git a/docs/algorithms/odc.md b/docs/en/algorithms/odc.md
similarity index 100%
rename from docs/algorithms/odc.md
rename to docs/en/algorithms/odc.md
diff --git a/docs/algorithms/rl.md b/docs/en/algorithms/rl.md
similarity index 100%
rename from docs/algorithms/rl.md
rename to docs/en/algorithms/rl.md
diff --git a/docs/algorithms/rp.md b/docs/en/algorithms/rp.md
similarity index 100%
rename from docs/algorithms/rp.md
rename to docs/en/algorithms/rp.md
diff --git a/docs/algorithms/simclr.md b/docs/en/algorithms/simclr.md
similarity index 100%
rename from docs/algorithms/simclr.md
rename to docs/en/algorithms/simclr.md
diff --git a/docs/algorithms/ss.md b/docs/en/algorithms/ss.md
similarity index 100%
rename from docs/algorithms/ss.md
rename to docs/en/algorithms/ss.md
diff --git a/docs/algorithms/swav.md b/docs/en/algorithms/swav.md
similarity index 100%
rename from docs/algorithms/swav.md
rename to docs/en/algorithms/swav.md
diff --git a/docs/api.rst b/docs/en/api.rst
similarity index 100%
rename from docs/api.rst
rename to docs/en/api.rst
diff --git a/docs/changelog.md b/docs/en/changelog.md
similarity index 100%
rename from docs/changelog.md
rename to docs/en/changelog.md
diff --git a/docs/community/CONTRIBUTING.md b/docs/en/community/CONTRIBUTING.md
similarity index 100%
rename from docs/community/CONTRIBUTING.md
rename to docs/en/community/CONTRIBUTING.md
diff --git a/docs/compatibility.md b/docs/en/compatibility.md
similarity index 100%
rename from docs/compatibility.md
rename to docs/en/compatibility.md
diff --git a/docs/en/conf.py b/docs/en/conf.py
new file mode 100644
index 000000000..f5035793a
--- /dev/null
+++ b/docs/en/conf.py
@@ -0,0 +1,96 @@
+# Copyright (c) OpenMMLab. All rights reserved.
+# Configuration file for the Sphinx documentation builder.
+#
+# This file only contains a selection of the most common options. For a full
+# list see the documentation:
+# https://www.sphinx-doc.org/en/master/usage/configuration.html
+
+# -- Path setup --------------------------------------------------------------
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+#
+import os
+import sys
+
+import pytorch_sphinx_theme
+
+sys.path.insert(0, os.path.abspath('..'))
+
+# -- Project information -----------------------------------------------------
+
+project = 'MMSelfSup'
+copyright = '2020-2021, OpenMMLab'
+author = 'MMSelfSup Authors'
+
+# The full version, including alpha/beta/rc tags
+version_file = '../../mmselfsup/version.py'
+
+
+def get_version():
+    with open(version_file, 'r') as f:
+        exec(compile(f.read(), version_file, 'exec'))
+    return locals()['__version__']
+
+
+release = get_version()
+
+# -- General configuration ---------------------------------------------------
+
+# Add any Sphinx extension module names here, as strings. They can be
+# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
+# ones.
+extensions = [
+    'sphinx.ext.autodoc', 'sphinx.ext.napoleon', 'sphinx.ext.viewcode',
+    'sphinx_markdown_tables', 'sphinx_copybutton', 'myst_parser'
+]
+
+autodoc_mock_imports = ['json_tricks', 'mmselfsup.version']
+
+# Ignore >>> when copying code
+copybutton_prompt_text = r'>>> |\.\.\. '
+copybutton_prompt_is_regexp = True
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['_templates']
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+# This pattern also affects html_static_path and html_extra_path.
+exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
+
+# -- Options for HTML output -------------------------------------------------
+source_suffix = {
+    '.rst': 'restructuredtext',
+    '.md': 'markdown',
+}
+
+# The theme to use for HTML and HTML Help pages.  See the documentation for
+# a list of builtin themes.
+#
+html_theme = 'pytorch_sphinx_theme'
+html_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]
+html_theme_options = {
+    'menu': [
+        {
+            'name': 'GitHub',
+            'url': 'https://github.com/open-mmlab/mmselfsup'
+        },
+    ],
+    'menu_lang': 'en'
+}
+
+# Add any paths that contain custom static files (such as style sheets) here,
+# relative to this directory. They are copied after the builtin static files,
+# so a file named "default.css" will overwrite the builtin "default.css".
+
+language = 'en'
+
+html_static_path = ['_static']
+html_css_files = ['css/readthedocs.css']
+
+# Enable ::: for myst
+myst_enable_extensions = ['colon_fence']
+
+master_doc = 'index'
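
With conf.py now living inside each language tree, a quick way to sanity-check the relocated Sphinx setups is to build both trees from the repository root. The snippet below is only a sketch under that assumption; the output directories mirror the `_build` entries added to `.gitignore` above, and the build invocation itself is not part of this patch. It also highlights why `version_file` gains an extra `../` compared with the old `docs/conf.py`.

```python
# A minimal local-build sketch for the relocated docs trees (assumptions: run
# from the repository root, with Sphinx and pytorch_sphinx_theme installed).
from sphinx.cmd.build import build_main

# Each language tree now carries its own conf.py one level deeper than the old
# docs/conf.py, which is why version_file becomes '../../mmselfsup/version.py'.
for src in ('docs/en', 'docs/zh_cn'):
    # Equivalent to: sphinx-build -b html <src> <src>/_build/html
    status = build_main(['-b', 'html', src, f'{src}/_build/html'])
    if status != 0:
        raise SystemExit(f'Sphinx build failed for {src}')
```
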
diff --git a/docs/getting_started.md b/docs/en/getting_started.md
similarity index 100%
rename from docs/getting_started.md
rename to docs/en/getting_started.md
diff --git a/docs/index.rst b/docs/en/index.rst
similarity index 100%
rename from docs/index.rst
rename to docs/en/index.rst
diff --git a/docs/install.md b/docs/en/install.md
similarity index 100%
rename from docs/install.md
rename to docs/en/install.md
diff --git a/docs/make.bat b/docs/en/make.bat
similarity index 100%
rename from docs/make.bat
rename to docs/en/make.bat
diff --git a/docs/model_zoo.md b/docs/en/model_zoo.md
similarity index 100%
rename from docs/model_zoo.md
rename to docs/en/model_zoo.md
diff --git a/docs/prepare_data.md b/docs/en/prepare_data.md
similarity index 100%
rename from docs/prepare_data.md
rename to docs/en/prepare_data.md
diff --git a/docs/switch_language.md b/docs/en/switch_language.md
similarity index 100%
rename from docs/switch_language.md
rename to docs/en/switch_language.md
diff --git a/docs/tutorials/0_config.md b/docs/en/tutorials/0_config.md
similarity index 100%
rename from docs/tutorials/0_config.md
rename to docs/en/tutorials/0_config.md
diff --git a/docs/tutorials/1_new_dataset.md b/docs/en/tutorials/1_new_dataset.md
similarity index 100%
rename from docs/tutorials/1_new_dataset.md
rename to docs/en/tutorials/1_new_dataset.md
diff --git a/docs/tutorials/2_data_pipeline.md b/docs/en/tutorials/2_data_pipeline.md
similarity index 100%
rename from docs/tutorials/2_data_pipeline.md
rename to docs/en/tutorials/2_data_pipeline.md
diff --git a/docs/tutorials/3_new_module.md b/docs/en/tutorials/3_new_module.md
similarity index 100%
rename from docs/tutorials/3_new_module.md
rename to docs/en/tutorials/3_new_module.md
diff --git a/docs/tutorials/4_schedule.md b/docs/en/tutorials/4_schedule.md
similarity index 100%
rename from docs/tutorials/4_schedule.md
rename to docs/en/tutorials/4_schedule.md
diff --git a/docs/tutorials/5_runtime.md b/docs/en/tutorials/5_runtime.md
similarity index 100%
rename from docs/tutorials/5_runtime.md
rename to docs/en/tutorials/5_runtime.md
diff --git a/docs/tutorials/6_benchmarks.md b/docs/en/tutorials/6_benchmarks.md
similarity index 100%
rename from docs/tutorials/6_benchmarks.md
rename to docs/en/tutorials/6_benchmarks.md
diff --git a/docs_zh-CN/Makefile b/docs/zh_cn/Makefile
similarity index 100%
rename from docs_zh-CN/Makefile
rename to docs/zh_cn/Makefile
diff --git a/docs_zh-CN/_static/css/readthedocs.css b/docs/zh_cn/_static/css/readthedocs.css
similarity index 100%
rename from docs_zh-CN/_static/css/readthedocs.css
rename to docs/zh_cn/_static/css/readthedocs.css
diff --git a/docs_zh-CN/_static/image/logo.png b/docs/zh_cn/_static/image/logo.png
similarity index 100%
rename from docs_zh-CN/_static/image/logo.png
rename to docs/zh_cn/_static/image/logo.png
diff --git a/docs_zh-CN/algorithms/byol.md b/docs/zh_cn/algorithms/byol.md
similarity index 100%
rename from docs_zh-CN/algorithms/byol.md
rename to docs/zh_cn/algorithms/byol.md
diff --git a/docs_zh-CN/algorithms/deep.md b/docs/zh_cn/algorithms/deep.md
similarity index 100%
rename from docs_zh-CN/algorithms/deep.md
rename to docs/zh_cn/algorithms/deep.md
diff --git a/docs_zh-CN/algorithms/dense.md b/docs/zh_cn/algorithms/dense.md
similarity index 100%
rename from docs_zh-CN/algorithms/dense.md
rename to docs/zh_cn/algorithms/dense.md
diff --git a/docs_zh-CN/algorithms/moco.md b/docs/zh_cn/algorithms/moco.md
similarity index 100%
rename from docs_zh-CN/algorithms/moco.md
rename to docs/zh_cn/algorithms/moco.md
diff --git a/docs_zh-CN/algorithms/npid.md b/docs/zh_cn/algorithms/npid.md
similarity index 100%
rename from docs_zh-CN/algorithms/npid.md
rename to docs/zh_cn/algorithms/npid.md
diff --git a/docs_zh-CN/algorithms/odc.md b/docs/zh_cn/algorithms/odc.md
similarity index 100%
rename from docs_zh-CN/algorithms/odc.md
rename to docs/zh_cn/algorithms/odc.md
diff --git a/docs_zh-CN/algorithms/rl.md b/docs/zh_cn/algorithms/rl.md
similarity index 100%
rename from docs_zh-CN/algorithms/rl.md
rename to docs/zh_cn/algorithms/rl.md
diff --git a/docs_zh-CN/algorithms/rp.md b/docs/zh_cn/algorithms/rp.md
similarity index 100%
rename from docs_zh-CN/algorithms/rp.md
rename to docs/zh_cn/algorithms/rp.md
diff --git a/docs_zh-CN/algorithms/simclr.md b/docs/zh_cn/algorithms/simclr.md
similarity index 100%
rename from docs_zh-CN/algorithms/simclr.md
rename to docs/zh_cn/algorithms/simclr.md
diff --git a/docs_zh-CN/algorithms/ss.md b/docs/zh_cn/algorithms/ss.md
similarity index 100%
rename from docs_zh-CN/algorithms/ss.md
rename to docs/zh_cn/algorithms/ss.md
diff --git a/docs_zh-CN/algorithms/swav.md b/docs/zh_cn/algorithms/swav.md
similarity index 100%
rename from docs_zh-CN/algorithms/swav.md
rename to docs/zh_cn/algorithms/swav.md
diff --git a/docs_zh-CN/api.rst b/docs/zh_cn/api.rst
similarity index 100%
rename from docs_zh-CN/api.rst
rename to docs/zh_cn/api.rst
diff --git a/docs_zh-CN/changelog.md b/docs/zh_cn/changelog.md
similarity index 100%
rename from docs_zh-CN/changelog.md
rename to docs/zh_cn/changelog.md
diff --git a/docs_zh-CN/community/CONTRIBUTING.md b/docs/zh_cn/community/CONTRIBUTING.md
similarity index 100%
rename from docs_zh-CN/community/CONTRIBUTING.md
rename to docs/zh_cn/community/CONTRIBUTING.md
diff --git a/docs/zh_cn/conf.py b/docs/zh_cn/conf.py
new file mode 100644
index 000000000..6dbdc3202
--- /dev/null
+++ b/docs/zh_cn/conf.py
@@ -0,0 +1,96 @@
+# Copyright (c) OpenMMLab. All rights reserved.
+# Configuration file for the Sphinx documentation builder.
+#
+# This file only contains a selection of the most common options. For a full
+# list see the documentation:
+# https://www.sphinx-doc.org/en/master/usage/configuration.html
+
+# -- Path setup --------------------------------------------------------------
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+#
+import os
+import sys
+
+import pytorch_sphinx_theme
+
+sys.path.insert(0, os.path.abspath('..'))
+
+# -- Project information -----------------------------------------------------
+
+project = 'MMSelfSup'
+copyright = '2020-2021, OpenMMLab'
+author = 'MMSelfSup Authors'
+
+# The full version, including alpha/beta/rc tags
+version_file = '../../mmselfsup/version.py'
+
+
+def get_version():
+    with open(version_file, 'r') as f:
+        exec(compile(f.read(), version_file, 'exec'))
+    return locals()['__version__']
+
+
+release = get_version()
+
+# -- General configuration ---------------------------------------------------
+
+# Add any Sphinx extension module names here, as strings. They can be
+# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
+# ones.
+extensions = [
+    'sphinx.ext.autodoc', 'sphinx.ext.napoleon', 'sphinx.ext.viewcode',
+    'sphinx_markdown_tables', 'sphinx_copybutton', 'myst_parser'
+]
+
+autodoc_mock_imports = ['json_tricks', 'mmselfsup.version']
+
+# Ignore >>> when copying code
+copybutton_prompt_text = r'>>> |\.\.\. '
+copybutton_prompt_is_regexp = True
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['_templates']
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+# This pattern also affects html_static_path and html_extra_path.
+exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
+
+# -- Options for HTML output -------------------------------------------------
+source_suffix = {
+    '.rst': 'restructuredtext',
+    '.md': 'markdown',
+}
+
+# The theme to use for HTML and HTML Help pages.  See the documentation for
+# a list of builtin themes.
+#
+html_theme = 'pytorch_sphinx_theme'
+html_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]
+html_theme_options = {
+    'menu': [
+        {
+            'name': 'GitHub',
+            'url': 'https://github.com/open-mmlab/mmselfsup'
+        },
+    ],
+    'menu_lang': 'cn',
+}
+
+# Add any paths that contain custom static files (such as style sheets) here,
+# relative to this directory. They are copied after the builtin static files,
+# so a file named "default.css" will overwrite the builtin "default.css".
+
+language = 'zh_CN'
+
+html_static_path = ['_static']
+html_css_files = ['css/readthedocs.css']
+
+# Enable ::: for myst
+myst_enable_extensions = ['colon_fence']
+
+master_doc = 'index'
diff --git a/docs_zh-CN/getting_started.md b/docs/zh_cn/getting_started.md
similarity index 100%
rename from docs_zh-CN/getting_started.md
rename to docs/zh_cn/getting_started.md
diff --git a/docs_zh-CN/index.rst b/docs/zh_cn/index.rst
similarity index 100%
rename from docs_zh-CN/index.rst
rename to docs/zh_cn/index.rst
diff --git a/docs_zh-CN/install.md b/docs/zh_cn/install.md
similarity index 100%
rename from docs_zh-CN/install.md
rename to docs/zh_cn/install.md
diff --git a/docs_zh-CN/make.bat b/docs/zh_cn/make.bat
similarity index 100%
rename from docs_zh-CN/make.bat
rename to docs/zh_cn/make.bat
diff --git a/docs_zh-CN/model_zoo.md b/docs/zh_cn/model_zoo.md
similarity index 100%
rename from docs_zh-CN/model_zoo.md
rename to docs/zh_cn/model_zoo.md
diff --git a/docs_zh-CN/prepare_data.md b/docs/zh_cn/prepare_data.md
similarity index 100%
rename from docs_zh-CN/prepare_data.md
rename to docs/zh_cn/prepare_data.md
diff --git a/docs_zh-CN/switch_language.md b/docs/zh_cn/switch_language.md
similarity index 100%
rename from docs_zh-CN/switch_language.md
rename to docs/zh_cn/switch_language.md
diff --git a/docs_zh-CN/tutorials/0_config.md b/docs/zh_cn/tutorials/0_config.md
similarity index 100%
rename from docs_zh-CN/tutorials/0_config.md
rename to docs/zh_cn/tutorials/0_config.md
diff --git a/docs_zh-CN/tutorials/1_new_dataset.md b/docs/zh_cn/tutorials/1_new_dataset.md
similarity index 100%
rename from docs_zh-CN/tutorials/1_new_dataset.md
rename to docs/zh_cn/tutorials/1_new_dataset.md
diff --git a/docs_zh-CN/tutorials/2_data_pipeline.md b/docs/zh_cn/tutorials/2_data_pipeline.md
similarity index 100%
rename from docs_zh-CN/tutorials/2_data_pipeline.md
rename to docs/zh_cn/tutorials/2_data_pipeline.md
diff --git a/docs_zh-CN/tutorials/3_new_module.md b/docs/zh_cn/tutorials/3_new_module.md
similarity index 100%
rename from docs_zh-CN/tutorials/3_new_module.md
rename to docs/zh_cn/tutorials/3_new_module.md
diff --git a/docs_zh-CN/tutorials/4_schedule.md b/docs/zh_cn/tutorials/4_schedule.md
similarity index 100%
rename from docs_zh-CN/tutorials/4_schedule.md
rename to docs/zh_cn/tutorials/4_schedule.md
diff --git a/docs_zh-CN/tutorials/5_runtime.md b/docs/zh_cn/tutorials/5_runtime.md
similarity index 100%
rename from docs_zh-CN/tutorials/5_runtime.md
rename to docs/zh_cn/tutorials/5_runtime.md
diff --git a/docs_zh-CN/tutorials/6_benchmarks.md b/docs/zh_cn/tutorials/6_benchmarks.md
similarity index 100%
rename from docs_zh-CN/tutorials/6_benchmarks.md
rename to docs/zh_cn/tutorials/6_benchmarks.md
diff --git a/docs_zh-CN/conf.py b/docs_zh-CN/conf.py
deleted file mode 100644
index 8514c5bec..000000000
--- a/docs_zh-CN/conf.py
+++ /dev/null
@@ -1,169 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-# Configuration file for the Sphinx documentation builder.
-#
-# This file only contains a selection of the most common options. For a full
-# list see the documentation:
-# https://www.sphinx-doc.org/en/master/usage/configuration.html
-
-# -- Path setup --------------------------------------------------------------
-
-# If extensions (or modules to document with autodoc) are in another directory,
-# add these directories to sys.path here. If the directory is relative to the
-# documentation root, use os.path.abspath to make it absolute, like shown here.
-#
-import os
-import sys
-
-import pytorch_sphinx_theme
-
-sys.path.insert(0, os.path.abspath('..'))
-
-# -- Project information -----------------------------------------------------
-
-project = 'MMSelfSup'
-copyright = '2020-2021, OpenMMLab'
-author = 'MMSelfSup Authors'
-
-# The full version, including alpha/beta/rc tags
-version_file = '../mmselfsup/version.py'
-
-
-def get_version():
-    with open(version_file, 'r') as f:
-        exec(compile(f.read(), version_file, 'exec'))
-    return locals()['__version__']
-
-
-release = get_version()
-
-# -- General configuration ---------------------------------------------------
-
-# Add any Sphinx extension module names here, as strings. They can be
-# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
-# ones.
-extensions = [
-    'sphinx.ext.autodoc', 'sphinx.ext.napoleon', 'sphinx.ext.viewcode',
-    'sphinx_markdown_tables', 'sphinx_copybutton', 'myst_parser'
-]
-
-autodoc_mock_imports = ['json_tricks', 'mmselfsup.version']
-
-# Ignore >>> when copying code
-copybutton_prompt_text = r'>>> |\.\.\. '
-copybutton_prompt_is_regexp = True
-
-# Add any paths that contain templates here, relative to this directory.
-templates_path = ['_templates']
-
-# List of patterns, relative to source directory, that match files and
-# directories to ignore when looking for source files.
-# This pattern also affects html_static_path and html_extra_path.
-exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
-
-# -- Options for HTML output -------------------------------------------------
-source_suffix = {
-    '.rst': 'restructuredtext',
-    '.md': 'markdown',
-}
-
-# The theme to use for HTML and HTML Help pages.  See the documentation for
-# a list of builtin themes.
-#
-html_theme = 'pytorch_sphinx_theme'
-html_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]
-html_theme_options = {
-    'menu': [
-        {
-            'name': 'GitHub',
-            'url': 'https://github.com/open-mmlab/mmfewshot'
-        },
-        {
-            'name':
-            '算法库',
-            'children': [{
-                'name': 'MMCV',
-                'url': 'https://mmcv.readthedocs.io/zh_CN/latest/',
-                'description': '计算机视觉基础库'
-            }, {
-                'name': 'MMDetection',
-                'url': 'https://mmdetection.readthedocs.io/zh_CN/latest/',
-                'description': '检测工具箱与测试基准'
-            }, {
-                'name': 'MMAction2',
-                'url': 'https://mmaction2.readthedocs.io/zh_CN/latest/',
-                'description': '视频理解工具箱与测试基准'
-            }, {
-                'name': 'MMClassification',
-                'url': 'https://mmclassification.readthedocs.io/zh_CN/latest/',
-                'description': '图像分类工具箱与测试基准'
-            }, {
-                'name': 'MMSegmentation',
-                'url': 'https://mmsegmentation.readthedocs.io/zh_CN/latest/',
-                'description': '语义分割工具箱与测试基准'
-            }, {
-                'name': 'MMDetection3D',
-                'url': 'https://mmdetection3d.readthedocs.io/zh_CN/latest/',
-                'description': '通用3D目标检测平台'
-            }, {
-                'name': 'MMEditing',
-                'url': 'https://mmediting.readthedocs.io/zh_CN/latest/',
-                'description': '图像视频编辑工具箱'
-            }, {
-                'name': 'MMOCR',
-                'url': 'https://mmocr.readthedocs.io/zh_CN/latest/',
-                'description': '全流程文字检测识别理解工具包'
-            }, {
-                'name': 'MMTracking',
-                'url': 'https://mmtracking.readthedocs.io/zh_CN/latest/',
-                'description': '一体化视频目标感知平台'
-            }, {
-                'name': 'MMGeneration',
-                'url': 'https://mmgeneration.readthedocs.io/zh_CN/latest/',
-                'description': '生成模型工具箱'
-            }, {
-                'name': 'MMFlow',
-                'url': 'https://mmflow.readthedocs.io/zh_CN/latest/',
-                'description': '光流估计工具箱与测试基准'
-            }, {
-                'name': 'MMFewShot',
-                'url': 'https://mmfewshot.readthedocs.io/zh_CN/latest/',
-                'description': '少样本学习工具箱与测试基准'
-            }, {
-                'name': 'MMHuman3D',
-                'url': 'https://mmhuman3d.readthedocs.io/zh_CN/latest/',
-                'description': 'OpenMMLab 人体参数化模型工具箱与测试基准.'
-            }]
-        },
-        {
-            'name':
-            'OpenMMLab',
-            'children': [{
-                'name': '主页',
-                'url': 'https://openmmlab.com/'
-            }, {
-                'name': 'GitHub',
-                'url': 'https://github.com/open-mmlab/'
-            }, {
-                'name': '推特',
-                'url': 'https://twitter.com/OpenMMLab'
-            }, {
-                'name': '知乎',
-                'url': 'https://zhihu.com/people/openmmlab'
-            }]
-        },
-    ]
-}
-
-# Add any paths that contain custom static files (such as style sheets) here,
-# relative to this directory. They are copied after the builtin static files,
-# so a file named "default.css" will overwrite the builtin "default.css".
-
-language = 'zh_CN'
-
-html_static_path = ['_static']
-html_css_files = ['css/readthedocs.css']
-
-# Enable ::: for my_st
-myst_enable_extensions = ['colon_fence']
-
-master_doc = 'index'
diff --git a/tools/benchmarks/classification/README.md b/tools/benchmarks/classification/README.md
index 1ae584c1a..d0935fadf 100644
--- a/tools/benchmarks/classification/README.md
+++ b/tools/benchmarks/classification/README.md
@@ -2,4 +2,4 @@
 
 As for classification task, we provides several benchmarks, such as SVM / Low-shot SVM, linear evaluation,  semi-supervised classification, etc.
 
-Please refer to [benchmark tutorial](../../../docs/tutorials/6_benchmarks.md) for details.
+Please refer to [benchmark tutorial](../../../docs/en/tutorials/6_benchmarks.md) for details.
diff --git a/tools/benchmarks/detectron2/README.md b/tools/benchmarks/detectron2/README.md
index b1413aa76..d1cc4c2a9 100644
--- a/tools/benchmarks/detectron2/README.md
+++ b/tools/benchmarks/detectron2/README.md
@@ -2,4 +2,4 @@
 
 We follow the evaluation setting in MoCo when trasferring to object detection.
 
-Please refer to [benchmark tutorial](../../../docs/tutorials/6_benchmarks.md) for details.
+Please refer to [benchmark tutorial](../../../docs/en/tutorials/6_benchmarks.md) for details.
diff --git a/tools/benchmarks/mmdetection/README.md b/tools/benchmarks/mmdetection/README.md
index 6cbb51c92..32644f195 100644
--- a/tools/benchmarks/mmdetection/README.md
+++ b/tools/benchmarks/mmdetection/README.md
@@ -2,4 +2,4 @@
 
 We follow the evaluation setting in MoCo when trasferring to object detection.
 
-Please refer to [benchmark tutorial](../../../docs/tutorials/6_benchmarks.md) for details.
+Please refer to [benchmark tutorial](../../../docs/en/tutorials/6_benchmarks.md) for details.
diff --git a/tools/benchmarks/mmsegmentation/README.md b/tools/benchmarks/mmsegmentation/README.md
index e91090477..781385eb4 100644
--- a/tools/benchmarks/mmsegmentation/README.md
+++ b/tools/benchmarks/mmsegmentation/README.md
@@ -2,4 +2,4 @@
 
 We follow the evaluation setting in MMSeg when transferring to semantic segmentation.
 
-Please refer to [benchmark tutorial](../../../docs/tutorials/6_benchmarks.md) for details.
+Please refer to [benchmark tutorial](../../../docs/en/tutorials/6_benchmarks.md) for details.