[Docs] translate 2_data_pipeline.md and 3_new_module.md into Chinese and fix some typos. (#168)

* [Docs] translate 2_data_pipeline.md into Chinese

* [Docs] translate 3_new_module.md into Chinese

* [Docs] Fix typos from py to python
Muyun99 authored Jan 10, 2022
1 parent 86ca16c commit 54471dd
Showing 12 changed files with 118 additions and 119 deletions.
10 changes: 5 additions & 5 deletions docs/en/tutorials/1_new_dataset.md
@@ -28,7 +28,7 @@ To write a new dataset, you need to implement:

Assume the name of your `DataSource` is `NewDataSource`; you can create a file named `new_data_source.py` under `mmselfsup/datasets/data_sources` and implement `NewDataSource` in it.

-```py
+```python
import mmcv
import numpy as np

@@ -49,7 +49,7 @@ class NewDataSource(BaseDataSource):

Then, add `NewDataSource` to `mmselfsup/datasets/data_sources/__init__.py`.

-```py
+```python
from .base import BaseDataSource
...
from .new_data_source import NewDataSource
@@ -63,7 +63,7 @@ __all__ = [

Assume the name of your `Dataset` is `NewDataset`; you can create a file named `new_dataset.py` under `mmselfsup/datasets` and implement `NewDataset` in it.

-```py
+```python
# Copyright (c) OpenMMLab. All rights reserved.
import torch
from mmcv.utils import build_from_cfg
@@ -89,7 +89,7 @@ class NewDataset(BaseDataset):

Then, add `NewDataset` to `mmselfsup/datasets/__init__.py`.

-```py
+```python
from .base import BaseDataset
...
from .new_dataset import NewDataset
@@ -103,7 +103,7 @@ __all__ = [

To use `NewDataset`, you can modify the config as follows:

-```py
+```python
train=dict(
    type='NewDataset',
    data_source=dict(
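
Since the diff collapses most of the tutorial's code, a minimal sketch of a complete `DataSource` subclass may help for orientation. Everything beyond the two imports shown above is an assumption in the MMSelfSup 0.x style (the `DATASOURCES` registry and the annotation-dict layout are not visible in the collapsed file):

```python
# Hypothetical completion of new_data_source.py -- a sketch, not the tutorial's
# actual code. Assumes the DATASOURCES registry from mmselfsup/datasets/builder.
import mmcv
import numpy as np

from ..builder import DATASOURCES
from .base import BaseDataSource


@DATASOURCES.register_module()
class NewDataSource(BaseDataSource):

    def load_annotations(self):
        # Read one image filename per line from the annotation file.
        assert isinstance(self.ann_file, str)
        data_infos = []
        for filename in mmcv.list_from_file(self.ann_file):
            info = dict(img_prefix=self.data_prefix,
                        img_info=dict(filename=filename))
            info['gt_label'] = np.array(-1, dtype=np.int64)  # unlabeled data
            data_infos.append(info)
        return data_infos
```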
6 changes: 3 additions & 3 deletions docs/en/tutorials/2_data_pipeline.md
@@ -10,7 +10,7 @@

Here is a config example of `Pipeline` for `SimCLR` training:

-```py
+```python
train_pipeline = [
    dict(type='RandomResizedCrop', size=224),
    dict(type='RandomHorizontalFlip'),
@@ -36,7 +36,7 @@ Every augmentation in the `Pipeline` receives an image as input and outputs an a

1.Write a new transformation function in [transforms.py](../../mmselfsup/datasets/pipelines/transforms.py) and override the `__call__` function, which takes a `Pillow` image as input:

-```py
+```python
@PIPELINES.register_module()
class MyTransform(object):

@@ -47,7 +47,7 @@ class MyTransform(object):

2.Use it in config files. We reuse the config file shown above and add `MyTransform` to it.

-```py
+```python
train_pipeline = [
    dict(type='RandomResizedCrop', size=224),
    dict(type='RandomHorizontalFlip'),
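
The body of the example transform is likewise collapsed above, so here is a hedged, self-contained sketch of a registered pipeline transform; the import path of `PIPELINES` and the flip operation are illustrative assumptions:

```python
# Hypothetical complete transform; assumes the PIPELINES registry lives in
# mmselfsup/datasets/builder, as in MMSelfSup 0.x.
from PIL import Image

from mmselfsup.datasets.builder import PIPELINES


@PIPELINES.register_module()
class MyTransform(object):
    """Trivial example: horizontally flip the incoming Pillow image."""

    def __call__(self, img):
        return img.transpose(Image.FLIP_LEFT_RIGHT)
```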
24 changes: 12 additions & 12 deletions docs/en/tutorials/3_new_module.md
@@ -19,7 +19,7 @@ Assuming we are going to create a customized backbone `CustomizedBackbone`

1.Create a new file `mmselfsup/models/backbones/customized_backbone.py` and implement `CustomizedBackbone` in it.

-```py
+```python
import torch.nn as nn
from ..builder import BACKBONES

@@ -45,7 +45,7 @@ class CustomizedBackbone(nn.Module):

2.Import the customized backbone in `mmselfsup/models/backbones/__init__.py`.

-```py
+```python
from .customized_backbone import CustomizedBackbone

__all__ = [
@@ -55,7 +55,7 @@ __all__ = [

3.Use it in your config file.

-```py
+```python
model = dict(
    ...
    backbone=dict(
@@ -71,7 +71,7 @@ we include all projection heads in `mmselfsup/models/necks`. Assuming we are goi

1.Create a new file `mmselfsup/models/necks/customized_proj_head.py` and implement `CustomizedProjHead` in it.

-```py
+```python
import torch.nn as nn
from mmcv.runner import BaseModule

@@ -92,7 +92,7 @@ You need to implement the forward function, which takes the feature from the bac

2.Import the `CustomizedProjHead` in `mmselfsup/models/necks/__init__.py`.

-```py
+```python
from .customized_proj_head import CustomizedProjHead

__all__ = [
@@ -104,7 +104,7 @@ __all__ = [

3.Use it in your config file.

-```py
+```python
model = dict(
    ...,
    neck=dict(
@@ -119,7 +119,7 @@ To add a new loss function, we mainly implement the `forward` function in the lo

1.Create a new file `mmselfsup/models/heads/customized_head.py` and implement your customized `CustomizedHead` in it.

-```py
+```python
import torch
import torch.nn as nn
from mmcv.runner import BaseModule
@@ -142,15 +142,15 @@ class CustomizedHead(BaseModule):

2.Import the module in `mmselfsup/models/heads/__init__.py`

-```py
+```python
from .customized_head import CustomizedHead

__all__ = [..., 'CustomizedHead', ...]
```

3.Use it in your config file.

-```py
+```python
model = dict(
    ...,
    head=dict(type='CustomizedHead')
@@ -163,7 +163,7 @@ After creating each component, mentioned above, we need to create a `CustomizedA

1.Create a new file `mmselfsup/models/algorithms/customized_algorithm.py` and implement `CustomizedAlgorithm` in it.

-```py
+```python
# Copyright (c) OpenMMLab. All rights reserved.
import torch

@@ -187,15 +187,15 @@ class CustomizedAlgorithm(BaseModel):

2.Import the module in `mmselfsup/models/algorithms/__init__.py`

-```py
+```python
from .customized_algorithm import CustomizedAlgorithm

__all__ = [..., 'CustomizedAlgorithm', ...]
```

3.Use it in your config file.

-```py
+```python
model = dict(
    type='CustomizedAlgorithm',
    backbone=...,
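
Because the algorithm body is collapsed in the diff, the following rough sketch shows how backbone, neck, and head are typically wired together in a `forward_train`; the builder helpers, the `init_cfg` argument, and the loss-dict convention are assumptions in the MMSelfSup 0.x style, not lines from the file:

```python
# Hypothetical sketch of customized_algorithm.py -- illustrative only.
from ..builder import ALGORITHMS, build_backbone, build_head, build_neck
from .base import BaseModel


@ALGORITHMS.register_module()
class CustomizedAlgorithm(BaseModel):

    def __init__(self, backbone, neck=None, head=None, init_cfg=None):
        super().__init__(init_cfg)
        self.backbone = build_backbone(backbone)
        self.neck = build_neck(neck)
        self.head = build_head(head)

    def forward_train(self, img, **kwargs):
        x = self.backbone(img)   # backbone features
        z = self.neck(x)         # projected embeddings
        return self.head(z)      # a dict of losses, e.g. dict(loss=...)
```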
26 changes: 13 additions & 13 deletions docs/en/tutorials/4_schedule.md
@@ -20,15 +20,15 @@ We already support to use all the optimizers implemented by PyTorch, and to use

For example, if you want to use SGD, the modification could be as follows.

-```py
+```python
optimizer = dict(type='SGD', lr=0.0003, weight_decay=0.0001)
```

To modify the learning rate of the model, just modify the `lr` in the optimizer config. You can also directly set other arguments according to the [API doc](https://pytorch.org/docs/stable/optim.html?highlight=optim#module-torch.optim) of PyTorch.

For example, if you want to use `Adam` with the setting like `torch.optim.Adam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False)` in PyTorch, the config should look like:

-```py
+```python
optimizer = dict(type='Adam', lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False)
```
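
The config dict mirrors the constructor call one-to-one: `type` selects the optimizer class and the remaining keys become keyword arguments. A plain-PyTorch illustration of roughly what the builder does (the tiny model is a stand-in):

```python
import torch

model = torch.nn.Linear(8, 2)  # stand-in model
# roughly what building the config above amounts to
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999),
                             eps=1e-08, weight_decay=0, amsgrad=False)
```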

@@ -42,7 +42,7 @@ Learning rate decay is widely used to improve performance. And to use learning r

For example, we use the CosineAnnealing policy to train SimCLR, and the config is:

-```py
+```python
lr_config = dict(
    policy='CosineAnnealing',
    ...)
@@ -67,7 +67,7 @@ Here are some examples:

1.linear & warmup by iter

-```py
+```python
lr_config = dict(
    policy='CosineAnnealing',
    by_epoch=False,
@@ -80,7 +80,7 @@ lr_config = dict(

2.exp & warmup by epoch

-```py
+```python
lr_config = dict(
    policy='CosineAnnealing',
    min_lr=0,
@@ -98,7 +98,7 @@ Momentum scheduler is usually used with LR scheduler, for example, the following

Here is an example:

-```py
+```python
lr_config = dict(
    policy='cyclic',
    target_ratio=(10, 1e-4),
@@ -119,7 +119,7 @@ Some models may have some parameter-specific settings for optimization, for exam

For example, if we do not want to apply weight decay to the parameters of BatchNorm or GroupNorm, or to the bias in each layer, we can use the following config file:

-```py
+```python
optimizer = dict(
    type=...,
    lr=...,
@@ -140,7 +140,7 @@ Currently we support `grad_clip` option in `optimizer_config`, and you can refer

Here is an example:

-```py
+```python
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
# norm_type: type of the used p-norm, here norm_type is 2.
```
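
Under the hood this boils down to PyTorch's gradient clipping; a stand-alone sketch of the same operation (with a stand-in model, not the hook's actual code):

```python
import torch

model = torch.nn.Linear(8, 2)              # stand-in model
model(torch.randn(4, 8)).sum().backward()  # populate the gradients
# clip the global 2-norm of all gradients at 35, as grad_clip configures
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=35, norm_type=2)
```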
@@ -153,14 +153,14 @@ When there is not enough computation resource, the batch size can only be set to

Here is an example:

-```py
+```python
data = dict(imgs_per_gpu=64)
optimizer_config = dict(type="DistOptimizerHook", update_interval=4)
```

This indicates that during training, back-propagation is performed every 4 iters, and the above is equivalent to:

-```py
+```python
data = dict(imgs_per_gpu=256)
optimizer_config = dict(type="OptimizerHook")
```
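
The two settings match because gradients are summed over the 4 small batches before each optimizer step. A minimal plain-PyTorch sketch of the idea (illustrative, not `DistOptimizerHook`'s actual implementation):

```python
import torch

model = torch.nn.Linear(8, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
update_interval = 4

for i, batch in enumerate(torch.randn(16, 8).split(4)):  # four small batches
    loss = model(batch).sum() / update_interval  # scale so 4 steps match 1 big batch
    loss.backward()                              # gradients accumulate in .grad
    if (i + 1) % update_interval == 0:
        optimizer.step()                         # one update per 4 iterations
        optimizer.zero_grad()
```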
@@ -171,7 +171,7 @@ In academic research and industrial practice, it is likely that you need some op

Implement your `CustomizedOptim` in `mmselfsup/core/optimizer/optimizers.py`

-```py
+```python
import torch
from torch.optim import * # noqa: F401,F403
from torch.optim.optimizer import Optimizer, required
@@ -193,7 +193,7 @@ class CustomizedOptim(Optimizer):

Import it in `mmselfsup/core/optimizer/__init__.py`

-```py
+```python
from .optimizers import CustomizedOptim
from .builder import build_optimizer

@@ -202,7 +202,7 @@ __all__ = ['CustomizedOptim', 'build_optimizer', ...]

Use it in your config file

-```py
+```python
optimizer = dict(
    type='CustomizedOptim',
    ...
4 changes: 2 additions & 2 deletions docs/en/tutorials/5_runtime.md
@@ -21,13 +21,13 @@ Workflow is a list of (phase, duration) to specify the running order and duratio

For example, we use an epoch-based runner by default, and the "duration" means how many epochs the phase is executed for in one cycle. Usually, we only want to execute the training phase, so just use the following config.

-```py
+```python
workflow = [('train', 1)]
```

Sometimes we may want to check some metrics (e.g. loss, accuracy) of the model on the validation set. In such a case, we can set the workflow as

-```py
+```python
[('train', 1), ('val', 1)]
```

5 changes: 2 additions & 3 deletions docs/en/tutorials/6_benchmarks.md
@@ -11,7 +11,7 @@ In MMSelfSup, we provide many benchmarks, thus the models can be evaluated on di
- [Segmentation](#segmentation)

First, you are supposed to extract your backbone weights with `tools/model_converters/extract_backbone_weights.py`
-```
+```shell
python ./tools/misc/extract_backbone_weights.py {CHECKPOINT} {MODEL_FILE}
```
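
Conceptually, the extraction keeps only the checkpoint keys with a `backbone.` prefix and strips that prefix; a rough stand-alone equivalent (a sketch under that assumption, not the actual tool):

```python
import torch

ckpt = torch.load('checkpoint.pth', map_location='cpu')
state_dict = ckpt.get('state_dict', ckpt)
# keep only backbone weights and drop their 'backbone.' prefix
backbone = {k[len('backbone.'):]: v
            for k, v in state_dict.items() if k.startswith('backbone.')}
torch.save(dict(state_dict=backbone), 'backbone.pth')
```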

@@ -115,11 +115,10 @@ Remarks:
- `CONFIG`: Use config files under `configs/benchmarks/mmdetection/` or write your own config files
- `PRETRAIN`: the pretrained model file.


Or if you want to do detection tasks with [detectron2](https://github.com/facebookresearch/detectron2), we also provide some config files.
Please refer to [INSTALL.md](https://github.com/facebookresearch/detectron2/blob/main/INSTALL.md) for installation and follow the [directory structure](https://github.com/facebookresearch/detectron2/tree/main/datasets) to prepare your datasets required by detectron2.

-```
+```shell
conda activate detectron2 # use detectron2 environment here, otherwise use open-mmlab environment
cd benchmarks/detection
python convert-pretrain-to-detectron2.py ${WEIGHT_FILE} ${OUTPUT_FILE} # must use .pkl as the output extension.
10 changes: 5 additions & 5 deletions docs/zh_cn/tutorials/1_new_dataset.md
@@ -28,7 +28,7 @@

Assume the subclass you create from the parent class `DataSource` is named `NewDataSource`; you can create a file named `new_data_source.py` under the `mmselfsup/datasets/data_sources` directory and implement `NewDataSource` in it.

-```py
+```python
import mmcv
import numpy as np

@@ -49,7 +49,7 @@ class NewDataSource(BaseDataSource):

Then, add `NewDataSource` to `mmselfsup/datasets/data_sources/__init__.py`.

-```py
+```python
from .base import BaseDataSource
...
from .new_data_source import NewDataSource
@@ -63,7 +63,7 @@ __all__ = [

Assume the subclass you create from the parent class `Dataset` is named `NewDataset`; you can create a file named `new_dataset.py` under the `mmselfsup/datasets` directory and implement `NewDataset` in it.

-```py
+```python
# Copyright (c) OpenMMLab. All rights reserved.
import torch
from mmcv.utils import build_from_cfg
@@ -89,7 +89,7 @@ class NewDataset(BaseDataset):

Then, add `NewDataset` to `mmselfsup/datasets/__init__.py`.

-```py
+```python
from .base import BaseDataset
...
from .new_dataset import NewDataset
@@ -103,7 +103,7 @@ __all__ = [

To use `NewDataset`, you can modify the config as follows:

-```py
+```python
train=dict(
    type='NewDataset',
    data_source=dict(