remove gpu's description in md
WongGawa committed Dec 23, 2024
1 parent 0016ee8 commit ae84fe1
Showing 12 changed files with 29 additions and 43 deletions.
13 changes: 5 additions & 8 deletions GETTING_STARTED.md
@@ -13,9 +13,6 @@ This document provides a brief introduction to the usage of built-in command-lin
```
# Run with Ascend (By default)
python demo/predict.py --config ./configs/yolov7/yolov7.yaml --weight=/path_to_ckpt/WEIGHT.ckpt --image_path /path_to_image/IMAGE.jpg
-# Run with GPU
-python demo/predict.py --config ./configs/yolov7/yolov7.yaml --weight=/path_to_ckpt/WEIGHT.ckpt --image_path /path_to_image/IMAGE.jpg --device_target=GPU
```


@@ -48,23 +45,23 @@ to understand their behavior. Some common arguments are:
```
</details>

-* To train a model on 1 NPU/GPU/CPU:
+* To train a model on 1 NPU/CPU:
```
python train.py --config ./configs/yolov7/yolov7.yaml
```
-* To train a model on 8 NPUs/GPUs:
+* To train a model on 8 NPUs:
```
msrun --worker_num=8 --local_worker_num=8 --bind_core=True --log_dir=./yolov7_log python train.py --config ./configs/yolov7/yolov7.yaml --is_parallel True
```
-* To evaluate a model's performance on 1 NPU/GPU/CPU:
+* To evaluate a model's performance on 1 NPU/CPU:
```
python test.py --config ./configs/yolov7/yolov7.yaml --weight /path_to_ckpt/WEIGHT.ckpt
```
-* To evaluate a model's performance 8 NPUs/GPUs:
+* To evaluate a model's performance on 8 NPUs:
```
msrun --worker_num=8 --local_worker_num=8 --bind_core=True --log_dir=./yolov7_log python test.py --config ./configs/yolov7/yolov7.yaml --weight /path_to_ckpt/WEIGHT.ckpt --is_parallel True
```
-*Notes: (1) The default hyper-parameter is used for 8-card training, and some parameters need to be adjusted in the case of a single card. (2) The default device is Ascend, and you can modify it by specifying 'device_target' as Ascend/GPU/CPU, as these are currently supported.*
+*Notes: (1) The default hyper-parameters are set for 8-card training; some parameters need to be adjusted for single-card runs. (2) The default device is Ascend; you can change it by setting 'device_target' to Ascend or CPU, as these are the currently supported targets.*
* For more options, see `train/test.py -h`.

* Notice that if you are using `msrun` startup with 2 devices, please add `--bind_core=True` to improve performance. For example:
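  The concrete 2-device example is truncated in this view; a minimal sketch, assuming the same flags as the 8-device commands above with the worker counts reduced to 2, would be:
  ```
  msrun --worker_num=2 --local_worker_num=2 --bind_core=True --log_dir=./yolov7_log python train.py --config ./configs/yolov7/yolov7.yaml --is_parallel True
  ```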
13 changes: 5 additions & 8 deletions GETTING_STARTED_CN.md
@@ -11,9 +11,6 @@
```shell
# NPU (default)
python demo/predict.py --config ./configs/yolov7/yolov7.yaml --weight=/path_to_ckpt/WEIGHT.ckpt --image_path /path_to_image/IMAGE.jpg

-# GPU
-python demo/predict.py --config ./configs/yolov7/yolov7.yaml --weight=/path_to_ckpt/WEIGHT.ckpt --image_path /path_to_image/IMAGE.jpg --device_target=GPU
```

For details about the command-line arguments, see `demo/predict.py -h`, or check its [source code](https://github.com/mindspore-lab/mindyolo/blob/master/deploy/predict.py).
@@ -45,27 +42,27 @@ python demo/predict.py --config ./configs/yolov7/yolov7.yaml --weight=/path_to_c
```
</details>

-* To train a model on a single NPU/GPU/CPU:
+* To train a model on a single NPU/CPU:

```shell
python train.py --config ./configs/yolov7/yolov7.yaml
```
-* To run distributed model training on multiple NPUs/GPUs, e.g. with 8 devices:
+* To run distributed model training on multiple NPUs, e.g. with 8 devices:
```shell
msrun --worker_num=8 --local_worker_num=8 --bind_core=True --log_dir=./yolov7_log python train.py --config ./configs/yolov7/yolov7.yaml --is_parallel True
```
-* To evaluate a model's accuracy on a single NPU/GPU/CPU:
+* To evaluate a model's accuracy on a single NPU/CPU:

```shell
python test.py --config ./configs/yolov7/yolov7.yaml --weight /path_to_ckpt/WEIGHT.ckpt
```
-* To evaluate a model's accuracy on multiple NPUs/GPUs with distributed evaluation:
+* To evaluate a model's accuracy on multiple NPUs with distributed evaluation:

```shell
msrun --worker_num=8 --local_worker_num=8 --bind_core=True --log_dir=./yolov7_log python test.py --config ./configs/yolov7/yolov7.yaml --weight /path_to_ckpt/WEIGHT.ckpt --is_parallel True
```

-*Note: the default hyper-parameters are set for 8-card training; some parameters need to be adjusted for single-card runs. The default device is Ascend, and you can set 'device_target' to Ascend/GPU/CPU.*
+*Note: the default hyper-parameters are set for 8-card training; some parameters need to be adjusted for single-card runs. The default device is Ascend, and you can set 'device_target' to Ascend/CPU (see the sketch after this section).*
* For more options, see `train/test.py -h`.
* To train on the cloud (ModelArts), see [here](./tutorials/cloud/modelarts_CN.md).

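As a sketch of the note above — assuming `--device_target` is accepted on the command line here the same way it is in the config READMEs below — a single-card CPU run would look like:

```shell
python train.py --config ./configs/yolov7/yolov7.yaml --device_target CPU
```

The removed GPU example earlier in this file passes the same flag to `demo/predict.py`, so the override should apply there as well.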
5 changes: 2 additions & 3 deletions configs/yolov10/README.md
@@ -48,11 +48,10 @@ Please refer to the [GETTING_STARTED](https://github.com/mindspore-lab/mindyolo/

It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run
```shell
-# distributed training on multiple GPU/Ascend devices
+# distributed training on multiple Ascend devices
msrun --worker_num=8 --local_worker_num=8 --bind_core=True --log_dir=./yolov10_log python train.py --config ./configs/yolov10/yolov10n.yaml --device_target Ascend --is_parallel True
```

-Similarly, you can train the model on multiple GPU devices with the above msrun command.
**Note:** For more information about msrun configuration, please refer to [here](https://www.mindspore.cn/tutorials/experts/zh-CN/r2.3.1/parallel/msrun_launcher.html).

For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindyolo/blob/master/mindyolo/utils/config.py).
@@ -64,7 +63,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h
If you want to train or finetune the model on a smaller dataset without distributed training, please run:

```shell
-# standalone training on a CPU/GPU/Ascend device
+# standalone training on a CPU/Ascend device
python train.py --config ./configs/yolov10/yolov10n.yaml --device_target Ascend
```

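Once training finishes, the checkpoint can be evaluated in the same way as shown in GETTING_STARTED.md above; this is a sketch by analogy, with `/path_to_ckpt/WEIGHT.ckpt` as a placeholder for your own checkpoint:

```shell
python test.py --config ./configs/yolov10/yolov10n.yaml --weight /path_to_ckpt/WEIGHT.ckpt --device_target Ascend
```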
5 changes: 2 additions & 3 deletions configs/yolov3/README.md
@@ -37,11 +37,10 @@ python mindyolo/utils/convert_weight_darknet53.py

It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run
```shell
-# distributed training on multiple GPU/Ascend devices
+# distributed training on multiple Ascend devices
msrun --worker_num=8 --local_worker_num=8 --bind_core=True --log_dir=./yolov3_log python train.py --config ./configs/yolov3/yolov3.yaml --device_target Ascend --is_parallel True
```

-Similarly, you can train the model on multiple GPU devices with the above msrun command.
**Note:** For more information about msrun configuration, please refer to [here](https://www.mindspore.cn/tutorials/experts/zh-CN/r2.3.1/parallel/msrun_launcher.html)

For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindyolo/blob/master/mindyolo/utils/config.py).
@@ -53,7 +52,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h
If you want to train or finetune the model on a smaller dataset without distributed training, please run:

```shell
-# standalone training on a CPU/GPU/Ascend device
+# standalone training on a CPU/Ascend device
python train.py --config ./configs/yolov3/yolov3.yaml --device_target Ascend
```

5 changes: 2 additions & 3 deletions configs/yolov4/README.md
@@ -51,11 +51,10 @@ python mindyolo/utils/convert_weight_cspdarknet53.py

It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run
```shell
-# distributed training on multiple GPU/Ascend devices
+# distributed training on multiple Ascend devices
msrun --worker_num=8 --local_worker_num=8 --bind_core=True --log_dir=./yolov4_log python train.py --config ./configs/yolov4/yolov4-silu.yaml --device_target Ascend --is_parallel True --epochs 320
```

-Similarly, you can train the model on multiple GPU devices with the above msrun command.
**Note:** For more information about msrun configuration, please refer to [here](https://www.mindspore.cn/tutorials/experts/zh-CN/r2.3.1/parallel/msrun_launcher.html)

For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindyolo/blob/master/mindyolo/utils/config.py).
@@ -72,7 +71,7 @@ multiprocessing/semaphore_tracker.py: 144 UserWarning: semaphore_tracker: There
If you want to train or finetune the model on a smaller dataset without distributed training, please run:

```shell
-# standalone training on a CPU/GPU/Ascend device
+# standalone training on a CPU/Ascend device
python train.py --config ./configs/yolov4/yolov4-silu.yaml --device_target Ascend --epochs 320
```

5 changes: 2 additions & 3 deletions configs/yolov5/README.md
@@ -25,11 +25,10 @@ Please refer to the [GETTING_STARTED](https://github.com/mindspore-lab/mindyolo/

It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run
```shell
-# distributed training on multiple GPU/Ascend devices
+# distributed training on multiple Ascend devices
msrun --worker_num=8 --local_worker_num=8 --bind_core=True --log_dir=./yolov5_log python train.py --config ./configs/yolov5/yolov5n.yaml --device_target Ascend --is_parallel True
```

-Similarly, you can train the model on multiple GPU devices with the above msrun command.
**Note:** For more information about msrun configuration, please refer to [here](https://www.mindspore.cn/tutorials/experts/zh-CN/r2.3.1/parallel/msrun_launcher.html)

For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindyolo/blob/master/mindyolo/utils/config.py).
@@ -41,7 +40,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h
If you want to train or finetune the model on a smaller dataset without distributed training, please run:

```shell
-# standalone training on a CPU/GPU/Ascend device
+# standalone training on a CPU/Ascend device
python train.py --config ./configs/yolov5/yolov5n.yaml --device_target Ascend
```

5 changes: 2 additions & 3 deletions configs/yolov7/README.md
@@ -28,11 +28,10 @@ Please refer to the [GETTING_STARTED](https://github.com/mindspore-lab/mindyolo/

It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run
```shell
-# distributed training on multiple GPU/Ascend devices
+# distributed training on multiple Ascend devices
msrun --worker_num=8 --local_worker_num=8 --bind_core=True --log_dir=./yolov7_log python train.py --config ./configs/yolov7/yolov7.yaml --device_target Ascend --is_parallel True
```

-Similarly, you can train the model on multiple GPU devices with the above msrun command.
**Note:** For more information about msrun configuration, please refer to [here](https://www.mindspore.cn/tutorials/experts/zh-CN/r2.3.1/parallel/msrun_launcher.html)

For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindyolo/blob/master/mindyolo/utils/config.py).
@@ -44,7 +43,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h
If you want to train or finetune the model on a smaller dataset without distributed training, please run:

```shell
-# standalone training on a CPU/GPU/Ascend device
+# standalone training on a CPU/Ascend device
python train.py --config ./configs/yolov7/yolov7.yaml --device_target Ascend
```

5 changes: 2 additions & 3 deletions configs/yolov8/README.md
@@ -26,11 +26,10 @@ Please refer to the [GETTING_STARTED](https://github.com/mindspore-lab/mindyolo/

It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run
```shell
-# distributed training on multiple GPU/Ascend devices
+# distributed training on multiple Ascend devices
msrun --worker_num=8 --local_worker_num=8 --bind_core=True --log_dir=./yolov8_log python train.py --config ./configs/yolov8/yolov8n.yaml --device_target Ascend --is_parallel True
```

-Similarly, you can train the model on multiple GPU devices with the above msrun command.
**Note:** For more information about msrun configuration, please refer to [here](https://www.mindspore.cn/tutorials/experts/zh-CN/r2.3.1/parallel/msrun_launcher.html)

For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindyolo/blob/master/mindyolo/utils/config.py).
@@ -42,7 +41,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h
If you want to train or finetune the model on a smaller dataset without distributed training, please run:

```shell
-# standalone training on a CPU/GPU/Ascend device
+# standalone training on a CPU/Ascend device
python train.py --config ./configs/yolov8/yolov8n.yaml --device_target Ascend
```

5 changes: 2 additions & 3 deletions configs/yolov9/README.md
@@ -56,11 +56,10 @@ Please refer to the [GETTING_STARTED](https://github.com/mindspore-lab/mindyolo/

It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run
```shell
-# distributed training on multiple GPU/Ascend devices
+# distributed training on multiple Ascend devices
msrun --worker_num=8 --local_worker_num=8 --bind_core=True --log_dir=./yolov9_log python train.py --config ./configs/yolov9/yolov9-t.yaml --device_target Ascend --is_parallel True
```

-Similarly, you can train the model on multiple GPU devices with the above msrun command.
**Note:** For more information about msrun configuration, please refer to [here](https://www.mindspore.cn/tutorials/experts/zh-CN/r2.3.1/parallel/msrun_launcher.html).

For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindyolo/blob/master/mindyolo/utils/config.py).
@@ -72,7 +71,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h
If you want to train or finetune the model on a smaller dataset without distributed training, please run:

```shell
-# standalone training on a CPU/GPU/Ascend device
+# standalone training on a CPU/Ascend device
python train.py --config ./configs/yolov9/yolov9-t.yaml --device_target Ascend
```

5 changes: 2 additions & 3 deletions configs/yolox/README.md
@@ -25,11 +25,10 @@ Please refer to the [GETTING_STARTED](https://github.com/mindspore-lab/mindyolo/

It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run
```shell
-# distributed training on multiple GPU/Ascend devices
+# distributed training on multiple Ascend devices
msrun --worker_num=8 --local_worker_num=8 --bind_core=True --log_dir=./yolox_log python train.py --config ./configs/yolox/yolox-s.yaml --device_target Ascend --is_parallel True
```

-Similarly, you can train the model on multiple GPU devices with the above msrun command.
**Note:** For more information about msrun configuration, please refer to [here](https://www.mindspore.cn/tutorials/experts/zh-CN/r2.3.1/parallel/msrun_launcher.html)

For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindyolo/blob/master/mindyolo/utils/config.py).
@@ -41,7 +40,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h
If you want to train or finetune the model on a smaller dataset without distributed training, please first run:

```shell
-# standalone 1st stage training on a CPU/GPU/Ascend device
+# standalone 1st stage training on a CPU/Ascend device
python train.py --config ./configs/yolox/yolox-s.yaml --device_target Ascend
```

4 changes: 2 additions & 2 deletions examples/finetune_SHWD/README.md
@@ -111,13 +111,13 @@ optimizer:
* anchors can be adjusted to match the actual object sizes

Since the SHWD training set contains only about 6,000 images, the yolov7-tiny model is chosen for training.
-* To run distributed model training on multiple NPUs/GPUs, e.g. with 8 devices:
+* To run distributed model training on multiple NPUs, e.g. with 8 devices:

```shell
msrun --worker_num=8 --local_worker_num=8 --bind_core=True --log_dir=./yolov7-tiny_log python train.py --config ./examples/finetune_SHWD/yolov7-tiny_shwd.yaml --is_parallel True
```

-* To train a model on a single NPU/GPU/CPU:
+* To train a model on a single NPU/CPU:

```shell
python train.py --config ./examples/finetune_SHWD/yolov7-tiny_shwd.yaml
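```

As a hedged follow-up, the finetuned checkpoint could be used for inference with `demo/predict.py` exactly as shown in GETTING_STARTED above; the checkpoint and image paths are placeholders to replace with your own:

```shell
python demo/predict.py --config ./examples/finetune_SHWD/yolov7-tiny_shwd.yaml --weight=/path_to_ckpt/WEIGHT.ckpt --image_path /path_to_image/IMAGE.jpg
```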
2 changes: 1 addition & 1 deletion tutorials/configuration_CN.md
@@ -24,7 +24,7 @@ __BASE__: [
## Basic parameters

### Parameter description
-- device_target: target device, Ascend/GPU/CPU
+- device_target: target device, Ascend/CPU
- save_dir: directory where run results are saved, default ./runs
- log_interval: step interval for printing logs, default 100
- is_parallel: whether to use distributed training, default False
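A sketch of how these basic parameters are typically overridden from the command line; `--device_target` and `--is_parallel` appear in the commands above, while `--save_dir` and `--log_interval` are assumed here to be exposed as flags by the config parser (set them in the YAML config if they are not):

```shell
# --device_target and --is_parallel are confirmed in the docs above;
# --save_dir and --log_interval are assumed to be generated from the config keys.
python train.py --config ./configs/yolov7/yolov7.yaml --device_target Ascend --save_dir ./runs/yolov7 --log_interval 50 --is_parallel False
```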
