From e4509c39294666b3db7ad7c611629d80b97c2839 Mon Sep 17 00:00:00 2001
From: lzyhha <819814373@qq.com>
Date: Thu, 21 Nov 2024 11:27:00 +0800
Subject: [PATCH] train and evaluation

---
 README.md | 73 ++++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 70 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 1c03a80d6..31a1895af 100644
--- a/README.md
+++ b/README.md
@@ -6,11 +6,15 @@
 # Introduction
 
-Existing object detection methods often consider sRGB input, which was compressed from RAW data using ISP originally designed for visualization. However, such compression might lose crucial information for detection, especially under complex light and weather conditions. We introduce the AODRaw dataset, which offers 7,785 high-resolution real RAW images with 135,601 annotated instances spanning 62 categories, capturing a broad range of indoor and outdoor scenes under 9 distinct light and weather conditions. Based on AODRaw that supports RAW and sRGB object detection, we provide a comprehensive benchmark for evaluating current detection methods. We find that sRGB pre-training constrains the potential of RAW object detection due to the domain gap between sRGB and RAW, prompting us to directly pre-train on the RAW domain. However, it is harder for RAW pre-training to learn rich representations than sRGB pre-training due to the camera noise. To assist RAW pre-training, we distill the knowledge from an off-the-shelf model pre-trained on the sRGB domain. As a result, we achieve substantial improvements under diverse and adverse conditions without relying on extra pre-processing modules.
+Existing object detection methods often consider sRGB input, which is compressed from RAW data using an ISP originally designed for visualization. However, such compression may lose information that is crucial for detection, especially under complex light and weather conditions. **We introduce the AODRaw dataset, which offers 7,785 high-resolution real RAW images with 135,601 annotated instances spanning 62 categories, capturing a broad range of indoor and outdoor scenes under 9 distinct light and weather conditions.** Based on AODRaw, which supports both RAW and sRGB object detection, we provide a comprehensive benchmark for evaluating current detection methods. We find that sRGB pre-training constrains the potential of RAW object detection due to the domain gap between sRGB and RAW, prompting us to pre-train directly on the RAW domain. However, RAW pre-training learns rich representations less easily than sRGB pre-training does, due to camera noise. To assist RAW pre-training, we distill knowledge from an off-the-shelf model pre-trained on the sRGB domain. As a result, we achieve substantial improvements under diverse and adverse conditions without relying on extra pre-processing modules.
+
+# Dataset
+
+Please refer to [AODRaw]() to download and preprocess our AODRaw dataset.
 
 # ModelZoo
 
-#### Models using downsampled AODRaw
+#### Models using down-sampled AODRaw
 
 Please follow [downsampling](xx) to preprocess the images.
@@ -64,4 +68,67 @@ Please follow [slicing](xx) to preprocess the images.
 | Detector |Backbone | Pre-training Domain | Fine-tuning Domain | AP | Config | Model |
 | --------------------- | -------------------- | -------------------- | -------------------- | -------------------- | -------------------- | -------------------- |
 |Cascade RCNN | Swin-T | RAW | RAW| 29.8 | [config](configs/aodraw_slice/cascade-rcnn_swin-t-p4-w7_fpn_4conv1fc-giou_amp-1x_aodraw_raw_slice_raw-pretraining.py) | [model](cascade-rcnn_swin-t-p4-w7_fpn_4conv1fc-giou_amp-1x_aodraw_raw_slice_raw-pretraining.pth) |
-|Cascade RCNN | ConvNeXt-T | RAW | RAW | 30.7 | [config](configs/aodraw_slice/cascade-rcnn_convnext-t-p4-w7_fpn_4conv1fc-giou_amp-1x_aodraw_raw_slice_raw-pretraining.py) | [model](cascade-rcnn_convnext-t-p4-w7_fpn_4conv1fc-giou_amp-1x_aodraw_raw_slice_raw-pretraining.pth) |
\ No newline at end of file
+|Cascade RCNN | ConvNeXt-T | RAW | RAW | 30.7 | [config](configs/aodraw_slice/cascade-rcnn_convnext-t-p4-w7_fpn_4conv1fc-giou_amp-1x_aodraw_raw_slice_raw-pretraining.py) | [model](cascade-rcnn_convnext-t-p4-w7_fpn_4conv1fc-giou_amp-1x_aodraw_raw_slice_raw-pretraining.pth) |
+
+
+# Training and Evaluation
+
+### Configs
+
+- We provide training and evaluation for **RAW and sRGB object detection**.
+- The images in AODRaw are recorded at a resolution of $6000\times 4000$, which is too large to feed into detectors directly. We therefore adopt two experimental settings: 1) **down-sampling the images** to a lower resolution of $2000\times 1333$, corresponding to these [configs](#models-using-down-sampled-aodraw), and 2) **slicing the images** into a collection of $1280\times 1280$ patches, corresponding to these [configs](#models-using-sliced-aodraw). **Please preprocess the AODRaw dataset using [down-sampling](xx) and [slicing](xx) for these two settings, respectively.**
+
+[Configs](#models-using-down-sampled-aodraw) for training and evaluation on down-sampled AODRaw:
+| Task | Pre-training | Config Path |
+| --------------------- | -------------------- |-------------------- |
+| sRGB object detection | sRGB pre-training | configs/aodraw/*_aodraw_srgb.py |
+| RAW object detection | sRGB pre-training | configs/aodraw/*_aodraw_raw.py |
+| RAW object detection | our RAW pre-training | configs/aodraw/*_aodraw_raw_raw-pretraining.py |
+
+[Configs](#models-using-sliced-aodraw) for training and evaluation on sliced AODRaw:
+| Task | Pre-training | Config Path |
+| --------------------- | -------------------- |-------------------- |
+| sRGB object detection | sRGB pre-training | configs/aodraw_slice/*_aodraw_srgb_slice.py |
+| RAW object detection | sRGB pre-training | configs/aodraw_slice/*_aodraw_raw_slice.py |
+| RAW object detection | our RAW pre-training | configs/aodraw_slice/*_aodraw_raw_slice_raw-pretraining.py |
+
+### Training
+
+##### Single GPU
+
+```shell
+python tools/train.py ${CONFIG_FILE} [optional arguments]
+```
+
+##### Multi GPU
+
+```shell
+bash tools/dist_train.sh ${CONFIG_FILE} ${GPU_NUM} [optional arguments]
+```
+
+For more details of the training commands, please refer to [mmdetection](https://github.com/open-mmlab/mmdetection?tab=readme-ov-file#getting-started).
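+
+As a concrete example, the commands below fine-tune and then evaluate the RAW-pre-trained Cascade RCNN with a Swin-T backbone on sliced AODRaw, using the config and released checkpoint from the model zoo above. This is only an illustrative sketch: the GPU count of 4 is arbitrary, and the checkpoint filename assumes the downloaded model sits in the working directory. Evaluation commands are described in the next section.
+
+```shell
+# Fine-tune on sliced RAW AODRaw with RAW pre-training (4 GPUs as an example).
+bash tools/dist_train.sh configs/aodraw_slice/cascade-rcnn_swin-t-p4-w7_fpn_4conv1fc-giou_amp-1x_aodraw_raw_slice_raw-pretraining.py 4
+
+# Evaluate the released checkpoint with the same config.
+bash tools/dist_test.sh configs/aodraw_slice/cascade-rcnn_swin-t-p4-w7_fpn_4conv1fc-giou_amp-1x_aodraw_raw_slice_raw-pretraining.py cascade-rcnn_swin-t-p4-w7_fpn_4conv1fc-giou_amp-1x_aodraw_raw_slice_raw-pretraining.pth 4
+```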
+
+### Evaluation
+
+##### Single GPU
+
+```shell
+python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]
+```
+
+##### Multi GPU
+
+```shell
+bash tools/dist_test.sh ${CONFIG_FILE} ${CHECKPOINT_FILE} ${GPU_NUM} [optional arguments]
+```
+
+For more details of the evaluation commands, please refer to [mmdetection](https://github.com/open-mmlab/mmdetection?tab=readme-ov-file#getting-started).
+
+# Citation
+```
+
+```
+
+# Acknowledgement
+
+This repo is modified from [mmdetection](https://github.com/open-mmlab/mmdetection).
\ No newline at end of file