- [2025/03/04] We have released the training code and inference code! 🚀🚀
- [2025/02/27] ReDDiT has been accepted to CVPR 2025! 🤗🤗
- Training code
- Inference code
- CVPR Camera-ready Version
- Project page
- Journal Version & Teacher Model
- Python 3.8
- PyTorch 1.11
- Create Conda Environment

```bash
conda create --name ReDDiT python=3.8
conda activate ReDDiT
```
- Install PyTorch

```bash
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113
```
- Install Dependencies

```bash
cd ReDDiT
pip install -r requirements.txt
```
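To verify the installation, a quick check (the expected version strings assume the CUDA 11.3 wheels above):

```python
# Quick sanity check that the CUDA 11.3 wheels installed correctly.
import torch
import torchvision

print(torch.__version__)          # expected: 1.11.0+cu113
print(torchvision.__version__)    # expected: 0.12.0+cu113
print(torch.cuda.is_available())  # should be True if your driver supports CUDA 11.3
```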
You can refer to the following links to download the datasets. Then, organize them in the following folder structure:
```
dataset
├── LOLv1
│   ├── our485
│   │   ├── low
│   │   └── high
│   └── eval15
│       ├── low
│       └── high
└── LOLv2
    ├── Real_captured
    │   ├── Train
    │   └── Test
    └── Synthetic
        ├── Train
        └── Test
```
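If it helps, here is a small sketch (not part of the repo) that checks the layout above from the repository root:

```python
# Hypothetical helper to verify the dataset layout matches the tree above.
from pathlib import Path

expected = [
    "dataset/LOLv1/our485/low",
    "dataset/LOLv1/our485/high",
    "dataset/LOLv1/eval15/low",
    "dataset/LOLv1/eval15/high",
    "dataset/LOLv2/Real_captured/Train",
    "dataset/LOLv2/Real_captured/Test",
    "dataset/LOLv2/Synthetic/Train",
    "dataset/LOLv2/Synthetic/Test",
]

missing = [p for p in expected if not Path(p).is_dir()]
print("All dataset folders found." if not missing else f"Missing folders: {missing}")
```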
Note: Following LLFlow and KinD, we also adjust the brightness of the image produced by the network, based on the mean value of the ground truth (GT). This adjustment does not affect the generated texture details; it is merely a straightforward way to regulate the overall illumination. In practical applications, it can just as easily be tuned to user preference.
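For reference, a minimal sketch of this kind of adjustment (a hypothetical helper illustrating the idea, not the repo's exact code):

```python
# Sketch of a GT-mean brightness alignment (illustrative, not the repo's implementation).
import numpy as np

def align_brightness(pred: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """Scale the prediction by a single global gain so that its mean
    intensity matches the ground truth's. A uniform gain leaves the
    generated texture details untouched; images assumed in [0, 1]."""
    gain = gt.mean() / max(pred.mean(), 1e-6)  # guard against division by zero
    return np.clip(pred * gain, 0.0, 1.0)
```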
You can also refer to the following links to download the checkpoints and put them in the following folder:
```
checkpoints
├── lolv1_8step_gen.pth
├── lolv1_4step_gen.pth
├── lolv1_2step_gen.pth
└── ...
```
To test the model, run `sh test.sh` and modify the `n_timestep` and `time_scale` parameters for the different step models. The corresponding settings are:
"val": {
"schedule": "linear",
"n_timestep": 8,
"linear_start": 1e-4,
"linear_end": 2e-2,
"time_scale": 64
}
"val": {
"schedule": "linear",
"n_timestep": 4,
"linear_start": 1e-4,
"linear_end": 2e-2,
"time_scale": 128
}
"val": {
"schedule": "linear",
"n_timestep": 2,
"linear_start": 1e-4,
"linear_end": 2e-2,
"time_scale": 256
}
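Note that all three presets satisfy `n_timestep * time_scale = 512`, so fewer sampling steps come with a proportionally larger stride. A small sketch of the relation (the 512 total is inferred from the presets above, not stated explicitly in the configs):

```python
# The 512-step total is inferred from the presets: 8*64 = 4*128 = 2*256.
TOTAL_STEPS = 512

def time_scale_for(n_timestep: int) -> int:
    """Stride that covers the (assumed) 512-step schedule in n_timestep steps."""
    assert TOTAL_STEPS % n_timestep == 0
    return TOTAL_STEPS // n_timestep

for n in (8, 4, 2):
    print(f"n_timestep={n} -> time_scale={time_scale_for(n)}")  # 64, 128, 256
```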
For testing on unpaired images (no ground truth), run:

```bash
python test_unpaired.py --config config/test_unpaired.json --input unpaired_image_folder
```
You can use any one of these three pre-trained models and employ different sampling steps to obtain visually pleasing results by modifying these fields in `config/test_unpaired.json`.
To train the model, simply run:

```bash
bash train.sh
```
If you find our repo useful for your research, please consider citing our paper:
```bibtex
@InProceedings{lan2024towards,
    title     = {Efficient Diffusion as Low Light Enhancer},
    author    = {Lan, Guanzhou and Ma, Qianli and Yang, Yuqi and Wang, Zhigang and Wang, Dong and Li, Xuelong and Zhao, Bin},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    year      = {2025}
}
```
Our code is built upon SR3. Thanks to the contributors for their great work.