
[CVPR2025] Efficient Diffusion as Low Light Enhancer

🔥 News

  • [2025/03/04] We have released the training code and inference code! 🚀🚀
  • [2025/02/27] ReDDiT has been accepted to CVPR 2025! 🤗🤗

📝 TODO

  • Training code
  • Inference code
  • CVPR Camera-ready Version
  • Project page
  • Journal Version & Teacher Model

🔨 Get Started

🔍 Dependencies and Installation

  • Python 3.8
  • PyTorch 1.11
  1. Create Conda Environment
conda create --name ReDDiT python=3.8
conda activate ReDDiT
  2. Install PyTorch
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113
  3. Install Dependencies
cd ReDDiT
pip install -r requirements.txt
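After installation, a quick sanity check can confirm the environment (a minimal snippet, assuming the CUDA 11.3 wheels above were used):

import torch
import torchvision

# Print the pinned versions from the steps above and whether CUDA is visible.
print(torch.__version__)         # expected: 1.11.0+cu113
print(torchvision.__version__)   # expected: 0.12.0+cu113
print(torch.cuda.is_available())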

📃 Data Preparation

Download the LOLv1 and LOLv2 datasets, then organize them in the following folder structure:

dataset
├── LOLv1
│   ├── our485
│   │   ├── low
│   │   └── high
│   └── eval15
│       ├── low
│       └── high
└── LOLv2
    ├── Real_captured
    │   ├── Train
    │   └── Test
    └── Synthetic
        ├── Train
        └── Test
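Before running anything, it can help to verify that the layout matches the tree above. The snippet below is a hypothetical helper (not part of this repo); the paths come directly from the tree:

import os

EXPECTED_DIRS = [
    "dataset/LOLv1/our485/low",
    "dataset/LOLv1/our485/high",
    "dataset/LOLv1/eval15/low",
    "dataset/LOLv1/eval15/high",
    "dataset/LOLv2/Real_captured/Train",
    "dataset/LOLv2/Real_captured/Test",
    "dataset/LOLv2/Synthetic/Train",
    "dataset/LOLv2/Synthetic/Test",
]

# Report any folder that is absent from the expected layout.
for path in EXPECTED_DIRS:
    print(("ok     " if os.path.isdir(path) else "MISSING"), path)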

📘 Testing

Note: Following LLFlow and KinD, we also adjust the brightness of the network output based on the mean value of the ground truth (GT). This adjustment does not affect the generated texture details; it is simply a global way to regulate overall illumination, and it can easily be tuned to user preference in practical applications.
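Concretely, the correction amounts to a single global gain that aligns mean intensities. A minimal sketch of the idea in PyTorch (our illustration, not the repo's exact implementation):

import torch

def match_brightness(output: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    # Global gain aligning the output's mean intensity with the GT's;
    # one scalar multiply, so texture details are left untouched.
    gain = gt.mean() / output.mean().clamp(min=1e-8)
    return (output * gain).clamp(0.0, 1.0)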

You can also download the pre-trained checkpoints and place them in the following folder:

checkpoints
├── lolv1_8step_gen.pth
├── lolv1_4step_gen.pth
├── lolv1_2step_gen.pth
└── ...
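Loading a checkpoint follows the standard PyTorch pattern; a brief sketch (the generator network itself is built from the repo's config, so the final line is left as a placeholder):

import torch

# Read the 8-step generator weights onto CPU first (device-agnostic).
state_dict = torch.load("checkpoints/lolv1_8step_gen.pth", map_location="cpu")

# `model` stands in for the repo's generator network:
# model.load_state_dict(state_dict)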

To test the model, run sh test.sh and modify the n_timestep and time_scale parameters in the config for the different step models. The corresponding val settings are:

"val": {
    "schedule": "linear",
                "n_timestep": 8,
                "linear_start": 1e-4,
                "linear_end": 2e-2,
                "time_scale": 64
}
"val": {
    "schedule": "linear",
                "n_timestep": 4,
                "linear_start": 1e-4,
                "linear_end": 2e-2,
                "time_scale": 128
}
"val": {
    "schedule": "linear",
                "n_timestep": 2,
                "linear_start": 1e-4,
                "linear_end": 2e-2,
                "time_scale": 256
}
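Across the three settings, n_timestep × time_scale is constant (8 × 64 = 4 × 128 = 2 × 256 = 512), so fewer sampling steps take proportionally larger strides over the same underlying schedule. A minimal sketch of that index mapping, assuming this is how the two parameters interact (the repo's sampler may differ in detail):

# Hypothetical illustration: map distilled step indices onto the
# underlying 512-step schedule via time_scale.
def schedule_indices(n_timestep, time_scale):
    return [t * time_scale for t in range(n_timestep)]

print(schedule_indices(8, 64))   # 8 steps: [0, 64, 128, ..., 448]
print(schedule_indices(2, 256))  # 2 steps: [0, 256]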

📘 Testing on unpaired data

python test_unpaired.py  --config config/test_unpaired.json --input unpaired_image_folder

You can use any of these three pre-trained models and employ different numbers of sampling steps to obtain visually pleasing results by modifying the corresponding fields in test_unpaired.json.
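For example, to switch the unpaired test to the 4-step model, the fields can be edited programmatically (a sketch assuming the keys sit under a top-level "val" block as in the snippets above; the actual nesting in test_unpaired.json may differ):

import json

with open("config/test_unpaired.json") as f:
    cfg = json.load(f)

# Fewer steps -> proportionally larger time_scale (product stays 512).
cfg["val"]["n_timestep"] = 4
cfg["val"]["time_scale"] = 128

with open("config/test_unpaired.json", "w") as f:
    json.dump(cfg, f, indent=4)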

🚀 Training

bash train.sh

✒️ Citation

If you find our repo useful for your research, please consider citing our paper:

@InProceedings{lan2024towards,
    title={Efficient Diffusion as Low Light Enhancer},
    author={Lan, Guanzhou and Ma, Qianli and Yang, Yuqi and Wang, Zhigang and Wang, Dong and Li, Xuelong and Zhao, Bin},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    year={2025}
}

❤️ Acknowledgement

Our code is built upon SR3. Thanks to the contributors for their great work.
