
LEAP: Learning to Enhance Aperture Phasor Field for Non-Line-of-Sight Imaging

In Cho, Hyunbo Shim, Seon Joo Kim

[arXiv] [Project] [BibTeX]

This is an official implementation of the paper Learning to Enhance Aperture Phasor Field for Non-Line-of-Sight Imaging.

Installation

Using Docker

We provide a prebuilt Docker image, which contains all the dependencies required to run the code.

docker pull join16/nlos-leap:py39-cu113

You can also build your own Docker image by running the following command.

docker build -t nlos-leap:py39-cu113 .
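
Once the image is available, a container can be started along the following lines (the mount path, working directory, and image tag here are illustrative, so adjust them to your setup):

# illustrative invocation: mounts the current directory and assumes the pulled image tag
docker run --gpus all -it --rm -v "$(pwd)":/workspace -w /workspace join16/nlos-leap:py39-cu113 bash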

Using pip

You can install the required dependencies using pip. We recommend using a virtual environment to avoid conflicts with other packages.

pip install -r requirements.txt

We tested our code on Python 3.9, torch 2.0.1, and CUDA 11.3.
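
For example, a clean environment can be set up and the dependencies installed as follows (the .venv directory name is just a placeholder):

# optional: create and activate an isolated environment before installing
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt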

Dataset

Synthetic dataset

To generate synthetic datasets, we reproduce the NLOS renderer provided by LFE, adding headless rendering and multi-GPU (multi-process) support. Our reproduced renderer will be released soon. You can alternatively use the original renderer to generate synthetic datasets.

Real dataset

We use the Stanford dataset provided by FK. Download the original data and set raw_root_dir in config/data/stanford.yaml to the path of the downloaded data. Our evaluation script will automatically preprocess the data.
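
As a rough sketch, the relevant entry in config/data/stanford.yaml would then look something like the following (the exact file layout may differ; the path is a placeholder):

raw_root_dir: /path/to/stanford_raw  # placeholder: path of the downloaded FK data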

Training

To train the model, run the following command.

python train.py config/train_n16.yaml

Available command line arguments:

  • --name, -n: name of the experiment. Logs and checkpoints will be saved in logs/{name}.
  • --gpus, -g: GPUs to use. This follows the pytorch-lightning style. Examples: "-1" (all GPUs), "2" (2 GPUs), "0,1" (GPU ids 0 and 1), "[0]" (GPU id 0)
  • --debug, -d: Running in debug mode (run one step with a single GPU).
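
For instance, a run on GPUs 0 and 1 under a user-chosen experiment name could be launched as follows (leap_n16 is a hypothetical name, not a required one):

# "leap_n16" is an arbitrary experiment name; logs and checkpoints go to logs/leap_n16
python train.py config/train_n16.yaml --name leap_n16 --gpus 0,1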

Evaluation

Once the model is trained, you can evaluate it using the following command.

python3 evaluate.py logs/{experiment_name}
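
For example, continuing with the hypothetical experiment name used above:

python3 evaluate.py logs/leap_n16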

Acknowledgements

Our code is built upon PyTorch Lightning. We sincerely thank the authors of these works for sharing their code and data, which greatly helped our research. Our RSD implementation is based on the original MATLAB code and the PyTorch implementation.

Citation

@inproceedings{cho2024leap,
  author    = {Cho, In and Shim, Hyunbo and Kim, Seon Joo},
  title     = {Learning to Enhance Aperture Phasor Field for Non-Line-of-Sight Imaging},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2024},
}
