
Tactile-Augmented Radiance Fields (CVPR 2024)

(Teaser figure)

Installation

1. Install Nerfstudio modules

1.1. Install Nerfstudio dependencies

Follow these instructions to install dependencies and create an environment.
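
For reference, a typical sequence from the Nerfstudio documentation looks like the following; the exact Python, PyTorch, and CUDA versions are assumptions here, so defer to the linked instructions if they differ:

conda create --name nerfstudio -y python=3.8
conda activate nerfstudio
python -m pip install --upgrade pip
pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu118
pip install ninja git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch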

1.2. Clone this repo

git clone https://github.com/Dou-Yiming/TaRF.git

1.3. Install tarf modules as a Python package
cd TaRF/nerfstudio_modules
python -m pip install -e .
1.4. Run ns-install-cli
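ns-install-cli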
1.5. Check the install

Run ns-train -h: you should see a list of "subcommands" with tarf included among them.
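
For example, to quickly confirm that tarf is registered:

ns-train -h | grep tarf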

2. Install Diffusion modules

2.1 Create another Conda environment

conda create --name ldm -y python=3.8

2.2 Install ldm
conda activate ldm
cd TaRF/img2touch
python -m pip install -e .
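
To sanity-check the install, try importing the package (assuming, as in the latent-diffusion codebase this module builds on, that it is exposed as the ldm module):

python -c "import ldm"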

Using TaRF

1. Prepare data and pretrained models

1.1 Download COLMAP databases

Download the COLMAP databases (including images and camera poses of 13 scenes) from this link, then extract them:

cd TaRF/nerfstudio_modules
mkdir data
cd data
for f in {dir of the downloaded colmap folder}/*.tar.gz; do tar -xvf "$f"; done
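
After extraction, listing the directory should show one folder per scene (judging from the viewer paths below, these likely follow a {scene}_colmap naming scheme, though that is an inference):

ls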
1.2 Download pretrained models

Download the pretrained NeRF models from this link, then extract them:

cd TaRF/nerfstudio_modules
mkdir outputs
cd outputs
for f in {dir of the downloaded NeRF folder}/*.tar.gz; do tar -xvf "$f"; done

Download the pretrained Diffusion models from this link, then extract them:

tar -xvf {dir of the downloaded Diffusion folder}/pretrained_models.tar.gz
mv pretrained_models TaRF/img2touch
mv TaRF/img2touch/pretrained_models/first_stage_model.ckpt TaRF/img2touch/models/first_stage_models/kl-f8/model.ckpt

2. Run & interact with TaRF

2.1 Launch Nerfstudio viewer

First, pick a scene from {conference_room, clothes, office_1, stairs, table_outdoor, workroom, bench, snow, bus_stop, office_2, chair_outdoor, table_indoor, lounge}.

Next, launch the Nerfstudio viewer:

cd TaRF/nerfstudio_modules
ns-viewer --load-config ./outputs/{scene}_colmap/tarf/test/config.yml --vis viewer --viewer.max-num-display-images 64

This will provide you with a link to the Nerfstudio viewer.

2.2 Interact with the TaRF

Now you may interact with the TaRF in the browser with the following steps:

  1. Click the "Touch on Scene" button at the bottom-right corner.
  2. Click the point you want to touch in the TaRF.

This will give you the egocentric RGBD signal of the clicked point, saved in TaRF/nerfstudio_modules/outputs/touch_estimation_input_cache.
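
You can confirm that a query was written by listing the cache directory:

ls TaRF/nerfstudio_modules/outputs/touch_estimation_input_cache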

3. Estimate touch signals

First, change the bg_path argument in TaRF/img2touch/scripts/bash_scripts/run_touch_estimator_real_time.sh to touch_bg/{scene}_colmap_40_50/bg.jpg.
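
For instance, if you picked the office_1 scene and the script sets bg_path as a shell variable (an assumption about the script's format), the edited line might read:

# hypothetical line in run_touch_estimator_real_time.sh; adapt to the script's actual format
bg_path=touch_bg/office_1_colmap_40_50/bg.jpg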

Next, launch the real-time touch estimator on another GPU:

cd TaRF/img2touch
bash scripts/bash_scripts/run_touch_estimator_real_time.sh

The tactile signals can now be estimated in real time whenever a new point is clicked; the results are saved to TaRF/img2touch/outputs/touch_estimator_real_time/best.png.

Train your own TaRF

1. Train touch estimator
cd TaRF/img2touch
bash scripts/bash_scripts/train_touch_estimator.sh

Coming soon!

Bibtex

If you find TaRF useful, please consider citing:

@inproceedings{dou2024tactile,
  title={Tactile-augmented radiance fields},
  author={Dou, Yiming and Yang, Fengyu and Liu, Yi and Loquercio, Antonio and Owens, Andrew},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={26529--26539},
  year={2024}
}