Yunze Man · Liang-Yan Gui · Yu-Xiong Wang
[CVPR 2024] [Project Page] [arXiv] [pdf] [BibTeX]
This repository contains the official PyTorch implementation of the paper "Situational Awareness Matters in 3D Vision Language Reasoning" (CVPR 2024). The paper is available on arXiv, and the project page is available here.
Previous methods perform direct 3D vision language reasoning without modeling the situation of the embodied agent in the 3D environment. Our method, SIG3D, first grounds the situational description in 3D space, and then re-encodes the visual tokens from the agent's intended perspective before vision-language fusion, resulting in a more comprehensive and generalizable 3D vision language (3DVL) representation and reasoning pipeline.
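To make this two-stage idea concrete, below is a minimal PyTorch-style sketch. It is not the repo's actual API: every name in it (`SIG3DSketch`, `situation_head`, `pos_embed`, and the tensor arguments) is a hypothetical placeholder. The sketch first estimates the agent's pose from pooled scene and text features, then re-encodes the visual tokens relative to that pose before fusion.

```python
import torch
import torch.nn as nn

class SIG3DSketch(nn.Module):
    """Illustrative sketch only; all module and tensor names are hypothetical."""

    def __init__(self, d_model=256):
        super().__init__()
        # Regress the agent's 3D position (3) and orientation quaternion (4)
        # from pooled scene and text features.
        self.situation_head = nn.Linear(d_model, 7)
        # Re-embed token coordinates once they are expressed in the agent's frame.
        self.pos_embed = nn.Linear(3, d_model)
        # Standard cross-attention fusion between text and visual tokens.
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.fusion = nn.TransformerDecoder(layer, num_layers=2)

    def forward(self, scene_tokens, scene_xyz, text_tokens):
        # scene_tokens: (B, N, d) visual tokens; scene_xyz: (B, N, 3) token centers
        # text_tokens:  (B, T, d) embedded situational description + question

        # 1) Ground the situation: estimate where the agent is and which way it faces.
        pooled = scene_tokens.mean(dim=1) + text_tokens.mean(dim=1)
        pose = self.situation_head(pooled)                 # (B, 7)
        position, rotation = pose[:, :3], pose[:, 3:]      # quaternion orientation

        # 2) Re-encode visual tokens from the agent's perspective: shift token
        #    coordinates into the agent's frame (a full version would also rotate
        #    by `rotation`) and refresh the positional embeddings accordingly.
        rel_xyz = scene_xyz - position[:, None, :]         # (B, N, 3)
        situated_tokens = scene_tokens + self.pos_embed(rel_xyz)

        # 3) Vision-language fusion on the situation-aware tokens.
        fused = self.fusion(text_tokens, situated_tokens)  # (B, T, d)
        return fused, position, rotation

# Shape check with random inputs.
model = SIG3DSketch()
fused, pos, rot = model(torch.randn(2, 128, 256),
                        torch.randn(2, 128, 3),
                        torch.randn(2, 16, 256))
```

The key design point is that situation estimation happens before fusion, so the question is answered against a perspective-aligned scene representation rather than the raw world-frame one.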
Please install the required packages and dependencies according to environment.yml. In addition:
- To use the SQA3D dataset, follow this repo to download the dataset and the necessary toolsets.
- To use the ScanQA dataset, follow this repo to download the dataset and the necessary toolsets.
- To use the 3D-LLM backbone model, follow this repo to install the necessary dependencies and download the pre-trained model.
Finally, download the ScanNet dataset from the official website and follow the instructions here to preprocess it, producing RGB video frames and point clouds for each ScanNet scene.
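As a sanity check after preprocessing, a scene can be loaded roughly as follows. This is a minimal sketch under stated assumptions: it assumes a hypothetical `data/scannet/<scene_id>/` layout with a `<scene_id>.ply` point cloud and a `frames/` directory of RGB images, and that `open3d` is installed for `.ply` I/O. Adjust the paths to whatever your preprocessing actually produces.

```python
import os
import numpy as np
import open3d as o3d  # assumption: open3d is available for .ply I/O

SCANNET_ROOT = "data/scannet"  # hypothetical layout; adjust to your output

def load_scene(scene_id: str):
    """Load the preprocessed point cloud and list the RGB frames for one scene."""
    scene_dir = os.path.join(SCANNET_ROOT, scene_id)

    # Point cloud produced by the preprocessing step (path is illustrative).
    pcd = o3d.io.read_point_cloud(os.path.join(scene_dir, f"{scene_id}.ply"))
    points = np.asarray(pcd.points)  # (N, 3) xyz coordinates
    colors = np.asarray(pcd.colors)  # (N, 3) rgb values in [0, 1]

    # RGB video frames extracted from the scan (directory name is illustrative).
    frame_dir = os.path.join(scene_dir, "frames")
    frames = sorted(os.listdir(frame_dir)) if os.path.isdir(frame_dir) else []

    return points, colors, frames

points, colors, frames = load_scene("scene0000_00")
print(points.shape, len(frames))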
If you use our work in your research, please cite our publication:
@inproceedings{man2024situation3d,
  title={Situational Awareness Matters in 3D Vision Language Reasoning},
  author={Man, Yunze and Gui, Liang-Yan and Wang, Yu-Xiong},
  booktitle={CVPR},
  year={2024}
}
This repo is built on top of the fantastic works SQA3D, ScanQA, and 3D-LLM. We thank the authors for their great work and for open-sourcing their codebases.