cd lib/pointops/
python setup.py install
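Once the build finishes, a minimal import check can confirm the compiled extension is usable. This is just a sketch: `pointops_cuda` is an assumed module name, so check what `lib/pointops/setup.py` actually registers.

```python
# Minimal post-install check for the compiled CUDA extension.
# NOTE: "pointops_cuda" is an assumed module name; verify it against
# lib/pointops/setup.py.
import torch

try:
    import pointops_cuda  # hypothetical module name
    print("pointops extension loaded")
except ImportError as err:
    print("pointops extension missing:", err)

print("CUDA available:", torch.cuda.is_available())
```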
Before running this code, you must check the path parameters defined in utils/config.py.
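For reference, the path parameters might look like the sketch below. Only SCANNET_DIR and OUTPUT_ROOT are mentioned in this README; the EasyDict layout and everything else here is an assumption, so verify the real attribute names in the file itself.

```python
# Hypothetical sketch of utils/config.py; check the actual file
# before editing.
from easydict import EasyDict

CONF = EasyDict()
CONF.SCANNET_DIR = "/path/to/ScanNet/scans"  # raw ScanNet scans
CONF.OUTPUT_ROOT = "/path/to/outputs"        # experiment outputs land here
```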
Parse the ScanNet data into *.npy files and save them in SCANNET_DIR/preprocessing/scannet_scenes/
python preprocessing/collect_scannet_scenes.py
Note: you can comment out lines 88-90 in preprocessing/collect_scannet_scenes.py to process all scenes.
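To double-check the preprocessing output, you can load one of the generated .npy files. The channel layout is an assumption here, so compare it against preprocessing/collect_scannet_scenes.py.

```python
# Peek at one preprocessed scene; replace SCANNET_DIR with your path.
# The channel layout (XYZ, RGB, ..., label) is an assumption; confirm
# it in preprocessing/collect_scannet_scenes.py.
import numpy as np

scene = np.load("SCANNET_DIR/preprocessing/scannet_scenes/scene0654_00.npy")
print(scene.shape)  # (num_points, num_channels)
print(scene[:3])    # first few points
```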
Sanity check: don't forget to visualize the preprocessed scenes to verify their consistency:
python preprocessing/visualize_prep_scene.py --scene_id <scene_id>
The visualized <scene_id>.ply is stored in preprocessing/label_point_clouds/. Drag that file into MeshLab to inspect it.
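Alternatively, if you prefer a scripted viewer over MeshLab, Open3D can render the same .ply file. Open3D is not a dependency of this repo; this is just a convenience sketch.

```python
# View the labeled point cloud without MeshLab (optional; requires
# `pip install open3d`, which is not part of this repo's requirements).
import open3d as o3d

pcd = o3d.io.read_point_cloud("preprocessing/label_point_clouds/scene0654_00.ply")
o3d.visualization.draw_geometries([pcd])
```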
The train-test split follows Pointnet2.ScanNet.
python scripts/train_partial_scene.py --use_color --tag POINTTRANS_C_N8192 --epoch 200 --npoint 8192
python scripts/visualize_partial_scene.py --folder ${EXP_STAMP} --use_color --npoints 8192 --scene_id scene0654_00
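Here ${EXP_STAMP} is the name of the experiment folder that the training run creates. One way to find it is to list the output root; this sketch assumes, per the note below, that results land under CONF.OUTPUT_ROOT, and the tag-plus-timestamp naming in the comment is also an assumption.

```python
# List experiment folders under CONF.OUTPUT_ROOT to find the
# ${EXP_STAMP} expected by the visualization script.
import os
from utils.config import CONF  # the path parameters mentioned above

for name in sorted(os.listdir(CONF.OUTPUT_ROOT)):
    print(name)  # assumed naming: tag plus a timestamp, e.g. POINTTRANS_C_N8192_...
```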
The results will be saved in your CONF.OUTPUT_ROOT folder. Some example results are visualized below:
The train-test split follows Pointnet2.ScanNet.
python scripts/train_complete_scene.py --use_color --tag POINTTRANS_C_N32768 --epoch 200 --npoint 32768
python scripts/visualize_complete_scene.py --folder ${EXP_STAMP} --use_color --npoints 32768 --scene_id scene0654_00
If you use this code, please cite the following two papers:
@inproceedings{zhao2021point,
  title={Point Transformer},
  author={Zhao, Hengshuang and Jiang, Li and Jia, Jiaya and Torr, Philip HS and Koltun, Vladlen},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={16259--16268},
  year={2021}
}

@inproceedings{dai2017scannet,
  title={ScanNet: Richly-annotated 3D reconstructions of indoor scenes},
  author={Dai, Angela and Chang, Angel X and Savva, Manolis and Halber, Maciej and Funkhouser, Thomas and Nie{\ss}ner, Matthias},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={5828--5839},
  year={2017}
}