Silverster98/point_transformer.scannet

Install

cd lib/pointops/
python setup.py install

Usage

Change the paths

Before running this code, you must check the path parameters defined in utils/config.py.
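The exact attribute names live in utils/config.py itself; as a rough illustration only, the path parameters you need to point at your data typically look something like this (all names and paths below are placeholders, not the repository's actual values):

```python
# Hypothetical sketch of the path parameters utils/config.py defines.
# Check the actual file for the real attribute names before editing.
import os

SCANNET_DIR = "/path/to/ScanNet"  # root of your raw ScanNet release
# Preprocessed *.npy scenes are written under the ScanNet root:
SCANNET_SCENES = os.path.join(SCANNET_DIR, "preprocessing", "scannet_scenes")
OUTPUT_ROOT = "/path/to/outputs"  # training logs and visualizations go here
```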

Preprocess ScanNet scenes

Parse the ScanNet data into *.npy files and save them in SCANNET_DIR/preprocessing/scannet_scenes/

python preprocessing/collect_scannet_scenes.py

Note: you can comment out lines 88-90 in preprocessing/collect_scannet_scenes.py to process all scenes.
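A quick way to inspect one of the generated files is to load it back with NumPy. The exact per-point column layout (xyz / rgb / normals / label) depends on the preprocessing script, so the shapes below use synthetic stand-in data rather than real output:

```python
# Sanity-check loading a preprocessed scene file. The (N, 7) shape here is
# synthetic and only illustrative; the real channel count may differ.
import os
import tempfile
import numpy as np

# Stand-in for SCANNET_DIR/preprocessing/scannet_scenes/<scene_id>.npy
tmpdir = tempfile.mkdtemp()
scene_path = os.path.join(tmpdir, "scene0000_00.npy")
np.save(scene_path, np.random.rand(4096, 7).astype(np.float32))

scene = np.load(scene_path)
print(scene.shape)                 # points x per-point channels
print(scene[:, :3].min(axis=0))    # rough spatial extent, assuming xyz first
print(scene[:, :3].max(axis=0))
```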

Sanity check: don't forget to visualize the preprocessed scenes to check their consistency.

python preprocessing/visualize_prep_scene.py --scene_id <scene_id>

The visualized <scene_id>.ply is stored in preprocessing/label_point_clouds/. Drag that file into MeshLab to inspect it.
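For reference, a label-colored ASCII PLY file of the kind MeshLab can open is simple to produce. The sketch below is similar in spirit to what a visualization script would write, but it is not the repository's actual implementation, and the palette is arbitrary:

```python
# Minimal ASCII PLY writer for a label-colored point cloud (illustrative only).
import numpy as np

def write_ply(path, xyz, rgb):
    """xyz: (N, 3) float positions; rgb: (N, 3) uint8 colors."""
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(xyz)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
        f.write("end_header\n")
        for (x, y, z), (r, g, b) in zip(xyz, rgb):
            f.write(f"{x} {y} {z} {r} {g} {b}\n")

# Color each point by its semantic label via a small random palette.
points = np.random.rand(100, 3)
labels = np.random.randint(0, 20, size=100)
palette = np.random.randint(0, 255, size=(20, 3), dtype=np.uint8)
write_ply("labeled_scene.ply", points, palette[labels])
```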

Train with chunked scenes

Setting

The train-test split follows Pointnet2.ScanNet.

Train

python scripts/train_partial_scene.py --use_color --tag POINTTRANS_C_N8192 --epoch 200 --npoint 8192
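The --npoint flag fixes the number of points fed to the network per scene chunk. A common way to enforce this (a sketch under that assumption, not necessarily this repository's exact code) is random subsampling, falling back to sampling with replacement when a chunk has fewer points than requested:

```python
# Subsample a scene chunk to a fixed point budget (illustrative sketch).
import numpy as np

def sample_points(scene, npoint, rng=np.random.default_rng(0)):
    """scene: (N, C) array; returns an (npoint, C) array."""
    n = scene.shape[0]
    replace = n < npoint  # duplicate points only when the chunk is too small
    idx = rng.choice(n, size=npoint, replace=replace)
    return scene[idx]

chunk = np.random.rand(20000, 7)
batch = sample_points(chunk, 8192)
print(batch.shape)  # (8192, 7)
```

The same idea applies to the complete-scene scripts below, just with a larger budget (32768).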

Visualize

python scripts/visualize_partial_scene.py --folder ${EXP_STAMP} --use_color --npoints 8192 --scene_id scene0654_00

The results will be saved in your CONF.OUTPUT_ROOT folder.

Train with complete scenes

Setting

The train-test split follows Pointnet2.ScanNet.

Train

python scripts/train_complete_scene.py --use_color --tag POINTTRANS_C_N32768 --epoch 200 --npoint 32768

Visualize

python scripts/visualize_complete_scene.py --folder ${EXP_STAMP} --use_color --npoints 32768 --scene_id scene0654_00

References

If you use this code, please cite the following two papers:

@inproceedings{zhao2021point,
  title={Point transformer},
  author={Zhao, Hengshuang and Jiang, Li and Jia, Jiaya and Torr, Philip HS and Koltun, Vladlen},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={16259--16268},
  year={2021}
}
@inproceedings{dai2017scannet,
  title={Scannet: Richly-annotated 3d reconstructions of indoor scenes},
  author={Dai, Angela and Chang, Angel X and Savva, Manolis and Halber, Maciej and Funkhouser, Thomas and Nie{\ss}ner, Matthias},
  booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
  pages={5828--5839},
  year={2017}
}

Acknowledgements

About

Point Transformer pre-trained on ScanNet