This repository provides the demo and pre-trained weights of AANet, a dense descriptor for local feature matching.
Our work was accepted by IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) 2023 and can be accessed via the manuscript.
We trained AANet with a one-stage, end-to-end triplet training strategy on the MS-COCO, Multi-Illumination, and VIDIT datasets (the same as LISRD); the pre-trained weights are compressed as dna.rar. A sketch of the triplet objective is given below.
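The following is a minimal sketch of what a triplet training step could look like, assuming a hypothetical `AANet` model that maps an image tensor to a dense descriptor map of shape (B, C, H, W); the margin value and helper names are illustrative and do not reproduce the released training code.

```python
import torch
import torch.nn.functional as F

def triplet_step(model, anchor_img, positive_img, negative_img, margin=1.0):
    # Extract dense descriptor maps (B, C, H, W) and L2-normalize per pixel.
    da = F.normalize(model(anchor_img), dim=1)
    dp = F.normalize(model(positive_img), dim=1)
    dn = F.normalize(model(negative_img), dim=1)

    # Per-pixel descriptor distances (assumes spatially aligned triplets).
    pos_dist = (da - dp).pow(2).sum(dim=1).sqrt()
    neg_dist = (da - dn).pow(2).sum(dim=1).sqrt()

    # Standard triplet margin loss averaged over all pixels.
    return F.relu(pos_dist - neg_dist + margin).mean()
```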
The core implementation of AANet is in AANet_core.py.
We provide a demo that exports SIFT keypoints and AANet descriptors in export_descriptor_sift.py; it can be easily adapted to other off-the-shelf detectors and matchers for evaluation:
CUDA_VISIBLE_DEVICES=0 python export_descriptor_sift.py
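Below is a minimal sketch of the export idea, assuming a hypothetical model object that returns a dense descriptor map; the loader, tensor layout, and sampling details are assumptions, so please refer to export_descriptor_sift.py for the actual pipeline.

```python
import cv2
import torch
import torch.nn.functional as F

def export_sift_aanet(image_path, model, device="cuda"):
    # Detect SIFT keypoints on the grayscale image.
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    keypoints = cv2.SIFT_create().detect(img, None)

    # Run the network to obtain a dense descriptor map (1, C, h, w).
    tensor = torch.from_numpy(img).float()[None, None].to(device) / 255.0
    with torch.no_grad():
        desc_map = model(tensor)

    # Sample the dense map at keypoint locations with bilinear interpolation.
    h, w = img.shape
    pts = torch.tensor([kp.pt for kp in keypoints], device=device)  # (N, 2) as (x, y)
    grid = pts.clone()
    grid[:, 0] = grid[:, 0] / (w - 1) * 2 - 1  # normalize x to [-1, 1]
    grid[:, 1] = grid[:, 1] / (h - 1) * 2 - 1  # normalize y to [-1, 1]
    descs = F.grid_sample(desc_map, grid[None, None], align_corners=True)
    descs = F.normalize(descs[0, :, 0].t(), dim=1)  # (N, C), L2-normalized

    return keypoints, descs.cpu().numpy()
```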
For more evaluation details, please refer to HIFT and LISRD.
If you are interested in this work, please cite the following paper:
@ARTICLE{10102528,
author={Rao, Yuan and Ju, Yakun and Li, Cong and Rigall, Eric and Yang, Jian and Fan, Hao and Dong, Junyu},
journal={IEEE Transactions on Circuits and Systems for Video Technology},
title={Learning General Descriptors for Image Matching With Regression Feedback},
year={2023},
volume={33},
number={11},
pages={6693-6707},
doi={10.1109/TCSVT.2023.3267279}}
Our work is based on LISRD and builds upon its code. We thank the authors of LISRD for their open-source repository.