AAAI 2025 | A2RNet: Adversarial Attack Resilient Network for Robust Infrared and Visible Image Fusion

A2RNet


A2RNet: Adversarial Attack Resilient Network for Robust Infrared and Visible Image Fusion

in the 39th Annual AAAI Conference on Artificial Intelligence (AAAI 2025) 🔥🔥🔥
by Jiawei Li, Hongwei Yu, Jiansheng Chen, Xinlong Ding, Jinlong Wang, Jinyuan Liu, Bochao Zou and Huimin Ma

Different adversarial operations in IVIF (Motivation):

Framework of our proposed A2RNet:

‼️Requirements

  • python 3.10
  • torch 1.13.0
  • torchvision 0.14.0
  • opencv 4.9
  • numpy 1.26.4
  • pillow 10.3.0
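Assuming a standard pip-based Python 3.10 environment (the repository does not give install commands, so the exact package names below, e.g. `opencv-python` for the OpenCV bindings, are our assumption):

```shell
# Install the pinned dependencies into the active environment
python -m pip install torch==1.13.0 torchvision==0.14.0 \
    "opencv-python==4.9.*" numpy==1.26.4 pillow==10.3.0
```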

📑Dataset setting

We provide several test image pairs from the [MFNet] and [M3FD] datasets as examples.

Moreover, you can place your own test images of different modalities under ./test_images/..., organized as follows:

test_images
├── ir
|   ├── 1.png
|   ├── 2.png
|   └── ...
├── pseudo_label
|   ├── 1.png
|   ├── 2.png
|   └── ...
├── vis
|   ├── 1.png
|   ├── 2.png
|   └── ...

Note that the detailed process of generating pseudo-labels is provided in the [Supplementary]. Alternatively, you may use the results of other SOTA methods as pseudo-labels and place them in ./test_images/pseudo_label/ for supervision.

The configuration of the training dataset is similar to the aforementioned format.
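Before training or testing on your own data, it can help to verify that the three subfolders are consistently paired. A minimal stand-alone sketch (the helper `collect_pairs` is our own illustration, not part of this repository):

```python
import os

def collect_pairs(root):
    """Pair ir/vis/pseudo_label images that share the same filename.

    Returns a sorted list of (ir_path, vis_path, label_path) tuples,
    skipping any filename missing from one of the three subfolders.
    """
    subdirs = ["ir", "vis", "pseudo_label"]
    names = None
    for sub in subdirs:
        files = set(os.listdir(os.path.join(root, sub)))
        # Keep only filenames present in every subfolder seen so far
        names = files if names is None else names & files
    return [
        tuple(os.path.join(root, sub, name) for sub in subdirs)
        for name in sorted(names)
    ]
```

Running it on the layout above would yield one tuple per image index, which can then feed a PyTorch `Dataset`.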

🖥️Test

The pre-trained model model.pth is available on [Google Drive] and [Baidu Yun].

Please put model.pth into ./model/ and run test_robust.py to generate the fused results. You can find them in:

results
├── 1.png
├── 2.png
└── ...
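Putting the steps above together, one possible command sequence (directory names follow the instructions above; only `test_robust.py` is named by the repository, and we assume the downloaded model.pth sits in the current directory):

```shell
mkdir -p model
mv model.pth model/       # place the pre-trained weights
python test_robust.py     # writes fused images to ./results/
ls results
```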

Train

You can also utilize your own data to train a new robust fusion model with:

python train_robust.py
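Robust training of this kind follows the adversarial-training recipe: an inner loop crafts a bounded perturbation (e.g. PGD) that the fusion network must then withstand. Purely as an illustration of that inner maximization, and not the repository's implementation, here is PGD on a toy scalar loss:

```python
def pgd_attack(x, loss_grad, epsilon, alpha, steps):
    """L-infinity PGD on a scalar input.

    x: clean input; loss_grad: gradient of the loss w.r.t. the input;
    epsilon: perturbation budget; alpha: step size.
    Returns the adversarial input x_adv.
    """
    x_adv = x
    for _ in range(steps):
        # Ascend the loss by stepping along the gradient sign
        g = loss_grad(x_adv)
        x_adv = x_adv + alpha * (1 if g >= 0 else -1)
        # Project back into the epsilon-ball around the clean input
        x_adv = max(x - epsilon, min(x + epsilon, x_adv))
    return x_adv

# Toy loss L(x) = (x - 3)^2 with gradient 2*(x - 3):
# PGD pushes x away from the loss minimum but stays within the budget.
adv = pgd_attack(3.0, lambda x: 2 * (x - 3.0), epsilon=0.1, alpha=0.05, steps=10)
```

In image fusion the same loop would run over image tensors with per-pixel sign steps and clipping; the outer loop then updates the network on these worst-case inputs.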

🌟Experimental results

Under PGD attacks, we compared our proposed A2RNet with [TarDAL], [SeAFusion], [IGNet], [PAIF], [CoCoNet], [LRRNet] and [EMMA].

Fusion results:


After retraining [YOLOv5] and [DeepLabV3+] on the fusion results of all methods, we compare the corresponding detection and segmentation results with those of A2RNet.

Detection & Segmentation results:

Please refer to the paper for more experimental results and details.

🗒️Citation

@article{li2024a2rnet,
  title={A2RNet: Adversarial Attack Resilient Network for Robust Infrared and Visible Image Fusion},
  author={Li, Jiawei and Yu, Hongwei and Chen, Jiansheng and Ding, Xinlong and Wang, Jinlong and Liu, Jinyuan and Zou, Bochao and Ma, Huimin},
  journal={arXiv preprint arXiv:2412.09954},
  year={2024}
}

🧩Related works

  • Jiawei Li, Jiansheng Chen, Jinyuan Liu and Huimin Ma. Learning a Graph Neural Network with Cross Modality Interaction for Image Fusion. Proceedings of the 31st ACM International Conference on Multimedia (ACM MM), 2023: 4471-4479. [Paper] [Code]
  • Jiawei Li, Jinyuan Liu, Shihua Zhou, Qiang Zhang and Nikola K. Kasabov. GeSeNet: A General Semantic-guided Network with Couple Mask Ensemble for Medical Image Fusion. IEEE Transactions on Neural Networks and Learning Systems (IEEE TNNLS), 2024, 35(11): 16248-16261. [Paper] [Code]
  • Jiawei Li, Jinyuan Liu, Shihua Zhou, Qiang Zhang and Nikola K. Kasabov. Learning a Coordinated Network for Detail-refinement Multi-exposure Image Fusion. IEEE Transactions on Circuits and Systems for Video Technology (IEEE TCSVT), 2023, 33(2): 713-727. [Paper]
  • Jia Lei, Jiawei Li, Jinyuan Liu, Bin Wang, Shihua Zhou, Qiang Zhang, Xiaopeng Wei and Nikola K. Kasabov. MLFuse: Multi-scenario Feature Joint Learning for Multi-Modality Image Fusion. IEEE Transactions on Multimedia (IEEE TMM), 2024. [Paper] [Code]

🙇‍♂️Acknowledgement

We would like to express our gratitude to [ESSAformer] for inspiring our work! Please refer to their excellent work for more details.

📬Contact

If you have any questions, please create an issue or email me (Jiawei Li).
