STI-VQA

This repository contains the official implementation and the experimental splits for the paper "Learning Spatiotemporal Interactions for User-Generated Video Quality Assessment" by Hanwei Zhu, Baoliang Chen, Lingyu Zhu, and Shiqi Wang, published in IEEE Transactions on Circuits and Systems for Video Technology, vol. 33, no. 3, pp. 1031-1042, Mar. 2023.

Framework

(Figure: overview of the proposed STI-VQA framework)

Prerequisites

The released code was implemented and tested on Ubuntu 18.04 with the following environment (a setup sketch follows the list):

  • Python = 3.6.13
  • PyTorch = 1.8.1
  • torchvision = 0.9.0
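
A minimal environment setup sketch, assuming conda and pip are used; the environment name stivqa is a placeholder, and torchvision 0.9.1 (the patch release paired with PyTorch 1.8.1) is pinned here in place of 0.9.0:

conda create -n stivqa python=3.6
conda activate stivqa
pip install torch==1.8.1 torchvision==0.9.1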

Feature extraction

More details can be found in the README.md inside the extract_features folder.
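
As a rough, non-authoritative illustration, the sketch below extracts frame-level features with a torchvision ResNet-50 backbone; the backbone, preprocessing, and output format actually used in extract_features may differ, so treat every name here as a placeholder and follow that README for the real pipeline.

import torch
import torchvision.models as models
import torchvision.transforms as T

# Hypothetical sketch: an ImageNet-pretrained ResNet-50 trimmed to its pooled features.
backbone = models.resnet50(pretrained=True)
backbone.fc = torch.nn.Identity()  # drop the classifier, keep the 2048-d pooled features
backbone.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_frame_features(frames):
    # frames: a list of PIL images decoded from one video (placeholder input format)
    batch = torch.stack([preprocess(f) for f in frames])
    return backbone(batch)  # shape: (num_frames, 2048)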

Training on VQA Databases

You can change the parameters in param.py to train on each dataset under the intra-/cross-dataset settings:

python main.py --test_only False
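
As a hedged sketch only (the actual param.py may define its arguments differently), the snippet below shows one common way a string-valued boolean flag such as --test_only is parsed with argparse, since a plain type=bool would treat the string "False" as truthy:

import argparse

def str2bool(value):
    # Map command-line strings such as "True"/"False" to real booleans.
    if isinstance(value, bool):
        return value
    if value.lower() in ("true", "t", "yes", "1"):
        return True
    if value.lower() in ("false", "f", "no", "0"):
        return False
    raise argparse.ArgumentTypeError("expected a boolean value")

parser = argparse.ArgumentParser()
parser.add_argument("--test_only", type=str2bool, default=False)
args = parser.parse_args()  # e.g. python main.py --test_only False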

Testing on VQA Databases

You can change the parameters in param.py to test on each dataset; the trained parameters of the proposed model for each dataset can be found on Google Drive:

python main.py --test_only True
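
A minimal sketch of restoring downloaded weights before evaluation; the checkpoint path, model class, and state-dict layout below are placeholders, so adapt them to how main.py actually builds and restores the model:

import torch
from model import STIVQA  # hypothetical module and class names; use the repository's actual ones

# Placeholder path to a checkpoint downloaded from the Google Drive link above.
checkpoint = torch.load("checkpoints/dataset_name.pth", map_location="cpu")

model = STIVQA()                   # construct the model with the same settings used for training
model.load_state_dict(checkpoint)  # or checkpoint["state_dict"], depending on how it was saved
model.eval()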

Acknowledgement

The authors would like to thank Dingquan Li for his implementation of VSFA, Yang Li for his code architecture, and the authors of BVQA_Benchmark and the ViT implementation.

Citation

@article{zhu2022learing,
  title={Learning Spatiotemporal Interactions for User-Generated Video Quality Assessment},
  author={Zhu, Hanwei and Chen, Baoliang and Zhu, Lingyu and Wang, Shiqi},
  journal={IEEE Transactions on Circuits and Systems for Video Technology},
  volume={33},
  number={3},
  pages={1031--1042},
  month={Mar.},
  year={2023}
}
