The code for our INTERSPEECH 2020 paper - Jointly Fine-Tuning "BERT-like" Self Supervised Models to Improve Multimodal Speech Emotion Recognition
This repository contains the implementation of the paper - Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment Analysis
Deep Multiset Canonical Correlation Analysis - an extension of CCA from two views to multiple datasets (the shared objective is sketched after this list)
Real-world photo sequence question answering system (MemexQA). CVPR'18 and TPAMI'19
IISAN: Efficiently Adapting Multimodal Representation for Sequential Recommendation with Decoupled PEFT
Collects a multimodal dataset of Wikipedia articles and their images
Code for the COLING 2020 paper: Probing Multimodal Embeddings for Linguistic Properties.
Segment-level autoencoders for multimodal representation
My master's thesis: Siamese multi-hop attention for cross-modal retrieval.
PyTorch Implementation of HUSE: Hierarchical Universal Semantic Embeddings (https://arxiv.org/pdf/1911.05978.pdf)
User modelling using multi-modal fusion
Gower's method for finding latent networks in multi-modal data (a minimal sketch of the similarity coefficient follows the list)
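
The Deep Multiset CCA entry above extends classical CCA, which finds linear projections maximising the correlation between two views, to any number of views by maximising the sum of pairwise correlations between the projected views. Below is a minimal NumPy sketch of that objective; the names (`mcca_objective`, `views`, `weights`) are illustrative and not taken from the repository, and the repo's deep variant would replace each linear projection with a per-view neural encoder trained by gradient ascent on the same sum.

```python
import numpy as np

def mcca_objective(views, weights):
    """Sum of pairwise correlations between 1-D projections of the views.

    views:   list of (n_samples, d_i) arrays, one per dataset/modality
    weights: list of (d_i,) projection vectors, one per view
    """
    zs = []
    for X, w in zip(views, weights):
        z = (X - X.mean(axis=0)) @ w      # project the centred view
        zs.append(z / np.linalg.norm(z))  # rescale to unit length
    # Correlation of centred unit-norm vectors is their dot product;
    # classical CCA is the two-view special case of this sum.
    return sum(zs[i] @ zs[j]
               for i in range(len(zs))
               for j in range(i + 1, len(zs)))
```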
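
The Gower's-method entry relies on Gower's similarity coefficient, which scores records with mixed numeric and categorical features on a common 0-1 scale; a latent network over multi-modal data can then be read off by thresholding the pairwise similarity matrix. A minimal sketch of the coefficient, with illustrative names that are assumptions rather than the repo's API:

```python
import numpy as np

def gower_similarity(a, b, ranges, is_numeric):
    """Gower's similarity for one pair of mixed-type records.

    a, b:       1-D feature arrays (categories encoded as numbers)
    ranges:     per-feature value range; use 1.0 in categorical slots
                so the unused numeric branch never divides by zero
    is_numeric: boolean mask marking the numeric features
    """
    scores = np.where(
        is_numeric,
        1.0 - np.abs(a - b) / ranges,  # numeric: range-normalised closeness
        (a == b).astype(float),        # categorical: exact match or not
    )
    return scores.mean()               # unweighted average over features

# Toy pair: one numeric feature (age in [0, 100]) and one category id.
a = np.array([30.0, 2.0])
b = np.array([45.0, 2.0])
print(gower_similarity(a, b,
                       ranges=np.array([100.0, 1.0]),
                       is_numeric=np.array([True, False])))  # 0.925
```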