Multimodal Belief

This is the repository for Multimodal Belief Prediction, accepted to Interspeech 2024. It contains the code and data used for our unimodal baseline experiments as well as our multimodal fusion runs.

Installation

Assuming conda and poetry are installed, the project dependencies can be set up with the following commands.

conda create -n multimodal-belief python=3.10
conda activate multimodal-belief
poetry install

By default, all scripts log their output to /home/{username}/scratch/logs/. To change this behavior, see around line 40 of src/core/context.py.
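
If that directory does not already exist on your machine, you may need to create it before running anything (this is an assumption; the scripts may create it themselves). The command below uses the default path; adjust it if you have edited src/core/context.py.

mkdir -p "/home/$USER/scratch/logs"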

Content

A summary of the content and structure of the repository is shown below, followed by a sketch of how the classification script might be invoked.

multimodal-belief/
|- bin/
|  |- multimodal_classification.py - runs classification experiments.
|- configs/                        - example training configurations.
|- data/cb/                        - commitment bank data.
|- src/
|  |- ...                          - additional utilities and code.
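
The following is a hypothetical invocation, assuming bin/multimodal_classification.py takes a path to one of the example configurations in configs/ as its argument; the config name is a placeholder, and the script's own argument parsing is the authoritative reference.

# Hypothetical invocation; the script's actual CLI may differ.
conda activate multimodal-belief
python bin/multimodal_classification.py configs/<one-of-the-example-configs>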

Citation

@inproceedings{murzaku24_interspeech,
    title={Multimodal Belief Prediction},
    author={Adil Soubki and John Murzaku and Owen Rambow},
    year={2024},
    booktitle={Proc. INTERSPEECH 2024},
    organization={ISCA}
}
