Is Cross-Attention Preferable to Self-Attention for Multi-Modal Emotion Recognition?

02/18/2022
by   Vandana Rajan, et al.

Humans express their emotions via facial expressions, voice intonation and word choices. To infer the nature of the underlying emotion, recognition models may use a single modality, such as vision, audio, or text, or a combination of modalities. Generally, models that fuse complementary information from multiple modalities outperform their uni-modal counterparts. However, a successful fusion model requires components that can effectively aggregate task-relevant information from each modality. As cross-modal attention is seen as an effective mechanism for multi-modal fusion, in this paper we quantify the gain that such a mechanism brings compared to the corresponding self-attention mechanism. To this end, we implement and compare a cross-attention and a self-attention model. In addition to attention, each model uses convolutional layers for local feature extraction and recurrent layers for global sequential modelling. We compare the models using different modality combinations for a 7-class emotion classification task on the IEMOCAP dataset. Experimental results indicate that, although both models improve upon the state of the art in terms of weighted and unweighted accuracy for tri- and bi-modal configurations, their performance is generally statistically comparable. The code to replicate the experiments is available at https://github.com/smartcameras/SelfCrossAttn
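To illustrate the two fusion mechanisms being compared, the sketch below contrasts cross-attention (each modality queries the other) with self-attention over the concatenated modality sequences. This is a minimal PyTorch-style sketch with hypothetical class and variable names, not the authors' implementation from the linked repository, and it omits the convolutional and recurrent components mentioned in the abstract.

```python
# Minimal sketch (not the authors' code): self- vs cross-attention fusion
# for two modality feature sequences, e.g. audio frames and text tokens.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Each modality attends to the other: queries from A, keys/values from B, and vice versa."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.a_to_b = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.b_to_a = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, feat_a, feat_b):
        # feat_a: (batch, len_a, dim), feat_b: (batch, len_b, dim)
        attended_a, _ = self.a_to_b(feat_a, feat_b, feat_b)  # A queries B
        attended_b, _ = self.b_to_a(feat_b, feat_a, feat_a)  # B queries A
        # Mean-pool over time and concatenate the two attended streams
        return torch.cat([attended_a.mean(dim=1), attended_b.mean(dim=1)], dim=-1)

class SelfAttentionFusion(nn.Module):
    """Modalities are concatenated along the time axis and attend within the joint sequence."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, feat_a, feat_b):
        joint = torch.cat([feat_a, feat_b], dim=1)    # (batch, len_a + len_b, dim)
        attended, _ = self.attn(joint, joint, joint)  # standard self-attention
        return attended.mean(dim=1)                   # pooled joint representation

# Example with dummy audio/text feature sequences (shapes are illustrative)
audio = torch.randn(8, 50, 128)   # e.g. frame-level acoustic features
text = torch.randn(8, 30, 128)    # e.g. token-level word embeddings
print(CrossAttentionFusion()(audio, text).shape)  # torch.Size([8, 256])
print(SelfAttentionFusion()(audio, text).shape)   # torch.Size([8, 128])
```

In either case, the pooled representation would feed a classification head that predicts one of the emotion classes; the paper's question is whether the modality-pairing inductive bias of cross-attention yields a measurable gain over plain self-attention on the joint sequence.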
