Hierarchical Delta-Attention Method for Multimodal Fusion

11/22/2020
by Kunjal Panchal, et al.

In vision and linguistics, the main input modalities are facial expressions, speech patterns, and the words uttered. The issue with analyzing any single mode of expression (visual, verbal, or vocal) is that a lot of contextual information can be lost. This motivates researchers to inspect multiple modalities to gain a thorough understanding of the cross-modal dependencies and temporal context of a situation. This work attempts to preserve the long-range dependencies within and across different modalities, which would be bottlenecked by the use of recurrent networks, and adds the concept of delta-attention to focus on local differences per modality, capturing the idiosyncrasies of different people. We explore a cross-attention fusion technique to obtain a global view of the emotion expressed through these delta-self-attended modalities, fusing all the local nuances and global context together. Attention is a recent addition to the multimodal fusion field, and the stage at which the attention mechanism should be applied is still under scrutiny; this work achieves overall and per-class classification accuracy close to the current state of the art with almost half the number of parameters.
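The abstract does not spell out the architecture, but the two named ingredients can be sketched from their descriptions: a "delta" self-attention that queries with frame-to-frame differences within one modality, and a cross-attention step where one modality's features attend to another's. The sketch below is a minimal NumPy illustration under those assumptions; all function names, the single-head formulation, and the symmetric averaging of the two cross-attention directions are illustrative choices, not the paper's actual method.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # scaled dot-product attention: softmax(QK^T / sqrt(d)) V
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def delta_self_attention(x):
    # "delta" stream: frame-to-frame differences emphasize local changes
    # (e.g. a person's deviation from their own baseline expression)
    delta = np.diff(x, axis=0, prepend=x[:1])
    return attention(delta, delta, x)

def cross_attention_fuse(a, b):
    # queries from one modality attend over keys/values of the other
    return attention(a, b, b)

rng = np.random.default_rng(0)
T, d = 8, 16                          # 8 time steps, 16-dim features
visual = rng.standard_normal((T, d))  # stand-in visual features
audio = rng.standard_normal((T, d))   # stand-in vocal features

v_att = delta_self_attention(visual)
a_att = delta_self_attention(audio)
# fuse both cross-attention directions into one global representation
fused = 0.5 * (cross_attention_fuse(v_att, a_att)
               + cross_attention_fuse(a_att, v_att))
print(fused.shape)  # (8, 16)
```

In a real model each modality would first pass through its own encoder and the attention would be multi-headed and learned; the point here is only the data flow: per-modality delta-self-attention for local nuance, then cross-attention to merge modalities into a global view.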


