Revisiting Disentanglement and Fusion on Modality and Context in Conversational Multimodal Emotion Recognition

08/08/2023
by Bobo Li, et al.

Enabling machines to understand human emotions in multimodal dialogue contexts is an active research topic, formalized as multimodal emotion recognition in conversation (MM-ERC). MM-ERC has received consistent attention in recent years, and a diverse range of methods has been proposed to improve task performance. Most existing works treat MM-ERC as a standard multimodal classification problem and perform multimodal feature disentanglement and fusion to maximize feature utility. Yet after revisiting the characteristics of MM-ERC, we argue that both feature multimodality and conversational contextualization should be properly modeled simultaneously during the feature disentanglement and fusion steps. In this work, we aim to further push the task performance by taking full account of these insights. On the one hand, during feature disentanglement, we devise a Dual-level Disentanglement Mechanism (DDM) based on contrastive learning to decouple the features into both the modality space and the utterance space. On the other hand, during the feature fusion stage, we propose a Contribution-aware Fusion Mechanism (CFM) and a Context Refusion Mechanism (CRM) for multimodal and context integration, respectively; together they schedule the proper integration of multimodal and context features. Specifically, CFM explicitly and dynamically manages the contribution of each modality, while CRM flexibly coordinates the introduction of dialogue context. Our system consistently achieves new state-of-the-art performance on two public MM-ERC datasets. Further analyses demonstrate that all of the proposed mechanisms greatly facilitate MM-ERC by making adaptive use of multimodal and context features, and that they hold strong potential to benefit a broader range of conversational multimodal tasks.
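The abstract does not give implementation details, but the general idea of contribution-aware fusion can be illustrated with a small gated-fusion sketch: a learned softmax gate assigns per-utterance weights to the text, audio, and visual features before they are combined. The class name ContributionAwareFusion, the gate design, and the feature dimensions below are assumptions for illustration only, not the paper's actual CFM.

```python
import torch
import torch.nn as nn

class ContributionAwareFusion(nn.Module):
    """Minimal sketch of contribution-aware multimodal fusion (assumed design):
    a softmax gate decides how much each modality contributes per utterance."""

    def __init__(self, dim: int):
        super().__init__()
        # One weight per modality (text, audio, visual), conditioned on all three features.
        self.gate = nn.Sequential(nn.Linear(3 * dim, 3), nn.Softmax(dim=-1))
        self.proj = nn.Linear(dim, dim)

    def forward(self, t: torch.Tensor, a: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
        # t, a, v: (batch, dim) utterance-level features for each modality.
        w = self.gate(torch.cat([t, a, v], dim=-1))             # (batch, 3) contribution weights
        fused = w[:, 0:1] * t + w[:, 1:2] * a + w[:, 2:3] * v   # weighted sum over modalities
        return self.proj(fused)

# Usage: fuse 128-d features for a batch of 4 utterances.
fusion = ContributionAwareFusion(dim=128)
t, a, v = (torch.randn(4, 128) for _ in range(3))
print(fusion(t, a, v).shape)  # torch.Size([4, 128])
```

In this sketch the gate is the only place where modality contributions are decided; the paper's CFM may additionally condition on dialogue context or use a different weighting scheme.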


