GraphCFC: A Directed Graph based Cross-modal Feature Complementation Approach for Multimodal Conversational Emotion Recognition

07/06/2022
by Jiang Li, et al.

Emotion Recognition in Conversation (ERC) plays a significant role in Human-Computer Interaction (HCI) systems, since it enables empathetic services. Multimodal ERC can mitigate the drawbacks of unimodal approaches. Recently, Graph Neural Networks (GNNs) have been widely applied across a variety of fields owing to their superior performance in relation modeling. In multimodal ERC, GNNs can capture both long-distance contextual information and inter-modal interactive information. Unfortunately, because existing methods such as MMGCN fuse multiple modalities directly, redundant information may be generated and heterogeneous information may be lost. In this work, we present a directed Graph based Cross-modal Feature Complementation (GraphCFC) module that can efficiently model contextual and interactive information. GraphCFC alleviates the heterogeneity-gap problem in multimodal fusion by utilizing multiple subspace extractors and a Pair-wise Cross-modal Complementary (PairCC) strategy. We extract various types of edges from the constructed graph for encoding, enabling GNNs to select crucial contextual and interactive information more accurately during message passing. Furthermore, we design a GNN structure called GAT-MLP, which provides a new unified network framework for multimodal learning. Experimental results on two benchmark datasets show that GraphCFC outperforms state-of-the-art (SOTA) approaches.
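The abstract names a GAT-MLP structure but does not spell out its internals. As a rough sketch only, assuming a standard GAT-style attention (LeakyReLU-scored, softmax-normalized over the directed edges of the graph) followed by a two-layer MLP, each with a residual connection, one such block might look like this in NumPy; all function names, weight shapes, and the chain-graph example are illustrative assumptions, not the paper's actual design:

```python
import numpy as np

def softmax_rows(x):
    """Row-wise softmax, numerically stabilized."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def gat_mlp_layer(H, A, W, a, W1, W2):
    """One hypothetical GAT-MLP block: graph attention restricted to the
    directed edges in A, then a two-layer MLP, each with a residual
    connection.

    H  : (n, d) node features (e.g. one node per utterance-modality pair)
    A  : (n, n) adjacency; A[i, j] > 0 means an edge from j into i
    W  : (d, d) shared projection;  a : (2d,) attention vector
    W1 : (d, h), W2 : (h, d) MLP weights
    """
    Z = H @ W
    n = Z.shape[0]
    logits = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            s = np.concatenate([Z[i], Z[j]]) @ a    # a^T [z_i || z_j]
            logits[i, j] = s if s > 0 else 0.2 * s  # LeakyReLU
    logits = np.where(A > 0, logits, -1e9)          # mask non-edges
    alpha = softmax_rows(logits)                    # attention weights
    H_att = alpha @ Z + H                           # aggregate + residual
    return np.maximum(H_att @ W1, 0.0) @ W2 + H_att # MLP + residual
```

Masking the attention logits with the adjacency matrix is what lets a graph encoder distinguish edge types in practice: different edge sets (e.g. intra-modal context edges versus inter-modal edges) can be passed as different masks, so each pass aggregates only over the relations it is meant to encode.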

Related research:

- GA2MIF: Graph and Attention based Two-stage Multi-source Information Fusion for Conversational Emotion Detection (07/25/2022)
- MM-DFN: Multimodal Dynamic Fusion Network for Emotion Recognition in Conversations (03/04/2022)
- CFN-ESA: A Cross-Modal Fusion Network with Emotion-Shift Awareness for Dialogue Emotion Recognition (07/28/2023)
- GraphMFT: A Graph Network based Multimodal Fusion Technique for Emotion Recognition in Conversation (07/31/2022)
- Attentive Cross-modal Connections for Deep Multimodal Wearable-based Emotion Recognition (08/04/2021)
- Single-Cell Multimodal Prediction via Transformers (03/01/2023)
- A Graph Isomorphism Network with Weighted Multiple Aggregators for Speech Emotion Recognition (07/03/2022)
