GraphMFT: A Graph Network based Multimodal Fusion Technique for Emotion Recognition in Conversation

07/31/2022
by   Jiang Li, et al.

Multimodal machine learning is an emerging area of research that has received a great deal of scholarly attention in recent years. To date, there have been few studies on multimodal conversational emotion recognition. Since Graph Neural Networks (GNNs) possess a powerful capacity for relational modeling, they have an inherent advantage in the field of multimodal learning. GNNs leverage a graph constructed from multimodal data to perform intra- and inter-modal information interaction, which effectively facilitates the integration and complementation of multimodal data. In this work, we propose a novel Graph network based Multimodal Fusion Technique (GraphMFT) for emotion recognition in conversation. Multimodal data can be modeled as a graph, where each data object is regarded as a node, and both intra- and inter-modal dependencies between data objects are regarded as edges. GraphMFT utilizes multiple improved graph attention networks to capture intra-modal contextual information and inter-modal complementary information. In addition, the proposed GraphMFT attempts to address the challenges of existing graph-based multimodal ERC models such as MMGCN. Empirical results on two public multimodal datasets reveal that our model outperforms the State-Of-The-Art (SOTA) approaches with an accuracy of 67.90
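The graph construction the abstract describes can be illustrated with a short sketch. This is not the authors' implementation; the function name, the modality set, and the context-window size are illustrative assumptions. Each utterance contributes one node per modality, intra-modal edges link an utterance to its neighbors within a context window in the same modality, and inter-modal edges link the different modality nodes of the same utterance.

```python
# Hypothetical sketch of the multimodal conversation graph described in the
# abstract (not the authors' code). Nodes are (modality, utterance) pairs;
# edges carry intra-modal context and inter-modal complementation.

def build_multimodal_graph(num_utterances,
                           modalities=("text", "audio", "visual"),
                           window=2):
    """Return node labels and a set of directed edges (as index pairs)."""
    nodes = [(m, u) for m in modalities for u in range(num_utterances)]
    index = {node: i for i, node in enumerate(nodes)}
    edges = set()

    # Intra-modal edges: connect each utterance to its past/future
    # context within the same modality.
    for m in modalities:
        for u in range(num_utterances):
            lo, hi = max(0, u - window), min(num_utterances, u + window + 1)
            for v in range(lo, hi):
                if u != v:
                    edges.add((index[(m, u)], index[(m, v)]))

    # Inter-modal edges: link all modality nodes of the same utterance.
    for u in range(num_utterances):
        for a in modalities:
            for b in modalities:
                if a != b:
                    edges.add((index[(a, u)], index[(b, u)]))

    return nodes, edges

nodes, edges = build_multimodal_graph(4)
print(len(nodes))  # 12 nodes: 4 utterances x 3 modalities
```

In a full model, graph attention layers would then propagate features over these edges; this sketch only shows the graph topology, which is the part the abstract makes explicit.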


