Sentiment Analysis using Deep Robust Complementary Fusion of Multi-Features and Multi-Modalities

04/17/2019
by   Feiyang Chen, et al.
Sentiment analysis research has developed rapidly in the last decade and has attracted widespread attention from academia and industry, most of it based on text. However, information in the real world usually comes in different modalities. In this paper, we consider the task of multimodal sentiment analysis using the audio and text modalities, and propose a novel fusion strategy, comprising multi-feature fusion and multi-modality fusion, to improve the accuracy of audio-text sentiment analysis. We call this the Deep Feature Fusion-Audio and Text Modal Fusion (DFF-ATMF) model; the features it learns are complementary to each other and robust. Experiments on the CMU-MOSI corpus and the recently released CMU-MOSEI corpus for YouTube video sentiment analysis show the very competitive results of our proposed model. Surprisingly, our method also achieves state-of-the-art results on the IEMOCAP dataset, indicating that our proposed fusion strategy also generalizes extremely well to multimodal emotion recognition.
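To illustrate the general idea behind multi-modality fusion (not the authors' exact DFF-ATMF architecture, which involves learned deep features), here is a minimal sketch in plain Python: feature vectors from the audio and text branches are concatenated into a joint representation, which is then scored by a single linear layer. All feature values and weights below are illustrative placeholders, not values from the paper.

```python
# Hedged sketch: late fusion of audio and text features by concatenation,
# followed by a single linear sentiment-scoring layer. In DFF-ATMF the
# branch features come from deep networks; here they are toy vectors.

def fuse(audio_feats, text_feats):
    """Concatenate per-modality features into one joint representation."""
    return audio_feats + text_feats  # list concatenation

def score(joint, weights, bias=0.0):
    """Linear sentiment score over the fused representation."""
    return sum(w * x for w, x in zip(weights, joint)) + bias

audio = [0.2, 0.5]        # e.g. summary of acoustic features (illustrative)
text = [0.7, 0.1, 0.4]    # e.g. summary of word embeddings (illustrative)
joint = fuse(audio, text)                      # 5-dimensional joint vector
sentiment = score(joint, [0.1, 0.2, 0.3, 0.4, 0.5])
print(round(sentiment, 2))                     # prints 0.57
```

In practice the fused vector would feed a trained classifier rather than fixed weights; concatenation is only one of several fusion operators (others include attention-weighted sums and tensor products).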


