Sentiment Word Aware Multimodal Refinement for Multimodal Sentiment Analysis with ASR Errors

03/01/2022
by Yang Wu, et al.

Multimodal sentiment analysis has attracted increasing attention, and many models have been proposed. However, the performance of state-of-the-art models drops sharply when they are deployed in the real world. We find that the main reason is that real-world applications can only access text produced by automatic speech recognition (ASR) models, which may contain errors due to limited model capacity. Through further analysis of the ASR outputs, we find that in some cases sentiment words, the key sentiment elements in the textual modality, are recognized as other words, which changes the sentiment of the text and directly hurts the performance of multimodal sentiment models. To address this problem, we propose the sentiment word aware multimodal refinement model (SWRM), which dynamically refines erroneous sentiment words by leveraging multimodal sentiment clues. Specifically, we first use a sentiment word position detection module to locate the most likely position of the sentiment word in the text, and then use a multimodal sentiment word refinement module to dynamically refine the sentiment word embeddings. The refined embeddings are fed as the textual inputs to a multimodal feature fusion module that predicts the sentiment labels. We conduct extensive experiments on real-world datasets, including MOSI-Speechbrain, MOSI-IBM, and MOSI-iFlytek, and the results demonstrate the effectiveness of our model, which surpasses the current state-of-the-art models on all three datasets. Furthermore, our approach can easily be adapted to other multimodal feature fusion models. Data and code are available at https://github.com/albertwy/SWRM.
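The three-stage pipeline the abstract describes (detect the likely sentiment-word position, refine that word's embedding using multimodal clues, then fuse for prediction) can be illustrated with a minimal sketch. All function names, the saliency scores, and the weighted-mixing heuristic below are illustrative assumptions for exposition, not the authors' actual SWRM implementation (see the linked repository for that):

```python
# Hedged sketch of the pipeline described in the abstract. The scoring
# and fusion heuristics here are toy stand-ins, not the paper's modules.

def detect_sentiment_word_position(scores):
    """Stage 1 (assumed interface): pick the token most likely to be a
    (possibly misrecognized) sentiment word, given per-token scores."""
    return max(range(len(scores)), key=lambda i: scores[i])

def refine_embedding(text_emb, candidates, clue_weights, position):
    """Stage 2 (assumed interface): replace the embedding at `position`
    with a mix of candidate sentiment-word embeddings, weighted by
    multimodal clue strengths (e.g. from audio/visual sentiment cues)."""
    total = sum(clue_weights)
    dim = len(text_emb[position])
    mixed = [
        sum(w * cand[d] for w, cand in zip(clue_weights, candidates)) / total
        for d in range(dim)
    ]
    refined = list(text_emb)
    refined[position] = mixed
    return refined

def fuse_and_predict(refined_text, audio_feat, visual_feat):
    """Stage 3 (toy late fusion): average modality summaries and threshold."""
    text_summary = sum(sum(tok) for tok in refined_text) / len(refined_text)
    score = (text_summary + sum(audio_feat) + sum(visual_feat)) / 3
    return "positive" if score > 0 else "negative"
```

A usage example under these assumptions: with per-token scores `[0.1, 0.9, 0.3]`, stage 1 selects position 1; stage 2 then overwrites that token's embedding with a clue-weighted mixture of candidate sentiment-word embeddings before fusion. The key design point the abstract emphasizes is that only the textual input is modified, so the refinement step can sit in front of any existing multimodal fusion model.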

