Sequential Late Fusion Technique for Multi-modal Sentiment Analysis

06/22/2021
by Debapriya Banerjee, et al.

Multi-modal sentiment analysis plays an important role in providing better interactive experiences to users. Each modality in multi-modal data can offer a different viewpoint or reveal unique aspects of a user's emotional state. In this work, we use the text, audio, and visual modalities from the MOSI dataset, and we propose a novel fusion technique using a multi-head attention LSTM network. Finally, we perform a classification task and evaluate its performance.

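The abstract describes the architecture only at a high level. The following is a minimal PyTorch sketch of one plausible reading: each modality is encoded by its own LSTM, the resulting modality summaries are fused with multi-head attention, and a linear head produces sentiment logits. All layer sizes, feature dimensions, and the binary class count here are assumptions for illustration, not values taken from the paper.

```python
import torch
import torch.nn as nn

class LateFusionSentimentModel(nn.Module):
    """Hypothetical sketch: per-modality LSTM encoders, multi-head
    attention over the stacked modality summaries, then a sentiment
    classification head."""

    def __init__(self, text_dim, audio_dim, visual_dim,
                 hidden_dim=128, num_heads=4, num_classes=2):
        super().__init__()
        # One LSTM encoder per modality (feature dimensions are assumptions).
        self.text_lstm = nn.LSTM(text_dim, hidden_dim, batch_first=True)
        self.audio_lstm = nn.LSTM(audio_dim, hidden_dim, batch_first=True)
        self.visual_lstm = nn.LSTM(visual_dim, hidden_dim, batch_first=True)
        # Multi-head attention fuses the three modality summaries.
        self.fusion_attn = nn.MultiheadAttention(hidden_dim, num_heads,
                                                 batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, text, audio, visual):
        # Use the last hidden state of each modality's LSTM as its summary.
        _, (h_t, _) = self.text_lstm(text)
        _, (h_a, _) = self.audio_lstm(audio)
        _, (h_v, _) = self.visual_lstm(visual)
        # Stack the summaries into a length-3 "sequence" of modalities.
        modalities = torch.stack([h_t[-1], h_a[-1], h_v[-1]], dim=1)
        fused, _ = self.fusion_attn(modalities, modalities, modalities)
        # Pool over modalities and classify.
        return self.classifier(fused.mean(dim=1))


# Toy example with randomly generated features (sequence lengths and
# per-modality feature sizes are placeholders, not MOSI's actual values).
model = LateFusionSentimentModel(text_dim=300, audio_dim=74, visual_dim=47)
logits = model(torch.randn(8, 20, 300),
               torch.randn(8, 20, 74),
               torch.randn(8, 20, 47))
print(logits.shape)  # torch.Size([8, 2])
```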