Variational Fusion for Multimodal Sentiment Analysis

08/13/2019
by Navonil Majumder, et al.

Multimodal fusion is considered a key step in multimodal tasks such as sentiment analysis, emotion detection, question answering, and others. Most of the recent work on multimodal fusion does not guarantee the fidelity of the multimodal representation with respect to the unimodal representations. In this paper, we propose a variational autoencoder-based approach for modality fusion that minimizes information loss between unimodal and multimodal representations. We empirically show that this method outperforms the state-of-the-art methods by a significant margin on several popular datasets.
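
The abstract describes the mechanism only at a high level, so a concrete illustration may help: below is a minimal PyTorch sketch of a VAE-style fusion bottleneck in the spirit of the abstract, where unimodal feature vectors are concatenated, encoded into a Gaussian latent code, and decoded back into per-modality reconstructions, so the fused code is penalized for discarding unimodal information. Everything here is an assumption for illustration, not the authors' implementation: the names VariationalFusion and vae_fusion_loss, the layer sizes, the CMU-MOSI-style feature dimensions (300/74/35 for text/audio/video), and the beta weight on the KL term are all hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalFusion(nn.Module):
    """Hypothetical VAE-style fusion bottleneck (illustrative, not the paper's code).

    Unimodal vectors are concatenated and encoded into a Gaussian latent
    code z; a decoder reconstructs each unimodal vector from z, so z is
    penalized for losing unimodal information.
    """

    def __init__(self, dims=(300, 74, 35), latent_dim=128, hidden=256):
        super().__init__()
        self.dims = dims
        # Inference network q(z | x_text, x_audio, x_video)
        self.enc = nn.Sequential(nn.Linear(sum(dims), hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        # Generative network p(x | z), with one output head per modality
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(hidden, d) for d in dims)

    def forward(self, modalities):
        h = self.enc(torch.cat(modalities, dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        h_dec = self.dec(z)
        recons = [head(h_dec) for head in self.heads]
        return z, mu, logvar, recons

def vae_fusion_loss(modalities, recons, mu, logvar, beta=1.0):
    # Reconstruction term: keep the fused code faithful to every modality
    rec = sum(F.mse_loss(r, m) for r, m in zip(recons, modalities))
    # KL term: regularize q(z|x) toward the standard normal prior
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kl
```

In a full pipeline, the fused code z would feed a downstream sentiment classifier and the ELBO-style loss above would be added to the task loss:

```python
# Toy usage with batch size 32 and the assumed feature sizes
text, audio, video = torch.randn(32, 300), torch.randn(32, 74), torch.randn(32, 35)
model = VariationalFusion()
z, mu, logvar, recons = model([text, audio, video])
loss = vae_fusion_loss([text, audio, video], recons, mu, logvar)
loss.backward()  # in practice, combined with the sentiment task loss
```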

Related research

Tensor Fusion Network for Multimodal Sentiment Analysis (07/23/2017)
Multimodal sentiment analysis is an increasingly popular research area, ...

Dynamic Multimodal Fusion (03/31/2022)
Deep multimodal learning has achieved great progress in recent years. Ho...

Progressive Fusion for Multimodal Integration (09/01/2022)
Integration of multimodal information from various sources has been show...

EffMulti: Efficiently Modeling Complex Multimodal Interactions for Emotion Analysis (12/16/2022)
Humans are skilled in reading the interlocutor's emotion from multimodal...

MMLatch: Bottom-up Top-down Fusion for Multimodal Sentiment Analysis (01/24/2022)
Current deep learning approaches for multimodal fusion rely on bottom-up...

Adapted Multimodal BERT with Layer-wise Fusion for Sentiment Analysis (12/01/2022)
Multimodal learning pipelines have benefited from the success of pretrai...

Combining Sentiment Lexica with a Multi-View Variational Autoencoder (04/05/2019)
When assigning quantitative labels to a dataset, different methodologies...
