Investigating Audio, Visual, and Text Fusion Methods for End-to-End Automatic Personality Prediction

05/02/2018
by Elham J. Barezi, et al.

We propose a tri-modal architecture to predict Big Five personality trait scores from video clips, with separate channels for audio, text, and video data. Each channel uses stacked Convolutional Neural Networks. The channels are fused both at the decision level and by concatenating their respective fully connected layers. We show that the multimodal fusion approach outperforms every single-modality channel, with an improvement of 9.4% over the best individual modality (video). Training the fused network end-to-end with full backpropagation also outperforms a linear combination of modalities, indicating that complex interactions between modalities can be leveraged to build better models. Furthermore, the fused model reveals the prediction relevance of each modality for each trait. The described model can be used to increase the emotional intelligence of virtual agents.
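To make the fusion idea concrete, below is a minimal sketch of the feature-level variant, in which each modality is processed by its own stacked CNN and the resulting fully connected layers are concatenated before a shared regression head. This is an illustrative PyTorch sketch, not the authors' exact model: the layer counts, hidden sizes, and per-modality input dimensions (audio_dim, text_dim, video_dim) are assumptions, and the decision-level variant would instead combine per-channel predictions.

```python
# Illustrative sketch (assumed shapes and layer sizes), not the paper's exact model:
# three stacked-CNN channels whose fully connected outputs are concatenated
# and regressed onto the five Big Five trait scores.
import torch
import torch.nn as nn

class ChannelCNN(nn.Module):
    """Stacked 1-D CNN over a sequence of per-frame features for one modality."""
    def __init__(self, in_dim, hidden=64, out_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_dim, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # pool over the time axis
        )
        self.fc = nn.Linear(hidden, out_dim)  # per-modality fully connected layer

    def forward(self, x):                     # x: (batch, in_dim, time)
        return torch.relu(self.fc(self.conv(x).squeeze(-1)))

class TriModalFusion(nn.Module):
    """Feature-level fusion: concatenate the three FC outputs, regress 5 traits."""
    def __init__(self, audio_dim=68, text_dim=300, video_dim=512):
        super().__init__()
        self.audio = ChannelCNN(audio_dim)
        self.text = ChannelCNN(text_dim)
        self.video = ChannelCNN(video_dim)
        self.head = nn.Sequential(
            nn.Linear(3 * 128, 128), nn.ReLU(),
            nn.Linear(128, 5), nn.Sigmoid(),  # Big Five scores in [0, 1]
        )

    def forward(self, audio, text, video):
        fused = torch.cat([self.audio(audio), self.text(text), self.video(video)], dim=1)
        return self.head(fused)

# Toy forward pass: batch of 2 clips, 100 time steps per modality.
model = TriModalFusion()
scores = model(torch.randn(2, 68, 100), torch.randn(2, 300, 100), torch.randn(2, 512, 100))
print(scores.shape)  # torch.Size([2, 5])
```

Because the three channels and the fusion head form a single computation graph, the whole model can be trained end-to-end with full backpropagation, the setting the paper finds superior to a fixed linear combination of modality outputs.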
