EmoBed: Strengthening Monomodal Emotion Recognition via Training with Crossmodal Emotion Embeddings

07/23/2019
by   Jing Han, et al.

Despite remarkable advances in emotion recognition, existing systems are restrained either by the inherently limited expressiveness of a single modality or by the requirement that all involved modalities be synchronously present. Motivated by this, we propose a novel crossmodal emotion embedding framework called EmoBed, which aims to leverage knowledge from auxiliary modalities to improve the performance of an emotion recognition system at hand. The framework comprises two main learning components, i.e., joint multimodal training and crossmodal training. Both explore the underlying semantic emotion information, the former through a shared recognition network and the latter through a shared emotion embedding space. In doing so, a system trained with this approach can efficiently make use of complementary information from other modalities; nevertheless, the presence of these auxiliary modalities is not demanded during inference. To empirically investigate the effectiveness and robustness of the proposed framework, we perform extensive experiments on two benchmark databases, RECOLA and OMG-Emotion, for the tasks of dimensional emotion regression and categorical emotion classification, respectively. The obtained results show that the proposed framework significantly outperforms related baselines in monomodal inference and is competitive with or superior to recently reported systems, which emphasises the importance of the proposed crossmodal learning for emotion recognition.
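The core idea of training with an auxiliary modality but inferring from a single one can be illustrated with a minimal sketch. The code below is not the paper's implementation: the encoder shapes, the tanh projections, the simple MSE alignment term, and the weighting `alpha` are all illustrative assumptions. It merely shows two modality-specific encoders mapped into a shared embedding space, a training loss that combines a task term with an embedding-alignment term, and an inference step that needs only the primary (audio) modality.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Project modality-specific features into a shared embedding space."""
    return np.tanh(x @ W)

def crossmodal_loss(e_audio, e_video, y, W_c, alpha=0.5):
    """Softmax cross-entropy on the audio branch, plus an alignment term
    pulling the two modality embeddings together (alpha is a hypothetical
    trade-off weight, not a value from the paper)."""
    logits = e_audio @ W_c
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    task = -np.log(probs[np.arange(len(y)), y] + 1e-12).mean()
    align = np.mean((e_audio - e_video) ** 2)
    return task + alpha * align

# Toy data: 4 samples, audio features (dim 10), video features (dim 12), 3 classes.
x_audio = rng.normal(size=(4, 10))
x_video = rng.normal(size=(4, 12))
y = np.array([0, 1, 2, 1])

W_audio = rng.normal(scale=0.1, size=(10, 8))  # audio encoder
W_video = rng.normal(scale=0.1, size=(12, 8))  # video encoder (training only)
W_c = rng.normal(scale=0.1, size=(8, 3))       # shared classifier

# Training-time loss uses both modalities...
loss = crossmodal_loss(encode(x_audio, W_audio), encode(x_video, W_video), y, W_c)

# ...but inference requires only the audio modality.
pred = np.argmax(encode(x_audio, W_audio) @ W_c, axis=1)
```

In a real system the linear maps would be deep networks trained by gradient descent; the point of the sketch is only that the video encoder appears in the loss but not in the inference path.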
