
Disentanglement for audio-visual emotion recognition using multitask setup

by   Raghuveer Peri, et al.

Deep learning models trained on audio-visual data have achieved state-of-the-art performance in emotion recognition. In particular, models trained with multitask learning have shown additional performance improvements. However, such multitask models entangle information between the tasks, encoding the mutual dependencies present in the label distributions of the real-world data used for training. This work explores the disentanglement of multimodal signal representations for the primary task of emotion recognition and a secondary person-identification task. In particular, we develop a multitask framework to extract low-dimensional embeddings that aim to capture emotion-specific information while containing minimal information related to person identity. We evaluate three different techniques for disentanglement and report results of up to 13% improvement in recognition performance.
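One common way to realize this kind of disentanglement is adversarial training with a gradient-reversal layer: the shared encoder is trained to help the emotion head while actively hurting the identity head. The abstract does not specify which three techniques the authors evaluate, so the sketch below is only one plausible instance, not their implementation. All dimensions, hyperparameters (`lam`, `lr`, layer sizes), and the random toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(a):
    e = np.exp(a - a.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def xent(logits, y):
    """Mean softmax cross-entropy and its gradient w.r.t. the logits."""
    p = softmax(logits)
    n = len(y)
    loss = -np.log(p[np.arange(n), y]).mean()
    g = p.copy()
    g[np.arange(n), y] -= 1.0
    return loss, g / n

# Toy dimensions (all hypothetical): 20-d audio-visual features, an 8-d
# shared embedding, 4 emotion classes, 5 speaker identities, 16 samples.
D, H, E, S, N = 20, 8, 4, 5, 16
Ws = rng.normal(0.0, 0.1, (D, H))   # shared encoder
We = rng.normal(0.0, 0.1, (H, E))   # emotion head (primary task)
Wi = rng.normal(0.0, 0.1, (H, S))   # identity head (adversary)
lam, lr = 0.5, 0.1                  # reversal strength, learning rate

x = rng.normal(size=(N, D))
y_emo = rng.integers(0, E, size=N)
y_id = rng.integers(0, S, size=N)

le0, _ = xent(x @ Ws @ We, y_emo)   # emotion loss before training

for _ in range(500):
    z = x @ Ws                      # shared low-dimensional embedding
    le, ge = xent(z @ We, y_emo)
    li, gi = xent(z @ Wi, y_id)
    # Both heads descend their own task losses as usual.
    We_grad = z.T @ ge
    Wi_grad = z.T @ gi
    # Gradient reversal: the encoder descends the emotion loss but
    # ASCENDS the identity loss, pushing speaker information out of z.
    gz = ge @ We.T - lam * (gi @ Wi.T)
    We -= lr * We_grad
    Wi -= lr * Wi_grad
    Ws -= lr * x.T @ gz

print(f"emotion loss: {le0:.3f} -> {le:.3f}, identity loss: {li:.3f}")
```

The coefficient `lam` trades off the two objectives: larger values remove more identity information from the embedding at some cost to emotion accuracy.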

