Domain Adversarial for Acoustic Emotion Recognition

04/20/2018
by Mohammed Abdelwahab, et al.

The performance of speech emotion recognition is affected by differences in the data distributions of the train (source domain) and test (target domain) sets used to build and evaluate the models. This is a common problem, as multiple studies have shown that the performance of emotion classifiers drops when they are exposed to data that does not match the distribution used to build them. The mismatch becomes especially clear when the training and testing data come from different domains, causing a large gap between validation and testing performance. Given the high cost of annotating new data and the abundance of unlabeled data, it is crucial to extract as much useful information as possible from the available unlabeled data. This study investigates the use of adversarial multitask training to extract a representation common to the train and test domains. The primary task is to predict emotional attribute-based descriptors for arousal, valence, or dominance. The secondary task is to learn a common representation in which the train and test domains cannot be distinguished. Through a gradient reversal layer, the gradients coming from the domain classifier are used to bring the source and target domain representations closer. We show that exploiting unlabeled data consistently leads to better emotion recognition performance across all emotional dimensions. We visualize the effect of adversarial training on the feature representation across the proposed deep learning architecture. The analysis shows that the data representations for the train and test domains converge as the data is passed to deeper layers of the network. We also evaluate the difference in performance between a shallow neural network and a deep neural network (DNN), and the effect of the number of shared layers used by the task and domain classifiers.
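The core mechanism the abstract describes, a gradient reversal layer (GRL), can be sketched in a few lines: the forward pass is the identity, while the backward pass flips the sign of the incoming gradient (scaled by a trade-off weight), so the shared encoder is pushed to make the two domains indistinguishable to the domain classifier. The class and parameter names below are illustrative, not taken from the paper's implementation; this is a minimal manual-gradient sketch rather than an autograd integration.

```python
import numpy as np

class GradientReversal:
    """Minimal sketch of a gradient reversal layer (GRL).

    Forward: identity. Backward: gradient multiplied by -lam,
    so updates through this layer *increase* the domain
    classifier's loss, encouraging domain-invariant features.
    """

    def __init__(self, lam=1.0):
        self.lam = lam  # trade-off between emotion and domain losses

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad_output):
        return -self.lam * grad_output  # sign-flipped, scaled gradient

grl = GradientReversal(lam=0.5)
x = np.array([1.0, -2.0, 3.0])
y = grl.forward(x)                 # identical to x
g = grl.backward(np.ones_like(x))  # each entry becomes -0.5
```

In a full model, `lam` is often ramped up over training so the domain-confusion signal does not dominate before the primary emotion task has learned useful features.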


Related research

10/24/2019: Domain adversarial learning for emotion recognition
In practical applications for emotion recognition, users do not always e...

11/21/2017: Unsupervised Adaptation with Domain Separation Networks for Robust Speech Recognition
Unsupervised domain adaptation of speech signal aims at adapting a well-...

12/23/2019: Learning Transferable Features for Speech Emotion Recognition
Emotion recognition from speech is one of the key steps towards emotiona...

02/11/2021: Disentanglement for audio-visual emotion recognition using multitask setup
Deep learning models trained on audio-visual data have been successfully...

11/25/2020: Emotional Semantics-Preserved and Feature-Aligned CycleGAN for Visual Emotion Adaptation
Thanks to large-scale labeled training data, deep neural networks (DNNs)...

04/28/2018: Ladder Networks for Emotion Recognition: Using Unsupervised Auxiliary Tasks to Improve Predictions of Emotional Attributes
Recognizing emotions using few attribute dimensions such as arousal, val...

09/09/2021: Accounting for Variations in Speech Emotion Recognition with Nonparametric Hierarchical Neural Network
In recent years, deep-learning-based speech emotion recognition models h...
