Multimodal Emotion Recognition using Transfer Learning from Speaker Recognition and BERT-based models

02/16/2022
by Sarala Padi, et al.

Automatic emotion recognition plays a key role in human-computer interaction, as it has the potential to enrich next-generation artificial intelligence with emotional intelligence. It finds applications in customer and representative behavior analysis in call centers, gaming, personal assistants, and social robots, to name a few. There has therefore been an increasing demand for robust automatic methods to analyze and recognize emotions. In this paper, we propose a neural-network-based emotion recognition framework that uses a late fusion of transfer-learned and fine-tuned models from the speech and text modalities. More specifically, we i) adapt a residual network (ResNet) model trained on a large-scale speaker recognition task, using transfer learning together with a spectrogram augmentation approach, to recognize emotions from speech, and ii) fine-tune a bidirectional encoder representations from transformers (BERT) based model to represent and recognize emotions from text. The proposed system then combines the ResNet-based and BERT-based model scores using a late fusion strategy to further improve emotion recognition performance. The proposed multimodal solution addresses the data scarcity limitation in emotion recognition through transfer learning, data augmentation, and fine-tuning, thereby improving the generalization performance of the emotion recognition models. We evaluate the effectiveness of the proposed multimodal approach on the interactive emotional dyadic motion capture (IEMOCAP) dataset. Experimental results indicate that both the audio- and text-based models improve emotion recognition performance, and that the proposed multimodal solution achieves state-of-the-art results on the IEMOCAP benchmark.
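To make the late fusion step concrete, the sketch below (not taken from the paper) combines class posteriors from a speech-side model and a text-side model by weighted averaging. The four-class emotion set, the fusion weight, and the dummy logits are illustrative assumptions only.

import torch
import torch.nn.functional as F

# Hypothetical four-class setup, a common configuration for IEMOCAP experiments.
EMOTIONS = ["angry", "happy", "neutral", "sad"]

def late_fusion(audio_logits: torch.Tensor,
                text_logits: torch.Tensor,
                audio_weight: float = 0.5) -> torch.Tensor:
    """Fuse per-utterance scores from the speech (ResNet-style) and text
    (BERT-style) models by weighted averaging of their class posteriors.
    The audio_weight value is a hypothetical tuning parameter."""
    audio_probs = F.softmax(audio_logits, dim=-1)
    text_probs = F.softmax(text_logits, dim=-1)
    return audio_weight * audio_probs + (1.0 - audio_weight) * text_probs

# Example with dummy scores for a single utterance.
audio_logits = torch.tensor([[2.1, 0.3, 0.9, -0.5]])  # speech-model outputs
text_logits = torch.tensor([[1.2, 0.1, 1.8, 0.0]])    # text-model outputs
fused = late_fusion(audio_logits, text_logits, audio_weight=0.6)
print(EMOTIONS[fused.argmax(dim=-1).item()])

In this score-level fusion, each unimodal model is trained independently and only their output distributions are combined, which is one common way to realize the late fusion strategy described above.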

Related research:

08/05/2021 | Improved Speech Emotion Recognition using Transfer Learning and Spectrogram Augmentation
Automatic speech emotion recognition (SER) is a challenging task that pl...

11/20/2022 | Contrastive Regularization for Multimodal Emotion Recognition Using Audio and Text
Speech emotion recognition is a challenge and an important step towards ...

08/09/2022 | Emotion Detection From Tweets Using a BERT and SVM Ensemble Model
Automatic identification of emotions expressed in Twitter data has a wid...

09/06/2019 | Towards Multimodal Emotion Recognition in German Speech Events in Cars using Transfer Learning
The recognition of emotions by humans is a complex process which conside...

06/04/2020 | A Siamese Neural Network with Modified Distance Loss For Transfer Learning in Speech Emotion Recognition
Automatic emotion recognition plays a significant role in the process of...

11/10/2021 | Multimodal End-to-End Group Emotion Recognition using Cross-Modal Attention
Classifying group-level emotions is a challenging task due to complexity...

08/13/2019 | Multimodal Emotion Recognition Using Deep Canonical Correlation Analysis
Multimodal signals are more powerful than unimodal data for emotion reco...
