A pairwise discriminative task for speech emotion recognition

by Zheng Lian, et al.

Speech emotion recognition is an important task in human-machine interaction. However, it faces many challenges, such as the ambiguity of emotion expression and the scarcity of training samples. To address these problems, we propose a novel 'Pairwise discriminative task', which attempts to learn the similarity and distinction between two audios rather than specific labels. In this task, pairwise audios are fed into audio encoding networks to extract audio vectors, followed by discrimination networks that judge whether the two audios belong to the same emotion category. The system is optimized in an end-to-end manner to minimize a loss function that combines cosine similarity loss and cross-entropy loss. To verify the quality of the audio representation vectors extracted by the system, we test them on the IEMOCAP database, a common evaluation corpus. We achieve 56.33% accuracy on the test database, which surpasses five baseline speech emotion recognition networks.
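The combined objective described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the mixing weight `alpha`, the hinge on negative pairs, and the function names are assumptions for illustration only. `a` and `b` stand for the audio vectors produced by the encoder networks, and `p_same` for the discrimination network's predicted probability that the pair shares an emotion.

```python
import numpy as np

def cosine_similarity(a, b):
    # cosine of the angle between the two audio embedding vectors
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def pairwise_loss(a, b, p_same, y, alpha=0.5):
    """Combined pairwise loss (sketch).

    a, b    : audio embedding vectors from the encoder networks
    p_same  : predicted probability that the pair shares an emotion (in (0, 1))
    y       : 1 if the two audios have the same emotion label, else 0
    alpha   : mixing weight between the two terms (assumed hyperparameter)
    """
    cos = cosine_similarity(a, b)
    # cosine-similarity term: pull same-emotion pairs together (cos -> 1),
    # penalize positive similarity for different-emotion pairs (hinge at 0)
    cos_loss = y * (1.0 - cos) + (1 - y) * max(cos, 0.0)
    # binary cross-entropy on the discrimination network's same/different output
    ce_loss = -(y * np.log(p_same) + (1 - y) * np.log(1.0 - p_same))
    return alpha * cos_loss + (1 - alpha) * ce_loss
```

In an end-to-end setup, both terms would be backpropagated jointly through the discrimination and encoder networks; a confident, correct prediction on a well-separated pair yields a small loss, while a wrong same/different prediction is penalized by the cross-entropy term.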






