Is Style All You Need? Dependencies Between Emotion and GST-based Speaker Recognition

11/15/2022
by Morgan Sandler, et al.

In this work, we study the hypothesis that speaker identity embeddings extracted from speech samples may be used for the detection and classification of emotion. In particular, we show that emotions can be effectively identified by learning speaker identities using a 1-D Triplet Convolutional Neural Network (CNN) Global Style Token (GST) scheme (e.g., the DeepTalk network) and reusing the trained speaker recognition model weights to generate features in the emotion classification domain. The automatic speaker recognition (ASR) network is trained on the VoxCeleb1, VoxCeleb2, and Librispeech datasets with a triplet training loss function using speaker identity labels. Using a Support Vector Machine (SVM) classifier, we map speaker identity embeddings into discrete emotion categories from the CREMA-D, IEMOCAP, and MSP-Podcast datasets. On the task of speech emotion detection, we obtain 80.8% accuracy on acted emotion samples from CREMA-D, 81.2% on IEMOCAP, and 66.9% on MSP-Podcast. We also propose a novel two-stage hierarchical classifier (HC) approach, which demonstrates a further +2% accuracy improvement. Through this work, we seek to convey the importance of holistically modeling intra-user variation within audio samples.
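The pipeline described above can be sketched roughly as follows: a pretrained GST-based speaker encoder produces fixed-length identity embeddings, and an SVM then maps those embeddings to discrete emotion categories. The sketch below is a minimal illustration, not the authors' implementation: scikit-learn is assumed for the SVM stage, and `extract_embedding` is a hypothetical wrapper standing in for the DeepTalk/GST encoder, whose exact API is not given in the abstract.

```python
# Minimal sketch, assuming a pretrained speaker-embedding encoder is available.
# `extract_embedding` is a hypothetical placeholder for the DeepTalk/GST model;
# the SVM stage uses scikit-learn as one plausible choice.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score


def extract_embedding(encoder, waveform: np.ndarray) -> np.ndarray:
    """Hypothetical wrapper: map a raw waveform to a fixed-length
    speaker-identity embedding using the pretrained GST-based encoder."""
    return encoder(waveform)  # assumed to return a 1-D feature vector


def train_emotion_svm(embeddings: np.ndarray, emotion_labels: np.ndarray):
    """Fit an SVM that maps speaker-identity embeddings to discrete
    emotion categories, mirroring the second stage of the pipeline."""
    X_train, X_test, y_train, y_test = train_test_split(
        embeddings, emotion_labels,
        test_size=0.2, stratify=emotion_labels, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X_train, y_train)
    print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
    return clf
```

In this sketch, the speaker encoder is frozen and only the SVM is fit on emotion labels, which is the sense in which the trained speaker recognition weights are "reused" as a feature extractor for the emotion domain.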
