Employing Emotion Cues to Verify Speakers in Emotional Talking Environments

07/01/2017
by   Ismail Shahin, et al.
People usually talk neutrally in environments free of abnormal talking conditions such as stress and emotion. Other emotional conditions, such as happiness, anger, and sadness, can also affect a person's talking tone, and such emotions are directly influenced by the person's health status. Speakers can be verified far more easily in neutral talking environments than in emotional ones; consequently, speaker verification systems do not perform as well in emotional talking environments as they do in neutral ones. In this work, a two-stage approach has been employed and evaluated to improve speaker verification performance in emotional talking environments. The approach exploits speaker emotion cues (a text-independent, emotion-dependent speaker verification problem) using both Hidden Markov Models (HMMs) and Suprasegmental Hidden Markov Models (SPHMMs) as classifiers. It comprises two cascaded stages that combine and integrate an emotion recognizer and a speaker recognizer into a single recognizer. The architecture has been tested on two separate emotional speech databases: our collected database and the Emotional Prosody Speech and Transcripts database. The results show that the proposed approach is promising, with a significant improvement over previous studies and over other approaches such as emotion-independent speaker verification and emotion-dependent speaker verification based entirely on HMMs.
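The cascade described above can be illustrated with a toy sketch. This is not the paper's HMM/SPHMM implementation: every model here is a hypothetical 1-D Gaussian over a single acoustic feature standing in for a real likelihood model, and all names, means, and thresholds are illustrative assumptions. Stage 1 recognizes the utterance's emotion; stage 2 verifies the claimed speaker with a log-likelihood ratio against that emotion's models.

```python
import math

def log_gauss(x, mean, std):
    """Log-density of a 1-D Gaussian (stand-in for an HMM/SPHMM score)."""
    return -0.5 * math.log(2 * math.pi * std ** 2) - (x - mean) ** 2 / (2 * std ** 2)

# Hypothetical per-emotion models (mean, std) for stage 1.
EMOTION_MODELS = {"neutral": (0.0, 1.0), "angry": (3.0, 1.0), "sad": (-3.0, 1.0)}

# Hypothetical emotion-dependent models of the claimed speaker (stage 2)
# and a broader background model per emotion for the likelihood ratio.
SPEAKER_MODELS = {"neutral": (0.2, 1.0), "angry": (3.5, 1.0), "sad": (-2.5, 1.0)}
BACKGROUND_MODELS = {"neutral": (0.0, 2.0), "angry": (3.0, 2.0), "sad": (-3.0, 2.0)}

def verify(feature, threshold=0.0):
    # Stage 1: pick the emotion whose model best explains the utterance.
    emotion = max(EMOTION_MODELS,
                  key=lambda e: log_gauss(feature, *EMOTION_MODELS[e]))
    # Stage 2: emotion-dependent verification via a log-likelihood ratio
    # between the claimed speaker's model and the background model.
    llr = (log_gauss(feature, *SPEAKER_MODELS[emotion])
           - log_gauss(feature, *BACKGROUND_MODELS[emotion]))
    return emotion, llr >= threshold
```

For example, `verify(3.6)` first selects "angry" in stage 1, then accepts or rejects the speaker claim from the angry-specific likelihood ratio; routing stage 2 through the recognized emotion is what makes the verifier emotion-dependent rather than emotion-independent.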

research
06/29/2017

Employing both Gender and Emotion Cues to Enhance Speaker Identification Performance in Emotional Talking Environments

Speaker recognition performance in emotional talking environments is not...
research
01/22/2018

Identifying Speakers Using Their Emotion Cues

This paper addresses the formulation of a new speaker identification app...
research
09/03/2018

Three-Stage Speaker Verification Architecture in Emotional Talking Environments

Speaker verification performance in neutral talking environment is usual...
research
03/31/2018

Speaker Verification in Emotional Talking Environments based on Three-Stage Framework

This work is dedicated to introducing, executing, and assessing a three-...
research
06/29/2017

Speaker Identification in each of the Neutral and Shouted Talking Environments based on Gender-Dependent Approach Using SPHMMs

It is well known that speaker identification performs extremely well in ...
research
06/14/2022

Exploring speaker enrolment for few-shot personalisation in emotional vocalisation prediction

In this work, we explore a novel few-shot personalisation architecture f...
research
11/17/2022

Privacy against Real-Time Speech Emotion Detection via Acoustic Adversarial Evasion of Machine Learning

Emotional Surveillance is an emerging area with wide-reaching privacy co...
