SVEva Fair: A Framework for Evaluating Fairness in Speaker Verification

07/26/2021
by Wiebke Toussaint, et al.

Despite the success of deep neural networks (DNNs) in enabling on-device voice assistants, increasing evidence of bias and discrimination in machine learning raises the urgency of investigating the fairness of these systems. Speaker verification is a form of biometric identification that grants access to voice assistants. Due to a lack of fairness metrics and evaluation frameworks appropriate for testing speaker verification components, little is known about how model performance varies across subgroups, or what factors influence that variation. To tackle this emerging challenge, we design and develop SVEva Fair, an accessible, actionable and model-agnostic framework for evaluating the fairness of speaker verification components. The framework provides evaluation measures and visualisations to interrogate model performance across speaker subgroups and to compare fairness between models. We demonstrate SVEva Fair in a case study with end-to-end DNNs trained on the VoxCeleb datasets, revealing potential bias in existing embedded speech recognition systems based on the demographic attributes of speakers. Our evaluation shows that publicly accessible benchmark models are not fair: they consistently produce worse predictions for some nationalities, and for female speakers of most nationalities. To pave the way for fair and reliable embedded speaker verification, SVEva Fair has been implemented as an open-source Python library and can be integrated into the embedded ML development pipeline, helping developers and researchers troubleshoot unreliable speaker verification performance and select high-impact approaches for mitigating fairness challenges.
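The kind of per-subgroup analysis the abstract describes — computing a speaker verification error metric such as the equal error rate (EER) separately for each demographic subgroup and comparing the results — can be sketched as below. This is a minimal illustrative example, not SVEva Fair's actual API: the function names and the simple threshold-sweep EER approximation are assumptions.

```python
import numpy as np

def equal_error_rate(scores, labels):
    """Approximate the equal error rate (EER): the operating point where
    the false acceptance rate (FAR) equals the false rejection rate (FRR).
    `scores` are trial similarity scores; `labels` are 1 for target
    (same-speaker) trials and 0 for impostor trials."""
    order = np.argsort(scores)
    labels = np.asarray(labels)[order]
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    # Sweep thresholds: after rejecting the i+1 lowest-scoring trials,
    # FRR is the fraction of targets rejected, FAR the fraction of
    # impostors still accepted.
    frr = np.cumsum(labels) / n_pos
    far = (n_neg - np.cumsum(1 - labels)) / n_neg
    idx = np.argmin(np.abs(far - frr))
    return (far[idx] + frr[idx]) / 2

def subgroup_eers(scores, labels, groups):
    """Evaluate the EER separately for each speaker subgroup
    (e.g. nationality or gender), as a fairness audit would."""
    scores, labels, groups = map(np.asarray, (scores, labels, groups))
    return {g: equal_error_rate(scores[groups == g], labels[groups == g])
            for g in np.unique(groups)}
```

A large gap between subgroup EERs (for instance, a markedly higher EER for female speakers of one nationality) is the kind of disparity the framework's measures and visualisations are designed to surface.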


research
04/27/2022

Study on the Fairness of Speaker Verification Systems on Underrepresented Accents in English

Speaker verification (SV) systems are currently being used to make sensi...
research
04/05/2022

Design Guidelines for Inclusive Speaker Verification Evaluation Datasets

Speaker verification (SV) provides billions of voice-enabled devices wit...
research
01/24/2022

Bias in Automated Speaker Recognition

Automated speaker recognition uses data processing to identify speakers ...
research
02/23/2022

Improving fairness in speaker verification via Group-adapted Fusion Network

Modern speaker verification models use deep neural networks to encode ut...
research
12/25/2021

NeuronFair: Interpretable White-Box Fairness Testing through Biased Neuron Identification

Deep neural networks (DNNs) have demonstrated their outperformance in va...
research
08/27/2023

Fairness and Privacy in Voice Biometrics: A Study of Gender Influences Using wav2vec 2.0

This study investigates the impact of gender information on utility, pri...
research
07/15/2022

Adversarial Reweighting for Speaker Verification Fairness

We address performance fairness for speaker verification using the adver...
