DeepVOX: Discovering Features from Raw Audio for Speaker Recognition in Degraded Audio Signals

08/26/2020
by Anurag Chowdhury et al.

Automatic speaker recognition algorithms typically use pre-defined filterbanks, such as Mel-Frequency and Gammatone filterbanks, to characterize speech audio. The design of these filterbanks is based on domain knowledge and limited empirical observations; the resulting features, therefore, may not generalize well to different types of audio degradation. In this work, we propose a deep learning-based technique to induce the filterbank design from vast amounts of speech audio. The purpose of such a filterbank is to extract features that are robust to degradations in the input audio. To this end, first, a 1D convolutional neural network is designed to learn a time-domain filterbank, called DeepVOX, directly from raw speech audio. Second, an adaptive triplet mining technique is developed to efficiently mine the data samples best suited to train the filterbank. Third, a detailed ablation study of the DeepVOX filterbanks reveals the presence of both vocal source and vocal tract characteristics in the extracted features. Experimental results on the VoxCeleb2, NIST SRE 2008 and 2010, and Fisher speech datasets demonstrate the efficacy of the DeepVOX features across a variety of audio degradations, multi-lingual speech data, and speech audio of varying duration. The DeepVOX features also improve the performance of existing speaker recognition algorithms, such as xVector-PLDA and iVector-PLDA.
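The abstract describes two technical components: a 1D convolutional front-end that learns a time-domain filterbank from raw waveforms, and an adaptive triplet mining step that selects informative training samples. The sketch below illustrates both ideas in PyTorch under stated assumptions; the filter count, kernel length, hop size, log compression, and the hard-triplet selection rule are illustrative choices made for this example, not the paper's exact design.

```python
import torch
import torch.nn as nn


class LearnableFilterbank(nn.Module):
    """Illustrative 1D-CNN front-end that learns a time-domain filterbank
    from raw speech. Layer sizes are assumptions, not DeepVOX's exact
    architecture."""

    def __init__(self, num_filters=40, kernel_size=401, hop=160):
        super().__init__()
        # Each output channel acts as one learned time-domain filter.
        self.filters = nn.Conv1d(1, num_filters, kernel_size,
                                 stride=hop, padding=kernel_size // 2,
                                 bias=False)

    def forward(self, waveform):
        # waveform: (batch, samples) of raw audio, e.g. sampled at 16 kHz.
        x = waveform.unsqueeze(1)        # (batch, 1, samples)
        x = self.filters(x)              # (batch, num_filters, frames)
        # Log-compressed magnitude, analogous to log filterbank energies.
        return torch.log(torch.abs(x) + 1e-6)


def mine_hard_triplets(embeddings, labels, margin=0.2):
    """Illustrative adaptive (hard) triplet mining: for each anchor, pick
    the hardest positive and hardest negative that still violate the
    margin. The actual mining rule in the paper may differ."""
    dists = torch.cdist(embeddings, embeddings)   # pairwise distances
    idx = torch.arange(len(labels))
    triplets = []
    for a in range(len(labels)):
        same = labels == labels[a]
        pos = torch.where(same & (idx != a))[0]
        neg = torch.where(~same)[0]
        if len(pos) == 0 or len(neg) == 0:
            continue
        p = pos[dists[a, pos].argmax()]           # hardest positive
        n = neg[dists[a, neg].argmin()]           # hardest negative
        if dists[a, p] - dists[a, n] + margin > 0:  # violates the margin
            triplets.append((a, int(p), int(n)))
    return triplets
```

Training the convolutional filters jointly with a speaker-embedding network under a triplet loss is what lets the learned filterbank adapt to degraded audio, rather than relying on a fixed perceptual frequency scale.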


