Dual-label Deep LSTM Dereverberation For Speaker Verification

09/08/2018
by Hao Zhang, et al.

In this paper, we present a reverberation removal approach for speaker verification that uses dual-label deep neural networks (DNNs). The networks perform feature mapping between the spectral features of reverberant and clean speech. Long short-term memory (LSTM) recurrent neural networks are trained to map corrupted Mel filterbank (MFB) features to two sets of labels: i) the clean MFB features, and ii) either estimated pitch tracks or the fast Fourier transform (FFT) spectrogram of clean speech. The performance of reverberation removal is evaluated by the equal error rates (EERs) of speaker verification experiments.
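
The following is a minimal sketch, not the authors' implementation, of what a dual-label LSTM dereverberation network could look like. It assumes 40-dimensional MFB inputs and clean-MFB targets, a 257-bin clean FFT spectrogram as the second label set, a PyTorch implementation, and an equally simple joint MSE objective with an assumed 0.5 weight on the auxiliary head; the class name DualLabelLSTM and all dimensions are illustrative only.

import torch
import torch.nn as nn

class DualLabelLSTM(nn.Module):
    def __init__(self, mfb_dim=40, fft_bins=257, hidden=512, layers=2):
        super().__init__()
        # Shared recurrent trunk maps reverberant MFB frames to a hidden sequence.
        self.lstm = nn.LSTM(mfb_dim, hidden, num_layers=layers, batch_first=True)
        # Head 1: clean MFB features (primary label set).
        self.mfb_head = nn.Linear(hidden, mfb_dim)
        # Head 2: clean FFT spectrogram (second label set); a pitch-track head
        # could be substituted here, matching the paper's alternative labeling.
        self.fft_head = nn.Linear(hidden, fft_bins)

    def forward(self, reverb_mfb):               # (batch, frames, mfb_dim)
        h, _ = self.lstm(reverb_mfb)
        return self.mfb_head(h), self.fft_head(h)

model = DualLabelLSTM()
x = torch.randn(8, 200, 40)                      # 8 utterances, 200 frames each
clean_mfb = torch.randn(8, 200, 40)
clean_fft = torch.randn(8, 200, 257)
pred_mfb, pred_fft = model(x)
# Joint objective over both label sets; the 0.5 weighting is an assumption.
loss = nn.functional.mse_loss(pred_mfb, clean_mfb) + \
       0.5 * nn.functional.mse_loss(pred_fft, clean_fft)
loss.backward()

At verification time, only the clean-MFB output would feed the speaker verification front end; the second head serves as an auxiliary training target in this sketch.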
