Time-Contrastive Learning Based Deep Bottleneck Features for Text-Dependent Speaker Verification

05/11/2019
by   Achintya kr. Sarkar, et al.

There have been a number of studies on extracting bottleneck (BN) features from deep neural networks (DNNs) trained to discriminate speakers, pass-phrases, and triphone states in order to improve the performance of text-dependent speaker verification (TD-SV). However, only moderate success has been achieved. A recent study [1] presented a time-contrastive learning (TCL) concept to exploit the non-stationarity of brain signals for classifying brain states. Speech signals have a similar non-stationarity property, and TCL has the further advantage of requiring no labeled data. We therefore present a TCL-based BN feature extraction method. The method uniformly partitions each speech utterance in a training dataset into a predefined number of multi-frame segments. Each segment in an utterance corresponds to one class, and class labels are shared across utterances. DNNs are then trained to discriminate all speech frames among the classes in order to exploit the temporal structure of speech. In addition, we propose a segment-based unsupervised clustering algorithm to re-assign class labels to the segments. TD-SV experiments were conducted on the RedDots challenge database. The TCL-DNNs were trained using speech data of fixed pass-phrases that were excluded from the TD-SV evaluation set, so the learned features can be considered phrase-independent. We compare the performance of the proposed TCL-BN feature with those of short-time cepstral features and BN features extracted from DNNs discriminating speakers, pass-phrases, speaker+pass-phrase, as well as monophones whose labels and boundaries are generated by three different automatic speech recognition (ASR) systems. Experimental results show that the proposed TCL-BN outperforms cepstral features and speaker+pass-phrase discriminant BN features, and its performance is on par with that of ASR-derived BN features. Moreover, ...
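To make the TCL label assignment concrete, the sketch below (not the authors' code; the function name, the segment count of 10, and the random feature data are illustrative assumptions) shows how per-frame class labels could be generated by uniformly partitioning each utterance into a fixed number of segments whose indices serve as classes shared across utterances:

```python
# Minimal sketch of TCL-style label assignment: each utterance is split
# uniformly into a fixed number of contiguous multi-frame segments, and the
# segment index is used as the class label, shared across all utterances.
import numpy as np

def tcl_labels(num_frames: int, num_segments: int = 10) -> np.ndarray:
    """Return a per-frame label vector of length `num_frames`.

    Frames are partitioned uniformly into `num_segments` contiguous
    segments; every frame in segment k receives label k (0 .. num_segments-1).
    """
    # np.array_split handles the case where num_frames is not divisible by
    # num_segments by making the leading segments one frame longer.
    segments = np.array_split(np.arange(num_frames), num_segments)
    labels = np.empty(num_frames, dtype=np.int64)
    for k, idx in enumerate(segments):
        labels[idx] = k
    return labels

# Example: build (frame, label) training pairs for a frame-level classifier.
# `utterances` would hold per-utterance feature matrices (T_i x feat_dim),
# e.g. cepstral features; random data is used here purely for illustration.
utterances = [np.random.randn(t, 57) for t in (230, 180, 310)]
X = np.vstack(utterances)
y = np.concatenate([tcl_labels(u.shape[0]) for u in utterances])
# A DNN is then trained to classify each frame into one of the segment
# classes; activations of a hidden bottleneck layer give the TCL-BN features.
```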
