Short utterance compensation in speaker verification via cosine-based teacher-student learning of speaker embeddings

10/25/2018
by   Jee-weon Jung, et al.

Short input utterances are one of the most critical factors degrading the performance of speaker verification systems. This study aims to develop an integrated text-independent speaker verification system that accepts utterances as short as 2.05 seconds. To this end, we propose a teacher-student learning framework that maximizes the cosine similarity between two speaker embeddings extracted from a long and a short utterance. In the proposed architecture, convolutional layers extract phonetic-level features, each representing a segment of 130 ms; gated recurrent units then aggregate these phonetic-level features into an utterance-level speaker embedding. Experiments were conducted on the VoxCeleb1 dataset using deep neural networks that take raw waveforms as input and output speaker embeddings. Without short utterance compensation, the equal error rate is 8.72%; the proposed model with compensation exhibits an equal error rate of 10.08% on short utterances, limiting the performance degradation caused by the reduced duration.
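The core of the proposed approach is a teacher-student objective that maximizes the cosine similarity between the teacher's embedding of a long utterance and the student's embedding of a short crop of the same utterance. A minimal numpy sketch of that loss is below; the embedding dimension (512) and the toy "embeddings" are illustrative assumptions, not details from the paper.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two speaker-embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def ts_cosine_loss(teacher_emb, student_emb):
    # Maximizing cosine similarity between the teacher embedding
    # (long utterance) and the student embedding (short utterance)
    # is equivalent to minimizing 1 - cos_sim.
    return 1.0 - cosine_similarity(teacher_emb, student_emb)

# Toy example: the student embedding is a noisy version of the
# teacher embedding, standing in for a 2.05 s crop (hypothetical data).
rng = np.random.default_rng(0)
teacher = rng.standard_normal(512)                   # full-utterance embedding
student = teacher + 0.1 * rng.standard_normal(512)   # short-utterance embedding

loss = ts_cosine_loss(teacher, student)
print(f"teacher-student cosine loss: {loss:.4f}")
```

In training, the teacher network would be frozen after conventional speaker-identification training, and only the student (fed short utterances) would be updated to minimize this loss, so that short-utterance embeddings align with long-utterance ones.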
