
Deep Speaker: an End-to-End Neural Speaker Embedding System

by Chao Li, et al.

We present Deep Speaker, a neural speaker embedding system that maps utterances to a hypersphere where speaker similarity is measured by cosine similarity. The embeddings generated by Deep Speaker can be used for many tasks, including speaker identification, verification, and clustering. We experiment with ResCNN and GRU architectures to extract the acoustic features, then mean pool to produce utterance-level speaker embeddings, and train using triplet loss based on cosine similarity. Experiments on three distinct datasets suggest that Deep Speaker outperforms a DNN-based i-vector baseline. For example, Deep Speaker reduces the verification equal error rate by 50% (relatively) and improves the identification accuracy by 60% (relatively) on a text-independent dataset. We also present results that suggest adapting from a model trained with Mandarin can improve accuracy for English speaker recognition.
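The core recipe in the abstract, mean-pooling frame-level features into a length-normalized utterance embedding and training with a triplet loss based on cosine similarity, can be sketched as follows. This is a minimal framework-agnostic NumPy illustration, not the paper's implementation; the function names and the margin value are illustrative assumptions.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    # Project vectors onto the unit hypersphere, so that the dot
    # product of two embeddings equals their cosine similarity.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def utterance_embedding(frame_features):
    # Mean-pool frame-level features of shape (T, D) into a single
    # utterance-level vector (D,), then length-normalize it.
    return l2_normalize(frame_features.mean(axis=0))

def cosine_triplet_loss(anchor, positive, negative, margin=0.1):
    # anchor/positive/negative: (batch, D) unit-norm embeddings.
    # The margin value here is an illustrative choice, not the paper's.
    sim_ap = np.sum(anchor * positive, axis=-1)  # cos(anchor, positive)
    sim_an = np.sum(anchor * negative, axis=-1)  # cos(anchor, negative)
    # Hinge: the same-speaker pair must be at least `margin` more
    # similar than the different-speaker pair, else incur a penalty.
    return np.maximum(0.0, sim_an - sim_ap + margin).mean()
```

During training, each triplet pairs an anchor utterance with a positive from the same speaker and a negative from a different speaker, pushing same-speaker embeddings together and different-speaker embeddings apart on the hypersphere.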


Related research:

- Cosine-Distance Virtual Adversarial Training for Semi-Supervised Speaker-Discriminative Acoustic Embeddings
- Speaker Diarization using Deep Recurrent Convolutional Neural Networks for Speaker Embeddings
- Cosine similarity-based adversarial process
- Robust speaker recognition using unsupervised adversarial invariance
- Unified Hypersphere Embedding for Speaker Recognition

Code Repositories


Speaker embedding (verification and recognition) using PyTorch
