TristouNet: Triplet Loss for Speaker Turn Embedding

09/14/2016
by   Hervé Bredin, et al.

TristouNet is a neural network architecture based on long short-term memory (LSTM) recurrent networks, designed to project speech sequences into a fixed-dimensional Euclidean space. Thanks to the triplet loss paradigm used for training, the resulting sequence embeddings can be compared directly with the Euclidean distance for speaker comparison purposes. Experiments on short (between 500 ms and 5 s) speech turn comparison and speaker change detection show that TristouNet brings significant improvements over the current state-of-the-art techniques for both tasks.
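To make the idea concrete, the sketch below shows a TristouNet-style setup in PyTorch: a bidirectional LSTM reads an acoustic feature sequence, temporal average pooling and a dense layer produce a fixed-dimensional, L2-normalized embedding, and a triplet margin loss pulls same-speaker pairs together while pushing different-speaker pairs apart in Euclidean distance. This is a minimal illustration, not the authors' implementation; the feature dimension, layer sizes, and margin are assumed values for the example.

```python
# Minimal sketch (not the paper's code) of a triplet-loss speaker embedder.
# Feature dimension, hidden size, embedding size, and margin are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpeakerEmbedder(nn.Module):
    """Bi-LSTM over an acoustic feature sequence, pooled to a fixed-size,
    unit-norm embedding comparable with plain Euclidean distance."""
    def __init__(self, n_features=35, hidden_size=16, embedding_dim=16):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_size, embedding_dim)

    def forward(self, x):                      # x: (batch, frames, n_features)
        out, _ = self.lstm(x)                  # (batch, frames, 2 * hidden)
        pooled = out.mean(dim=1)               # temporal average pooling
        emb = torch.tanh(self.fc(pooled))
        return F.normalize(emb, p=2, dim=1)    # L2-normalized embedding

model = SpeakerEmbedder()
triplet_loss = nn.TripletMarginLoss(margin=0.2, p=2)

# Anchor and positive come from the same speaker, negative from another one.
anchor = model(torch.randn(8, 100, 35))
positive = model(torch.randn(8, 100, 35))
negative = model(torch.randn(8, 100, 35))
loss = triplet_loss(anchor, positive, negative)
loss.backward()
```

At test time, two speech turns are judged to come from the same speaker when the Euclidean distance between their embeddings falls below a threshold, which is what enables both speaker turn comparison and speaker change detection.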

