Gaussian meta-embeddings for efficient scoring of a heavy-tailed PLDA model

02/27/2018
by Niko Brümmer et al.

Embeddings in machine learning are low-dimensional representations of complex input patterns, with the property that simple geometric operations like Euclidean distances and dot products can be used for classification and comparison tasks. The proposed meta-embeddings are special embeddings that live in more general inner product spaces. They are designed to propagate uncertainty to the final output in speaker recognition and similar applications. The familiar Gaussian PLDA model (GPLDA) can be re-formulated as an extractor for Gaussian meta-embeddings (GMEs), such that likelihood ratio scores are given by Hilbert space inner products between Gaussian likelihood functions. GMEs extracted by the GPLDA model have fixed precisions and do not propagate uncertainty. We show that a generalization to heavy-tailed PLDA gives GMEs with variable precisions, which do propagate uncertainty. Experiments on NIST SRE 2010 and 2016 show that the proposed method applied to i-vectors without length normalization is up to 20% more accurate than GPLDA applied to length-normalized i-vectors.
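This scoring rule admits a compact implementation. As a rough sketch (not the authors' code): represent each GME by the natural parameters (a, B) of a Gaussian likelihood function f(z) proportional to exp(a'z - z'Bz/2), with a standard normal prior on the latent speaker variable z. The log of the expected likelihood under the prior has a closed form, and the same-speaker hypothesis pools recordings simply by adding natural parameters. The function names below are ours, for illustration only:

    import numpy as np

    def log_inner_product_with_prior(a, B):
        # log E[f(z)] for z ~ N(0, I), where f(z) = exp(a'z - z'Bz/2).
        # Closed form: 0.5 * a'(I+B)^{-1} a - 0.5 * logdet(I+B).
        d = a.shape[0]
        L = np.linalg.cholesky(np.eye(d) + B)   # I + B = L L'
        u = np.linalg.solve(L, a)               # u = L^{-1} a, so u'u = a'(I+B)^{-1}a
        return 0.5 * (u @ u) - np.sum(np.log(np.diag(L)))

    def gme_score(a1, B1, a2, B2):
        # Verification LLR: the same-speaker hypothesis pools the two
        # likelihoods (natural parameters add, since the recordings share
        # one z); the different-speaker hypothesis treats them independently.
        return (log_inner_product_with_prior(a1 + a2, B1 + B2)
                - log_inner_product_with_prior(a1, B1)
                - log_inner_product_with_prior(a2, B2))

In this picture, GPLDA assigns every recording the same fixed precision B, so only a carries information; the heavy-tailed generalization makes the precision data-dependent, which is how per-recording uncertainty reaches the final score.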

Related research

03/24/2018  Fast variational Bayes for heavy-tailed PLDA applied to i-vectors and x-vectors
The standard state-of-the-art backend for text-independent speaker recog...

06/08/2018  Analysis of Length Normalization in End-to-End Speaker Verification System
The classical i-vectors and the latest end-to-end deep speaker embedding...

08/31/2020  Complex-valued embeddings of generic proximity data
Proximities are at the heart of almost all machine learning methods. If ...

03/28/2022  Probabilistic Spherical Discriminant Analysis: An Alternative to PLDA for length-normalized embeddings
In speaker recognition, where speech segments are mapped to embeddings o...

04/06/2020  Probabilistic embeddings for speaker diarization
Speaker embeddings (x-vectors) extracted from very short segments of spe...

08/13/2018  Angular-Based Word Meta-Embedding Learning
Ensembling word embeddings to improve distributed word representations h...

07/01/2021  Almost-Orthogonal Bases for Inner Product Polynomials
In this paper, we consider low-degree polynomials of inner products betw...
