Monaural Audio Speaker Separation with Source Contrastive Estimation

05/12/2017
by Cory Stephenson, et al.

We propose an algorithm to separate simultaneously speaking persons from each other, the "cocktail party problem", using a single microphone. Our approach uses a deep recurrent neural network to regress to a vector space that is descriptive of independent speakers. Such a vector space can embed empirically determined speaker characteristics and is optimized by distinguishing between speaker masks. We call this technique source-contrastive estimation. The methodology is inspired by negative sampling, which has seen success in natural language processing, where an embedding is learned by correlating and de-correlating a given input vector with output weights. Although the matrix determined by the output weights depends on a set of known speakers, we use only the input vectors during inference, which makes the source separation explicitly speaker-independent. Our approach is similar to recent deep neural network clustering and permutation-invariant training research; we use weighted spectral features and masks to augment individual speaker frequencies while filtering out other speakers. Our technique, however, avoids the severe computational burden of those approaches. Furthermore, by training a vector space rather than combinations of different speakers or differences thereof, we avoid the so-called permutation problem during training. Our algorithm offers an intuitive, computationally efficient response to the cocktail party problem, and, most importantly, boasts better empirical performance than other current techniques.
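To make the idea concrete, the sketch below shows one way source-contrastive estimation could be set up: a bidirectional recurrent network maps each time-frequency bin of the mixture spectrogram to a unit-length embedding, and a table holds one output weight vector per training speaker. The loss correlates each bin's embedding with the weight vector of the speaker that dominates that bin and de-correlates it with sampled weight vectors of other speakers, mirroring negative sampling. All names, dimensions, and the exact loss form here are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SourceContrastiveSeparator(nn.Module):
    """Minimal sketch of source-contrastive estimation (SCE).

    A recurrent network maps each time-frequency (T-F) bin of the mixture
    log-spectrogram to an embedding vector. During training, each bin's
    embedding is pushed toward the output weight vector of the speaker
    dominating that bin (positive) and away from the weight vectors of
    sampled other speakers (negatives), in the spirit of negative sampling.
    The speaker weight table is used only during training.
    """

    def __init__(self, n_freq=257, embed_dim=40, n_train_speakers=100, hidden=300):
        super().__init__()
        self.blstm = nn.LSTM(n_freq, hidden, num_layers=2,
                             batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, n_freq * embed_dim)
        # One output weight vector per training speaker (training only).
        self.speaker_vectors = nn.Embedding(n_train_speakers, embed_dim)
        self.n_freq, self.embed_dim = n_freq, embed_dim

    def embed(self, log_spec):
        # log_spec: (batch, time, n_freq) -> unit embeddings (batch, time, n_freq, embed_dim)
        h, _ = self.blstm(log_spec)
        v = self.proj(h).view(log_spec.size(0), log_spec.size(1),
                              self.n_freq, self.embed_dim)
        return F.normalize(v, dim=-1)

    def sce_loss(self, log_spec, dominant_speaker, negative_speakers):
        """dominant_speaker: (batch, time, n_freq) speaker id per T-F bin.
        negative_speakers: (batch, time, n_freq, n_neg) ids of sampled other speakers."""
        v = self.embed(log_spec)                          # (B, T, F, D)
        w_pos = self.speaker_vectors(dominant_speaker)    # (B, T, F, D)
        w_neg = self.speaker_vectors(negative_speakers)   # (B, T, F, N, D)
        pos = (v * w_pos).sum(-1)                         # correlate with true speaker
        neg = (v.unsqueeze(-2) * w_neg).sum(-1)           # de-correlate with others
        return -(F.logsigmoid(pos) + F.logsigmoid(-neg).sum(-1)).mean()
```

At inference only the input-side embeddings are needed: clustering them (for example with k-means, one cluster per speaker in the mixture) yields time-frequency masks that are applied to the mixture spectrogram, so the speakers to be separated need not appear in the training set.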

Related research

04/27/2018 · Deep Speech Denoising with Vector Space Projections
We propose an algorithm to denoise speakers from a single microphone in ...

08/29/2017 · Improving Source Separation via Multi-Speaker Representations
Lately there have been novel developments in deep learning towards solvi...

10/08/2021 · Location-based training for multi-channel talker-independent speaker separation
Permutation-invariant training (PIT) is a dominant approach for addressi...

05/26/2019 · Auditory Separation of a Conversation from Background via Attentional Gating
We present a model for separating a set of voices out of a sound mixture...

07/17/2023 · Vocoder drift compensation by x-vector alignment in speaker anonymisation
For the most popular x-vector-based approaches to speaker anonymisation,...

05/18/2023 · Speech Separation based on Contrastive Learning and Deep Modularization
The current monaural state of the art tools for speech separation relies...

10/26/2020 · Speaker Anonymization with Distribution-Preserving X-Vector Generation for the VoicePrivacy Challenge 2020
In this paper, we present a Distribution-Preserving Voice Anonymization ...
