Deep clustering: Discriminative embeddings for segmentation and separation

08/18/2015
by John R. Hershey, et al.

We address the problem of acoustic source separation in a deep learning framework we call "deep clustering." Rather than directly estimating signals or masking functions, we train a deep network to produce spectrogram embeddings that are discriminative for partition labels given in training data. Deep network approaches offer great learning power and speed, but it has been unclear how to use them to separate signals in a class-independent way. In contrast, spectral clustering approaches are flexible with respect to the classes and number of items to be segmented, but it has been unclear how to leverage the learning power and speed of deep networks. To obtain the best of both worlds, we use an objective function that trains embeddings to yield a low-rank approximation to an ideal pairwise affinity matrix, in a class-independent way. This avoids the high cost of spectral factorization and instead produces compact clusters that are amenable to simple clustering methods. The segmentations are therefore implicitly encoded in the embeddings and can be "decoded" by clustering. Preliminary experiments show that the proposed method can separate speech: when trained on spectrogram features containing mixtures of two speakers and tested on mixtures of a held-out set of speakers, it infers masking functions that improve signal quality by around 6 dB. We show that the model can generalize to three-speaker mixtures despite training only on two-speaker mixtures. The framework can be used without class labels and therefore has the potential to be trained on a diverse set of sound types and to generalize to novel sources. We hope that future work will lead to segmentation of arbitrary sounds, with extensions to microphone array methods as well as image segmentation and other domains.
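As a concrete illustration of the objective described above, the sketch below shows a pairwise-affinity embedding loss and a simple k-means "decoding" step in NumPy. The function names, array shapes, and the k-means decoder are illustrative assumptions made for this page, not the authors' implementation.

```python
import numpy as np

def deep_clustering_loss(V, Y):
    # Objective |V V^T - Y Y^T|_F^2, expanded into small Gram matrices so the
    # N x N pairwise affinity matrix over time-frequency bins is never formed.
    # V: (N, D) unit-norm embeddings, one row per time-frequency bin.
    # Y: (N, C) one-hot partition matrix (1 where source c dominates the bin).
    vtv = V.T @ V   # (D, D)
    vty = V.T @ Y   # (D, C)
    yty = Y.T @ Y   # (C, C)
    return (vtv ** 2).sum() - 2.0 * (vty ** 2).sum() + (yty ** 2).sum()

def decode_masks(V, n_sources, n_iter=50, seed=0):
    # "Decode" a segmentation by running plain k-means on the embedding rows,
    # then turning the cluster assignments into binary masks.
    rng = np.random.default_rng(seed)
    centers = V[rng.choice(len(V), size=n_sources, replace=False)].copy()
    for _ in range(n_iter):
        assign = np.argmax(V @ centers.T, axis=1)  # cosine similarity (unit-norm rows)
        for c in range(n_sources):
            members = V[assign == c]
            if len(members):
                centers[c] = members.mean(axis=0)
                centers[c] /= np.linalg.norm(centers[c]) + 1e-8
    return np.eye(n_sources)[assign]               # (N, n_sources) binary masks

# Toy usage with random data: 2000 "bins", 20-dim embeddings, 2 sources.
N, D, C = 2000, 20, 2
V = np.random.randn(N, D)
V /= np.linalg.norm(V, axis=1, keepdims=True)
Y = np.eye(C)[np.random.randint(0, C, size=N)]
print(deep_clustering_loss(V, Y))
masks = decode_masks(V, n_sources=C)
```

Because the loss is expanded into Gram matrices of size D x D, D x C, and C x C, the full N x N affinity matrix over time-frequency bins is never materialized, which is what keeps the objective tractable for whole spectrograms.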


Related research

11/06/2018
Bootstrapping single-channel source separation via unsupervised spatial clustering on stereo mixtures
Separating an audio scene into isolated sources is a fundamental problem...

11/18/2019
Signal Clustering with Class-independent Segmentation
Radar signals have been dramatically increasing in complexity, limiting ...

10/24/2022
Spectral Clustering-aware Learning of Embeddings for Speaker Diarisation
In speaker diarisation, speaker embedding extraction models often suffer...

02/02/2020
DropClass and DropAdapt: Dropping classes for deep speaker representation learning
Many recent works on deep speaker embeddings train their feature extract...

11/18/2016
Deep Clustering and Conventional Networks for Music Separation: Stronger Together
Deep clustering is the first method to handle general audio separation s...

11/06/2019
Finding Strength in Weakness: Learning to Separate Sounds with Weak Supervision
While there has been much recent progress using deep learning techniques...

05/25/2023
Towards Solving Cocktail-Party: The First Method to Build a Realistic Dataset with Ground Truths for Speech Separation
Speech separation is very important in real-world applications such as h...
