Unsupervised Low Latency Speech Enhancement with RT-GCC-NMF

04/05/2019
by Sean U. N. Wood, et al.

In this paper, we present RT-GCC-NMF: a real-time (RT), two-channel blind speech enhancement algorithm that combines the non-negative matrix factorization (NMF) dictionary learning algorithm with the generalized cross-correlation (GCC) spatial localization method. Using a pre-learned universal NMF dictionary, RT-GCC-NMF operates in a frame-by-frame fashion by associating individual dictionary atoms with target speech or background interference based on their estimated time delays of arrival (TDOAs). We evaluate RT-GCC-NMF on two-channel mixtures of speech and real-world noise from the Signal Separation and Evaluation Campaign (SiSEC). We demonstrate that this approach generalizes to new speakers, acoustic environments, and recording setups from very little training data, outperforms all but one of the algorithms from the SiSEC challenge in terms of overall Perceptual Evaluation methods for Audio Source Separation (PEASS) scores, and compares favourably to the ideal binary mask baseline. Over a wide range of input SNRs, we show that this approach simultaneously improves the PEASS and signal-to-noise ratio (SNR)-based Blind Source Separation (BSS) Eval objective quality metrics, as well as the short-time objective intelligibility (STOI) and extended STOI (ESTOI) objective speech intelligibility metrics. A flexible, soft masking function in the space of NMF activation coefficients offers real-time control of the trade-off between interference suppression and target speaker fidelity. Finally, we use an asymmetric short-time Fourier transform (STFT) to reduce the inherent algorithmic latency of RT-GCC-NMF from 64 ms to 2 ms with no loss in performance. We demonstrate that latencies within the tolerable range for hearing aids are possible on current hardware platforms.
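To make the frame-by-frame operation described above concrete, the NumPy sketch below illustrates one plausible per-frame pipeline: activations are inferred against a fixed, pre-learned NMF dictionary, each atom is assigned a TDOA by pooling GCC-PHAT values weighted by its spectrum, and a soft mask over the activation coefficients keeps atoms near the target TDOA before reconstruction. This is a minimal sketch under stated assumptions; the function and parameter names (infer_activations, atom_tdoas, soft_tdoa_mask, width, slope) are illustrative and do not reproduce the authors' implementation or exact masking function.

```python
import numpy as np

def infer_activations(W, v, n_iter=30, eps=1e-12):
    """Infer non-negative activations h with the dictionary W held fixed,
    using standard multiplicative updates for the KL divergence (W @ h ~ v)."""
    K = W.shape[1]
    h = np.full(K, max(v.mean(), eps))
    for _ in range(n_iter):
        wh = W @ h + eps
        h *= (W.T @ (v / wh)) / (W.sum(axis=0) + eps)
    return h

def atom_tdoas(W, X_left, X_right, freqs, taus, eps=1e-12):
    """Assign a TDOA to each dictionary atom by pooling the per-frequency
    GCC-PHAT values of the current frame, weighted by the atom's spectrum."""
    cross = X_left * np.conj(X_right)
    phat = cross / (np.abs(cross) + eps)        # phase transform weighting
    # gcc[f, t]: steered cross-spectrum at candidate delay taus[t]
    gcc = np.real(phat[:, None] * np.exp(2j * np.pi * freqs[:, None] * taus[None, :]))
    pooled = W.T @ gcc                          # (K, n_taus): per-atom GCC
    return taus[np.argmax(pooled, axis=1)]

def soft_tdoa_mask(tdoas, target_tdoa, width=1e-4, slope=2e-5):
    """Soft mask over atoms: near 1 for atoms whose TDOA lies close to the
    target, decaying smoothly to 0 for atoms attributed to interference.
    The width/slope parameterization is an illustrative assumption."""
    return 1.0 / (1.0 + np.exp((np.abs(tdoas - target_tdoa) - width) / slope))

def enhance_frame(W, X_left, X_right, freqs, taus, target_tdoa=0.0, eps=1e-12):
    """Enhance one STFT frame: infer activations, mask atoms by TDOA, and
    derive a Wiener-like ratio mask applied to both input channels."""
    v = 0.5 * (np.abs(X_left) + np.abs(X_right))            # mean magnitude spectrum
    h = infer_activations(W, v)
    mask = soft_tdoa_mask(atom_tdoas(W, X_left, X_right, freqs, taus), target_tdoa)
    v_target = W @ (mask * h)
    ratio = v_target / (W @ h + eps)                        # frequency-domain soft mask
    return ratio * X_left, ratio * X_right
```

In a streaming setting, enhance_frame would be called once per incoming (asymmetric) STFT frame, with the dictionary W learned offline on unlabeled noisy data. The width and slope parameters of the soft mask are where the real-time trade-off between interference suppression and target speaker fidelity would be exposed to the user.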

Related research

02/05/2019 · A variance modeling framework based on variational autoencoders for speech enhancement
In this paper we address the problem of enhancing speech signals in nois...

11/16/2018 · Semi-supervised multichannel speech enhancement with variational autoencoders and non-negative matrix factorization
In this paper we address speaker-independent multichannel speech enhance...

02/15/2018 · Blind Source Separation with Optimal Transport Non-negative Matrix Factorization
Optimal transport as a loss for machine learning optimization problems h...

09/19/2018 · Switching divergences for spectral learning in blind speech dereverberation
When recorded in an enclosed room, a sound signal will most certainly ge...

10/27/2017 · Separation of Moving Sound Sources Using Multichannel NMF and Acoustic Tracking
In this paper we propose a method for separation of moving sound sources...

11/28/2022 · Differentiable Dictionary Search: Integrating Linear Mixing with Deep Non-Linear Modelling for Audio Source Separation
This paper describes several improvements to a new method for signal dec...

06/30/2020 · A Speech Enhancement Algorithm based on Non-negative Hidden Markov Model and Kullback-Leibler Divergence
In this paper, we propose a novel supervised single-channel speech enhan...
