WHAMR!: Noisy and Reverberant Single-Channel Speech Separation

10/22/2019
by   Matthew Maciejewski, et al.

While significant advances have been made in recent years in the separation of overlapping speech signals, studies have been largely constrained to mixtures of clean, near-field speech, which are not representative of many real-world scenarios. Although the WHAM! dataset introduced noise to the ubiquitous wsj0-2mix dataset, it did not include reverberation, which is generally present in indoor recordings outside of recording studios. The spectral smearing caused by reverberation can result in significant performance degradation for standard deep learning-based speech separation systems, which rely on spectral structure and the sparsity of speech signals to tease apart sources. To address this, we introduce WHAMR!, an augmented version of WHAM! with synthetic reverberated sources, and provide a thorough baseline analysis of current techniques as well as novel cascaded architectures on the newly introduced conditions.
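To make the construction concrete, the sketch below illustrates how a noisy, reverberant two-speaker mixture of the kind described in the abstract can be assembled: each source is convolved with a room impulse response, the reverberated sources are summed, and noise is added at a chosen SNR. This is a minimal illustration only; the toy exponential-decay impulse responses, the random stand-in signals, and the helper names (`toy_rir`, `mix_at_snr`) are assumptions for demonstration, not the dataset's actual generation pipeline.

```python
import numpy as np
from scipy.signal import fftconvolve

SR = 8000  # wsj0-2mix / WHAM! are commonly used at 8 kHz (a 16 kHz variant also exists)

def toy_rir(rt60=0.3, sr=SR, rng=None):
    """Toy exponential-decay impulse response standing in for a simulated room RIR."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = int(rt60 * sr)
    decay = np.exp(-6.9 * np.arange(n) / n)  # roughly 60 dB of decay over rt60 seconds
    return rng.standard_normal(n) * decay

def mix_at_snr(target, interference, snr_db):
    """Scale `interference` so the mixture has the requested SNR relative to `target`."""
    p_t = np.mean(target ** 2)
    p_i = np.mean(interference ** 2) + 1e-12
    gain = np.sqrt(p_t / (p_i * 10 ** (snr_db / 10)))
    return target + gain * interference

rng = np.random.default_rng(42)
dur = 4 * SR
s1 = rng.standard_normal(dur)   # stand-in for clean speech, speaker 1
s2 = rng.standard_normal(dur)   # stand-in for clean speech, speaker 2
noise = rng.standard_normal(dur)  # stand-in for recorded ambient noise

# Reverberate each source with its own impulse response, sum, then add noise at 5 dB SNR.
r1 = fftconvolve(s1, toy_rir(rt60=0.3, rng=rng))[:dur]
r2 = fftconvolve(s2, toy_rir(rt60=0.5, rng=rng))[:dur]
mixture = mix_at_snr(r1 + r2, noise, snr_db=5.0)
```

The key point the sketch captures is that reverberation is applied per source before mixing, so each speaker's spectral structure is smeared independently, which is what makes separation harder than in the clean or noise-only conditions.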
