Universal Sound Separation

05/08/2019
by Ilya Kavalerov, et al.

Recent deep learning approaches have achieved impressive performance on speech enhancement and separation tasks. However, these approaches have not been investigated for separating mixtures of arbitrary sounds of different types, a task we refer to as universal sound separation, and it is unknown whether performance on speech tasks carries over to non-speech tasks. To study this question, we develop a universal dataset of mixtures containing arbitrary sounds, and use it to investigate the space of mask-based separation architectures, varying both the overall network architecture and the framewise analysis-synthesis basis for signal transformations. These network architectures include convolutional long short-term memory networks and time-dilated convolution stacks inspired by the recent success of time-domain enhancement networks like ConvTasNet. For the latter architecture, we also propose novel modifications that further improve separation performance. In terms of the framewise analysis-synthesis basis, we explore using either a short-time Fourier transform (STFT) or a learnable basis, as used in ConvTasNet, and for both of these bases, we examine the effect of window size. In particular, for STFTs, we find that longer windows (25-50 ms) work best for speech/non-speech separation, while shorter windows (2.5 ms) work best for arbitrary sounds. For learnable bases, shorter windows (2.5 ms) work best on all tasks. Surprisingly, for universal sound separation, STFTs outperform learnable bases. Our best methods produce an improvement in scale-invariant signal-to-distortion ratio of over 13 dB for speech/non-speech separation and close to 10 dB for universal sound separation.
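The headline results above are reported as improvements in scale-invariant signal-to-distortion ratio (SI-SDR). As a rough illustration of that metric only (not the authors' evaluation code), a minimal NumPy sketch: the estimate is projected onto the target to find the optimal scaling, and the ratio of projected-signal power to residual power is reported in dB.

```python
import numpy as np

def si_sdr(estimate, target, eps=1e-8):
    """Scale-invariant signal-to-distortion ratio (SI-SDR) in dB.

    Projects the estimate onto the target so that any overall gain
    applied to the estimate does not change the score.
    """
    estimate = np.asarray(estimate, dtype=float)
    target = np.asarray(target, dtype=float)
    # Optimal scaling factor: projection of the estimate onto the target.
    alpha = np.dot(estimate, target) / (np.dot(target, target) + eps)
    projection = alpha * target          # scaled target (the "signal" part)
    noise = estimate - projection        # residual (the "distortion" part)
    return 10.0 * np.log10(
        (np.dot(projection, projection) + eps) / (np.dot(noise, noise) + eps)
    )
```

Because of the projection step, rescaling the estimate leaves the score unchanged, which is what makes the metric robust to the arbitrary output gain of mask-based separation networks.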


