
Filterbank design for end-to-end speech separation

by Manuel Pariente et al.

Single-channel speech separation has recently made great progress thanks to learned filterbanks as used in ConvTasNet. In parallel, parameterized filterbanks have been proposed for speaker recognition, where only center frequencies and bandwidths are learned. In this work, we extend real-valued learned and parameterized filterbanks into complex-valued analytic filterbanks and define a set of corresponding representations and masking strategies. We evaluate these filterbanks on a newly released noisy speech separation dataset (WHAM!). The results show that the proposed analytic learned filterbank consistently outperforms the real-valued filterbank of ConvTasNet. We also validate the use of parameterized filterbanks and show that complex-valued representations and masks are beneficial in all conditions. Finally, we show that the STFT achieves its best performance with 2 ms windows.
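As a rough illustration of the analytic-filterbank idea (a sketch, not the paper's implementation), a real-valued filterbank can be made analytic by pairing each filter with its Hilbert transform, so that the encoder output becomes complex-valued with a well-defined magnitude and phase. The function names below are hypothetical:

```python
import numpy as np
from scipy.signal import hilbert

def analytic_filterbank(filters):
    """Extend real filters (n_filters, kernel_size) to analytic ones.

    scipy.signal.hilbert returns the analytic signal f + i*H(f),
    whose real part is the original filter.
    """
    return hilbert(filters, axis=-1)

def encode(signal, filters, stride):
    """Strided-convolution encoder, analogous to a learned-filterbank
    front end; the complex output supports complex masking."""
    analytic = analytic_filterbank(filters)
    k = analytic.shape[-1]
    n_frames = 1 + (len(signal) - k) // stride
    frames = np.stack([signal[i * stride : i * stride + k]
                       for i in range(n_frames)])
    # (n_frames, n_filters), complex-valued representation
    return frames @ analytic.T
```

A real-valued magnitude representation, as used for masking, is then simply `np.abs(encode(signal, filters, stride))`.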



