TasNet: Surpassing Ideal Time-Frequency Masking for Speech Separation

09/20/2018
by Yi Luo, et al.

Robust speech processing in multitalker acoustic environments requires automatic speech separation. While single-channel, speaker-independent speech separation methods have recently seen great progress, their accuracy remains limited, and their latency and computational cost remain high. The majority of previous methods formulate the separation problem in the time-frequency representation of the mixed signal, which has several drawbacks: the decoupling of the signal's phase and magnitude, the suboptimality of spectrogram representations for speech separation, and the long latency involved in calculating the spectrogram. To address these shortcomings, we propose the time-domain audio separation network (TasNet), a deep learning autoencoder framework for time-domain speech separation. TasNet uses a convolutional encoder to create a representation of the signal that is optimized for extracting individual speakers. Speaker extraction is achieved by applying a weighting function (mask) to the encoder output. The modified encoder representation is then inverted back to the sound waveform using a linear decoder. The masks are found using a temporal convolutional network (TCN) consisting of dilated convolutions, which allow the network to model the long-term dependencies of the speech signal. This end-to-end speech separation algorithm significantly outperforms previous time-frequency methods in separating speakers from mixed audio, even when compared with the separation accuracy achieved using the ideal time-frequency masks of the speakers. In addition, TasNet has a smaller model size and a shorter minimum latency, making it suitable for both offline and real-time speech separation applications. This study therefore represents a major step toward making speech separation practical for real-world speech processing technologies.
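The abstract describes a three-stage pipeline: a convolutional encoder, a masking network built from a stack of dilated convolutions (the TCN), and a linear decoder. The PyTorch sketch below illustrates how such a pipeline fits together; the class name, layer sizes, and the heavily simplified TCN (no residual connections, normalization, or skip paths) are illustrative assumptions, not the authors' published implementation.

```python
# Minimal, illustrative PyTorch sketch of the encoder-mask-decoder pipeline
# described in the abstract. All hyperparameters and the simplified TCN are
# assumptions for illustration, not the authors' exact model.
import torch
import torch.nn as nn

class TasNetSketch(nn.Module):
    def __init__(self, n_speakers=2, n_filters=512, kernel=16, stride=8,
                 tcn_channels=128, n_blocks=8):
        super().__init__()
        # Convolutional encoder: raw waveform -> learned representation
        self.encoder = nn.Conv1d(1, n_filters, kernel, stride=stride, bias=False)
        # Stack of 1-D convolutions with exponentially growing dilation,
        # standing in for the temporal convolutional network (TCN); the
        # growing dilation is what captures long-term dependencies
        blocks = []
        for b in range(n_blocks):
            d = 2 ** b
            blocks += [
                nn.Conv1d(n_filters if b == 0 else tcn_channels,
                          tcn_channels, 3, dilation=d, padding=d),
                nn.PReLU(),
            ]
        self.tcn = nn.Sequential(*blocks)
        # 1x1 conv producing one mask per speaker over the encoder output
        self.mask_conv = nn.Conv1d(tcn_channels, n_speakers * n_filters, 1)
        # Linear decoder: masked representation -> waveform
        self.decoder = nn.ConvTranspose1d(n_filters, 1, kernel,
                                          stride=stride, bias=False)
        self.n_speakers, self.n_filters = n_speakers, n_filters

    def forward(self, mixture):                               # (batch, samples)
        w = torch.relu(self.encoder(mixture.unsqueeze(1)))    # (B, N, T)
        masks = torch.sigmoid(self.mask_conv(self.tcn(w)))    # (B, S*N, T)
        masks = masks.view(-1, self.n_speakers, self.n_filters, w.size(-1))
        sources = masks * w.unsqueeze(1)                      # mask per speaker
        B, S, N, T = sources.shape
        # Decode each speaker's masked representation back to a waveform
        est = self.decoder(sources.reshape(B * S, N, T))      # (B*S, 1, samples)
        return est.reshape(B, S, -1)

# Example usage: one 1-second, 2-speaker mixture at 8 kHz (random input here)
model = TasNetSketch()
mixture = torch.randn(1, 8000)
estimates = model(mixture)    # -> torch.Size([1, 2, 8000])
```

The sketch returns a (batch, speakers, samples) tensor of estimated waveforms; the published model is trained end-to-end on the separated signals, which is what makes the encoder representation itself adapt to the separation task.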

Related research
- 11/01/2017 · TasNet: time-domain audio separation network for real-time, single-channel speech separation
- 02/16/2020 · Real-time binaural speech separation with preserved spatial cues
- 12/17/2019 · A Unified Framework for Speech Separation
- 07/01/2020 · Exploring the time-domain deep attractor network with two-stream architectures in a reverberant environment
- 12/16/2022 · Towards Unified All-Neural Beamforming for Time and Frequency Domain Speech Separation
- 02/02/2019 · Is CQT more suitable for monaural speech separation than STFT? An empirical study
- 04/27/2018 · Deep Speech Denoising with Vector Space Projections
