Exploring the time-domain deep attractor network with two-stream architectures in a reverberant environment

07/01/2020
by Hangting Chen, et al.

Despite the success of deep learning in speech signal processing, speaker-independent speech separation in reverberant environments remains challenging. The deep attractor network (DAN) performs speech separation with speaker attractors in the time-frequency domain. The recently proposed convolutional time-domain audio separation network (Conv-TasNet) surpasses ideal time-frequency masks on anechoic mixtures, but its architecture makes it difficult to separate mixtures with a variable number of speakers. Moreover, both models suffer performance degradation in reverberant environments. In this study, we propose a time-domain deep attractor network (TD-DAN) with two-stream convolutional networks that efficiently performs both dereverberation and separation with a variable number of speakers. The speaker encoding stream (SES) of the TD-DAN models speaker information and is explored with various waveform encoders. The speech decoding stream (SDS) accepts speaker attractors from the SES and learns to predict early reflections. Experimental results demonstrated that the TD-DAN achieved scale-invariant source-to-distortion ratio (SI-SDR) gains of 10.40/9.78 dB and 9.15/7.92 dB on the reverberant two- and three-speaker development/evaluation sets, exceeding Conv-TasNet by 1.55/1.33 dB and 0.94/1.21 dB, respectively.
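The SI-SDR metric reported above measures separation quality after projecting out any gain mismatch between the estimate and the reference. As a point of reference, here is a minimal NumPy sketch of the standard SI-SDR computation (the function name and variables are illustrative, not taken from the paper's code):

```python
import numpy as np

def si_sdr(reference, estimate, eps=1e-8):
    """Scale-invariant source-to-distortion ratio (SI-SDR) in dB.

    The estimate is projected onto the reference so that a pure
    rescaling of a perfect estimate does not change the score.
    """
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    # Optimal scaling factor: projection of estimate onto reference.
    alpha = np.dot(estimate, reference) / (np.dot(reference, reference) + eps)
    target = alpha * reference   # scaled target component
    noise = estimate - target    # residual distortion
    return 10 * np.log10((np.dot(target, target) + eps)
                         / (np.dot(noise, noise) + eps))

# A rescaled copy of the reference scores (near-)perfectly,
# while an estimate with additive noise scores lower.
x = np.sin(np.linspace(0, 100, 16000))
print(si_sdr(x, 0.5 * x))                 # very large (scale-invariant)
print(si_sdr(x, x + 0.1 * np.cos(np.linspace(0, 50, 16000))))
```

The "SI-SDR gain" quoted in the abstract is the difference between the SI-SDR of the separated output and that of the unprocessed reverberant mixture.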
