
Real-time Streaming Wave-U-Net with Temporal Convolutions for Multichannel Speech Enhancement

04/05/2021
by Vasiliy Kuzmin, et al.

In this paper, we describe our work on Task 1 of the ConferencingSpeech 2021 challenge, which sets the goal of developing a solution for real-time multi-channel speech enhancement. We propose a novel system for streaming speech enhancement based on a Wave-U-Net architecture with temporal convolutions in the encoder and decoder. We incorporate self-attention in the decoder to apply an attention mask, retrieved from the skip connection, to the features coming from the down-blocks. We also explore a history-cache mechanism that acts like the hidden state of a recurrent network and implement it in the proposed solution. This allows inference on 40 ms chunks with a real-time factor of 0.4 and no loss of precision.
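The history cache that enables chunk-wise streaming is essentially a buffer of the left context each temporal convolution needs, carried across calls instead of being recomputed from past audio. The sketch below shows one way such a cached causal convolution block could look in PyTorch; the class name, shapes, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the paper's code) of a history cache for
# streaming causal temporal convolutions, assuming a PyTorch Conv1d block.
import torch
import torch.nn as nn


class StreamingTCNBlock(nn.Module):
    """Causal temporal convolution that carries left context between chunks,
    playing a role similar to the hidden state of a recurrent network."""

    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        # Number of past frames the convolution needs as left context.
        self.context = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              dilation=dilation, padding=0)
        self.act = nn.ReLU()

    def forward(self, chunk, cache=None):
        # chunk: (batch, channels, frames) for the current hop (e.g. 40 ms).
        if cache is None:
            # First chunk: start from a zero-filled history.
            cache = chunk.new_zeros(chunk.shape[0], chunk.shape[1], self.context)
        x = torch.cat([cache, chunk], dim=-1)        # prepend cached history
        new_cache = x[..., -self.context:].detach()  # keep the tail for the next call
        return self.act(self.conv(x)), new_cache


# Streaming loop over fixed-size chunks: the cache replaces recomputation
# of past frames, so each call processes only the new chunk.
block = StreamingTCNBlock(channels=64, kernel_size=3, dilation=2)
cache = None
for chunk in torch.randn(10, 1, 64, 20):  # 10 dummy chunks of 20 frames
    out, cache = block(chunk, cache)
```

In a full Wave-U-Net each down- and up-block would carry its own cache, so a 40 ms chunk can pass through the network without reprocessing earlier audio.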


Related research

09/03/2020 · Dense CNN with Self-Attention for Time-Domain Speech Enhancement
Speech enhancement in the time domain is becoming increasingly popular i...

11/27/2018 · Improved Speech Enhancement with the Wave-U-Net
We study the use of the Wave-U-Net architecture for speech enhancement, ...

04/12/2021 · Complex Spectral Mapping With Attention Based Convolution Recurrent Neural Network for Speech Enhancement
Speech enhancement has benefited from the success of deep learning in te...

11/08/2022 · Cross-Attention is all you need: Real-Time Streaming Transformers for Personalised Speech Enhancement
Personalised speech enhancement (PSE), which extracts only the speech of...

12/07/2020 · Towards end-to-end speech enhancement with a variational U-Net architecture
In this paper, we investigate the viability of a variational U-Net archi...

11/03/2022 · Iterative autoregression: a novel trick to improve your low-latency speech enhancement model
Streaming models are an essential component of real-time speech enhancem...

06/30/2022 · GLD-Net: Improving Monaural Speech Enhancement by Learning Global and Local Dependency Features with GLD Block
For monaural speech enhancement, contextual information is important for...