A Dual-Staged Context Aggregation Method Towards Efficient End-To-End Speech Enhancement

08/18/2019
by   Kai Zhen, et al.

In speech enhancement, an end-to-end deep neural network converts a noisy speech signal directly to a clean speech signal in the time domain, without time-frequency transformation or mask estimation. However, aggregating contextual information from a high-resolution time-domain signal at an affordable model complexity remains challenging. In this paper, we propose a densely connected convolutional and recurrent network (DCCRN), a hybrid architecture that enables dual-staged temporal context aggregation. With its dense connectivity and cross-component identical shortcut, DCCRN consistently outperforms competing convolutional baselines, with an average STOI improvement of 0.23 and a PESQ improvement of 1.38 across three SNR levels. The proposed method is computationally efficient, with only 1.38 million parameters. Its generalization to unseen noise types remains decent given this low complexity, although it is relatively weaker than Wave-U-Net, which uses 7.25 times more parameters.
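The dual-staged aggregation described above — convolutional layers with dense (concatenative) skip connections for local context, followed by a recurrent stage for long-range context, bridged by an identity shortcut between the two components — can be illustrated with a minimal NumPy sketch. All layer sizes, the growth rate, and the plain-tanh recurrence here are illustrative assumptions, not the paper's actual DCCRN configuration.

```python
import numpy as np

def conv1d(x, w):
    """'Same'-padded 1-D convolution; x: (c_in, T), w: (c_out, c_in, k)."""
    c_out, c_in, k = w.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad)))
    out = np.zeros((c_out, x.shape[1]))
    for t in range(x.shape[1]):
        out[:, t] = np.tensordot(w, xp[:, t:t + k], axes=([1, 2], [0, 1]))
    return out

def dense_block(x, weights):
    """Stage 1: dense connectivity -- each conv layer sees the
    channel-wise concatenation of the input and all earlier outputs."""
    feats = [x]
    for w in weights:
        feats.append(np.tanh(conv1d(np.concatenate(feats, axis=0), w)))
    return np.concatenate(feats, axis=0)

def recurrent_stage(x, w_ih, w_hh):
    """Stage 2: a plain tanh recurrence (a simplified stand-in for the
    paper's recurrent component) aggregates long-range temporal context."""
    h = np.zeros(w_hh.shape[0])
    outs = []
    for t in range(x.shape[1]):
        h = np.tanh(w_ih @ x[:, t] + w_hh @ h)
        outs.append(h)
    return np.stack(outs, axis=1)

rng = np.random.default_rng(0)
T = 32
noisy = rng.standard_normal((1, T))        # one mono time-domain frame

# Two dense conv layers, growth rate 2, kernel size 3 (illustrative sizes).
ws = [rng.standard_normal((2, 1, 3)) * 0.1,
      rng.standard_normal((2, 3, 3)) * 0.1]
local_ctx = dense_block(noisy, ws)         # (5, T): input + 2 layers x 2 ch.

w_ih = rng.standard_normal((1, 5)) * 0.1
w_hh = rng.standard_normal((1, 1)) * 0.1
global_ctx = recurrent_stage(local_ctx, w_ih, w_hh)

# Cross-component identity shortcut: add the raw input back onto the
# recurrent stage's output before emitting the enhanced estimate.
enhanced = global_ctx + noisy
print(enhanced.shape)                      # (1, 32)
```

The dense concatenation keeps every intermediate feature map visible to later layers (cheap reuse instead of wider layers), while the shortcut lets the recurrent stage learn only a residual correction to the noisy input — two of the mechanisms the abstract credits for the model's low parameter count.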


