A Multiscale Autoencoder (MSAE) Framework for End-to-End Neural Network Speech Enhancement

09/21/2023
by Bengt J. Borgstrom, et al.

Neural network approaches to single-channel speech enhancement have received much recent attention. In particular, mask-based architectures have achieved significant performance improvements over conventional methods. This paper proposes a multiscale autoencoder (MSAE) for mask-based end-to-end neural network speech enhancement. The MSAE performs spectral decomposition of an input waveform within separate band-limited branches, each operating at a different rate and scale, to extract a sequence of multiscale embeddings. The proposed framework features intuitive parameterization of the autoencoder, including a flexible spectral band design based on the Constant-Q transform. Additionally, the MSAE is constructed entirely of differentiable operators, allowing it to be implemented within an end-to-end neural network and trained discriminatively. The MSAE draws motivation both from recent multiscale network topologies and from traditional multiresolution transforms in speech processing. Experimental results show the MSAE to provide clear performance benefits relative to conventional single-branch autoencoders. Additionally, the proposed framework is shown to outperform a variety of state-of-the-art enhancement systems, both in terms of objective speech quality metrics and automatic speech recognition accuracy.
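
To make the idea of band-limited branches operating at different rates and scales concrete, below is a minimal, illustrative PyTorch sketch of a multiscale encoder front end. It is not the paper's parameterization: the branch count, kernel sizes, strides, and filter counts are hypothetical choices meant only to mimic a Constant-Q-style design (longer analysis windows and coarser hops for lower bands), and the learned 1-D convolutions stand in for whatever band-limited analysis operators the authors actually use.

```python
# Illustrative sketch only: a multiscale encoder built from differentiable
# operators, so it can sit inside an end-to-end trainable enhancement network.
import torch
import torch.nn as nn


class MultiscaleEncoder(nn.Module):
    """Encodes a waveform with several band-limited branches, each running
    at its own rate (stride) and scale (kernel length)."""

    def __init__(self, branch_specs=((256, 128, 64), (128, 64, 64), (64, 32, 64))):
        super().__init__()
        # Each spec is (kernel_size, stride, num_filters); these values are
        # hypothetical, chosen to give longer windows / coarser frame rates
        # in some branches and shorter windows / finer rates in others.
        self.branches = nn.ModuleList([
            nn.Conv1d(1, n_filters, kernel_size=k, stride=s, padding=k // 2)
            for k, s, n_filters in branch_specs
        ])

    def forward(self, waveform):
        # waveform: (batch, 1, num_samples)
        # Returns one embedding sequence per branch; frame rates differ, so a
        # downstream mask-estimation network must handle each branch's resolution.
        return [torch.relu(branch(waveform)) for branch in self.branches]


if __name__ == "__main__":
    x = torch.randn(2, 1, 16000)       # two one-second waveforms at 16 kHz
    embeddings = MultiscaleEncoder()(x)
    for e in embeddings:
        print(e.shape)                  # each branch yields a different time resolution
```

A matching multiscale decoder (e.g., transposed convolutions with the same strides) would map masked embeddings back to the waveform domain, keeping the entire analysis-masking-synthesis chain differentiable.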


