T-GSA: Transformer with Gaussian-weighted self-attention for speech enhancement

10/13/2019
by   Jaeyoung Kim, et al.

Transformer neural networks (TNNs) have demonstrated state-of-the-art performance on many natural language processing (NLP) tasks, replacing recurrent neural networks (RNNs) such as LSTMs and GRUs. However, TNNs have not performed well in speech enhancement, whose contextual nature differs from that of NLP tasks such as machine translation. Self-attention, the core building block of the Transformer, not only enables parallelization of sequence computation but also provides a constant path length between symbols, which is essential for learning long-range dependencies. In this paper, we propose a Transformer with Gaussian-weighted self-attention (T-GSA), whose attention weights are attenuated according to the distance between the target and context symbols. The experimental results show that the proposed T-GSA significantly improves speech-enhancement performance compared to the Transformer and RNNs.
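To illustrate the idea of distance-based attenuation, below is a minimal sketch in PyTorch. It multiplies the softmax attention weights by a Gaussian of the position distance |i - j| and renormalizes. The placement of the Gaussian weighting (before vs. after the softmax) and the treatment of the width sigma (fixed here; possibly learned per head in the paper) are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def gaussian_weighted_attention(q, k, v, sigma=10.0):
    """Sketch of Gaussian-weighted self-attention.

    Attention weights are attenuated by exp(-|i - j|^2 / (2 * sigma^2)),
    so context symbols far from the target contribute less.
    q, k, v: tensors of shape (batch, seq_len, d_model).
    Note: sigma and the post-softmax placement are illustrative choices.
    """
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5            # (B, T, T) scaled dot products
    pos = torch.arange(q.size(1), device=q.device)
    dist = (pos[None, :] - pos[:, None]).abs().float()      # |i - j| position distances
    gauss = torch.exp(-dist.pow(2) / (2 * sigma ** 2))      # Gaussian decay with distance
    weights = F.softmax(scores, dim=-1) * gauss              # attenuate attention weights
    weights = weights / weights.sum(dim=-1, keepdim=True)    # renormalize rows to sum to 1
    return weights @ v
```

With a small sigma the model attends mostly to nearby frames, which matches the local contextual structure of speech; as sigma grows, the layer approaches standard (unweighted) self-attention.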


