Self-Attentional Acoustic Models

03/26/2018
by Matthias Sperber, et al.

Self-attention is a method of encoding sequences of vectors by relating these vectors to each other based on pairwise similarities. These models have recently shown promising results for modeling discrete sequences, but they are non-trivial to apply to acoustic modeling due to computational and modeling issues. In this paper, we apply self-attention to acoustic modeling, proposing several improvements to mitigate these issues: First, self-attention memory grows quadratically in the sequence length, which we address through a downsampling technique. Second, we find that previous approaches to incorporating position information into the model are unsuitable, and explore other representations and hybrid models to this end. Third, to stress the importance of local context in the acoustic signal, we propose a Gaussian biasing approach that allows explicit control over the context range. Experiments show that our model approaches a strong baseline based on LSTMs with network-in-network connections while being much faster to compute. Beyond speed, we find that interpretability is a strength of self-attentional acoustic models, and demonstrate that self-attention heads learn a linguistically plausible division of labor.
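For concreteness, here is a minimal NumPy sketch of two of the ideas named above: downsampling the input sequence so that the quadratic attention memory shrinks, and adding a Gaussian bias to the attention logits so that each frame attends mostly to nearby frames. The function names, the concatenation-based downsampling, and the fixed (rather than learned) sigma are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def downsample(X, factor=2):
        # Concatenate consecutive frames: sequence length drops by `factor`,
        # so the T x T attention matrix shrinks by factor**2 in memory.
        T, d = X.shape
        T -= T % factor                        # drop leftover frames
        return X[:T].reshape(T // factor, factor * d)

    def gaussian_biased_self_attention(X, Wq, Wk, Wv, sigma=10.0):
        # Single-head scaled dot-product self-attention with an additive
        # Gaussian bias; sigma (in frames) controls the context range.
        T, _ = X.shape
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = (Q @ K.T) / np.sqrt(K.shape[1])
        pos = np.arange(T, dtype=float)
        dist2 = (pos[:, None] - pos[None, :]) ** 2
        scores -= dist2 / (2.0 * sigma ** 2)   # penalize distant positions
        weights = np.exp(scores - scores.max(axis=1, keepdims=True))
        weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
        return weights @ V

    # Example: 100 input frames of dimension 64, downsampled by 2.
    rng = np.random.default_rng(0)
    X = downsample(rng.standard_normal((100, 64)))           # shape (50, 128)
    Wq, Wk, Wv = (0.1 * rng.standard_normal((128, 32)) for _ in range(3))
    out = gaussian_biased_self_attention(X, Wq, Wk, Wv, sigma=5.0)  # (50, 32)

A smaller sigma narrows the effective attention window; as sigma grows, the bias vanishes and the layer reverts to unrestricted self-attention.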

Related research

05/16/2020
Streaming Transformer-based Acoustic Models Using Self-attention with Augmented Memory
Transformer-based acoustic modeling has achieved great success for both...

10/23/2019
A Transformer with Interleaved Self-attention and Convolution for Hybrid Acoustic Models
Transformer with self-attention has achieved great success in the area o...

08/24/2022
Deep model with built-in self-attention alignment for acoustic echo cancellation
With recent research advances, deep learning models have become an attra...

03/29/2021
Transformer-based end-to-end speech recognition with residual Gaussian-based self-attention
Self-attention (SA), which encodes vector sequences according to their p...

10/24/2018
Modeling Localness for Self-Attention Networks
Self-attention networks have proven to be of profound value for its stre...

12/07/2021
Hybrid Self-Attention NEAT: A novel evolutionary approach to improve the NEAT algorithm
This article presents a "Hybrid Self-Attention NEAT" method to improve t...

10/07/2019
Why Attention? Analyzing and Remedying BiLSTM Deficiency in Modeling Cross-Context for NER
State-of-the-art approaches of NER have used sequence-labeling BiLSTM as...
