Attention-Based Models for Speech Recognition

06/24/2015
by Jan Chorowski, et al.

Recurrent sequence generators conditioned on input data through an attention mechanism have recently shown very good performance on a range of tasks including machine translation, handwriting synthesis and image caption generation. We extend the attention mechanism with features needed for speech recognition. We show that while an adaptation of the model used for machine translation reaches a competitive 18.7% phoneme error rate (PER) on the TIMIT phoneme recognition task, it can only be applied to utterances which are roughly as long as the ones it was trained on. We offer a qualitative explanation of this failure and propose a novel and generic method of adding location-awareness to the attention mechanism to alleviate this issue. The new method yields a model that is robust to long inputs and achieves 18% PER on single utterances and 20% on 10-times longer (repeated) utterances. Finally, we propose a change to the attention mechanism that prevents it from concentrating too much on single frames, which further reduces PER to the 17.6% level.
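The location-aware scoring the abstract describes can be sketched as follows: the previous alignment is convolved with learned filters, and the resulting location features enter the additive (tanh) attention score alongside the decoder state and each encoder state. This is a minimal numpy sketch with hypothetical shapes and parameter names (`W`, `V`, `U`, `w`, `F` are assumptions for illustration), not the paper's implementation.

```python
import numpy as np

def location_aware_attention(s, H, prev_align, W, V, U, w, F):
    """One step of hybrid location-aware attention (illustrative sketch).

    s          -- decoder state, shape (d_s,)
    H          -- encoder states, shape (T, d_h)
    prev_align -- previous alignment weights, shape (T,)
    W, V, U, w -- score parameters: (d_s, d_a), (d_h, d_a), (n_filt, d_a), (d_a,)
    F          -- 1-D convolution filters over the alignment, shape (n_filt, k)
    """
    T = H.shape[0]
    k = F.shape[1]
    pad = k // 2
    padded = np.pad(prev_align, (pad, pad))
    # Location features f_i: convolve the previous alignment with F -> (T, n_filt)
    f = np.stack([F @ padded[i:i + k] for i in range(T)])
    # Additive score e_i = w^T tanh(W s + V h_i + U f_i) for each frame i
    e = np.tanh(s @ W + H @ V + f @ U) @ w
    # Softmax over frames gives the new alignment
    a = np.exp(e - e.max())
    a /= a.sum()
    context = a @ H  # context vector fed to the decoder
    return a, context
```

Because the score sees where the model attended at the previous step, the alignment can track its position monotonically, which is what makes the model robust to utterances longer than those seen in training.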


