Streaming Attention-Based Models with Augmented Memory for End-to-End Speech Recognition

11/03/2020
by Ching-Feng Yeh, et al.

Attention-based models have been gaining popularity recently for their strong performance in fields such as machine translation and automatic speech recognition. One major challenge for attention-based models is the need for access to the full sequence, together with a computational cost that grows quadratically with the sequence length. These characteristics pose challenges, especially in low-latency scenarios, where the system is often required to be streaming. In this paper, we build a compact, streaming speech recognition system on top of the end-to-end neural transducer architecture, with attention-based modules augmented with convolution. The proposed system equips the end-to-end models with streaming capability and reduces the large footprint of the streaming attention-based model using augmented memory. On the LibriSpeech dataset, our proposed system achieves word error rates of 2.7% on test-clean and 5.8% on test-other, outperforming the streaming approaches reported so far.
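To make the streaming idea concrete, the sketch below illustrates chunk-wise attention with a small memory bank in NumPy. This is a hypothetical simplification, not the paper's actual augmented-memory transformer: each incoming chunk attends only to its own frames plus one summary vector per past chunk (mean pooling is an assumed choice of summarizer), so per-chunk cost stays bounded rather than growing quadratically with the full sequence length.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def streaming_attention_with_memory(x, chunk_size):
    """Chunk-wise attention with an augmented memory bank (illustrative only).

    x: (T, d) sequence of feature frames.
    Each chunk of frames attends to [memory bank ; current chunk], then a
    mean-pooled summary of the chunk is appended to the memory bank.
    """
    d = x.shape[1]
    memory = np.empty((0, d))  # one summary vector per processed chunk
    outputs = []
    for start in range(0, len(x), chunk_size):
        chunk = x[start:start + chunk_size]      # queries: current chunk only
        keys = np.vstack([memory, chunk])        # keys/values: memory + chunk
        scores = chunk @ keys.T / np.sqrt(d)     # scaled dot-product attention
        out = softmax(scores, axis=-1) @ keys
        outputs.append(out)
        # compress the chunk into a single memory slot (assumed: mean pooling)
        memory = np.vstack([memory, chunk.mean(axis=0, keepdims=True)])
    return np.vstack(outputs)
```

Because attention for each chunk is computed over at most `chunk_size + num_past_chunks` keys, the model never needs the full future context, which is what makes the approach compatible with low-latency streaming.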
