Residual LSTM: Design of a Deep Recurrent Architecture for Distant Speech Recognition

01/10/2017
by Jaeyoung Kim, et al.

In this paper, residual LSTM, a novel architecture for deep recurrent neural networks, is introduced. A plain LSTM has an internal memory cell that can learn long-term dependencies of sequential data, and it provides a temporal shortcut path that avoids vanishing or exploding gradients in the temporal domain. The residual LSTM adds a spatial shortcut path from lower layers for efficient training of deep networks with multiple LSTM layers. Compared with the previous work, highway LSTM, residual LSTM separates the spatial shortcut path from the temporal one by using output layers, which helps to avoid a conflict between spatial- and temporal-domain gradient flows. Furthermore, residual LSTM reuses the output projection matrix and the output gate of the LSTM to control the spatial information flow instead of adding extra gate networks, which reduces the number of network parameters by more than 10%. An experiment on distant speech recognition with the AMI SDM corpus shows that 10-layer plain and highway LSTM networks presented a 13.7% and 6.2% increase in WER, respectively, over their 3-layer baselines. On the contrary, the 10-layer residual LSTM network provided the lowest WER, 41.0%, corresponding to a 3.3% and 2.8% WER reduction over the plain and highway LSTM networks, respectively.
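To make the architectural idea concrete, below is a minimal PyTorch-style sketch of one time step of a residual LSTM layer, written from the abstract's description: the layer input (the output of the lower layer) is added as a spatial shortcut after the output projection, and the sum is scaled by the LSTM's existing output gate rather than by an extra gate network. All names, sizes, and the exact placement of the shortcut are illustrative assumptions, not code from the paper.

```python
import torch
import torch.nn as nn


class ResidualLSTMCell(nn.Module):
    """One time step of a residual LSTM layer (simplified sketch).

    The spatial shortcut (the layer input x, i.e. the lower layer's
    output) is added after the output projection, and the combined
    signal is modulated by the LSTM's own output gate, so no
    additional gate network is introduced.
    """

    def __init__(self, size: int):
        super().__init__()
        # single linear map producing the four gate pre-activations
        self.gates = nn.Linear(2 * size, 4 * size)
        # output projection matrix, reused to merge the spatial shortcut
        self.proj = nn.Linear(size, size, bias=False)

    def forward(self, x, state):
        h_prev, c_prev = state
        z = self.gates(torch.cat([x, h_prev], dim=-1))
        i, f, g, o = z.chunk(4, dim=-1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        # temporal shortcut path through the memory cell
        c = f * c_prev + i * torch.tanh(g)
        # spatial shortcut: add the layer input x after the projection,
        # then let the existing output gate o scale the combined signal
        h = o * (self.proj(torch.tanh(c)) + x)
        return h, (h, c)


# illustrative usage with arbitrary sizes
cell = ResidualLSTMCell(size=64)
x = torch.randn(8, 64)                 # batch of 8 input frames
h0 = c0 = torch.zeros(8, 64)
y, (h1, c1) = cell(x, (h0, c0))
```

In this sketch the shortcut requires the projection output and the layer input to have the same width; deeper stacks would chain such cells so that each layer's input doubles as its spatial shortcut, which is one reading of how the abstract avoids extra gate parameters.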


