h-detach: Modifying the LSTM Gradient Towards Better Optimization

10/06/2018
by Devansh Arpit, et al.

Recurrent neural networks are known for their notorious exploding and vanishing gradient problem (EVGP). This problem becomes more evident in tasks where the information needed to solve them correctly exists over long time scales, because EVGP prevents important gradient components from being back-propagated adequately over a large number of steps. We introduce a simple stochastic algorithm (h-detach) that is specific to LSTM optimization and targeted at addressing this problem. Specifically, we show that when the LSTM weights are large, the gradient components through the linear path (cell state) in the LSTM computational graph get suppressed. Based on the hypothesis that these components carry information about long-term dependencies (which we support empirically), their suppression can prevent LSTMs from capturing such dependencies. Our algorithm prevents gradients flowing through this path from being suppressed, thus allowing the LSTM to capture such dependencies better. We show significant convergence and generalization improvements using our algorithm on various benchmark datasets.
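The mechanism behind h-detach can be sketched in a few lines: during backpropagation, the gradient path through the hidden state h is stochastically blocked, while the linear cell-state path is left intact. The sketch below is a hypothetical simplification, not the paper's implementation: it uses a toy *linear* LSTM-like recurrence (real gates use sigmoids and tanh) and a minimal micrograd-style scalar autodiff class; the names `Value`, `lstm_like_step`, and `maybe_detach` are all invented for illustration.

```python
import random

class Value:
    """Minimal micrograd-style scalar autodiff node, just enough to
    illustrate the detach mechanism (a toy, not the paper's code)."""
    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backprop = lambda: None

    @staticmethod
    def _wrap(other):
        return other if isinstance(other, Value) else Value(other)

    def __add__(self, other):
        other = self._wrap(other)
        out = Value(self.data + other.data, (self, other))
        def _backprop():
            self.grad += out.grad
            other.grad += out.grad
        out._backprop = _backprop
        return out

    def __mul__(self, other):
        other = self._wrap(other)
        out = Value(self.data * other.data, (self, other))
        def _backprop():
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backprop = _backprop
        return out

    def detach(self):
        # Same forward value, no parents: backprop stops here.
        return Value(self.data)

    def backward(self):
        topo, seen = [], set()
        def build(v):
            if id(v) not in seen:
                seen.add(id(v))
                for p in v._parents:
                    build(p)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backprop()

def lstm_like_step(h_prev, c_prev, x, w, detach_h):
    """One step of a toy linear LSTM-like recurrence: the cell state c
    follows a linear path, the hidden state h feeds back into a gate."""
    h_in = h_prev.detach() if detach_h else h_prev   # <-- h-detach
    gate = w * h_in                                  # hidden-state path
    c = c_prev * 0.9 + gate * x                      # cell-state path kept
    h = c * 1.0
    return h, c

# The paper applies the detach stochastically (with some probability p
# per backward pass), e.g.:
def maybe_detach(h, p, rng=random):
    return h.detach() if rng.random() < p else h
```

Running three steps from h = c = 1.0 with w = 0.5 and x = 1.0, the gradient reaching the initial cell state is exactly 0.9**3 when the hidden state is always detached (the pure cell-state path survives), whereas without detaching extra gradient arrives through the hidden-state feedback. In the full LSTM the detach is applied at random per backward pass, so the hidden-state path still receives gradient on average while the cell-state path is never blocked.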


Related research

RotLSTM: Rotating Memories in Recurrent Neural Networks (05/01/2021)
Long Short-Term Memory (LSTM) units have the ability to memorise and use...

Can recurrent neural networks warp time? (03/23/2018)
Successful recurrent models such as long short-term memories (LSTMs) and...

Reducing state updates via Gaussian-gated LSTMs (01/22/2019)
Recurrent neural networks can be difficult to train on long sequence dat...

Learning Long Term Dependencies via Fourier Recurrent Units (03/17/2018)
It is a known fact that training recurrent neural networks for tasks tha...

On orthogonality and learning recurrent networks with long term dependencies (01/31/2017)
It is well known that it is challenging to train deep neural networks an...

Fast Saturating Gate for Learning Long Time Scales with Recurrent Neural Networks (10/04/2022)
Gate functions in recurrent models, such as an LSTM and GRU, play a cent...

PredRNN++: Towards A Resolution of the Deep-in-Time Dilemma in Spatiotemporal Predictive Learning (04/17/2018)
We present PredRNN++, an improved recurrent network for video predictive...
