Working Memory Connections for LSTM

by Federico Landi, et al.

Recurrent Neural Networks with Long Short-Term Memory (LSTM) make use of gating mechanisms to mitigate exploding and vanishing gradients when learning long-term dependencies. For this reason, LSTMs and other gated RNNs are widely adopted, being the de facto standard for many sequence modeling tasks. Although the memory cell inside the LSTM contains essential information, it is not allowed to influence the gating mechanism directly. In this work, we improve the gate potential by including information coming from the internal cell state. The proposed modification, named Working Memory Connection, consists of adding a learnable nonlinear projection of the cell content into the network gates. This modification can fit into the classical LSTM gates without any assumption on the underlying task, and is particularly effective when dealing with longer sequences. Previous research efforts in this direction, which go back to the early 2000s, could not bring a consistent improvement over vanilla LSTM. As part of this paper, we identify a key issue tied to previous connections that heavily limits their effectiveness, hence preventing a successful integration of the knowledge coming from the internal cell state. We show through extensive experimental evaluation that Working Memory Connections consistently improve the performance of LSTMs on a variety of tasks. Numerical results suggest that the cell state contains useful information that is worth including in the gate structure.
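To make the abstract's description concrete, the following is a minimal NumPy sketch of one LSTM step in which each gate's pre-activation receives an extra learnable nonlinear projection of the cell state. The exact placement (input and forget gates reading the previous cell state, the output gate reading the updated one, following the classical peephole convention) and the choice of `tanh` as the nonlinearity are assumptions for illustration; the weight names (`Wci`, `Wcf`, `Wco`) are hypothetical and not taken from the paper's notation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_wmc_step(x, h_prev, c_prev, params):
    """One LSTM step with a Working-Memory-style connection (sketch).

    Each gate pre-activation is augmented with a learnable nonlinear
    projection of the cell state, tanh(W_c @ c), so that the cell
    content can directly influence the gating mechanism.
    """
    Wx, Wh, b = params["Wx"], params["Wh"], params["b"]
    Wci, Wcf, Wco = params["Wci"], params["Wcf"], params["Wco"]

    z = Wx @ x + Wh @ h_prev + b        # stacked gate pre-activations
    zi, zf, zg, zo = np.split(z, 4)

    i = sigmoid(zi + np.tanh(Wci @ c_prev))  # input gate sees old cell state
    f = sigmoid(zf + np.tanh(Wcf @ c_prev))  # forget gate sees old cell state
    g = np.tanh(zg)                          # candidate cell update
    c = f * c_prev + i * g                   # new cell state
    o = sigmoid(zo + np.tanh(Wco @ c))       # output gate sees new cell state
    h = o * np.tanh(c)                       # new hidden state
    return h, c
```

A classical peephole connection would instead add a diagonal, linear term `w_c * c`; the sketch above replaces it with a full learnable projection passed through a nonlinearity, which is the kind of modification the abstract describes.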

