Persistent Hidden States and Nonlinear Transformation for Long Short-Term Memory

06/22/2018
by Heeyoul Choi, et al.

Recurrent neural networks (RNNs) have drawn much attention thanks to their success in applications such as speech recognition and neural machine translation. Long short-term memory (LSTM) is one of the most popular RNN units in deep learning. An LSTM unit transforms the input and the previous hidden state into the next state through an affine transformation, element-wise multiplications, and nonlinear activation functions, which produces a good data representation for a given task. The affine transformation, however, includes rotation and reflection, which change the semantic or syntactic information carried by each dimension of the hidden state. Since a model interprets the output sequence of the LSTM over the whole input sequence, each dimension of the state should carry the same type of semantic or syntactic information regardless of its position in the sequence. In this paper, we propose a simple variant of the LSTM unit, the persistent recurrent unit (PRU), in which each dimension of the hidden state keeps persistent information across time, so that the state space retains the same meaning over the whole sequence. In addition, to improve the nonlinear transformation power, we add a feedforward layer to the PRU structure. We evaluate the proposed methods on three different tasks, and the results confirm that they outperform the conventional LSTM.
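
The full gating equations of the PRU are given in the paper itself; as a rough illustration of the two ideas stated above (per-dimension persistent hidden-state updates and an added feedforward layer), a minimal PyTorch-style sketch might look as follows. The class name PRUCell, the diagonal recurrent weights, the gate layout, and the placement of the feedforward layer on the input are assumptions made for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn


class PRUCell(nn.Module):
    """Sketch of a persistent-recurrent-unit-style cell (illustrative only).

    Idea from the abstract: the previous hidden state is updated element-wise,
    with no affine transform that rotates or reflects (mixes) its dimensions,
    so each dimension keeps the same kind of information across time. An extra
    feedforward layer adds nonlinear transformation power (its placement on the
    input here is an assumption, not the paper's definition).
    """

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        # Input-driven part of the forget gate, input gate, and candidate state.
        self.w_x = nn.Linear(input_size, 3 * hidden_size)
        # Element-wise (diagonal) recurrent weights: no mixing across dimensions.
        self.w_h = nn.Parameter(torch.zeros(3 * hidden_size))
        # Added feedforward layer (assumed placement).
        self.ff = nn.Sequential(nn.Linear(input_size, input_size), nn.Tanh())

    def forward(self, x, h_prev):
        # x: (batch, input_size), h_prev: (batch, hidden_size)
        x = self.ff(x)
        z = self.w_x(x) + self.w_h * h_prev.repeat(1, 3)
        f, i, g = z.chunk(3, dim=-1)
        f, i, g = torch.sigmoid(f), torch.sigmoid(i), torch.tanh(g)
        # Persistent update: h_prev is only scaled per dimension, never rotated.
        return f * h_prev + i * g


# Minimal usage example: unroll the cell over a short random sequence.
cell = PRUCell(input_size=16, hidden_size=32)
h = torch.zeros(4, 32)
for t in range(10):
    h = cell(torch.randn(4, 16), h)
```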
