Persistent Hidden States and Nonlinear Transformation for Long Short-Term Memory
Recurrent neural networks (RNNs) have drawn considerable attention for their success in applications such as speech recognition and neural machine translation. Long short-term memory (LSTM) is one of the most popular RNN units in deep learning. An LSTM transforms the input and the previous hidden state into the next state through an affine transformation, element-wise multiplications, and a nonlinear activation function, producing a useful data representation for a given task. The affine transformation, however, includes rotation and reflection, which change the semantic or syntactic information carried by each dimension of the hidden state. Since a model interprets the LSTM's output sequence over the whole input sequence, each dimension of the state should carry the same type of semantic or syntactic information regardless of its position in the sequence. In this paper, we propose a simple variant of the LSTM unit, the persistent recurrent unit (PRU), in which each dimension of the hidden state keeps persistent information across time, so that the representation space retains the same meaning over the whole sequence. In addition, to improve the nonlinear transformation power, we add a feedforward layer to the PRU structure. We evaluate the proposed methods on three different tasks, and the results confirm that they outperform the conventional LSTM.
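To make the idea concrete, below is a minimal, hypothetical sketch of a persistent-style recurrent cell in PyTorch. The abstract does not give the PRU equations, so this sketch only illustrates the stated principle: the recurrent path is kept element-wise (no affine mixing of hidden dimensions across time, hence no rotation or reflection of the state space), and a small feedforward layer is added on top for extra nonlinear transformation power. The class name, gating form, and parameter choices are assumptions for illustration, not the paper's actual formulation.

```python
import torch
import torch.nn as nn


class PersistentRecurrentUnitSketch(nn.Module):
    """Hypothetical sketch of a persistent recurrent unit (PRU)-style cell.

    Assumption: each hidden dimension is updated only from its own previous
    value and the current input (diagonal recurrence), so the meaning of each
    dimension persists across time. A feedforward layer on the output restores
    nonlinear transformation power.
    """

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        # Input-to-hidden affine maps (mixing input dimensions is fine).
        self.candidate = nn.Linear(input_size, hidden_size)
        self.gate = nn.Linear(input_size, hidden_size)
        # Element-wise (diagonal) recurrent weights: no rotation/reflection
        # of the hidden state, each dimension interacts only with itself.
        self.u_candidate = nn.Parameter(torch.zeros(hidden_size))
        self.u_gate = nn.Parameter(torch.zeros(hidden_size))
        # Extra feedforward layer for nonlinear transformation power.
        self.ff = nn.Sequential(nn.Linear(hidden_size, hidden_size), nn.Tanh())

    def forward(self, x_t: torch.Tensor, h_prev: torch.Tensor):
        # Per-dimension gate deciding how much of the previous state to keep.
        z = torch.sigmoid(self.gate(x_t) + self.u_gate * h_prev)
        # Candidate update, also with element-wise recurrence only.
        h_tilde = torch.tanh(self.candidate(x_t) + self.u_candidate * h_prev)
        # Convex combination per dimension keeps each dimension's meaning.
        h_t = z * h_prev + (1.0 - z) * h_tilde
        # Nonlinear feedforward output computed from the persistent state.
        return self.ff(h_t), h_t


if __name__ == "__main__":
    # Toy usage: step through a short random sequence.
    cell = PersistentRecurrentUnitSketch(input_size=8, hidden_size=16)
    h = torch.zeros(1, 16)
    for _ in range(5):
        out, h = cell(torch.randn(1, 8), h)
    print(out.shape)  # torch.Size([1, 16])
```

In contrast to a standard LSTM, where the previous hidden state is multiplied by full recurrent weight matrices (an affine map that can rotate or reflect the state space), the diagonal recurrence above keeps each dimension aligned with itself across time steps, which is the property the abstract argues for.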