Fast-Slow Recurrent Neural Networks

May 24, 2017 · Asier Mujika et al. · ETH Zurich

Processing sequential data of variable length is a major challenge in a wide range of applications, such as speech recognition, language modeling, generative image modeling and machine translation. Here, we address this challenge by proposing a novel recurrent neural network (RNN) architecture, the Fast-Slow RNN (FS-RNN). The FS-RNN incorporates the strengths of both multiscale RNNs and deep transition RNNs as it processes sequential data on different timescales and learns complex transition functions from one time step to the next. We evaluate the FS-RNN on two character level language modeling data sets, Penn Treebank and Hutter Prize Wikipedia, where we improve state of the art results to 1.19 and 1.25 bits-per-character (BPC), respectively. In addition, an ensemble of two FS-RNNs achieves 1.20 BPC on Hutter Prize Wikipedia outperforming the best known compression algorithm with respect to the BPC measure. We also present an empirical investigation of the learning and network dynamics of the FS-RNN, which explains the improved performance compared to other RNN architectures. Our approach is general as any kind of RNN cell is a possible building block for the FS-RNN architecture, and thus can be flexibly applied to different tasks.


1 Introduction

Processing, modeling and predicting sequential data of variable length is a major challenge in the field of machine learning. In recent years, recurrent neural networks (RNNs) rumelhart1988learning ; robinson1987utility ; werbos1988generalization ; williams1989complexity have been the most popular tool to approach this challenge. RNNs have been successfully applied to improve state of the art results in complex tasks like language modeling and speech recognition. A popular variant of the RNN is the long short-term memory (LSTM) hochreiter1997long , which was proposed to address the vanishing gradient problem hochreiter1991untersuchungen ; bengio1994learning ; hochreiter1998vanishing . LSTMs maintain constant error flow and are therefore better suited than standard RNNs to learning long-term dependencies.

Our work contributes to the ongoing debate on how to interconnect several RNN cells with the goals of promoting the learning of long-term dependencies, favoring efficient hierarchical representations of information, exploiting the computational advantages of deep over shallow networks, and increasing the computational efficiency of training and testing. In deep RNN architectures, RNNs or LSTMs are stacked layer-wise on top of each other el1995hierarchical ; jaeger2007discovering ; graves2013sequences . The additional layers enable the network to learn complex input-output relations and encourage an efficient hierarchical representation of information. In multiscale RNN architectures schmidhuber1992learning ; el1995hierarchical ; koutnik2014clockwork ; chung2016multiscale , the operation on different timescales is enforced by updating the higher layers less frequently, which further encourages an efficient hierarchical representation of information. The slower update rate of higher layers leads to computationally efficient implementations and gives rise to short gradient paths that favor the learning of long-term dependencies. In deep transition RNN architectures, intermediate sequentially connected layers are interposed between two consecutive hidden states in order to increase the depth of the transition function from one time step to the next, as for example in deep transition networks pascanu2013construct or Recurrent Highway Networks (RHN) zilly2016recurrent . The intermediate layers enable the network to learn complex non-linear transition functions. Thus, the model exploits the fact that deep models can represent some functions exponentially more efficiently than shallow models bengio2009learning . We interpret these networks as shallow networks that share the hidden state, rather than as a single deep network. Despite being equivalent in practice, this interpretation makes it trivial to convert any RNN cell to a deep RNN by connecting the cells sequentially, see Figure 1(b).

Here, we propose the Fast-Slow RNN (FS-RNN) architecture, a novel way of interconnecting RNN cells that combines the advantages of multiscale RNNs and deep transition RNNs. In its simplest form, the architecture consists of two sequentially connected, fast-operating RNN cells in the lower hierarchical layer and a slow-operating RNN cell in the higher hierarchical layer, see Figure 1 and Section 3. We evaluate the FS-RNN on two standard character level language modeling data sets, namely Penn Treebank and Hutter Prize Wikipedia. Additionally, following pascanu2013construct , we present an empirical analysis that reveals advantages of the FS-RNN architecture over other RNN architectures.

The main contributions of this paper are:

  • We propose the FS-RNN as a novel RNN architecture.

  • We improve state of the art results on the Penn Treebank and Hutter Prize Wikipedia data sets.

  • We surpass the BPC performance of the best known text compression algorithm evaluated on Hutter Prize Wikipedia by using an ensemble of two FS-RNNs.

  • We show empirically that the FS-RNN incorporates strengths of both multiscale RNNs and deep transition RNNs, as it stores long-term dependencies efficiently and it adapts quickly to unexpected input.

  • We provide our code at the following URL: https://github.com/amujika/Fast-Slow-LSTM.

2 Related work

In the following, we review the work that relates to our approach in more detail. First, we focus on deep transition RNNs and multiscale RNNs since these two architectures are the main sources of inspiration for the FS-RNN architecture. Then, we discuss how our approach differs from these two architectures. Finally, we review other approaches that address the issue of learning long-term dependencies when processing sequential data.

Pascanu et al. pascanu2013construct investigated how an RNN can be converted into a deep RNN. In standard RNNs, the transition function from one hidden state to the next is shallow, that is, the function can be written as one linear transformation followed by a point-wise non-linearity. The authors added intermediate layers to increase the depth of the transition function, and they found empirically that such deeper architectures boost performance. Since deeper architectures are more difficult to train, they equipped the network with skip connections, which give rise to shorter gradient paths (DT(S)-RNN, see pascanu2013construct ). Following a similar line of research, Zilly et al. zilly2016recurrent further increased the transition depth between two consecutive hidden states. They used highway layers srivastava2015highway to address the issue of training deep architectures. The resulting RHN zilly2016recurrent achieved state of the art results on the Penn Treebank and Hutter Prize Wikipedia data sets. Furthermore, a vague similarity to deep transition networks can be seen in adaptive computation graves2016adaptive , where an LSTM cell learns how many times it should update its state after receiving the input before producing the next output.

Multiscale RNNs are obtained by stacking multiple RNNs with decreasing update frequencies on top of each other. Early attempts proposed such architectures for sequential data compression schmidhuber1992learning , where the higher layer is only updated in case of prediction errors of the lower layer, and for sequence classification el1995hierarchical , where the higher layers are updated with a fixed smaller frequency. More recently, Koutnik et al. koutnik2014clockwork proposed the Clockwork RNN, in which the hidden units are divided into several modules, each of which is only updated every T_i-th time step, where the periods T_i increase from module to module. General advantages of this multiscale RNN architecture are improved computational efficiency, efficient propagation of long-term dependencies and flexibility in allocating resources (units) to the hierarchical layers. Multiscale RNNs have been applied to speech recognition in bahdanau2016timepooling , where the slower operating RNN pools information over time and the timescales are fixed hyperparameters as in Clockwork RNNs. In sordoni2015hierarchical , multiscale RNNs are applied to make context-aware query suggestions. In this case, explicit hierarchical boundary information is provided. Chung et al. chung2016multiscale presented a hierarchical multiscale RNN (HM-RNN) that discovers the latent hierarchical structure of the sequence without explicitly given boundary information. If a parametrized boundary detector indicates the end of a segment, then a summarized representation of the segment is fed to the upper layer and the state of the lower layer is reset chung2016multiscale .

Our FS-RNN architecture borrows elements from both deep transition RNNs and multiscale RNNs. The major difference to multiscale RNNs is that our lower hierarchical layer zooms in in time, that is, it operates faster than the timescale that is naturally given by the input sequence. The major difference to deep transition RNNs is our approach to facilitating long-term dependencies, namely, we employ an RNN operating on a slow timescale.

Many approaches aim at solving the problem of learning long-term dependencies in sequential data. A very popular one is to use external memory cells that can be accessed and modified by the network, see Neural Turing Machines graves2014neural , Memory Networks weston2014memory and the Differentiable Neural Computer graves2016hybrid . Other approaches focus on different optimization techniques rather than network architectures. One attempt is Hessian Free optimization martens2011learning , a second order training method that achieved good results on RNNs. The use of different optimization techniques can improve learning in a wide range of RNN architectures, and therefore the FS-RNN may also benefit from them.

3 Fast-Slow RNN

We propose the FS-RNN architecture, see Figure 1. It consists of k sequentially connected RNN cells F_1, ..., F_k on the lower hierarchical layer and one RNN cell S on the higher hierarchical layer. We call F_1, ..., F_k the Fast cells, S the Slow cell, and the corresponding hierarchical layers the Fast and Slow layer, respectively. S receives input from F_1 and feeds its state to F_2. F_1 receives the sequential input data x_t, and F_k outputs the predicted probability distribution p_t of the next element of the sequence.

Figure 1: Diagram of a Fast-Slow RNN with k Fast cells. Observe that only the second Fast cell receives the input from the Slow cell.

Intuitively, the Fast cells are able to learn complex transition functions from one time step to the next one. The Slow cell gives rise to shorter gradient paths between sequential inputs that are distant in time, and thus, it facilitates the learning of long-term dependencies. Therefore, the FS-RNN architecture incorporates advantages of deep transition RNNs and of multiscale RNNs, see Section 2.

Since any kind of RNN cell can be used as a building block for the FS-RNN architecture, we state the formal update rules of the FS-RNN for arbitrary RNN cells. We define an RNN cell Q to be a differentiable function f^Q(h, x) that maps a hidden state h and an additional input x to a new hidden state. Note that x can be input data or input from a cell in a higher or lower hierarchical layer. If a cell does not receive an additional input, then we omit x. The following equations define the FS-RNN architecture for arbitrary RNN cells F_1, ..., F_k and S:

h_t^{F_1} = f^{F_1}(h_{t-1}^{F_k}, x_t)
h_t^{S}   = f^{S}(h_{t-1}^{S}, h_t^{F_1})
h_t^{F_2} = f^{F_2}(h_t^{F_1}, h_t^{S})
h_t^{F_i} = f^{F_i}(h_t^{F_{i-1}})   for 3 ≤ i ≤ k

The output y_t is computed as an affine transformation of h_t^{F_k}. It is possible to extend the FS-RNN architecture in order to further facilitate the learning of long-term dependencies by adding hierarchical layers, each of which operates on a slower timescale than the ones below, resembling Clockwork RNNs koutnik2014clockwork . However, for the tasks considered in Section 4, we observed that this led to overfitting the training data even when applying regularization techniques, and it reduced the performance at test time. Therefore, we do not further investigate this extension of the model in this paper, even though it might be beneficial for other tasks or larger data sets.
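For concreteness, the following Python sketch mirrors the update rules above for one time step, treating each cell as an abstract callable that maps a state and an extra input to a new state. The function and variable names (fs_rnn_step, fast_cells, slow_cell) are illustrative and do not come from the authors' released code.

```python
# Minimal sketch of one FS-RNN time step for arbitrary RNN cells.
# `fast_cells` is a list [F1, ..., Fk] with k >= 2 and `slow_cell` is S;
# each cell is a callable (state, extra_input) -> new_state, mirroring the
# update rules above. Names are illustrative, not the authors' code.

def fs_rnn_step(fast_cells, slow_cell, fast_state, slow_state, x_t):
    # F1 consumes the sequential input x_t together with the Fast state
    # carried over from the previous time step (the state of Fk at t-1).
    h_fast = fast_cells[0](fast_state, x_t)
    # S is updated once per time step from F1's new state.
    slow_state = slow_cell(slow_state, h_fast)
    # F2 receives the Slow cell's state as its extra input.
    h_fast = fast_cells[1](h_fast, slow_state)
    # F3, ..., Fk are applied sequentially without an extra input.
    for cell in fast_cells[2:]:
        h_fast = cell(h_fast, None)
    # h_fast is now the state of Fk; the output y_t is an affine map of it.
    return h_fast, slow_state
```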

In the experiments in Section 4, we use LSTM cells as building blocks for the FS-RNN architecture. For completeness, we state the update function f^Q for an LSTM cell Q. The state of an LSTM is a pair (h_t, c_t), consisting of the hidden state and the cell state. The function f^Q maps the previous state (h_{t-1}, c_{t-1}) and the input x_t to the next state according to

f_t = σ(W_f x_t + U_f h_{t-1} + b_f)
i_t = σ(W_i x_t + U_i h_{t-1} + b_i)
o_t = σ(W_o x_t + U_o h_{t-1} + b_o)
c̃_t = tanh(W_c x_t + U_c h_{t-1} + b_c)
c_t = f_t ⊙ c_{t-1} + i_t ⊙ c̃_t
h_t = o_t ⊙ tanh(c_t)

where f_t, i_t and o_t are commonly referred to as the forget, input and output gates, and c̃_t are the new candidate cell states. Moreover, the W, U and b are the learnable parameters, σ denotes the sigmoid function, and ⊙ denotes element-wise multiplication.
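As an illustration, the plain numpy sketch below implements the LSTM update above. It omits the layer normalization used in the experiments, and the parameter layout (dictionaries keyed by gate name) is an assumption made for readability rather than the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(h_prev, c_prev, x_t, W, U, b):
    """One LSTM update following the equations above.

    W, U, b are dicts keyed by 'f', 'i', 'o', 'c' holding the learnable
    parameters. Plain numpy sketch for illustration only.
    """
    f_t = sigmoid(W['f'] @ x_t + U['f'] @ h_prev + b['f'])      # forget gate
    i_t = sigmoid(W['i'] @ x_t + U['i'] @ h_prev + b['i'])      # input gate
    o_t = sigmoid(W['o'] @ x_t + U['o'] @ h_prev + b['o'])      # output gate
    c_tilde = np.tanh(W['c'] @ x_t + U['c'] @ h_prev + b['c'])  # candidate cell state
    c_t = f_t * c_prev + i_t * c_tilde                          # new cell state
    h_t = o_t * np.tanh(c_t)                                    # new hidden state
    return h_t, c_t
```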

4 Experiments

For the experiments, we consider the Fast-Slow LSTM (FS-LSTM), that is, an FS-RNN in which each RNN cell is an LSTM cell. The FS-LSTM is evaluated on two character level language modeling data sets, namely Penn Treebank and Hutter Prize Wikipedia, the latter of which will be referred to as enwik8 in this section. The task consists of predicting the probability distribution of the next character given all the previous ones. In Section 4.1, we compare the performance of the FS-LSTM with other approaches. In Section 4.2, we empirically compare the network dynamics of different RNN architectures and show that the FS-LSTM combines the benefits of both deep transition RNNs and multiscale RNNs.

4.1 Performance on Penn Treebank and Hutter Prize Wikipedia

The FS-LSTM achieves 1.19 BPC and 1.25 BPC on the Penn Treebank and enwik8 data sets, respectively. These results are compared to other approaches in Table 1 and Table 2 (the baseline LSTM results without citations are taken from zoph2016NASCell for Penn Treebank and from ha2016hyper for enwik8). For the Penn Treebank, the FS-LSTM outperforms all previous approaches with significantly fewer parameters than the previous top approaches. We did not observe any improvement when increasing the model size, probably due to overfitting. On the enwik8 data set, the FS-LSTM surpasses all other neural approaches. Following graves2014neural , we compare the results with text compression algorithms using the BPC measure. An ensemble of two FS-LSTM models (1.198 BPC) outperforms cmix (1.225 BPC) cmix , the current best text compression algorithm on the Large Text Compression Benchmark largetextcompressionbenchmark . However, a fair comparison is difficult. Compression algorithms are usually evaluated by the final size of the compressed data set, including the size of the decompressor. For character prediction models, the network size is usually not taken into account and the performance is measured on the test set. We remark that, as the FS-LSTM is evaluated on the test set, it should achieve similar performance on any part of the English Wikipedia.

The FS-LSTM-2 and FS-LSTM-4 models consist of two and four cells in the Fast layer, respectively. The FS-LSTM-4 model outperforms the FS-LSTM-2 model, but its processing time for one time step is higher than that of the FS-LSTM-2. Adding more cells to the Fast layer could further improve the performance, as observed for RHN zilly2016recurrent , but it would increase the processing time, because the cell states are computed sequentially. Therefore, we did not further increase the number of Fast cells.

The model is trained to minimize the cross-entropy loss between the predictions and the training data. Formally, the loss function is defined as L = −(1/n) Σ_{i=1}^{n} log p_θ(x_i | x_{i−1}, ..., x_1), where p_θ(x_i | x_{i−1}, ..., x_1) is the probability that a model with parameters θ assigns to the next character x_i given all the previous ones. The model is evaluated by the BPC measure, which uses the binary logarithm instead of the natural logarithm in the loss function. All the hyperparameters used for the experiments are summarized in Table 3. We regularize the FS-LSTM with dropout srivastava2014dropout . In each time step, a different dropout mask is applied to the non-recurrent connections zaremba14rnndropout , and Zoneout krueger16zoneout is applied to the recurrent connections. The network is trained with minibatch gradient descent using the Adam optimizer kingma14adam , and gradients whose norm exceeds a fixed threshold are rescaled to that threshold. Truncated backpropagation through time (TBPTT) rumelhart1988learning ; elman90entropy is used to approximate the gradients, and the final hidden state is passed on to the next sequence. The learning rate is divided by a fixed factor for the final epochs in the Penn Treebank experiments, and it is divided by a fixed factor whenever the validation error does not improve in two consecutive epochs in the enwik8 experiments. The forget bias of every LSTM cell is initialized to a constant, and all weight matrices are initialized to orthogonal matrices. Layer normalization ba16layernorm is applied to the cell and to each gate separately. The network with the smallest validation error is evaluated on the test set. The two data sets that we use for evaluation are:
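Since the loss is computed with the natural logarithm but the evaluation metric uses the binary logarithm, BPC is obtained by a simple change of base. The helper below is a hypothetical illustration of this conversion, not part of the released code.

```python
import math

def bits_per_character(total_nll_nats, num_characters):
    """Convert a summed negative log-likelihood (natural log) into BPC.

    BPC is the average negative log-probability of the next character taken
    with the binary logarithm, so we divide by log(2) to change the base.
    Hypothetical helper for illustration.
    """
    return total_nll_nats / (num_characters * math.log(2))

# Example: an average cross-entropy of 0.83 nats per character corresponds
# to roughly 0.83 / ln(2) ≈ 1.20 BPC.
print(bits_per_character(0.83 * 1000, 1000))
```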

Penn Treebank marcus1993ptb

The dataset is a collection of Wall Street Journal articles written in English. It only contains 10,000 different words, all written in lower-case, and rare words are replaced with "<unk>". Following mikolov2012ptb_division , we split the data set into train, validation and test sets consisting of 5.1M, 400K and 450K characters, respectively.

Hutter Prize Wikipedia hutter2012enwik8

This dataset is also known as enwik8 and it consists of "raw" Wikipedia data, that is, English articles, tables, XML data, hyperlinks and special characters. The data set contains 100M characters with 205 unique tokens. Following chung2015gated , we split the data set into train, validation and test sets consisting of 90M, 5M and 5M characters, respectively.

Model BPC Param Count
Zoneout LSTM krueger16zoneout 1.27 -
2-Layers LSTM 1.243 6.6M
HM-LSTM chung2016multiscale 1.24 -
HyperLSTM - small ha2016hyper 1.233 5.1M
HyperLSTM ha2016hyper 1.219 14.4M
NASCell - small zoph2016NASCell 1.228 6.6M
NASCell zoph2016NASCell 1.214 16.3M
FS-LSTM-2 (ours) 1.190 7.2M
FS-LSTM-4 (ours) 1.193 6.5M
Table 1: BPC on Penn Treebank
Model BPC Param Count
LSTM 1.461 18M
Layer Norm LSTM 1.402 14M
HyperLSTM ha2016hyper 1.340 27M
HM-LSTM chung2016multiscale 1.32 35M
Surprisal-driven Zoneout rocki2016surprisal 1.31 64M
RHN - depth 5 zilly2016recurrent 1.31 23M
RHN - depth 10 zilly2016recurrent 1.30 21M
Large RHN - depth 10 zilly2016recurrent 1.27 46M
FS-LSTM-2 (ours) 1.290 27M
FS-LSTM-4 (ours) 1.277 27M
Large FS-LSTM-4 (ours) 1.245 47M
2× Large FS-LSTM-4 (ours) 1.198 2×47M
cmix v13 cmix 1.225 -
Table 2: BPC on enwik8
                          Penn Treebank            enwik8
                       FS-LSTM-2  FS-LSTM-4  FS-LSTM-2  FS-LSTM-4  Large FS-LSTM-4
Non-recurrent dropout     0.35       0.35       0.2        0.2        0.25
Cell zoneout              0.5        0.5        0.3        0.3        0.3
Hidden zoneout            0.1        0.1        0.05       0.05       0.05
Fast cell size            700        500        900        730        1200
Slow cell size            400        400        1500       1500       1500
TBPTT length              150        150        150        150        100
Minibatch size            128        128        128        128        128
Input embedding size      128        128        256        256        256
Initial learning rate     0.002      0.002      0.001      0.001      0.001
Epochs                    200        200        35         35         50


Table 3: Hyperparameters for the character-level language model experiments.

4.2 Comparison of network dynamics of different architectures

We compare the FS-LSTM architecture with the stacked-LSTM and the sequential-LSTM architectures, depicted in Figure 2, by investigating the network dynamics. In order to conduct a fair comparison, we chose the number of parameters to be roughly the same for all three models. The FS-LSTM consists of one Slow and four Fast LSTM cells. The stacked-LSTM consists of five LSTM cells stacked on top of each other, which will be referred to as Stacked-1, ..., Stacked-5, from bottom to top. The sequential-LSTM consists of five sequentially connected LSTM cells. All three models require roughly the same time to process one time step. The models are trained on enwik8 with minibatch gradient descent using the Adam optimizer kingma14adam without any regularization, except that layer normalization ba16layernorm is applied to the cell states of the LSTMs. The hyperparameters are not optimized for any of the three models.

The experiments suggest that the FS-LSTM architecture favors the learning of long-term dependencies (Figure 3), enforces hidden cell states to change at different rates (Figure 4) and facilitates a quick adaptation to unexpected inputs (Figure 5). Moreover, in this setting the FS-LSTM achieves a lower BPC than both the stacked-LSTM and the sequential-LSTM.

(a) Stacked

(b) Sequential
Figure 2: Diagram of (a) the stacked-LSTM and (b) the sequential-LSTM with five cells each.

In Figure 3, we assess the ability to capture long-term dependencies by investigating the effect of the cell state on the loss at later time points, following krueger16zoneout . We measure the effect of the cell state c_t at time t on the loss L_{t+k} at time t + k by the gradient ∂L_{t+k}/∂c_t. This gradient is the largest for the Slow LSTM, and it is small and steeply decaying as k increases for the Fast LSTM. Evidently, the Slow cell captures long-term dependencies, whereas the Fast cell only stores short-term information. In the stacked-LSTM, the gradients decrease from the top layer to the bottom layer, which can be explained by the vanishing gradient problem. The small, steeply decaying gradients of the sequential-LSTM indicate that it is less capable of learning long-term dependencies than the other two models.

Figure 3: Long-term effect of the cell states on the loss function. The average value of ‖∂L_{t+k}/∂c_t‖, which measures the effect of the cell state at time t on the loss function at time t + k, is plotted against k for the different layers in the three RNN architectures. For the sequential-LSTM, only the first cell is considered.
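The gradient ∂L_{t+k}/∂c_t used in this analysis can be obtained with any automatic differentiation framework; the paper does not prescribe one for this measurement. The PyTorch sketch below is an illustrative assumption that only shows the mechanics of extracting the gradient norm, given that the cell state tensor was retained while unrolling the network.

```python
import torch

def cell_state_influence(loss_at_t_plus_k, cell_state_at_t):
    """Norm of d(loss_{t+k}) / d(c_t), the quantity plotted in Figure 3.

    Both arguments must belong to the same autograd graph, i.e. the cell
    state tensor was kept around while unrolling the network over the k
    intermediate time steps. Hypothetical PyTorch sketch for illustration.
    """
    (grad,) = torch.autograd.grad(loss_at_t_plus_k, cell_state_at_t,
                                  retain_graph=True)
    return grad.norm().item()
```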

Figure 4 gives further evidence that the FS-LSTM stores long-term dependencies efficiently in the Slow LSTM cell. It shows that among all the layers of the three RNN architectures, the cell states of the Slow LSTM change the least from one time step to the next. The highest change is observed for the cells of the sequential model followed by the Fast LSTM cells.

Figure 4: Rate of change of the cell states from one time step to the next. We plot the average of |c_t^i − c_{t−1}^i| over all units i and all time steps, where c_t^i is the value of the i-th unit of the cell state at time step t, for the different layers of the three RNN architectures. For the sequential-LSTM, only the first cell is considered.
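The quantity plotted in Figure 4 can be computed directly from recorded cell states. The sketch below shows one possible way to do so, assuming the states of a single layer are stored as a (time, units) numpy array; it is an illustration, not the authors' analysis code.

```python
import numpy as np

def mean_cell_state_change(cell_states):
    """Average absolute per-unit change of the cell state between steps.

    `cell_states` is an array of shape (T, n) holding the cell state of one
    layer at every time step; this mirrors the quantity plotted in Figure 4.
    Illustrative numpy sketch.
    """
    diffs = np.abs(np.diff(cell_states, axis=0))  # |c_t^i - c_{t-1}^i|
    return diffs.mean()
```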

In Figure 5, we investigate whether the FS-LSTM quickly adapts to unexpected characters, that is, whether it performs well on the subsequent ones. In text modeling, the initial character of a word has the highest entropy, whereas later characters in a word are usually less ambiguous elman90entropy . Since the first character of a word is the most difficult one to predict, the performance at the following positions should reflect the ability to adapt to unexpected inputs. While the prediction qualities at the first position are rather close for all three models, the FS-LSTM outperforms the stacked-LSTM and sequential-LSTM significantly on subsequent positions. It is possible that new information is incorporated quickly in the Fast layer, because it only stores short-term information, see Figure 3.

Figure 5: Bits-per-character at each character position. The left panel shows the average bits-per-character at each character position in the test set. The right panel shows the average relative loss with respect to the stacked-LSTM at each character position. For this figure, a word is considered to be a sequence of lower-case letters of a minimum length in-between two spaces.
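One possible way to compute the per-position statistics of Figure 5 is sketched below. It is a simplified illustration that identifies words as runs of lower-case letters and ignores the minimum-length filter mentioned in the caption; the function name and arguments are hypothetical.

```python
import numpy as np

def bpc_by_position_in_word(text, bpc_per_char, max_len=10):
    """Average BPC at each character position inside a word.

    `bpc_per_char` holds the model's per-character BPC for every character
    of `text`. Simplified, illustrative helper, not the authors' code.
    """
    sums = np.zeros(max_len)
    counts = np.zeros(max_len)
    pos = -1
    for ch, bpc in zip(text, bpc_per_char):
        if ch.islower():
            pos += 1
            if pos < max_len:
                sums[pos] += bpc
                counts[pos] += 1
        else:
            pos = -1  # a non-letter character ends the current word
    return sums / np.maximum(counts, 1)
```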

5 Conclusion

In this paper, we have proposed the FS-RNN architecture. To the best of our knowledge, it is the first architecture that incorporates ideas of both multiscale and deep transition RNNs. The FS-RNN architecture improved state of the art results on character level language modeling evaluated on the Penn Treebank and Hutter Prize Wikipedia data sets. An ensemble of two FS-RNNs achieves better BPC performance than the best known compression algorithm. Further experiments provided evidence that the Slow cell enables the network to learn long-term dependencies, while the Fast cells enable the network to quickly adapt to unexpected inputs and to learn complex transition functions from one time step to the next.

Our FS-RNN architecture provides a general framework for connecting RNN cells, as any type of RNN cell can be used as a building block. Thus, there is a lot of flexibility in applying the architecture to different tasks. For instance, using RNN cells with good long-term memory, such as EURNNs jing2016EUNN or NARX RNNs lin1996learning ; narx2017 , for the Slow cell might boost the long-term memory of the FS-RNN architecture. Therefore, the FS-RNN architecture might improve performance in many different applications.

Acknowledgments

We thank Julian Zilly for many helpful discussions.

References