1 Introduction
Processing, modeling and predicting sequential data of variable length is a major challenge in the field of machine learning. In recent years, recurrent neural networks (RNNs) rumelhart1988learning ; robinson1987utility ; werbos1988generalization ; williams1989complexity have been the most popular tool to approach this challenge. RNNs have been successfully applied to improve state-of-the-art results in complex tasks like language modeling and speech recognition. A popular variation of RNNs are long short-term memories (LSTMs) hochreiter1997long , which were proposed to address the vanishing gradient problem hochreiter1991untersuchungen ; bengio1994learning ; hochreiter1998vanishing . LSTMs maintain constant error flow and thus are more suitable for learning long-term dependencies than standard RNNs.
Our work contributes to the ongoing debate on how to interconnect several RNN cells with the goals of promoting the learning of long-term dependencies, favoring efficient hierarchical representations of information, exploiting the computational advantages of deep over shallow networks, and increasing the computational efficiency of training and testing.
In deep RNN architectures, RNNs or LSTMs are stacked layer-wise on top of each other el1995hierarchical ; jaeger2007discovering ; graves2013sequences . The additional layers enable the network to learn complex input-to-output relations and encourage an efficient hierarchical representation of information. In multiscale RNN architectures schmidhuber1992learning ; el1995hierarchical ; koutnik2014clockwork ; chung2016multiscale , operation on different timescales is enforced by updating the higher layers less frequently, which further encourages an efficient hierarchical representation of information. The slower update rate of higher layers leads to computationally efficient implementations and gives rise to short gradient paths that favor the learning of long-term dependencies. In deep transition RNN architectures, intermediate sequentially connected layers are interposed between two consecutive hidden states in order to increase the depth of the transition function from one time step to the next, as for example in deep transition networks pascanu2013construct or Recurrent Highway Networks (RHNs) zilly2016recurrent . The intermediate layers enable the network to learn complex non-linear transition functions. Thus, the model exploits the fact that deep models can represent some functions exponentially more efficiently than shallow models bengio2009learning .
We interpret these networks as shallow networks that share the hidden state, rather than as a single deep network. While the two interpretations are equivalent in practice, this view makes it trivial to convert any RNN cell into a deep RNN by connecting the cells sequentially, see Figure 1(b).
Here, we propose the Fast-Slow RNN (FS-RNN) architecture, a novel way of interconnecting RNN cells that combines advantages of multiscale RNNs and deep transition RNNs. In its simplest form, the architecture consists of two sequentially connected, fast operating RNN cells in the lower hierarchical layer and a slow operating RNN cell in the higher hierarchical layer, see Figure 1 and Section 3. We evaluate the FS-RNN on two standard character-level language modeling data sets, namely Penn Treebank and Hutter Prize Wikipedia. Additionally, following pascanu2013construct , we present an empirical analysis that reveals advantages of the FS-RNN architecture over other RNN architectures.
The main contributions of this paper are:

We propose the FS-RNN as a novel RNN architecture.

We improve state-of-the-art results on the Penn Treebank and Hutter Prize Wikipedia data sets.

We surpass the BPC performance of the best known text compression algorithm evaluated on Hutter Prize Wikipedia by using an ensemble of two FS-RNNs.

We show empirically that the FS-RNN incorporates strengths of both multiscale RNNs and deep transition RNNs, as it stores long-term dependencies efficiently and adapts quickly to unexpected input.

We provide our code at the following URL: https://github.com/amujika/FastSlowLSTM.
2 Related work
In the following, we review the work related to our approach in more detail. First, we focus on deep transition RNNs and multiscale RNNs, since these two architectures are the main sources of inspiration for the FS-RNN architecture. Then, we discuss how our approach differs from them. Finally, we review other approaches that address the learning of long-term dependencies in sequential data.
Pascanu et al. pascanu2013construct investigated how an RNN can be converted into a deep RNN. In standard RNNs, the transition function from one hidden state to the next is shallow, that is, the function can be written as one linear transformation followed by a pointwise non-linearity. The authors added intermediate layers to increase the depth of the transition function, and they found empirically that such deeper architectures boost performance. Since deeper architectures are more difficult to train, they equipped the network with skip connections, which give rise to shorter gradient paths (DT(S)-RNN, see pascanu2013construct ). Following a similar line of research, Zilly et al. zilly2016recurrent further increased the transition depth between two consecutive hidden states. They used highway layers srivastava2015highway to address the issue of training deep architectures. The resulting RHN zilly2016recurrent achieved state-of-the-art results on the Penn Treebank and Hutter Prize Wikipedia data sets. Furthermore, a vague similarity to deep transition networks can be seen in adaptive computation time graves2016adaptive , where an LSTM cell learns how many times it should update its state after receiving the input before producing the next output.
Multiscale RNNs are obtained by stacking multiple RNNs with decreasing order of update frequencies on top of each other. Early attempts proposed such architectures for sequential data compression schmidhuber1992learning , where the higher layer is only updated in case of prediction errors of the lower layer, and for sequence classification el1995hierarchical , where the higher layers are updated with a fixed smaller frequency. More recently, Koutnik et al. koutnik2014clockwork proposed the Clockwork RNN, in which the hidden units are divided into several modules, of which the $i$-th module is only updated every $2^{i-1}$-th time step. General advantages of this multiscale RNN architecture are improved computational efficiency, efficient propagation of long-term dependencies and flexibility in allocating resources (units) to the hierarchical layers. Multiscale RNNs have been applied to speech recognition in bahdanau2016timepooling , where the slower operating RNN pools information over time and the timescales are fixed hyperparameters as in Clockwork RNNs. In sordoni2015hierarchical , multiscale RNNs are applied to make context-aware query suggestions. In this case, explicit hierarchical boundary information is provided. Chung et al. chung2016multiscale presented a hierarchical multiscale RNN (HM-RNN) that discovers the latent hierarchical structure of the sequence without explicitly given boundary information. If a parametrized boundary detector indicates the end of a segment, then a summarized representation of the segment is fed to the upper layer and the state of the lower layer is reset chung2016multiscale .
Our FS-RNN architecture borrows elements from both deep transition RNNs and multiscale RNNs. The major difference to multiscale RNNs is that our lower hierarchical layer zooms in in time, that is, it operates faster than the timescale naturally given by the input sequence. The major difference to deep transition RNNs is our approach to facilitating long-term dependencies, namely, we employ an RNN operating on a slow timescale.
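For intuition, a Clockwork-style update schedule of the kind described above can be sketched in a few lines. The function name is ours, and the exponential periods $2^i$ illustrate the scheme rather than any particular implementation:

```python
def active_modules(t, n_modules):
    """Return the indices of the modules that update at time step t.

    In a Clockwork-style schedule, module i is assigned period 2**i,
    so higher modules are updated exponentially less often.
    """
    return [i for i in range(n_modules) if t % (2 ** i) == 0]

# At t = 0 every module updates; afterwards higher modules fire rarely.
schedule = [active_modules(t, 4) for t in range(8)]
```

The short gradient paths and the reduced computation in the rarely updated higher modules follow directly from this schedule.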
Many approaches aim at solving the problem of learning long-term dependencies in sequential data. A very popular one is to use external memory cells that can be accessed and modified by the network, see Neural Turing Machines graves2014neural , Memory Networks weston2014memory and the Differentiable Neural Computer graves2016hybrid . Other approaches focus on different optimization techniques rather than network architectures. One attempt is Hessian-free optimization martens2011learning , a second-order training method that achieved good results on RNNs. The use of different optimization techniques can improve learning in a wide range of RNN architectures, and therefore the FS-RNN may also benefit from it.

3 Fast-Slow RNN
We propose the FS-RNN architecture, see Figure 1. It consists of $k$ sequentially connected RNN cells $F_1, \dots, F_k$ on the lower hierarchical layer and one RNN cell $S$ on the higher hierarchical layer. We call $F_1, \dots, F_k$ the Fast cells, $S$ the Slow cell and the corresponding hierarchical layers the Fast and Slow layer, respectively. $S$ receives input from $F_1$ and feeds its state to $F_2$. $F_1$ receives the sequential input data $x_t$, and $F_k$ outputs the predicted probability distribution $p_t$ of the next element of the sequence.
Intuitively, the Fast cells are able to learn complex transition functions from one time step to the next one. The Slow cell gives rise to shorter gradient paths between sequential inputs that are distant in time, and thus it facilitates the learning of long-term dependencies. Therefore, the FS-RNN architecture incorporates advantages of deep transition RNNs and of multiscale RNNs, see Section 2.
Since any kind of RNN cell can be used as a building block for the FS-RNN architecture, we state the formal update rules of the FS-RNN for arbitrary RNN cells. We define an RNN cell $Q$ to be a differentiable function $f^Q(h, x)$ that maps a hidden state $h$ and an additional input $x$ to a new hidden state. Note that $x$ can be input data or input from a cell in a higher or lower hierarchical layer. If a cell does not receive an additional input, then we will omit $x$. The following equations define the FS-RNN architecture for arbitrary RNN cells $F_1, \dots, F_k$ and $S$:

$$h_t^{F_1} = f^{F_1}\bigl(h_{t-1}^{F_k}, x_t\bigr)$$
$$h_t^{S} = f^{S}\bigl(h_{t-1}^{S}, h_t^{F_1}\bigr)$$
$$h_t^{F_2} = f^{F_2}\bigl(h_t^{F_1}, h_t^{S}\bigr)$$
$$h_t^{F_i} = f^{F_i}\bigl(h_t^{F_{i-1}}\bigr) \quad \text{for } 3 \le i \le k$$
The output $y_t$ is computed as an affine transformation of $h_t^{F_k}$. It is possible to extend the FS-RNN architecture in order to further facilitate the learning of long-term dependencies by adding hierarchical layers, each of which operates on a slower timescale than the ones below, resembling Clockwork RNNs koutnik2014clockwork . However, for the tasks considered in Section 4, we observed that this led to overfitting the training data even when applying regularization techniques, and it reduced the performance at test time. Therefore, we do not further investigate this extension of the model in this paper, even though it might be beneficial for other tasks or larger data sets.
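As an illustration of this update scheme, the following sketch wires together $k$ Fast cells and one Slow cell. Plain tanh cells stand in for arbitrary RNN cells; all names and sizes are ours (the experiments below use LSTM cells instead):

```python
import numpy as np

def make_tanh_cell(hidden_size, input_size, rng):
    """A minimal RNN cell: maps a hidden state (and optional input) to a new
    hidden state. Any cell with this interface, e.g. an LSTM, could be used."""
    W = 0.1 * rng.standard_normal((hidden_size, hidden_size))
    U = 0.1 * rng.standard_normal((hidden_size, input_size))
    b = np.zeros(hidden_size)

    def step(h, x=None):
        pre = W @ h + b
        if x is not None:
            pre = pre + U @ x
        return np.tanh(pre)

    return step

def fs_rnn_step(fast_cells, slow_cell, h_fast, h_slow, x):
    """One FS-RNN time step (k >= 2): F1 sees the previous Fast state and the
    input, S sees F1's new state, F2 sees S's new state, F3..Fk are chained."""
    h = fast_cells[0](h_fast, x)      # h_t^{F_1}
    h_slow = slow_cell(h_slow, h)     # h_t^{S}
    h = fast_cells[1](h, h_slow)      # h_t^{F_2}
    for cell in fast_cells[2:]:       # h_t^{F_i} for 3 <= i <= k
        h = cell(h)
    return h, h_slow                  # h feeds the affine output layer

rng = np.random.default_rng(0)
H, D, k = 16, 8, 4
fast = [make_tanh_cell(H, D, rng), make_tanh_cell(H, H, rng)]
fast += [make_tanh_cell(H, 1, rng) for _ in range(k - 2)]
slow = make_tanh_cell(H, H, rng)

h_f, h_s = np.zeros(H), np.zeros(H)
for t in range(5):                    # run a few steps on random input
    h_f, h_s = fs_rnn_step(fast, slow, h_f, h_s, rng.standard_normal(D))
```

Note how the Slow state is read and written only once per time step, while the Fast states are transformed $k$ times, mirroring the short gradient path through $S$.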
In the experiments in Section 4, we use LSTM cells as building blocks for the FS-RNN architecture. For completeness, we state the update function for an LSTM cell $Q$. The state of an LSTM is a pair $(h_t, c_t)$, consisting of the hidden state and the cell state. The function $f^Q$ maps the previous state $(h_{t-1}, c_{t-1})$ and the input $x_t$ to the next state according to

$$f_t = \sigma\bigl(W_f h_{t-1} + U_f x_t + b_f\bigr)$$
$$i_t = \sigma\bigl(W_i h_{t-1} + U_i x_t + b_i\bigr)$$
$$o_t = \sigma\bigl(W_o h_{t-1} + U_o x_t + b_o\bigr)$$
$$\tilde{c}_t = \tanh\bigl(W_c h_{t-1} + U_c x_t + b_c\bigr)$$
$$c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t$$
$$h_t = o_t \odot \tanh(c_t)$$

where $f_t$, $i_t$ and $o_t$ are commonly referred to as forget, input and output gates, and $\tilde{c}_t$ is the new candidate cell state. Moreover, $W$, $U$ and $b$ are the learnable parameters, $\sigma$ denotes the sigmoid function, and $\odot$ denotes the element-wise multiplication.

4 Experiments
For the experiments, we consider the Fast-Slow LSTM (FS-LSTM), that is, an FS-RNN where each RNN cell is an LSTM cell. The FS-LSTM is evaluated on two character-level language modeling data sets, namely Penn Treebank and Hutter Prize Wikipedia, the latter of which will be referred to as enwik8 in this section. The task consists of predicting the probability distribution of the next character given all the previous ones. In Section 4.1, we compare the performance of the FS-LSTM with other approaches. In Section 4.2, we empirically compare the network dynamics of different RNN architectures and show that the FS-LSTM combines the benefits of both deep transition RNNs and multiscale RNNs.
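For reference, a single LSTM step of the kind used for each Fast and Slow cell might look as follows in numpy. The stacked weight layout and names are our own, not the paper's implementation (which additionally applies layer normalization and zoneout, see below):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(h_prev, c_prev, x, W, U, b):
    """One LSTM update: map the previous state (h, c) and the input x to the
    next state (h, c). W, U and b stack the parameters of the forget, input
    and output gates and of the candidate cell state, in that order."""
    H = h_prev.shape[0]
    z = W @ h_prev + U @ x + b            # all four pre-activations at once
    f = sigmoid(z[0 * H:1 * H])           # forget gate
    i = sigmoid(z[1 * H:2 * H])           # input gate
    o = sigmoid(z[2 * H:3 * H])           # output gate
    c_tilde = np.tanh(z[3 * H:4 * H])     # candidate cell state
    c = f * c_prev + i * c_tilde          # gated cell-state update
    h = o * np.tanh(c)                    # new hidden state
    return h, c

rng = np.random.default_rng(1)
H, D = 8, 4
W = 0.1 * rng.standard_normal((4 * H, H))
U = 0.1 * rng.standard_normal((4 * H, D))
b = np.zeros(4 * H)
h, c = lstm_step(np.zeros(H), np.zeros(H), rng.standard_normal(D), W, U, b)
```

The additive update of $c$ through the forget gate is what gives the LSTM its near-constant error flow.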
4.1 Performance on Penn Treebank and Hutter Prize Wikipedia
The FS-LSTM achieves 1.190 BPC and 1.245 BPC on the Penn Treebank and enwik8 data sets, respectively. These results are compared to other approaches in Table 1 and Table 2 (the baseline LSTM results without citations are taken from zoph2016NASCell for Penn Treebank and from ha2016hyper for enwik8). On Penn Treebank, the FS-LSTM outperforms all previous approaches with significantly fewer parameters than the previous top approaches. We did not observe any improvement when increasing the model size, probably due to overfitting. On the enwik8 data set, the FS-LSTM surpasses all other neural approaches. Following graves2014neural , we compare the results with text compression algorithms using the BPC measure. An ensemble of two FS-LSTM models (1.198 BPC) outperforms cmix (1.225 BPC) cmix , the current best text compression algorithm on the Large Text Compression Benchmark largetextcompressionbenchmark . However, a fair comparison is difficult. Compression algorithms are usually evaluated by the final size of the compressed data set, including the size of the decompressor. For character prediction models, the network size is usually not taken into account, and the performance is measured on the test set. We remark that, as the FS-LSTM is evaluated on the test set, it should achieve similar performance on any part of the English Wikipedia.
The FS-LSTM-2 and FS-LSTM-4 models consist of two and four cells in the Fast layer, respectively. The FS-LSTM-4 model outperforms the FS-LSTM-2 model, but its processing time for one time step is higher than that of the FS-LSTM-2. Adding more cells to the Fast layer could further improve the performance, as observed for RHNs zilly2016recurrent , but it would increase the processing time, because the cell states are computed sequentially. Therefore, we did not further increase the number of Fast cells.
The model is trained to minimize the cross-entropy loss between the predictions and the training data. Formally, the loss function is defined as
$$\mathcal{L}(\theta) = -\frac{1}{n} \sum_{t=1}^{n} \log p_\theta\bigl(x_t \mid x_1, \dots, x_{t-1}\bigr) ,$$
where $p_\theta(x_t \mid x_1, \dots, x_{t-1})$ is the probability that a model with parameters $\theta$ assigns to the next character $x_t$ given all the previous ones. The model is evaluated by the BPC measure, which uses the binary logarithm instead of the natural logarithm in the loss function. All the hyperparameters used for the experiments are summarized in Table 3. We regularize the FS-LSTM with dropout srivastava2014dropout . In each time step, a different dropout mask is applied for the non-recurrent connections zaremba14rnndropout , and zoneout krueger16zoneout is applied for the recurrent connections. The network is trained with minibatch gradient descent using the Adam optimizer kingma14adam . Gradients whose norm exceeds a threshold are rescaled to that threshold. Truncated backpropagation through time (TBPTT) rumelhart1988learning ; elman90entropy is used to approximate the gradients, and the final hidden state is passed on to the next sequence. The learning rate is divided by a fixed factor for the last epochs in the Penn Treebank experiments, and it is divided by a fixed factor whenever the validation error does not improve in two consecutive epochs in the enwik8 experiments. The forget bias of every LSTM cell is initialized to a fixed value, and all weight matrices are initialized as orthogonal matrices. Layer normalization ba16layernorm is applied to the cell and to each gate separately. The network with the smallest validation error is evaluated on the test set. The two data sets that we use for evaluation are the following.

Penn Treebank marcus1993ptb
The data set is a collection of Wall Street Journal articles written in English. It only contains 10K different words, all written in lowercase, and rare words are replaced with "<unk>". Following mikolov2012ptb_division , we split the data set into train, validation and test sets consisting of 5.1M, 400K and 450K characters, respectively.
Hutter Prize Wikipedia hutter2012enwik8
This data set is also known as enwik8, and it consists of "raw" Wikipedia data, that is, English articles, tables, XML data, hyperlinks and special characters. The data set contains 100M characters with 205 unique tokens. Following chung2015gated , we split the data set into train, validation and test sets consisting of 90M, 5M and 5M characters, respectively.
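The BPC numbers reported in Tables 1 and 2 are simply the per-character cross-entropy in base 2. A small helper (ours, for illustration) makes the relationship explicit:

```python
import numpy as np

def bits_per_character(correct_char_probs):
    """BPC: average negative binary log-probability that the model assigned
    to the correct character at each position of the evaluated sequence."""
    p = np.asarray(correct_char_probs, dtype=float)
    return float(np.mean(-np.log2(p)))

# A model that always assigns probability 0.5 to the right character scores
# exactly 1 BPC; uniform guessing over 205 tokens scores log2(205) ~ 7.68 BPC.
uniform_bpc = bits_per_character([1.0 / 205] * 10)
```

Lower BPC therefore means that, on average, fewer bits would be needed to encode each character of the test set under the model's predictive distribution, which is what makes the comparison with compression algorithms meaningful.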
Model                               BPC     Param Count
Zoneout LSTM krueger16zoneout       1.27    -
2-Layer LSTM                        1.243   6.6M
HM-LSTM chung2016multiscale         1.24    -
HyperLSTM (small) ha2016hyper       1.233   5.1M
HyperLSTM ha2016hyper               1.219   14.4M
NASCell (small) zoph2016NASCell     1.228   6.6M
NASCell zoph2016NASCell             1.214   16.3M
FS-LSTM-2 (ours)                    1.190   7.2M
FS-LSTM-4 (ours)                    1.193   6.5M
Model                                         BPC     Param Count
LSTM                                          1.461   18M
Layer Norm LSTM                               1.402   14M
HyperLSTM ha2016hyper                         1.340   27M
HM-LSTM chung2016multiscale                   1.32    35M
Surprisal-driven Zoneout rocki2016surprisal   1.31    64M
RHN, depth 5 zilly2016recurrent               1.31    23M
RHN, depth 10 zilly2016recurrent              1.30    21M
Large RHN, depth 10 zilly2016recurrent        1.27    46M
FS-LSTM-2 (ours)                              1.290   27M
FS-LSTM-4 (ours)                              1.277   27M
Large FS-LSTM-4 (ours)                        1.245   47M
2x Large FS-LSTM-4 (ours, ensemble)           1.198   2x 47M
cmix v13 cmix                                 1.225   -
                           Penn Treebank            enwik8
                        FS-LSTM-2  FS-LSTM-4  FS-LSTM-2  FS-LSTM-4  Large FS-LSTM-4
Non-recurrent dropout   0.35       0.35       0.2        0.2        0.25
Cell zoneout            0.5        0.5        0.3        0.3        0.3
Hidden zoneout          0.1        0.1        0.05       0.05       0.05
Fast cell size          700        500        900        730        1200
Slow cell size          400        400        1500       1500       1500
TBPTT length            150        150        150        150        100
Minibatch size          128        128        128        128        128
Input embedding size    128        128        256        256        256
Initial learning rate   0.002      0.002      0.001      0.001      0.001
Epochs                  200        200        35         35         50
4.2 Comparison of network dynamics of different architectures
We compare the FS-LSTM architecture with the stacked-LSTM and the sequential-LSTM architectures, depicted in Figure 2, by investigating the network dynamics. In order to conduct a fair comparison, we chose the number of parameters to be roughly the same for all three models. The FS-LSTM consists of one Slow and four Fast LSTM cells. The stacked-LSTM consists of five LSTM cells stacked on top of each other, which will be referred to as Stacked-1, ..., Stacked-5, from bottom to top. The sequential-LSTM consists of five sequentially connected LSTM cells. All three models require roughly the same time to process one time step. The models are trained with minibatch gradient descent using the Adam optimizer kingma14adam without any regularization, but layer normalization ba16layernorm is applied to the cell states of the LSTMs. The hyperparameters were not optimized for any of the three models.
The experiments suggest that the FS-LSTM architecture favors the learning of long-term dependencies (Figure 3), enforces hidden cell states to change at different rates (Figure 4) and facilitates a quick adaptation to unexpected inputs (Figure 5). Moreover, the FS-LSTM achieves a lower BPC than both the stacked-LSTM and the sequential-LSTM.
In Figure 3, we assess the ability to capture long-term dependencies by investigating the effect of the cell state on the loss at later time points, following krueger16zoneout . We measure the effect of the cell state at time $t$ on the loss at time $t+\tau$ by the norm of the gradient $\partial L_{t+\tau} / \partial c_t$. This gradient is the largest for the Slow LSTM, and it is small and steeply decaying as $\tau$ increases for the Fast LSTM. Evidently, the Slow cell captures long-term dependencies, whereas the Fast cell only stores short-term information. In the stacked-LSTM, the gradients decrease from the top layer to the bottom layer, which can be explained by the vanishing gradient problem. The small, steeply decaying gradients of the sequential-LSTM indicate that it is less capable of learning long-term dependencies than the other two models.
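The qualitative pattern in Figure 3 can be illustrated on a toy scalar recurrence (our example, not the paper's experiment): if $h_t = w\,h_{t-1}$, then $\partial h_{t+\tau} / \partial h_t = w^{\tau}$, so gradients through a leaky, fast-changing cell decay geometrically, while a near-identity, slowly changing cell preserves them over long horizons:

```python
def state_gradient(w, tau):
    """For the scalar linear recurrence h_t = w * h_{t-1}, the gradient of
    h_{t+tau} with respect to h_t is w ** tau."""
    return w ** tau

# A leaky cell (w = 0.5) forgets almost immediately, while a near-identity
# cell (w = 0.99) still passes a sizable gradient after 100 steps.
fast_grads = [state_gradient(0.5, tau) for tau in (1, 10, 100)]
slow_grads = [state_gradient(0.99, tau) for tau in (1, 10, 100)]
```

This is the mechanism by which the slowly updated Slow cell provides short effective gradient paths between temporally distant inputs.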
Figure 4 gives further evidence that the FS-LSTM stores long-term dependencies efficiently in the Slow LSTM cell. It shows that, among all the layers of the three RNN architectures, the cell states of the Slow LSTM change the least from one time step to the next. The largest change is observed for the cells of the sequential model, followed by the Fast LSTM cells.
In Figure 5, we investigate whether the FS-LSTM quickly adapts to unexpected characters, that is, whether it performs well on the subsequent ones. In text modeling, the initial character of a word has the highest entropy, whereas later characters in a word are usually less ambiguous elman90entropy . Since the first character of a word is the most difficult one to predict, the performance at the following positions should reflect the ability to adapt to unexpected inputs. While the prediction qualities at the first position are rather close for all three models, the FS-LSTM outperforms the stacked-LSTM and the sequential-LSTM significantly at subsequent positions. It is plausible that new information is incorporated quickly in the Fast layer because it only stores short-term information, see Figure 3.
5 Conclusion
In this paper, we have proposed the FS-RNN architecture. To the best of our knowledge, it is the first architecture that incorporates ideas of both multiscale and deep transition RNNs. The FS-RNN architecture improved state-of-the-art results on character-level language modeling evaluated on the Penn Treebank and Hutter Prize Wikipedia data sets. An ensemble of two FS-RNNs achieves better BPC performance than the best known compression algorithm. Further experiments provided evidence that the Slow cell enables the network to learn long-term dependencies, while the Fast cells enable the network to quickly adapt to unexpected inputs and to learn complex transition functions from one time step to the next.
Our FS-RNN architecture provides a general framework for connecting RNN cells, as any type of RNN cell can be used as a building block. Thus, there is a lot of flexibility in applying the architecture to different tasks. For instance, using RNN cells with good long-term memory, like EURNNs jing2016EUNN or NARX RNNs lin1996learning ; narx2017 , for the Slow cell might boost the long-term memory of the FS-RNN architecture. Therefore, the FS-RNN architecture might improve performance in many different applications.
Acknowledgments
We thank Julian Zilly for many helpful discussions.
References
 [1] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
 [2] David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, and Chris Pal. Zoneout: Regularizing RNNs by randomly preserving hidden activations. arXiv preprint arXiv:1606.01305, 2016.
 [3] Dzmitry Bahdanau, Jan Chorowski, Dmitriy Serdyuk, Philemon Brakel, and Yoshua Bengio. End-to-end attention-based large vocabulary speech recognition. In Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on, 2016.
 [4] Yoshua Bengio et al. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1–127, 2009.
 [5] Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157–166, 1994.
 [6] Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks. arXiv preprint arXiv:1609.01704, 2016.
 [7] Junyoung Chung, Caglar Gülçehre, Kyunghyun Cho, and Yoshua Bengio. Gated feedback recurrent neural networks. In ICML, pages 2067–2075, 2015.
 [8] Robert DiPietro, Nassir Navab, and Gregory D. Hager. Revisiting NARX recurrent neural networks for long-term dependencies, 2017.
 [9] Salah El Hihi and Yoshua Bengio. Hierarchical recurrent neural networks for long-term dependencies. In NIPS, volume 409, 1995.
 [10] Jeffrey L. Elman. Finding structure in time. Cognitive Science, 14:179–211, 1990.
 [11] Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
 [12] Alex Graves. Adaptive computation time for recurrent neural networks. arXiv preprint arXiv:1603.08983, 2016.
 [13] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014.
 [14] Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka GrabskaBarwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471–476, 2016.
 [15] David Ha, Andrew Dai, and Quoc V. Le. Hypernetworks. arXiv preprint arXiv:1609.09106, 2016.
 [16] Sepp Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, Institut für Informatik, Lehrstuhl Prof. Brauer, Technische Universität München, 1991.
 [17] Sepp Hochreiter. The vanishing gradient problem during learning recurrent neural nets and problem solutions. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 6(02):107–116, 1998.
 [18] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
 [19] Marcus Hutter. The human knowledge compression contest. http://prize.hutter1.net, 2012.
 [20] Herbert Jaeger. Discovering multiscale dynamical features with hierarchical echo state networks. Technical report, Jacobs University Bremen, 2007.
 [21] Li Jing, Yichen Shen, Tena Dubček, John Peurifoy, Scott Skirlo, Yann LeCun, Max Tegmark, and Marin Soljačić. Tunable efficient unitary neural networks (EUNN) and their application to RNNs, 2016.
 [22] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
 [23] Bryon Knoll. Cmix. http://www.byronknoll.com/cmix.html, 2017. Accessed: 20170518.
 [24] Jan Koutník, Klaus Greff, Faustino Gomez, and Jürgen Schmidhuber. A clockwork RNN. arXiv preprint arXiv:1402.3511, 2014.
 [25] Tsungnan Lin, Bill G. Horne, Peter Tino, and C. Lee Giles. Learning long-term dependencies in NARX recurrent neural networks. IEEE Transactions on Neural Networks, 7(6):1329–1338, 1996.
 [26] Matt Mahoney. Large text compression benchmark. http://mattmahoney.net/dc/text.html, 2017. Accessed: 20170518.
 [27] Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics, 19(2):313–330, June 1993.
 [28] James Martens and Ilya Sutskever. Learning recurrent neural networks with Hessian-free optimization. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 1033–1040, 2011.
 [29] Tomás̆ Mikolov, Ilya Sutskever, Anoop Deoras, HaiSon Le, Kombrink Stefan, and Jan C̆ernocký. Subword language modeling with neural networks. preprint: http://www.fit.vutbr.cz/ imikolov/rnnlm/char.pdf, 2012.
 [30] Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. How to construct deep recurrent neural networks. arXiv preprint arXiv:1312.6026, 2013.
 [31] AJ Robinson and Frank Fallside. The utility driven dynamic error propagation network. University of Cambridge Department of Engineering, 1987.
 [32] Kamil Rocki, Tomasz Kornuta, and Tegan Maharaj. Surprisal-driven zoneout. arXiv preprint arXiv:1610.07675, 2016.
 [33] David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning representations by backpropagating errors. Cognitive modeling, 5(3):1, 1988.
 [34] Jürgen Schmidhuber. Learning complex, extended sequences using the principle of history compression. Neural Computation, 4(2):234–242, 1992.
 [35] Alessandro Sordoni, Yoshua Bengio, Hossein Vahabi, Christina Lioma, Jakob Grue Simonsen, and Jian-Yun Nie. A hierarchical recurrent encoder-decoder for generative context-aware query suggestion. In Proceedings of the 24th ACM International Conference on Information and Knowledge Management, pages 553–562. ACM, 2015.
 [36] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res., 15(1):1929–1958, January 2014.
 [37] Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Highway networks. arXiv preprint arXiv:1505.00387, 2015.
 [38] Paul J Werbos. Generalization of backpropagation with application to a recurrent gas market model. Neural networks, 1(4):339–356, 1988.
 [39] Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. arXiv preprint arXiv:1410.3916, 2014.
 [40] Ronald J. Williams. Complexity of exact gradient computation algorithms for recurrent neural networks. Technical Report NU-CCS-89-27, Northeastern University, College of Computer Science, Boston, 1989.
 [41] Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.
 [42] Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutník, and Jürgen Schmidhuber. Recurrent highway networks. arXiv preprint arXiv:1607.03474, 2016.
 [43] Barret Zoph and Quoc V. Le. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578, 2016.