We stabilize the activations of Recurrent Neural Networks (RNNs) by penalizing the squared distance between successive hidden states' norms. This penalty term is an effective regularizer for RNNs including LSTMs and IRNNs, improving performance on characterlevel language modeling and phoneme recognition, and outperforming weight noise and dropout. We achieve competitive performance (18.6% PER) on the TIMIT phoneme recognition task for RNNs evaluated without beam search or an RNN transducer. With this penalty term, IRNN can achieve similar performance to LSTM on language modeling, although adding the penalty term to the LSTM results in superior performance. Our penalty term also prevents the exponential growth of IRNN's activations outside of their training horizon, allowing them to generalize to much longer sequences.
Overfitting in machine learning is addressed by restricting the space of hypotheses (i.e., functions) considered. This can be accomplished by reducing the number of parameters or by using a regularizer with an inductive bias for simpler models, such as early stopping. More effective regularization can be achieved by incorporating more sophisticated prior knowledge. Keeping an RNN's hidden activations on a reasonable path can be difficult, especially across long time sequences. With this in mind, we devise a regularizer for the state representation learned by temporal models, such as RNNs, that aims to encourage stability of the path taken through representation space. Specifically, we propose the following additional cost term for Recurrent Neural Networks (RNNs):

\beta \frac{1}{T} \sum_{t=1}^{T} \left( \|h_t\|_2 - \|h_{t-1}\|_2 \right)^2

where h_t is the vector of hidden activations at timestep t, and \beta is a hyperparameter controlling the amount of regularization. We call this penalty the norm-stabilizer, as it successfully encourages the norms of the hiddens to be stable (i.e., approximately constant across time). Unlike the "temporal coherence" penalty of Jonschkowski & Brock (2015), our penalty does not encourage the state representation to remain constant, only its norm. In the absence of inputs and nonlinearities, a constant norm would imply orthogonality of the hidden-to-hidden transition matrix for simple RNNs (SRNNs). However, even with an orthogonal transition matrix, inputs and nonlinearities can still change the norm of the hidden state, resulting in instability. This makes targeting the hidden activations directly a more attractive option for achieving norm stability. Stability becomes especially important when we seek to generalize to longer sequences at test time than those seen during training (the "training horizon").
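As a concrete illustration, the penalty is straightforward to compute from a sequence of hidden states. The sketch below is our own (the function name and array shapes are assumptions, not the paper's code):

```python
import numpy as np

def norm_stabilizer_penalty(hiddens, beta=1.0):
    """Norm-stabilizer penalty: beta times the mean squared difference
    of successive hidden-state L2 norms.

    hiddens: array of shape (T, hidden_dim), one row per timestep.
    """
    norms = np.linalg.norm(hiddens, axis=1)   # ||h_t|| for each timestep
    return beta * np.mean((norms[1:] - norms[:-1]) ** 2)
```

A trajectory whose norm is constant across time incurs zero penalty, regardless of how the direction of the hidden state changes, which is exactly the distinction from a temporal-coherence penalty.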
The hidden state in LSTM (Hochreiter & Schmidhuber, 1997) is usually the product of two squashing nonlinearities, and hence bounded. The norm of the memory cell, however, can grow linearly when the input, input modulation, and forget gates are all saturated at 1. Nonetheless, we find that the memory cells exhibit norm stability far past the training horizon, and suggest that this may be part of what makes LSTM so successful.
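To see why this growth is linear rather than exponential, consider the cell update c_t = f_t * c_{t-1} + i_t * g_t with all three gates pinned at their saturation value of 1, a hypothetical worst case purely for illustration:

```python
def saturated_cell_trajectory(steps):
    """Iterate the LSTM cell update c_t = f*c + i*g with the forget,
    input, and input-modulation gates all saturated at 1. The update
    then reduces to c_t = c_{t-1} + 1: linear, not exponential, growth."""
    c, trajectory = 0.0, []
    for _ in range(steps):
        f, i, g = 1.0, 1.0, 1.0   # saturated gate values (illustrative)
        c = f * c + i * g
        trajectory.append(c)
    return trajectory
```

Even in this worst case the cell norm only grows by a constant per step, consistent with the norm stability we observe empirically far past the training horizon.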
The activation norms of simple RNNs (SRNNs) with saturating nonlinearities are bounded. With ReLU nonlinearities, however, activations can explode instead of saturating. When the transition matrix W_hh has any eigenvalue with absolute value greater than 1, the part of the hidden state that is aligned with the corresponding eigenvector will grow exponentially, to the extent that the ReLU or the inputs fail to cancel out this growth.
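A small numerical sketch of this failure mode (the matrix below is our own construction, chosen so that exactly one eigenvalue exceeds 1):

```python
import numpy as np

# Diagonal transition matrix with eigenvalues 1.1, 0.9, 0.5.
W = np.diag([1.1, 0.9, 0.5])
h = np.ones(3)
norms = []
for _ in range(50):
    h = np.maximum(W @ h, 0.0)   # ReLU recurrence, no inputs
    norms.append(np.linalg.norm(h))

# The component aligned with the eigenvalue-1.1 eigenvector grows like
# 1.1**t, so the hidden norm explodes while the other components decay.
```

After 50 steps the norm is dominated almost entirely by the single unstable direction, which is the behavior the norm-stabilizer is designed to suppress.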
Simple RNNs with ReLU (Le et al., 2015) or clipped ReLU (Hannun et al., 2014) nonlinearities have performed competitively on several tasks, suggesting they can learn to be stable. We show, however, that IRNNs' performance can rapidly degrade outside of their training horizon, while the norm-stabilizer prevents activations from exploding outside of the training horizon, allowing IRNNs to generalize to much longer sequences. Additionally, we show that this penalty results in improved validation performance for IRNNs. Somewhat surprisingly, it also improves performance for LSTMs, but not tanh-RNNs.
To the best of our knowledge, our proposal is entirely novel. Pascanu et al. (2012) proposed vanishing gradient regularization, which encourages the hidden transition to preserve norm in the direction of the cost derivative. Like the norm-stabilizer, their cost depends on the path taken through representation space, but the norm stabilizer does not prioritize cost-relevant directions, and accounts for the effects of inputs as well. A hard constraint (clipping) on the activations of LSTM memory cells was previously proposed by Sak et al. (2015). Hannun et al. (2014) use a clipped ReLU, which also has the effect of limiting activations. Both of these techniques operate element-wise, however, whereas we target the activations' norms. Several other works have used penalties on the difference of hidden states rather than of their norms (Jonschkowski & Brock, 2015; Wen et al., 2015). Other regularizers for RNNs that do not target norm stability include weight noise (Jim et al., 1996) and dropout (Pham et al., 2013; Pachitariu & Sahani, 2013; Zaremba et al., 2014).
We show that the norm-stabilizer improves performance for character-level language modeling on Penn Treebank (Marcus et al., 1993) for LSTMs and IRNNs (as in Le et al. (2015), we initialize the transition matrix to be an identity matrix in our experiments), but not tanh-RNNs. We present results for a fixed value of the penalty weight; we found that larger values could slightly improve performance, but also resulted in much longer training time on this task. Scheduling the penalty weight to increase throughout training might allow for faster training. Unless otherwise specified, we use 1000/1600 units for LSTM/SRNN, and SGD with learning rate .002, momentum .99, and gradient clipping 1. We train for a maximum of 1000 epochs and use sequences of length 50 taken without overlap. When we encounter a NaN in the cost function, we divide the learning rate by 2 and restart with the previous epoch's parameters.
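The NaN-recovery rule above can be sketched as a small wrapper around a per-epoch training step (the function names and the step interface are our own assumptions, not the paper's code):

```python
import math

def train_with_nan_restarts(step_fn, params, lr=0.002, epochs=1000):
    """On a NaN cost, halve the learning rate and restart from the
    previous epoch's parameters, as described above.

    step_fn(params, lr) -> (new_params, cost) runs one epoch.
    """
    prev = params
    for _ in range(epochs):
        new_params, cost = step_fn(params, lr)
        if math.isnan(cost):
            lr /= 2.0            # divide the learning rate by 2
            params = prev        # roll back to the previous epoch
        else:
            prev, params = params, new_params
    return params, lr
```

This keeps training alive when an occasional large-penalty update destabilizes the cost, at the price of a progressively smaller learning rate.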
For LSTMs, we either apply the norm-stabilizer penalty only to the memory cells, or only to the hidden state (in which case we remove the output tanh, as in Gers & Schmidhuber (2000)). Although Greff et al. (2015) found the output tanh to be essential for good performance, removing it gave us a slight improvement on this task. We compare to tanh and ReLU (with and without bias), with a grid search across cost weight, gradient clipping, and learning rate. For simple RNNs, we found that the zero-bias ReLU (i.e., TRec (Konda et al., 2014) with threshold 0) gave the best performance. The best performance for ReLU activation functions is obtained with the penalty applied. For tanh-RNNs, the best performance is obtained without any regularization. Results are better with the penalty than without for 9 out of 12 experiment settings.
(Table: LSTM language-modeling results with the penalty applied to the hidden state vs. the memory cell.)
We compare 8 alternatives to the norm-stabilizer cost on Penn Treebank for IRNNs without biases (see Table 3), using the same setup as in Section 2.1. These include relative error, norm, absolute difference, and penalties that do not target successive timesteps. Two further penalties performed very poorly and were not included in the table.
We find that our proposal of penalizing successive states' norms gives the best performance, but some alternatives seem promising and deserve further investigation. In particular, the relative error could be more appropriate; unlike the norm-stabilizer cost, it cannot be reduced simply by dividing all of the hidden states by a constant. The value 5 was chosen as a target for the norms based on the value found by our proposed cost; in practice it would be another hyperparameter to tune. The success of the other regularizers which encourage norm stability indicates that our inductive bias in favor of stable norms is useful.
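The scale-invariance point can be checked directly: shrinking every hidden state shrinks the norm-stabilizer cost, but leaves a relative-error penalty unchanged. The two helper functions below are our own sketches of the penalties being compared:

```python
import numpy as np

def successive_sq(norms):
    # norm-stabilizer: squared difference of successive norms
    return np.mean((norms[1:] - norms[:-1]) ** 2)

def relative_error_sq(norms):
    # alternative: squared relative change of the norm between steps
    return np.mean(((norms[1:] - norms[:-1]) / norms[:-1]) ** 2)

norms = np.array([4.0, 6.0, 5.0])          # hidden norms over 3 timesteps
# Dividing all hidden states by 2 divides every norm by 2:
# the norm-stabilizer cost drops by a factor of 4 ...
assert successive_sq(norms / 2) == successive_sq(norms) / 4
# ... but the relative-error penalty is unchanged.
assert np.isclose(relative_error_sq(norms / 2), relative_error_sq(norms))
```

This is why a relative-error variant cannot be trivially minimized by uniformly scaling down the hidden states, whereas the norm-stabilizer in principle can.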
(Table: results for tanh, ReLU, and TRec activations, each with and without the penalty.)
We show that the norm-stabilizer improves phoneme recognition on the TIMIT dataset, outperforming networks regularized with weight noise and/or dropout. For these experiments, we use a setup similar to the previous state of the art for an RNN on this task (Graves et al., 2013), with CTC (Graves et al., 2006) and bidirectional LSTMs with 3 layers of 500 hidden units (for each direction). We train with Adam (Kingma & Ba, 2014) using learning rate .001 and gradient clipping 200. Unlike Graves et al. (2013), we do not use beam search or an RNN transducer. We early-stop after 25 epochs without improvement on the development set.
We apply norm-stabilization to the hidden activations (in this case we do use the output tanh, as is standard), and use standard deviation .05 for weight noise and p=.5 for dropout. We try all pairwise combinations of the regularization techniques, run 5 experiments for each of these 10 settings, and report the average phoneme error rate (PER). Combining weight noise and norm-stabilization gave poor performance, with some networks failing to train; these results are omitted. Adding dropout had a minor effect on results. Norm-stabilized networks had the best performance (see Figure 2 and Table 4). Inspired by these results, we trained larger networks with more regularization and observed further performance improvements (see Table 5). We also used a higher "patience" for our early stopping criterion here, terminating after 100 epochs without improvement. Unlike the previous experiments, we only ran one experiment with each of these settings. The network with 750 hidden units gave the best performance on the development set, with dev/test PER of 16.2%/18.6%. This is competitive with the state-of-the-art RNN results on this task from Graves et al. (2013), and we evaluate without beam search or an RNN transducer, although Tóth (2014) achieved 13.9%/16.7% using convolutional neural networks. The network with 1000 hidden units achieved dev/test PER of 16.7%/17.5%.
(Tables 4-5: dev and test PER for each combination of norm-stabilization, dropout probability, and standard deviation of additive Gaussian weight noise.)
The adding task (Hochreiter & Schmidhuber, 1997) is a toy problem used to test an RNN's ability to model long-term dependencies. The goal is to output the sum of two numbers seen at random timesteps during training; inputs at other timesteps carry no information. Each element of an input sequence consists of a pair (value, marker), where the value is drawn uniformly at random and the marker indicates which two numbers to add. We use sequences of length 400. In Le et al. (2015), none of the models were able to reduce the cost below the "short-sighted" baseline set by predicting the first (or second) of the indicated numbers for this sequence length. We are able to solve this task more successfully. We use uniform initialization, learning rate .01, and gradient clipping 1. We compare across nine random seeds with and without the norm-stabilizer. The norm-stabilized networks reduced the test cost below this baseline in 8/9 cases, averaging .059 MSE. The unregularized networks averaged .105 MSE, and only outperformed the short-sighted baseline in 4/9 cases, also failing to improve over a constant predictor in 4/9 cases.
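For reference, a generator for this adding-task data can be sketched as follows (the function name and array shapes are our own; the task definition follows Hochreiter & Schmidhuber (1997)):

```python
import numpy as np

def adding_task_batch(batch_size, seq_len=400, rng=None):
    """Standard adding task: each timestep is a pair (value, marker);
    exactly two markers per sequence are 1, and the target is the sum
    of the two marked values."""
    if rng is None:
        rng = np.random.default_rng()
    values = rng.uniform(0.0, 1.0, size=(batch_size, seq_len))
    markers = np.zeros((batch_size, seq_len))
    for b in range(batch_size):
        i, j = rng.choice(seq_len, size=2, replace=False)
        markers[b, i] = markers[b, j] = 1.0
    inputs = np.stack([values, markers], axis=-1)   # (batch, T, 2)
    targets = (values * markers).sum(axis=1)        # (batch,)
    return inputs, targets
```

Because only the two marked timesteps carry signal, a network must preserve information across hundreds of uninformative steps, which is what makes the task a probe of long-term dependencies.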
To test our hypothesis that stability helps networks generalize to longer sequences than they were trained on, we examined the costs and hidden norms at each timestep.
Comparing identical networks trained with and without the norm-stabilizer penalty, we found that LSTMs and RNNs with tanh activation functions continued to perform well far beyond the training horizon. Although the activations of LSTM's memory cells could potentially grow linearly, in our experiments they are stable; applying the norm-stabilizer does, however, significantly decrease both their average norm and the variability of the norm (see Figure 3). IRNNs, on the other hand, suffered from exploding activations, resulting in poor performance, but the norm-stabilizer effectively controls the norms and maintains a high level of performance (see Figure 4). Norm-stabilized IRNNs' performance and norms were both stable for the longest horizon we evaluated (10,000 timesteps).
For more insight into why the norm-stabilizer outperforms the alternative costs, we examined the hidden norms of networks trained with penalty weights ranging from 0 to 200 on a dataset of 1000 length-50 sequences taken from Wikipedia (Hutter, 2012). When we penalize the difference of the initial and final norms, or the difference of the norms from some fixed value, increasing the cost weight does not change the shape of the norms; they still begin to explode within the training horizon (see Figure 5). For the norm-stabilizer, however, increasing the penalty significantly delayed (but did not completely eradicate) activation explosions on this dataset.
We also noticed that the distribution of activations was more concentrated in fewer hidden units when applying norm-stabilization on Penn Treebank. Similarly, we found that the forget gates in LSTM networks had a more peaked distribution (see Figure 6), while the average across dimensions was lower (so the network was forgetting more on average at each timestep, but a small number of units were forgetting less). Finally, we found that the regularized IRNN's hidden transition matrices had a larger number of large eigenvalues, while the unregularized IRNN had a much larger number of eigenvalues close to 1 in absolute value (see Figure 6). This supports our hypothesis that orthogonal transitions are not inherently desirable in an RNN. By explicitly encouraging stability, the norm-stabilizer seems to favor solutions that maintain stability via selection of active units, rather than by restricting the choice of transition matrix.
We introduced norm-based regularization of RNNs to prevent exploding or vanishing activations, and compared a range of novel methods for encouraging or enforcing norm stability. The best performance is achieved by penalizing the squared difference of successive hidden states' norms. This penalty, the norm-stabilizer, improved performance on language modeling and the adding task, and gave state-of-the-art RNN performance on phoneme recognition on the TIMIT dataset.
Future work could involve:
Exploring the relationship between stability and generative modeling with RNNs
Applying normregularized IRNNs to more challenging tasks
Applying similar regularization techniques to feedforward nets
This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL). The views, opinions and/or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government. We appreciate the many K80 GPUs provided by Compute Canada. The authors would like to thank the developers of Theano (Bastien et al., 2012) and Blocks (van Merriënboer et al., 2015). Special thanks to Alex Lamb, Amar Shah, Asja Fischer, Caglar Gulcehre, Cesar Laurent, Dmitriy Serdyuk, Dzmitry Bahdanau, Faruk Ahmed, Harm de Vries, Jose Sotelo, Marcin Moczulski, Martin Arjovsky, Mohammad Pezeshki, Philemon Brakel, and Saizhen Zhang for useful discussions and/or sharing code.