Revisiting Activation Regularization for Language RNNs

08/03/2017 ∙ by Stephen Merity, et al.

Recurrent neural networks (RNNs) serve as a fundamental building block for many sequence tasks across natural language processing. Recent research has focused on recurrent dropout techniques or custom RNN cells in order to improve performance. Both of these can require substantial modifications to the machine learning model or to the underlying RNN configurations. We revisit traditional regularization techniques, specifically L2 regularization on RNN activations and slowness regularization over successive hidden states, to improve the performance of RNNs on the task of language modeling. Both of these techniques require minimal modification to existing RNN architectures and result in performance improvements comparable or superior to more complicated regularization techniques or custom cell architectures. These regularization techniques can be used without any modification on optimized LSTM implementations such as the NVIDIA cuDNN LSTM.


1 Introduction

The need for effective regularization methods for RNNs has seen extensive focus in recent years. While application of dropout (Srivastava et al., 2014) to the input and output of an RNN has been shown to be effective (Zaremba et al., 2014), dropout is destructive when naively applied to the recurrent connections of an RNN. When naive dropout is applied to the recurrent connections, it is almost impossible to retain information over long periods of time.

Given this fundamental issue, substantial work has gone into understanding and improving dropout when applied to recurrent connections. Of these techniques, which we shall broadly refer to as recurrent dropout, some specific variations have gained popular usage.

Variational RNNs (Gal & Ghahramani, 2016) drop the same network units at each timestep, rather than sampling a fresh set of units to drop at every timestep. By performing dropout on the same units at each timestep, destructive loss of the RNN hidden state is avoided and the same information is masked throughout the sequence.
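The mask-reuse idea can be sketched in plain Python (a minimal illustration with our own function names, not the authors' implementation; a real model would apply this to hidden vectors inside an RNN framework such as PyTorch):

```python
import random

def variational_dropout_mask(size, p, seed=None):
    # Sample ONE inverted-dropout mask: dropped units become 0,
    # survivors are scaled by 1/(1-p) to preserve expected magnitude.
    rng = random.Random(seed)
    return [0.0 if rng.random() < p else 1.0 / (1.0 - p) for _ in range(size)]

def apply_variational_dropout(hidden_states, p, seed=0):
    # Reuse the SAME mask at every timestep, so the same units
    # are dropped for the entire sequence.
    mask = variational_dropout_mask(len(hidden_states[0]), p, seed)
    return [[m * h for m, h in zip(mask, step)] for step in hidden_states]

# Two timesteps, four hidden units: the zero pattern is identical per step.
seq = [[1.0, 2.0, 3.0, 4.0], [5.0, 6.0, 7.0, 8.0]]
masked = apply_variational_dropout(seq, p=0.5, seed=0)
```

Because one mask is sampled per sequence rather than per timestep, a unit that survives at timestep 1 survives at every later timestep, which is what preserves information over long spans.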

Rather than dropping units, another tactic is to drop updates to given network units. Semeniuta et al. (2016) perform dropout on the input gate of the LSTM (Hochreiter & Schmidhuber, 1997) but allow the forget gate to discard portions of the existing hidden state. Zoneout (Krueger et al., 2016) prevents hidden state updates from occurring by setting a randomly selected subset of network unit activations in h_t to be equal to the previous activations from h_{t-1}. Both of these act to prevent updates to the hidden state while preserving existing content.
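Zoneout's update-dropping can be sketched in the same spirit (a hypothetical minimal version with our own names, not the reference implementation; at evaluation time the expected value p·h_{t-1} + (1-p)·h_t is typically used instead of sampling):

```python
import random

def zoneout_step(h_prev, h_new, p, rng):
    # Each unit keeps its PREVIOUS activation with probability p,
    # otherwise it takes the freshly computed update -- preserving
    # existing hidden state content rather than zeroing it out.
    return [hp if rng.random() < p else hn for hp, hn in zip(h_prev, h_new)]

rng = random.Random(0)
h_prev = [0.2, -0.4, 0.9]   # hidden state from timestep t-1
h_new = [0.5, 0.1, -0.3]    # candidate update at timestep t
h_t = zoneout_step(h_prev, h_new, p=0.5, rng=rng)
```

Unlike standard dropout, a "dropped" unit here carries its old value forward intact, so information is never destroyed, only its update delayed.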

On an extreme end, work has also been done to restrict the recurrent matrices in an RNN in order to limit their computational capacity. Some RNN architectures only allow element-wise interactions (Balduzzi & Ghifary, 2016; Bradbury et al., 2016; Seo et al., 2016), removing the recurrent matrix entirely, while others act to restrict the capacity by parameterizing the recurrent matrix (Arjovsky et al., 2016; Wisdom et al., 2016; Jing et al., 2016).

Other forms of regularization explicitly act upon the activations, such as batch normalization (Ioffe & Szegedy, 2015), recurrent batch normalization (Cooijmans et al., 2016), and layer normalization (Ba et al., 2016). These all introduce additional training parameters and can complicate the training process while increasing the sensitivity of the model. Norm stabilization (Krueger & Memisevic, 2015) penalizes the model when the norm of an RNN's hidden state changes substantially between timesteps, achieving strong results in character-level language modeling and phoneme recognition.

In this work, we revisit regularization in the form of activation regularization (AR) and temporal activation regularization (TAR). When applied to modern baselines that do not contain recurrent dropout or normalization techniques, AR and TAR achieve comparable or superior results.

Compared to other invasive regularization techniques which may require modifications to the RNN cell itself or complex model changes, both AR and TAR require no substantial modifications to the RNN or model. This enables AR and TAR to be applied to optimized RNN implementations such as the cuDNN LSTM which can be many times faster than naïve but flexible LSTM implementations.

2 Activation Regularization

2.1 L2 activation regularization (AR)

While L2 regularization is traditionally used on the weights of machine learning models (L2 weight decay), it can also be used on the activations. We define AR as

α L2(m ∘ h_t),

where m is the dropout mask used by later parts of the model, L2(x) = ‖x‖_2 (the L2 norm), h_t is the output of the RNN at timestep t, and α is a scaling coefficient.

When applied to the output of a dense layer, AR penalizes activations that are substantially away from 0, encouraging the activations to remain small. While acting implicitly rather than explicitly, this has similarities to the various batch or layer normalization techniques.

The penalty on the RNN activations can be applied either to h_t or to m ∘ h_t (the dropped output used in the rest of the model). In our experiments, we found that applying AR to m ∘ h_t was more effective than applying it to h_t, which would also penalize neurons not updated during the current optimization step.
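The AR penalty is simple enough to sketch directly (pure Python with our own function names; in practice the term is simply added to the cross-entropy loss before backpropagation):

```python
import math

def l2_norm(xs):
    # L2(x) = ||x||_2 over a flat list of activations.
    return math.sqrt(sum(x * x for x in xs))

def ar_penalty(masked_output, alpha):
    # Activation regularization: alpha * L2(m . h_t), computed on the
    # dropout-masked RNN output so dropped units contribute nothing.
    return alpha * l2_norm(masked_output)

# A masked hidden state at one timestep (zeros are dropped units).
m_h_t = [0.5, -1.0, 0.0, 2.0]
penalty = ar_penalty(m_h_t, alpha=2.0)
```

Since the dropped units are exactly zero in m ∘ h_t, they add nothing to the norm, which is why masking before the penalty avoids regularizing neurons that were not active in this step.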

2.2 Temporal activation regularization (TAR)

Adding a prior that minimizes differences between successive states has been explored in the past. This falls under the broad concept of slowness regularization (Hinton, 1989; Földiák, 1991; Luciw & Schmidhuber, 2012; Jonschkowski & Brock, 2015; Wen et al., 2015), which attempts to minimize L(f(x_t), f(x_{t+1})), where L is a loss function describing the distance between f(x_t) and f(x_{t+1}) and f is an arbitrary mapping function.

Temporal activation regularization (TAR) is a direct descendant of this slowness regularization, minimizing

β L2(h_t − h_{t+1}),

where L2(x) = ‖x‖_2 (the L2 norm), h_t is the output of the RNN at timestep t, and β is a scaling coefficient.

TAR penalizes any large changes in hidden state between timesteps, encouraging the model to keep the output as consistent as possible. For the LSTM, the hidden state that is regularized is only h_t, not the long-term memory c_t, though c_t could optionally be regularized in a similar manner.
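TAR is equally direct to sketch (again pure Python with our own names; the per-timestep penalties are summed and added to the training loss):

```python
import math

def tar_penalty(h_t, h_next, beta):
    # Temporal activation regularization: beta * L2(h_t - h_{t+1}),
    # penalizing large changes in the RNN output between timesteps.
    diff = [a - b for a, b in zip(h_t, h_next)]
    return beta * math.sqrt(sum(d * d for d in diff))

# Identical successive states incur zero penalty; a large jump is penalized.
no_change = tar_penalty([0.5, 0.5], [0.5, 0.5], beta=1.0)  # 0.0
big_jump = tar_penalty([0.0, 0.0], [3.0, 4.0], beta=1.0)   # 5.0
```

Note that, unlike AR, the penalty here is on the unmasked hidden state difference, since it is the recurrent trajectory itself being smoothed rather than the output fed to later layers.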

3 Experiments

3.1 Language Modeling

We benchmark activation regularization (AR) and temporal activation regularization (TAR) applied to a strong non-variational LSTM baseline [1]. The experiments use preprocessed versions of the Penn Treebank (PTB) (Mikolov et al., 2010) and WikiText-2 (WT2) (Merity et al., 2016). All hyperparameters, including α for AR and β for TAR, are optimized over the validation dataset. The best found hyperparameters as determined by the validation results are then run on the test set.

[1] PyTorch word-level language modeling example: https://github.com/pytorch/examples/tree/master/word_language_model

Model Parameters Validation
13M
13M
13M
13M
13M
13M
Table 1: Results over the Penn Treebank testing coefficients α for AR applied to the base model.
Model Parameters Validation
13M
13M
13M
13M
13M
13M
Table 2: Results over the Penn Treebank testing coefficients β for TAR applied to the base model.
Model Parameters Validation Test
PTB, LSTM (tied) 13M
PTB, LSTM (tied) 24M
PTB, LSTM (tied) 51M
PTB, LSTM (tied) 13M
PTB, LSTM (tied) 24M
PTB, LSTM (tied) 51M
Table 3: Single model perplexity results over the Penn Treebank. Models noted tied use weight tying on the embedding and softmax weights. The top section contains models without AR or TAR, with the bottom section containing equivalent models using them.
Model Parameters Validation Test
Inan et al. (2016) - Variational LSTM (tied) 28M
Inan et al. (2016) - Variational LSTM (tied) + augmented loss 28M
WT2, LSTM (tied) 28M
WT2, LSTM (tied) 28M
Table 4: Results over WikiText-2. The increase in parameters compared to the models on PTB is due to the larger vocabulary. Models noted tied use weight tying on the embedding and softmax weights.

PTB: As the Penn Treebank is a small dataset, preventing overfitting is of considerable importance and a major focus of research. Almost all competitive models rely upon a form of recurrent dropout to ensure the RNN does not overfit through drastic changes in the hidden state. Other aggressive dropout techniques, such as performing dropout on the embedding layer such that entire words are dropped from a sequence, are also frequently used.

WT2: WikiText-2 is a dataset approximately twice as large as PTB but with a vocabulary three times larger. The text is also tokenized and processed in a manner similar to datasets used for machine translation using the Moses tokenizer (Koehn et al., 2007).

Experiment details: All experiments use a model containing a two layer RNN. The AR and TAR loss are only applied to the output of the final RNN layer, not to all layers. For the majority of experiments, we follow the medium model size of Zaremba et al. (2014): a two layer RNN with 650 hidden units in each layer.

For training the model, stochastic gradient descent (SGD) without momentum was used for up to 80 epochs. The learning rate began at a fixed value and was divided by four each time validation perplexity failed to improve. L2 weight regularization was used over all weights in the model, and gradients with a norm over 10 were rescaled. Batches consist of 20 examples, with each example containing 35 timesteps. The loss was averaged over all examples and timesteps. All embedding weights were uniformly initialized, and all other weights were initialized in an interval scaled by the hidden size.
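The gradient rescaling and learning-rate schedule described above can be sketched as follows (our own helper names; the initial learning rate value is not stated above, so it is left as a parameter here):

```python
import math

def rescale_gradients(grads, max_norm=10.0):
    # Rescale so the global L2 norm of the gradients is at most max_norm,
    # preventing exploding-gradient steps during BPTT.
    norm = math.sqrt(sum(g * g for g in grads))
    if norm > max_norm:
        scale = max_norm / norm
        return [g * scale for g in grads]
    return grads

def decay_lr(lr, improved):
    # Divide the learning rate by four whenever validation perplexity
    # fails to improve between epochs.
    return lr if improved else lr / 4.0
```

In a framework like PyTorch, the first helper corresponds to the built-in gradient norm clipping utility applied before each optimizer step.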

For dropout, we have two different parameters: dp, the dropout rate used on the word vectors and the final RNN output, and a second rate used on the connection between RNN layers. All models use weight tying between the embedding and softmax layers (Inan et al., 2016; Press & Wolf, 2016).

Evaluating AR and TAR independently on PTB: To understand the potential of AR and TAR, we investigate their impact on language model perplexity when used independently in Table 1 (AR) and Table 2 (TAR). While both result in a substantial reduction in perplexity, AR provides the stronger improvement of the two. The drops achieved are equivalent to using an LSTM model with twice as many parameters - a substantial improvement given the simplicity of AR and TAR.

Evaluating AR and TAR jointly on PTB: When both AR and TAR are used together, we found the best result was achieved by decreasing both α and β, likely as the model was otherwise over-regularized. In Table 3 we present PTB results for three different model sizes, comparing models without AR/TAR to those which use both. The model sizes were chosen to be comparable to other published results. With both AR and TAR, the smallest model shows an improvement over the baseline model. The improvements continue for the two larger model sizes, though the gains fall off as the model size is increased.

Model Parameters Validation Test
Zaremba et al. (2014) - LSTM (medium) 20M
Zaremba et al. (2014) - LSTM (large) 66M
Gal & Ghahramani (2016) - Variational LSTM (medium) 20M
Gal & Ghahramani (2016) - Variational LSTM (medium, MC) 20M
Gal & Ghahramani (2016) - Variational LSTM (large) 66M
Gal & Ghahramani (2016) - Variational LSTM (large, MC) 66M
Kim et al. (2016) - CharCNN 19M
Merity et al. (2016) - Pointer Sentinel-LSTM 21M
Inan et al. (2016) - Variational LSTM (tied) + augmented loss 24M
Inan et al. (2016) - Variational LSTM (tied) + augmented loss 51M
Zilly et al. (2016) - Variational RHN (tied) 23M
Zoph & Le (2016) - NAS Cell (tied) 25M
Zoph & Le (2016) - NAS Cell (tied) 54M
PTB, LSTM (tied) 13M
PTB, LSTM (tied) 24M
PTB, LSTM (tied) 51M
Table 5: Single model perplexity on validation and test sets for the Penn Treebank language modeling task. Models noted tied use weight tying on the embedding and softmax weights.

Comparing to state-of-the-art PTB: In Table 5 we summarize the current state of the art models in language modeling over the Penn Treebank.

The largest LSTM we train achieves comparable results to the Recurrent Highway Network (RHN) (Zilly et al., 2016), a human-developed custom RNN architecture, but with approximately double the number of parameters. Although the LSTM uses twice as many parameters, the RHN runs its cell 10 times per timestep (referred to as recurrence depth), resulting in far more computation. This would likely make the RHN slower than the larger LSTM model during both training and prediction, especially when factoring in optimized LSTM implementations such as NVIDIA's cuDNN LSTM.

We also compare to the Neural Architecture Search (NAS) cell (Zoph & Le, 2016). While Zoph & Le (2016) do not report any of the hyperparameters or what type of dropout they used for their Penn Treebank result, they do note that they performed an extensive hyperparameter search over learning rate, weight initialization, dropout rates, and decay epoch in order to produce their best performing model. It is possible that a large contributor to their improved result was these tuned hyperparameters, as they did not compare their NAS cell results to a standard or variational LSTM cell subjected to the same extensive hyperparameter search. Our largest LSTM results are higher in perplexity by comparison but have not undergone extensive hyperparameter search, do not use additional regularization techniques such as recurrent or embedding dropout, and do not use a custom RNN cell.

WikiText-2 Results: We compare our WikiText-2 results to Inan et al. (2016), who introduced weight tying between the embedding and softmax weights. While we did not perform any hyperparameter search over the coefficient values of α and β for AR and TAR, instead using the best values from PTB, we find them to still be quite effective. The baseline LSTM already achieves a perplexity improvement over the variational LSTM models from Inan et al. (2016), including one which uses an augmented loss that modifies standard cross entropy with temperature and a KL divergence based loss. When the AR and TAR parameters optimized over PTB are used, perplexity falls further. This is not as strong an improvement as seen on the PTB dataset and may be due to the increased complexity of the dataset (a larger vocabulary meaning a longer tail of usage, a different genre, and so on) or may simply be due to the lack of hyperparameter tuning.

AR and TAR for GRU and RNN: While neither the GRU (Cho et al., 2014) nor the RNN is traditionally used in language modeling, we wanted to see the generality of AR and TAR to other types of RNN cells. We applied the best values of α and β for an LSTM cell to the GRU and RNN on PTB without any further search in Table 6. These values are likely quite suboptimal but are sufficient for illustrative purposes. For the GRU, perplexity improved over the baseline. This is a positive sign given that the impact of these regularization techniques on a GRU is quite different to that on an LSTM. The LSTM only has h_t subjected to AR and TAR, leaving the long-term memory c_t unregularized, but the GRU uses h_t both as the output at that timestep and as the hidden state input for the next timestep. For the RNN, the model did not train to acceptable levels at all without the application of AR and TAR. For the RNN, TAR likely forced the recurrent matrix to learn an identity function in order to ensure h_t could produce h_{t+1}. This would be important given the weights in this model were randomly initialized, and suggests TAR acts as an implicit identity initialization constraint (Le et al., 2015).

Model Parameters Validation Test
PTB, RNN (tied) 13M
PTB, RNN (tied) 13M
PTB, GRU (tied) 13M
PTB, GRU (tied) 13M
Table 6: Single model perplexity results over the Penn Treebank for the RNN and GRU. Neither cell is traditionally used for language modeling, but this demonstrates the generality of AR (α) and TAR (β). Values for α and β are taken from the best LSTM model with no further search. Models noted tied use weight tying on the embedding and softmax weights.

4 Conclusion

In this work, we revisit regularization in the form of activation regularization (AR) and temporal activation regularization (TAR). While simple to implement, AR and TAR are competitive with other far more complex regularization techniques and offer equivalent or better results. The improvements that these techniques provide can likely be combined with other regularization techniques, such as the variational LSTM, and may lead to further improvements in performance, especially if subjected to an extensive hyperparameter search.

Sample generated text

For generating text samples, words were sampled using the standard generation script contained in the PyTorch word-level language modeling example. WikiText-2 was used given its larger vocabulary and more realistic-looking text. Neither the unknown-word token nor the end-of-sequence token was allowed to be selected. Each paragraph is a separate sample of text, with the tokens following Moses (Koehn et al., 2007): words joined with @-@ and decimal points split to a @.@ token.

 

” Something Borrowed ” is the second episode of the fourth season of the American comedy television series The X @-@ Files . The episode was written by David McCarthy and directed by Mark Sacks . It aired in the United States on November 30 , 2011 , as a two @-@ episode episode, watched by 4 @.@ 9 million viewers and was the highest rated show on the Fox network .

The work of Olivier ’s , a large 1950s table with the center of a vinyl beam , was used for bony motifs from the upper @-@ production model via the Club van X . The modified works were released in the museum , which gave its namesake to the visual designers in Hong Kong .

The first prototype was released for the PlayStation 4 , containing the 2 @.@ 5 part series , with 3 @.@ 5 million copies sold . In October 2010 , Activision announced that both the game and the main gameplay was “ downloadable ” . The first game , titled Snow : The Game of the Battlefield 2 : The Ultimate Warrior , was the third anime game , and was released in August 2016 .

The German Land Forces had been reversed in the early 1990s , although the Soviet Union continued to deter NDH forces in the nation . The area was moved to Sarajevo , and the troops were despatched to the National Register of Historic Places in the summer of 1918 for the establishment of full political and social parties . The Polish language was protected by the Soviet Union , which was the first Polish continental conflict of the newly formed Union in North America , and the Polish Front with the last of the Polish Communist Party .

References