Effective regularization methods for RNNs have received extensive focus in recent years. While applying dropout (Srivastava et al., 2014) to the input and output of an RNN has been shown to be effective (Zaremba et al., 2014), dropout is destructive when naively applied to the recurrent connections of an RNN, making it almost impossible for the network to retain information over long periods of time.
Given this fundamental issue, substantial work has gone into understanding and improving dropout when applied to recurrent connections. Of these techniques, which we shall broadly refer to as recurrent dropout, some specific variations have gained popular usage.
Variational RNNs (Gal & Ghahramani, 2016) drop the same network units at each timestep, as opposed to dropping different network units at each timestep. By performing dropout on the same units at each timestep, destructive loss of the RNN hidden state is avoided and the same information is masked at each timestep.
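The distinction from standard dropout can be sketched in a few lines of plain Python (the function names here are illustrative, not from any library): the mask is sampled once per sequence and then reused at every timestep.

```python
import random

def sample_variational_mask(hidden_size, p, rng):
    """Sample one inverted-dropout mask per sequence.

    Standard dropout would resample this mask at every timestep;
    the variational variant reuses it, so the same units are
    masked for the entire sequence.
    """
    scale = 1.0 / (1.0 - p)  # inverted dropout rescaling
    return [scale if rng.random() >= p else 0.0 for _ in range(hidden_size)]

def mask_sequence(hidden_states, mask):
    """Apply the same mask to the hidden state at every timestep."""
    return [[h * m for h, m in zip(state, mask)] for state in hidden_states]

rng = random.Random(0)
mask = sample_variational_mask(4, p=0.5, rng=rng)
states = [[1.0] * 4, [2.0] * 4, [3.0] * 4]
dropped = mask_sequence(states, mask)
# The same positions are zeroed at every timestep, so the masked
# information is consistent across the sequence.
```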
Rather than dropping units, another tactic is to drop updates to given network units. Semeniuta et al. (2016) perform dropout on the input gate of the LSTM (Hochreiter & Schmidhuber, 1997) but allow the forget gate to discard portions of the existing hidden state. Zoneout (Krueger et al., 2016) prevents hidden state updates from occurring by setting a randomly selected subset of network unit activations in h_{t+1} to be equal to the previous activations from h_t. Both of these act to prevent updates to the hidden state while preserving existing content.
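The zoneout update can be sketched in a few lines (an illustrative sketch, not the authors' code): for each unit, a coin flip decides whether to keep the previous activation or accept the newly computed one.

```python
import random

def zoneout_step(h_prev, h_new, p, rng):
    """Zoneout: with probability p, a unit keeps its previous
    activation h_{t-1} instead of its newly computed one h_t."""
    return [prev if rng.random() < p else new
            for prev, new in zip(h_prev, h_new)]

rng = random.Random(0)
h_prev = [1.0, 1.0, 1.0]
h_new = [5.0, 5.0, 5.0]
# p = 0 accepts every update; p = 1 freezes the hidden state entirely.
assert zoneout_step(h_prev, h_new, 0.0, rng) == h_new
assert zoneout_step(h_prev, h_new, 1.0, rng) == h_prev
```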
On an extreme end, work has also been done to restrict the recurrent matrices in an RNN in order to limit their computational capacity. Some RNN architectures only allow element-wise interactions (Balduzzi & Ghifary, 2016; Bradbury et al., 2016; Seo et al., 2016), removing the recurrent matrix entirely, while others act to restrict the capacity by parameterizing the recurrent matrix (Arjovsky et al., 2016; Wisdom et al., 2016; Jing et al., 2016).
Other forms of regularization explicitly act upon the activations, such as batch normalization (Ioffe & Szegedy, 2015), recurrent batch normalization (Cooijmans et al., 2016), and layer normalization (Ba et al., 2016). These all introduce additional training parameters and can complicate the training process while increasing the sensitivity of the model. Norm stabilization (Krueger & Memisevic, 2015) penalizes the model when the norm of an RNN's hidden state changes substantially between timesteps, achieving strong results in character language modeling and phoneme recognition.
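Norm stabilization can be sketched as a penalty on the change in hidden-state norm between timesteps (a minimal plain-Python illustration, not the authors' implementation; the coefficient name is ours):

```python
import math

def norm_stabilizer(coeff, hidden_states):
    """Penalize changes in the hidden state norm between timesteps:
    coeff * sum_t (||h_t|| - ||h_{t-1}||)^2."""
    norms = [math.sqrt(sum(x * x for x in h)) for h in hidden_states]
    return coeff * sum((norms[t] - norms[t - 1]) ** 2
                       for t in range(1, len(norms)))

# Rotating the state costs nothing (the norm is preserved) ...
assert norm_stabilizer(1.0, [[3.0, 4.0], [5.0, 0.0]]) == 0.0
# ... but growing it is penalized: (10 - 5)^2 = 25.
assert norm_stabilizer(1.0, [[3.0, 4.0], [6.0, 8.0]]) == 25.0
```

Note that, unlike TAR below, this penalizes only changes in the *magnitude* of the hidden state, not changes in its direction.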
In this work, we revisit regularization in the form of activation regularization (AR) and temporal activation regularization (TAR). When applied to modern baselines that do not contain recurrent dropout or normalization techniques, AR and TAR achieve comparable or superior results.
Compared to other invasive regularization techniques which may require modifications to the RNN cell itself or complex model changes, both AR and TAR require no substantial modifications to the RNN or model. This enables AR and TAR to be applied to optimized RNN implementations such as the cuDNN LSTM which can be many times faster than naïve but flexible LSTM implementations.
2 Activation Regularization
2.1 Activation regularization (AR)
While L2 regularization is traditionally used on the weights of machine learning models (L2 weight decay), it can also be used on the activations. We define AR as

α L2(m ⊙ h_t)

where m is the dropout mask used by later parts of the model, L2(·) = ||·||_2 (the L2 norm), h_t is the output of the RNN at timestep t, and α is a scaling coefficient.
When applied to the output of a dense layer, AR penalizes activations that are substantially away from 0, encouraging the activations to remain small. While acting implicitly rather than explicitly, this has similarities to the various batch or layer normalization techniques.
The penalty on the RNN activations can be applied to h_t or to m ⊙ h_t (the dropped output used in the rest of the model). In our experiments, we found that applying AR to m ⊙ h_t was more effective, as this avoids penalizing neurons that were not updated during the current optimization step.
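Concretely, the AR term can be computed from the dropped RNN output in a few lines (a plain-Python sketch of the definition above; in practice this would operate on the framework's tensors):

```python
import math

def l2_norm(v):
    """L2(v) = ||v||_2."""
    return math.sqrt(sum(x * x for x in v))

def activation_regularization(alpha, h_t, dropout_mask=None):
    """AR penalty: alpha * L2(m ⊙ h_t).

    Applying the mask first means units dropped for this step
    contribute nothing to the penalty.
    """
    if dropout_mask is not None:
        h_t = [h * m for h, m in zip(h_t, dropout_mask)]
    return alpha * l2_norm(h_t)

# Activations far from zero incur a larger penalty: 2 * ||(3, 4)|| = 10.
assert activation_regularization(2.0, [3.0, 4.0]) == 10.0
# Dropped units (mask = 0) are excluded: 2 * ||(0, 4)|| = 8.
assert activation_regularization(2.0, [3.0, 4.0], dropout_mask=[0.0, 1.0]) == 8.0
```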
2.2 Temporal activation regularization (TAR)
Adding a prior that minimizes differences between states has been explored in the past. It falls under the broad concept of slowness regularization (Hinton, 1989; Földiák, 1991; Luciw & Schmidhuber, 2012; Jonschkowski & Brock, 2015; Wen et al., 2015), which attempts to minimize L(f(x_t), f(x_{t+1})), where L is a loss function describing the distance between f(x_t) and f(x_{t+1}) and f is an arbitrary mapping function.
Temporal activation regularization (TAR) is a direct descendant of this slowness regularization, minimizing

β L2(h_t − h_{t+1})

where L2(·) = ||·||_2 (the L2 norm), h_t is the output of the RNN at timestep t, and β is a scaling coefficient.
TAR penalizes any large changes in hidden state between timesteps, encouraging the model to keep its output as consistent as possible. For the LSTM, the hidden state which is regularized is only h_t, not the long term memory c_t, though the latter could optionally be regularized in a similar manner.
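The TAR term follows directly from the definition above (again a plain-Python sketch rather than framework code):

```python
import math

def temporal_activation_regularization(beta, h_t, h_next):
    """TAR penalty: beta * L2(h_t - h_{t+1}), penalizing large
    changes in the RNN output between adjacent timesteps."""
    diff = [a - b for a, b in zip(h_t, h_next)]
    return beta * math.sqrt(sum(d * d for d in diff))

# An unchanged hidden state costs nothing ...
assert temporal_activation_regularization(1.0, [1.0, 2.0], [1.0, 2.0]) == 0.0
# ... while an abrupt change is penalized by its L2 distance: ||(3, 4)|| = 5.
assert temporal_activation_regularization(1.0, [0.0, 0.0], [3.0, 4.0]) == 5.0
```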
3.1 Language Modeling
We benchmark activation regularization (AR) and temporal activation regularization (TAR) applied to a strong non-variational LSTM baseline (the PyTorch word level language modeling example: https://github.com/pytorch/examples/tree/master/word_language_model). The experiments use a preprocessed version of the Penn Treebank (PTB) (Mikolov et al., 2010) and WikiText-2 (WT2) (Merity et al., 2016). All hyperparameters, including α for AR and β for TAR, are optimized over the validation dataset. The best hyperparameters as determined by the validation results are then run on the test set.
Table 1 (AR on PTB):
Model | Parameters
PTB, LSTM (tied) | 13M
PTB, LSTM (tied) | 24M
PTB, LSTM (tied) | 51M

Table 2 (TAR on PTB):
Model | Parameters
PTB, LSTM (tied) | 13M
PTB, LSTM (tied) | 24M
PTB, LSTM (tied) | 51M
Model | Parameters
Inan et al. (2016) - Variational LSTM (tied) | 28M
Inan et al. (2016) - Variational LSTM (tied) + augmented loss | 28M
WT2, LSTM (tied) | 28M
WT2, LSTM (tied) | 28M
PTB: As the Penn Treebank is a small dataset, preventing overfitting is of considerable importance and a major focus of research. Almost all competitive models rely upon a form of recurrent dropout to ensure the RNN does not overfit through drastic changes in the hidden state. Other aggressive dropout techniques, such as performing dropout on the embedding layer such that entire words are dropped from a sequence, are also frequently used.
WT2: WikiText-2 is a dataset approximately twice as large as PTB but with a vocabulary three times larger. The text is also tokenized and processed in a manner similar to datasets used for machine translation using the Moses tokenizer (Koehn et al., 2007).
Experiment details: All experiments use a model containing a two layer RNN. The AR and TAR loss are only applied to the output of the final RNN layer, not to all layers. For the majority of experiments, we follow the medium model size of Zaremba et al. (2014): a two layer RNN with 650 hidden units in each layer.
For dropout, we have two different parameters: dp, the dropout rate used on the word vectors and the final RNN output, and a second rate used on the connection between RNN layers. All models use weight tying between the embedding and softmax layer (Inan et al., 2016; Press & Wolf, 2016).
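Because both penalties are plain additions to the training loss, they slot into a standard training loop without touching the RNN cell. A minimal sketch (an illustrative function of ours, not the paper's code), summing the two terms over a sequence of final-layer outputs:

```python
import math

def l2_norm(v):
    """L2(v) = ||v||_2."""
    return math.sqrt(sum(x * x for x in v))

def loss_with_ar_tar(base_loss, outputs, dropped_outputs, alpha, beta):
    """Add AR and TAR terms to an existing (e.g. cross-entropy) loss.

    outputs:         final-layer RNN outputs h_1..h_T, one vector per timestep
    dropped_outputs: the same outputs after dropout, m ⊙ h_t
    """
    # AR acts on the dropped outputs; TAR acts on adjacent raw outputs.
    ar = alpha * sum(l2_norm(h) for h in dropped_outputs)
    tar = beta * sum(
        l2_norm([a - b for a, b in zip(outputs[t], outputs[t + 1])])
        for t in range(len(outputs) - 1))
    return base_loss + ar + tar

# AR over dropped outputs: 5 + 0; TAR over raw outputs: 5; total 1 + 5 + 5.
assert loss_with_ar_tar(1.0, [[0.0, 0.0], [3.0, 4.0]],
                        [[3.0, 4.0], [0.0, 0.0]], 1.0, 1.0) == 11.0
```

Since the penalties read only the RNN's outputs, this same pattern applies unchanged to black-box implementations such as the cuDNN LSTM.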
Evaluating AR and TAR independently on PTB: To understand the potential of AR and TAR, we investigate their impact on language model perplexity when used independently in Table 1 (AR) and Table 2 (TAR). While both result in a substantial reduction in perplexity, AR provides the stronger improvement of the two. The perplexity drops achieved are equivalent to using an LSTM model with twice as many parameters - a substantial improvement given the simplicity of AR and TAR.
Evaluating AR and TAR jointly on PTB: When both AR and TAR are used together, we found the best result was achieved by decreasing α and β, likely because the model was over-regularized otherwise. In Table 3 we present PTB results for three different model sizes, comparing models without AR/TAR to those which use both. The model sizes were chosen to be comparable to other published results. With both AR and TAR, the smallest model improves over the baseline model. The improvements continue for the two larger model sizes, though the gains fall off as the model size is increased.
Model | Parameters
Zaremba et al. (2014) - LSTM (medium) | 20M
Zaremba et al. (2014) - LSTM (large) | 66M
Gal & Ghahramani (2016) - Variational LSTM (medium) | 20M
Gal & Ghahramani (2016) - Variational LSTM (medium, MC) | 20M
Gal & Ghahramani (2016) - Variational LSTM (large) | 66M
Gal & Ghahramani (2016) - Variational LSTM (large, MC) | 66M
Kim et al. (2016) - CharCNN | 19M
Merity et al. (2016) - Pointer Sentinel-LSTM | 21M
Inan et al. (2016) - Variational LSTM (tied) + augmented loss | 24M
Inan et al. (2016) - Variational LSTM (tied) + augmented loss | 51M
Zilly et al. (2016) - Variational RHN (tied) | 23M
Zoph & Le (2016) - NAS Cell (tied) | 25M
Zoph & Le (2016) - NAS Cell (tied) | 54M
PTB, LSTM (tied) | 13M
PTB, LSTM (tied) | 24M
PTB, LSTM (tied) | 51M
Comparing to state-of-the-art PTB: In Table 5 we summarize the current state of the art models in language modeling over the Penn Treebank.
The largest LSTM we train (51M parameters) achieves comparable results to the Recurrent Highway Network (RHN) (Zilly et al., 2016), a human-developed custom RNN architecture, albeit with approximately double the number of parameters. Although the LSTM uses twice as many parameters, the RHN runs its cell 10 times per timestep (referred to as recurrence depth), resulting in far more computation. This would likely make the RHN slower than the larger LSTM model during both training and prediction, especially when factoring in optimized LSTM implementations such as NVIDIA's cuDNN LSTM.
We also compare to the Neural Architecture Search (NAS) cell (Zoph & Le, 2016). While Zoph & Le (2016) do not report any of the hyperparameters or what type of dropout they used for their Penn Treebank result, they do note that they performed an extensive hyperparameter search over learning rate, weight initialization, dropout rates, and decay epoch in order to produce their best performing model. It is possible that a large contributor to their improved result was these tuned hyperparameters, as they did not compare their NAS cell results to a standard or variational LSTM cell subjected to the same extensive hyperparameter search. Our largest LSTM results are somewhat higher in perplexity by comparison, but have not undergone extensive hyperparameter search, do not use additional regularization techniques such as recurrent or embedding dropout, and do not use a custom RNN cell.
WikiText-2 Results: We compare our WikiText-2 results to Inan et al. (2016), who introduced weight tying between the embedding and softmax weights. While we did not perform any hyperparameter search over the coefficient values of α and β for AR and TAR, instead using the best values from PTB, we find them to still be quite effective. The baseline LSTM already achieves an improvement in perplexity over the variational LSTM models from Inan et al. (2016), including one which uses an augmented loss that modifies standard cross entropy with temperature and a KL divergence based loss. When the AR and TAR parameters optimized over PTB are used, perplexity falls further. This is not as strong an improvement as seen on the PTB dataset, which may be due to the increased complexity of the dataset (a larger vocabulary meaning a longer tail of usage, different genre, and so on) or may simply be due to the lack of hyperparameter tuning.
AR and TAR for GRU and RNN: While neither the GRU (Cho et al., 2014) nor the RNN is traditionally used in language modeling, we wanted to see how well AR and TAR generalize to other types of RNN cells. We applied the best values of α and β found for an LSTM cell to the GRU and RNN on PTB without any further search, shown in Table 6. These values are likely quite suboptimal but are sufficient for illustrative purposes. For the GRU, perplexity improved from the baseline. This is a positive sign given that the impact of these regularization techniques on a GRU is quite different to that on an LSTM. The LSTM only has h_t subjected to AR and TAR, leaving the long term memory c_t unregularized, but the GRU uses a single hidden state both as output at that timestep and as the hidden state input for the next timestep. For the RNN, the model did not train to acceptable levels at all without the application of AR and TAR. For the RNN, TAR likely forced the recurrent matrix to learn an approximation of the identity function in order to ensure h_t could produce h_{t+1}. This would be important given the weights in this model were randomly initialized, and suggests TAR acts as an implicit identity initialization constraint (Le et al., 2015).
Table 6:
Model | Parameters
PTB, RNN (tied) | 13M
PTB, RNN (tied) | 13M
PTB, GRU (tied) | 13M
PTB, GRU (tied) | 13M
In this work, we revisit regularization in the form of activation regularization (AR) and temporal activation regularization (TAR). While simple to implement, AR and TAR are competitive with far more complex regularization techniques and offer equivalent or better results. The improvements that these techniques provide can likely be combined with other regularization techniques, such as the variational LSTM, and may lead to further improvements in performance, especially if subjected to an extensive hyperparameter search.
Sample generated text
For generating text samples, words were sampled using the standard generation script contained in the PyTorch word level language modeling example. WikiText-2 was used given its larger vocabulary and more realistic looking text. Neither the unknown-word token nor the end-of-sequence token was allowed to be selected. Each paragraph is a separate sample of text. Tokenization follows Moses (Koehn et al., 2007), with hyphenated words joined by @-@ and decimal points split into a @.@ token.
” Something Borrowed ” is the second episode of the fourth season of the American comedy television series The X @-@ Files . The episode was written by David McCarthy and directed by Mark Sacks . It aired in the United States on November 30 , 2011 , as a two @-@ episode episode, watched by 4 @.@ 9 million viewers and was the highest rated show on the Fox network .
The work of Olivier ’s , a large 1950s table with the center of a vinyl beam , was used for bony motifs from the upper @-@ production model via the Club van X . The modified works were released in the museum , which gave its namesake to the visual designers in Hong Kong .
The first prototype was released for the PlayStation 4 , containing the 2 @.@ 5 part series , with 3 @.@ 5 million copies sold . In October 2010 , Activision announced that both the game and the main gameplay was “ downloadable ” . The first game , titled Snow : The Game of the Battlefield 2 : The Ultimate Warrior , was the third anime game , and was released in August 2016 .
The German Land Forces had been reversed in the early 1990s , although the Soviet Union continued to deter NDH forces in the nation . The area was moved to Sarajevo , and the troops were despatched to the National Register of Historic Places in the summer of 1918 for the establishment of full political and social parties . The Polish language was protected by the Soviet Union , which was the first Polish continental conflict of the newly formed Union in North America , and the Polish Front with the last of the Polish Communist Party .
- Arjovsky et al. (2016) Arjovsky, Martin, Shah, Amar, and Bengio, Yoshua. Unitary evolution recurrent neural networks. In International Conference on Machine Learning, pp. 1120–1128, 2016.
- Ba et al. (2016) Ba, Jimmy, Kiros, Jamie Ryan, and Hinton, Geoffrey E. Layer normalization. CoRR, abs/1607.06450, 2016.
- Balduzzi & Ghifary (2016) Balduzzi, David and Ghifary, Muhammad. Strongly-typed recurrent neural networks. arXiv preprint arXiv:1602.02218, 2016.
- Bradbury et al. (2016) Bradbury, James, Merity, Stephen, Xiong, Caiming, and Socher, Richard. Quasi-Recurrent Neural Networks. arXiv preprint arXiv:1611.01576, 2016.
- Cho et al. (2014) Cho, Kyunghyun, Van Merriënboer, Bart, Gulcehre, Caglar, Bahdanau, Dzmitry, Bougares, Fethi, Schwenk, Holger, and Bengio, Yoshua. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
- Cooijmans et al. (2016) Cooijmans, Tim, Ballas, Nicolas, Laurent, César, and Courville, Aaron C. Recurrent batch normalization. CoRR, abs/1603.09025, 2016.
- Földiák (1991) Földiák, Peter. Learning invariance from transformation sequences. Neural Computation, 3(2):194–200, 1991.
- Gal & Ghahramani (2016) Gal, Yarin and Ghahramani, Zoubin. A theoretically grounded application of dropout in recurrent neural networks. In NIPS, 2016.
- Hinton (1989) Hinton, Geoffrey E. Connectionist learning procedures. Artificial intelligence, 40(1-3):185–234, 1989.
- Hochreiter & Schmidhuber (1997) Hochreiter, Sepp and Schmidhuber, Jürgen. Long short-term memory. Neural Computation, 1997.
- Inan et al. (2016) Inan, Hakan, Khosravi, Khashayar, and Socher, Richard. Tying Word Vectors and Word Classifiers: A Loss Framework for Language Modeling. arXiv preprint arXiv:1611.01462, 2016.
- Ioffe & Szegedy (2015) Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
- Jing et al. (2016) Jing, Li, Shen, Yichen, Dubček, Tena, Peurifoy, John, Skirlo, Scott, Tegmark, Max, and Soljačić, Marin. Tunable Efficient Unitary Neural Networks (EUNN) and their application to RNN. arXiv preprint arXiv:1612.05231, 2016.
- Jonschkowski & Brock (2015) Jonschkowski, Rico and Brock, Oliver. Learning state representations with robotic priors. Auton. Robots, 39:407–428, 2015.
- Kim et al. (2016) Kim, Yoon, Jernite, Yacine, Sontag, David, and Rush, Alexander M. Character-aware neural language models. In Thirtieth AAAI Conference on Artificial Intelligence, 2016.
- Koehn et al. (2007) Koehn, Philipp, Hoang, Hieu, Birch, Alexandra, Callison-Burch, Chris, Federico, Marcello, Bertoldi, Nicola, Cowan, Brooke, Shen, Wade, Moran, Christine, Zens, Richard, Dyer, Chris, Bojar, Ondřej, Constantin, Alexandra, and Herbst, Evan. Moses: Open source toolkit for statistical machine translation. In ACL, 2007.
- Krueger & Memisevic (2015) Krueger, David and Memisevic, Roland. Regularizing rnns by stabilizing activations. CoRR, abs/1511.08400, 2015.
- Krueger et al. (2016) Krueger, David, Maharaj, Tegan, Kramár, János, Pezeshki, Mohammad, Ballas, Nicolas, Ke, Nan Rosemary, Goyal, Anirudh, Bengio, Yoshua, Larochelle, Hugo, Courville, Aaron, et al. Zoneout: Regularizing RNNs by randomly preserving hidden activations. arXiv preprint arXiv:1606.01305, 2016.
- Le et al. (2015) Le, Quoc V, Jaitly, Navdeep, and Hinton, Geoffrey E. A simple way to initialize recurrent networks of rectified linear units. arXiv preprint arXiv:1504.00941, 2015.
- Luciw & Schmidhuber (2012) Luciw, Matthew and Schmidhuber, Juergen. Low complexity proto-value function learning from sensory observations with incremental slow feature analysis. Artificial Neural Networks and Machine Learning–ICANN 2012, pp. 279–287, 2012.
- Merity et al. (2016) Merity, Stephen, Xiong, Caiming, Bradbury, James, and Socher, Richard. Pointer Sentinel Mixture Models. arXiv preprint arXiv:1609.07843, 2016.
- Mikolov et al. (2010) Mikolov, Tomas, Karafiát, Martin, Burget, Lukás, Cernocký, Jan, and Khudanpur, Sanjeev. Recurrent neural network based language model. In INTERSPEECH, 2010.
- Press & Wolf (2016) Press, Ofir and Wolf, Lior. Using the output embedding to improve language models. arXiv preprint arXiv:1608.05859, 2016.
- Semeniuta et al. (2016) Semeniuta, Stanislau, Severyn, Aliaksei, and Barth, Erhardt. Recurrent dropout without memory loss. In COLING, 2016.
- Seo et al. (2016) Seo, Minjoon, Min, Sewon, Farhadi, Ali, and Hajishirzi, Hannaneh. Query-Reduction Networks for Question Answering. arXiv preprint arXiv:1606.04582, 2016.
- Srivastava et al. (2014) Srivastava, Nitish, Hinton, Geoffrey E., Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958, 2014.
- Wen et al. (2015) Wen, Tsung-Hsien, Gasic, Milica, Mrksic, Nikola, Su, Pei-Hao, Vandyke, David, and Young, Steve. Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems. arXiv preprint arXiv:1508.01745, 2015.
- Wisdom et al. (2016) Wisdom, Scott, Powers, Thomas, Hershey, John, Le Roux, Jonathan, and Atlas, Les. Full-capacity unitary recurrent neural networks. In Advances in Neural Information Processing Systems, pp. 4880–4888, 2016.
- Zaremba et al. (2014) Zaremba, Wojciech, Sutskever, Ilya, and Vinyals, Oriol. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.
- Zilly et al. (2016) Zilly, Julian Georg, Srivastava, Rupesh Kumar, Koutník, Jan, and Schmidhuber, Jürgen. Recurrent highway networks. arXiv preprint arXiv:1607.03474, 2016.
- Zoph & Le (2016) Zoph, Barret and Le, Quoc V. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578, 2016.