Frustratingly Short Attention Spans in Neural Language Modeling

02/15/2017 ∙ by Michał Daniluk, et al. ∙ UCL

Neural language models predict the next token using a latent representation of the immediate token history. Recently, various methods for augmenting neural language models with an attention mechanism over a differentiable memory have been proposed. For predicting the next token, these models query information from a memory of the recent history, which can facilitate learning mid- and long-range dependencies. However, conventional attention mechanisms used in memory-augmented neural language models produce a single output vector per time step. This vector is used both for predicting the next token and as the key and value of a differentiable memory of the token history. In this paper, we propose a neural language model with a key-value attention mechanism that outputs separate representations for the key and value of a differentiable memory, as well as for encoding the next-word distribution. This model outperforms existing memory-augmented neural language models on two corpora. Yet, we found that our method mainly utilizes a memory of the five most recent output representations. This led to the unexpected main finding that a much simpler model based only on the concatenation of recent output representations from previous time steps is on par with more sophisticated memory-augmented neural language models.


1 Introduction

At the core of language models (LMs) is their ability to infer the next word given a context. This requires representing context-specific dependencies in a sequence across different time scales. On the one hand, classical n-gram language models explicitly capture relevant dependencies between words over short distances, but suffer from data sparsity. Neural language models, on the other hand, maintain and update a dense vector representation over a sequence where time dependencies are captured implicitly (Mikolov et al., 2010). A recent extension of neural sequence models is the attention mechanism (Bahdanau et al., 2015), which can capture long-range connections more directly. However, we argue that applying such an attention mechanism directly to neural language models requires output vectors to fulfill several purposes at the same time: they need to (i) encode a distribution for predicting the next token, (ii) serve as a key to compute the attention vector, as well as (iii) encode relevant content to inform future predictions.

We hypothesize that such overloaded use of output representations makes training the model difficult and propose a modification to the attention mechanism which separates these functions explicitly, inspired by Miller et al. (2016); Ba et al. (2016); Reed & de Freitas (2015); Gulcehre et al. (2016). Specifically, at every time step our neural language model outputs three vectors. The first is used to encode the next-word distribution, the second serves as key, and the third as value for an attention mechanism. We term the model key-value-predict attention and show that it outperforms existing memory-augmented neural language models on the Children’s Book Test (CBT, Hill et al., 2016) and a new corpus of Wikipedia articles. However, we observed that this model pays attention mainly to the previous five memories. We thus also experimented with a much simpler model that only uses a concatenation of output vectors from the previous time steps for predicting the next token. This simple model is on par with more sophisticated memory-augmented neural language models. Thus, our main finding is that modeling short attention spans properly works well and provides notable improvements over a neural language model with attention. Conversely, it seems to be notoriously hard to train neural language models to leverage long-range dependencies.

In this paper, we investigate various memory-augmented neural language models and compare them against previous architectures. Our contributions are threefold: (i) we propose a key-value attention mechanism that uses specific output representations for querying a sliding-window memory of previous token representations, (ii) we demonstrate that while this new architecture outperforms previous memory-augmented neural language models, it mainly utilizes a memory of the previous five representations, and finally (iii) based on this observation we experiment with a much simpler but effective model that uses the concatenation of three previous output representations to predict the next word.

2 Methods

In the following, we discuss methods for extending neural language models with differentiable memory. We first present a standard attention mechanism for language modeling (§2.1). Subsequently, we introduce two methods for separating the usage of output vectors in the attention mechanism: (i) using a dedicated key and value (§2.2), and (ii) further separating the value into a memory value and a representation that encodes the next-word distribution (§2.3). Finally, we describe a very simple method that concatenates previous output representations for predicting the next token (§2.4).

(a) Neural language model with attention.
(b) Key-value separation.
(c) Key-value-predict separation.
(d) Concatenation of previous output representations.
Figure 5: Memory-augmented neural language modelling architectures.

2.1 Attention for Neural Language Modeling

Augmenting a neural language model with attention (Bahdanau et al., 2015) is straightforward. We simply take the previous $L$ output vectors as memory $Y_t = [h_{t-L} \cdots h_{t-1}] \in \mathbb{R}^{k \times L}$, where $k$ is the output dimension of a Long Short-Term Memory (LSTM) unit (Hochreiter & Schmidhuber, 1997). This memory could in principle contain all previous output representations, but for practical reasons we only keep a sliding window of the $L$ previous outputs. Let $h_t \in \mathbb{R}^k$ be the output representation at time step $t$ and $\mathbf{1} \in \mathbb{R}^L$ be a vector of ones.

The attention weights are computed from a comparison of the current and previous LSTM outputs. Subsequently, the context vector is calculated from a sum over previous output vectors weighted by their respective attention value. This can be formulated as

$M_t = \tanh(W^Y Y_t + (W^h h_t)\,\mathbf{1}^\top)$   (1)
$\alpha_t = \mathrm{softmax}(w^\top M_t)$   (2)
$r_t = Y_t\, \alpha_t^\top$   (3)

where $W^Y, W^h \in \mathbb{R}^{k \times k}$ are trainable projection matrices and $w \in \mathbb{R}^k$ is a trainable vector. The final representation $h^*_t$ that encodes the next-word distribution is computed from a non-linear combination of the attention-weighted representation $r_t$ of previous outputs and the final output vector $h_t$ via

$h^*_t = \tanh(W^r r_t + W^x h_t)$   (4)

where $W^r, W^x \in \mathbb{R}^{k \times k}$ are trainable projection matrices. An overview of this architecture is depicted in Figure 5a. Lastly, the probability distribution $y_t$ over the next word is represented by

$y_t = \mathrm{softmax}(W^* h^*_t + b)$   (5)

where $W^* \in \mathbb{R}^{|V| \times k}$ and $b \in \mathbb{R}^{|V|}$ are a trainable projection matrix and bias, respectively.
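To make the data flow of Equations 1-5 concrete, the following NumPy sketch computes a single attention step. It is an illustrative reconstruction rather than the authors' code: the parameter names (W_Y, W_h, w, W_r, W_x, W_out, b) mirror the notation above, and the random tensors are stand-ins for trained weights and actual LSTM outputs.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Illustrative sizes: LSTM output dimension k, window length L, vocabulary size V.
k, L, V = 64, 5, 1000
rng = np.random.default_rng(0)

# Trainable parameters (random stand-ins for trained weights).
W_Y, W_h = rng.standard_normal((k, k)), rng.standard_normal((k, k))
w = rng.standard_normal(k)
W_r, W_x = rng.standard_normal((k, k)), rng.standard_normal((k, k))
W_out, b = rng.standard_normal((V, k)), np.zeros(V)

h_t = rng.standard_normal(k)       # current LSTM output
Y_t = rng.standard_normal((k, L))  # sliding window of the L previous LSTM outputs

M_t = np.tanh(W_Y @ Y_t + (W_h @ h_t)[:, None])  # Eq. 1: compare h_t with each memory column
alpha_t = softmax(w @ M_t)                        # Eq. 2: attention weights over the window
r_t = Y_t @ alpha_t                               # Eq. 3: attention-weighted context vector
h_star = np.tanh(W_r @ r_t + W_x @ h_t)           # Eq. 4: combine context and current output
p_next = softmax(W_out @ h_star + b)              # Eq. 5: distribution over the next word
```

Note that the same output vectors $h_{t-L}, \ldots, h_{t-1}$ serve as keys for addressing (Eq. 1) and as values for the read-out (Eq. 3), which is precisely the overloading the next sections remove.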

2.2 Key-Value Attention

Inspired by Miller et al. (2016); Ba et al. (2016); Reed & de Freitas (2015); Gulcehre et al. (2016), we introduce a key-value attention model that separates output vectors $h_t$ into keys $k_t$ used for calculating the attention distribution $\alpha_t$, and a value part $v_t$ used for encoding the next-word distribution and context representation. This model is depicted in Figure 5b. Formally, we rewrite Equations 1-4 as follows:

$[k_t ; v_t] = h_t$   (6)
$M_t = \tanh(W^Y [k_{t-L} \cdots k_{t-1}] + (W^h k_t)\,\mathbf{1}^\top)$   (7)
$\alpha_t = \mathrm{softmax}(w^\top M_t)$   (8)
$r_t = [v_{t-L} \cdots v_{t-1}]\, \alpha_t^\top$   (9)
$h^*_t = \tanh(W^r r_t + W^x v_t)$   (10)

In essence, Equation 7 compares the key $k_t$ at time step $t$ with the $L$ previous keys to calculate the attention distribution $\alpha_t$, which is then used in Equation 9 to obtain a weighted context representation from the values associated with these keys.
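As an illustration of Equations 6-10, here is a minimal NumPy sketch of one key-value attention step. The halving of the output vector, the random weights, and the variable names are assumptions for exposition, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

k, L = 64, 5                  # full LSTM output dimension and window length (illustrative)
d = k // 2                    # key and value each take one half of the output vector
rng = np.random.default_rng(1)

W_Y, W_h = rng.standard_normal((d, d)), rng.standard_normal((d, d))
w = rng.standard_normal(d)
W_r, W_x = rng.standard_normal((d, d)), rng.standard_normal((d, d))

h_t = rng.standard_normal(k)              # current LSTM output
H_prev = rng.standard_normal((k, L))      # previous LSTM outputs, one column per time step

k_t, v_t = h_t[:d], h_t[d:]               # Eq. 6: split into key and value
K_prev, V_prev = H_prev[:d], H_prev[d:]   # the same split applied to the memory

M_t = np.tanh(W_Y @ K_prev + (W_h @ k_t)[:, None])  # Eq. 7: keys address the memory
alpha_t = softmax(w @ M_t)                           # Eq. 8: attention weights
r_t = V_prev @ alpha_t                               # Eq. 9: values are what gets read out
h_star = np.tanh(W_r @ r_t + W_x @ v_t)              # Eq. 10: value also feeds the prediction
```

Relative to Section 2.1, nothing else changes; only which part of each memory entry is compared against (the key) and which part is read out (the value) differs.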

2.3 Key-Value-Predict Attention

Even with a key-value separation, a potential problem is that the same representation $v_t$ is still used both for encoding the probability distribution of the next word and for retrieval from the memory via the attention mechanism. Thus, we experimented with another extension of this model where we further separate $h_t$ into a key, a value, and a predict representation, where the latter is only used for encoding the next-word distribution (see Figure 5c). To this end, Equations 6 and 10 are replaced by

$[k_t ; v_t ; p_t] = h_t$   (11)
$h^*_t = \tanh(W^r r_t + W^x p_t)$   (12)

More precisely, the output vector $h_t$ is divided into three equal parts: key, value, and predict. In our implementation we simply split the output vector into $k_t$, $v_t$, and $p_t$. To this end, the hidden dimension of the key-value-predict attention model needs to be a multiple of three; consequently, $k_t$, $v_t$, and $p_t$ each receive a third of the hidden dimension.
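A minimal sketch of the three-way split in Equation 11, assuming a hidden dimension divisible by three; the sizes and names are illustrative only.

```python
import numpy as np

k = 99                                              # hidden size; must be a multiple of three
h_t = np.random.default_rng(2).standard_normal(k)   # stand-in for the current LSTM output

# Eq. 11: one third each for memory addressing, memory content, and prediction.
key_t, value_t, predict_t = np.split(h_t, 3)

# Eq. 12 then reuses the key-value attention of Section 2.2: key_t addresses the memory,
# value_t is stored as memory content, and predict_t replaces the value in
# h*_t = tanh(W^r r_t + W^x p_t) before the softmax output layer.
```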

2.4 N-gram Recurrent Neural Network

Neural language models often work best in combination with traditional n-gram models (Mikolov et al., 2011; Chelba et al., 2013; Williams et al., 2015; Ji et al., 2016; Shazeer et al., 2015), since the former excel at generalization while the latter ensure memorization. In addition, from initial experiments with memory-augmented neural language models, we found that usually only the previous five output representations are utilized. This is in line with observations by Tran et al. (2016). Hence, we experiment with a much simpler architecture depicted in Figure 5d. Instead of an attention mechanism, the output representations from the previous $N-1$ time steps are directly used to calculate next-word probabilities. Specifically, at every time step we split the LSTM output into $N-1$ vectors $[h^1_t, \ldots, h^{N-1}_t]$ and replace Equation 4 with

$h^*_t = \tanh(W^N [h^1_t ; h^2_{t-1} ; \cdots ; h^{N-1}_{t-N+2}])$   (13)

where $W^N \in \mathbb{R}^{k \times k}$ is a trainable projection matrix. This model is related to higher-order RNNs (Soltani & Jiang, 2016) with the difference that we do not incorporate output vectors from the previous steps into the hidden state, but only use them for predicting the next word. Furthermore, note that at time step $t$ the first part of the output vector will contribute to predicting the next word, the second part will contribute to predicting the second word thereafter, and so on. As the output vectors from the previous $N-1$ time steps are used to score the next word, we call the resulting model an $N$-gram RNN.
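The following NumPy sketch illustrates Equation 13 for a hypothetical 4-gram RNN ($N = 4$); the dimensions and the random stand-ins for the LSTM outputs and $W^N$ are assumptions.

```python
import numpy as np

k, N = 90, 4                 # LSTM output size and the N of the N-gram RNN (illustrative)
d = k // (N - 1)             # each of the N - 1 time steps contributes a slice of size d
rng = np.random.default_rng(3)
W_N = rng.standard_normal((k, k))

# Random stand-ins for the LSTM outputs at time steps t, t-1, and t-2.
h_t, h_tm1, h_tm2 = rng.standard_normal(k), rng.standard_normal(k), rng.standard_normal(k)

# Eq. 13 for N = 4: first slice of h_t, second slice of h_{t-1}, third slice of h_{t-2}.
stacked = np.concatenate([h_t[:d], h_tm1[d:2 * d], h_tm2[2 * d:3 * d]])
h_star = np.tanh(W_N @ stacked)   # replaces Eq. 4 and is fed into the softmax output layer
```

There is no attention and no extra memory here; the "memory" is simply the concatenation of slices produced at the preceding time steps.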

3 Related Work

Early attempts at using memory in neural networks were made by Taylor (1959) and Steinbuch & Piske (1963), who performed nearest-neighbor operations on input vectors and fitted parametric models to the retrieved sets. The dedicated use of external memory in neural architectures has more recently witnessed increased interest.

Weston et al. (2015) introduced Memory Networks to explicitly segregate memory storage from the computation of the neural network, and Sukhbaatar et al. (2015) trained this model end-to-end with an attention-based memory addressing mechanism. The Neural Turing Machines of Graves et al. (2014) add an external differentiable memory with read-write functions to a controller recurrent neural network and have shown promising results on simple sequence tasks such as copying and sorting. These models make use of external memory, whereas our model directly uses a short sequence from the history of tokens to dynamically populate an addressable memory.

In sequence modeling, RNNs such as LSTMs (Hochreiter & Schmidhuber, 1997) maintain an internal memory state as they process an input sequence. Attending over previous state outputs on top of an RNN encoder has improved performance in a wide range of tasks, including machine translation (Bahdanau et al., 2015), recognizing textual entailment (Rocktäschel et al., 2016), sentence summarization (Rush et al., 2015), image captioning (Xu et al., 2015), and speech recognition (Chorowski et al., 2015).

Recently, Cheng et al. (2016) proposed an architecture that modifies the standard LSTM by replacing the memory cell with a memory network (Weston et al., 2015). Another proposal for conditioning on previous output representations is Higher-order Recurrent Neural Networks (HORNNs, Soltani & Jiang, 2016). Soltani & Jiang found it useful to include information from multiple preceding RNN states when computing the next state. This previous work centers on preceding state vectors, whereas we investigate attention mechanisms on top of RNN outputs, i.e., the vectors used for predicting the next word. Furthermore, instead of pooling we use attention vectors to calculate a context representation of previous memories.

Yang et al. (2016) introduced a reference-aware neural language model where at every position a latent variable determines from which source a target token is generated, e.g., by copying entries from a table or referencing entities that were mentioned earlier.

Another class of models that incorporate memory into sequence modeling are Recurrent Memory Networks (RMNs) (Tran et al., 2016). Here, a memory block accesses the most recent input words to selectively attend over relevant word representations from a global vocabulary. RMNs use a global memory with two input word vector look-up tables for the attention mechanism, and consequently have a large number of trainable parameters. Instead, we propose models that need far fewer parameters by producing the vectors that will be attended over in the future, which can be seen as a memory that is dynamically populated by the language model.

Finally, the functional separation of look-up keys and memory content has been found useful for Memory Networks (Miller et al., 2016), Neural Programmer-Interpreters (Reed & de Freitas, 2015), Dynamic Neural Turing Machines (Gulcehre et al., 2016), and Fast Associative Memory (Ba et al., 2016). We apply and extend this principle to neural language models.

4 Experiments

We evaluate our models on two different corpora for language modeling. The first is a subset of the Wikipedia corpus (available at https://goo.gl/s8cyYa). It consists of 7500 English Wikipedia articles (dump from 6 Feb 2015) belonging to one of the following categories: People, Cities, Countries, Universities, and Novels. We chose these categories as we expect articles in them to often contain references to previously mentioned entities. Subsequently, we split this corpus into train, development, and test parts. We map all numbers to a dedicated numerical symbol and restrict the vocabulary to the most frequent words, encompassing 97% of the training vocabulary; all other words are replaced by the UNK symbol. In addition to this Wikipedia corpus, we also run experiments on the Children's Book Test (CBT; Hill et al., 2016). While this corpus is designed for cloze-style question answering, in this paper we use it to test how well language models can exploit wider linguistic context.

4.1 Training Procedure

We use ADAM (Kingma & Ba, 2015) with a fixed initial learning rate and mini-batch size for optimization. Furthermore, we apply gradient clipping on the norm of the gradients (Pascanu et al., 2013). The bias of the LSTM's forget gate is initialized to a positive value (Jozefowicz et al., 2016), while other parameters are initialized uniformly from a small symmetric range. Backpropagation Through Time (Rumelhart et al., 1985; Werbos, 1990) was used to train the network with a fixed number of unrolling steps. We reset the hidden states between articles for the Wikipedia corpus and between stories for CBT, respectively. We take the best configuration based on performance on the validation set and evaluate it on the test set.
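A minimal PyTorch sketch of this training setup (Adam, gradient-norm clipping, forget-gate bias initialization, truncated BPTT). The concrete hyperparameter values, the model and decoder definitions, and the helper names are placeholders, since the original values are not reproduced here.

```python
import torch

# Placeholder hyperparameters; the paper's concrete values are not given in this copy.
LEARNING_RATE, CLIP_NORM, FORGET_BIAS, VOCAB = 1e-3, 5.0, 1.0, 10_000

# Stand-in model: a single LSTM layer plus a softmax decoder (the paper's models
# additionally add attention on top of the LSTM outputs).
model = torch.nn.LSTM(input_size=128, hidden_size=384, batch_first=True)
decoder = torch.nn.Linear(model.hidden_size, VOCAB)

# Initialize the forget-gate slice of the input-to-hidden bias to a positive value.
# PyTorch orders the gates as (input, forget, cell, output) within each bias vector.
with torch.no_grad():
    for name, bias in model.named_parameters():
        if name.startswith("bias_ih"):
            h = model.hidden_size
            bias[h:2 * h].fill_(FORGET_BIAS)

params = list(model.parameters()) + list(decoder.parameters())
optimizer = torch.optim.Adam(params, lr=LEARNING_RATE)
loss_fn = torch.nn.CrossEntropyLoss()

def training_step(inputs, targets, states=None):
    """One truncated-BPTT step: forward, backward, clip gradients, update."""
    optimizer.zero_grad()
    outputs, states = model(inputs, states)          # outputs: (batch, steps, hidden)
    logits = decoder(outputs)                        # (batch, steps, vocab)
    loss = loss_fn(logits.flatten(0, 1), targets.flatten())
    loss.backward()
    torch.nn.utils.clip_grad_norm_(params, CLIP_NORM)  # clip by global norm (Pascanu et al.)
    optimizer.step()
    # Detach the hidden states so gradients do not flow across truncation boundaries;
    # states are reset to None at article/story boundaries as described above.
    return loss.item(), tuple(s.detach() for s in states)
```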

5 Results

(a) Test perplexity of different attention architectures (RM(+tM-g) (Tran et al., 2016), Attention, Key-Value, and Key-Value-Predict) for attention window sizes of 1, 5, 10, and 15. The best perplexity per model is shown in italics.
(b) Comparison of N-gram neural language models: development and test perplexity for different values of N, together with the input size, hidden size, and total number of model parameters.
(c) Summary of models with their best attention window size: RNN, LSTM, FOFE HORNN (3rd order) and Gated HORNN (3rd order) (Soltani & Jiang, 2016), RM(+tM-g) (Tran et al., 2016), Attention, Key-Value, Key-Value-Predict, and N-gram RNN, reporting development and test perplexity and the total number of model parameters, both including and excluding word representations.
(d) Accuracy on the CBT test set for predicting named entities, common nouns, verbs, and prepositions, comparing humans (context+query), Kneser-Ney LMs (with and without cache), an LSTM (context+query), a Memory Network, AS Reader ensembles (Kadlec et al., 2016), QANN (Weissenborn, 2016), the AoA Reader (Cui et al., 2016a), the CAS Reader (Cui et al., 2016b), a GA Reader ensemble (Dhingra et al., 2016), an EpiReader ensemble (Trischler et al., 2016), FOFE and Gated HORNNs (3rd order) (Soltani & Jiang, 2016), RM(+tM-g) (Tran et al., 2016), and our LSTM, Attention, Key-Value, Key-Value-Predict, and N-gram RNN models; some results are taken from Hill et al. (2016).
Figure 6: Perplexities of memory-augmented neural language models on the Wikipedia corpus (a-c) and accuracies on the CBT test set (d).
Figure 9: Attention weights of the Key-Value-Predict model on a randomly sampled Wikipedia article (a) and average attention weight distribution on the whole Wikipedia test set for RM(+tM-g), Attention, Key-Value and Key-Value-Predict models (b). The rightmost positions represent the most recent history.

In the first set of experiments we explore how well the proposed models and Tran et al.'s Recurrent-memory Model can make use of histories of varying lengths. Perplexity results for different attention window sizes on the Wikipedia corpus are summarized in Figure 6a. The average attention these models pay to specific positions in the history is illustrated in Figure 9. We observed that although our models attend over tokens further in the past more often than the Recurrent-memory Model, attending over a longer history does not significantly improve the perplexity of any attentive model.

The much simpler N-gram RNN model achieves comparable results (Figure 6b) and seems to work best with a history of the previous three output vectors, i.e., the 4-gram RNN. As a result, we choose this configuration for the following N-gram RNN experiments.

5.1 Comparison with state-of-the-art models

In the next set of experiments, we compared our proposed models against a variety of state-of-the-art models on the Wikipedia and CBT corpora. Results are shown in Figures 6c and 6d, respectively. Note that the models presented here do not achieve state-of-the-art results on CBT, as they are language models and not tailored towards cloze-style question answering. Thus, we merely use this corpus for comparing different neural language model architectures. We reimplemented the Recurrent-Memory model by Tran et al. (2016) with the temporal matrix and gating composition function (RM+tM-g). Furthermore, we reimplemented Higher Order Recurrent Neural Networks (HORNNs) by Soltani & Jiang (2016).

To ensure a comparable number of parameters to a vanilla LSTM model, we adjusted the hidden sizes of all models so that they have roughly the same total number of model parameters. The attention window size and the N of the N-gram RNN model were set according to the best validation set perplexity on the Wikipedia corpus. Below we discuss the results in detail.

Attention

By using a neural language model with an attention mechanism over a dynamically populated memory, we observed a lower perplexity than with a vanilla LSTM on Wikipedia, but only notable differences for predicting verbs and prepositions on CBT. This indicates that incorporating mechanisms for querying previous output vectors is useful for neural language modeling.

Key-Value

Decomposing the output vector into a key-value paired memory further improves the perplexity compared to both the baseline LSTM and the RM(+tM-g) model. Again, for CBT we see only small improvements.

Key-Value-Predict

By further separating the output vector into a key, value, and next-word prediction part, we obtain the lowest perplexity of all models, improving over the baseline LSTM, the RM(+tM-g) model, and the model that only splits the output into a key and value. For CBT, we see accuracy increases for verbs and prepositions. As stated earlier, the performance of the Key-Value-Predict model does not improve significantly when increasing the attention window size. This leads to the conclusion that none of the attentive models investigated in this paper can utilize a large memory of previous token representations. Moreover, none of the presented methods differ significantly for predicting common nouns and named entities in CBT.

N-gram RNN

Our main finding is that the simple modification of using output vectors from the previous time steps for next-word prediction leads to perplexities that are on par with or better than those of more complicated neural language models with attention. Specifically, the N-gram RNN achieves only slightly worse perplexities than the Key-Value-Predict architecture.

6 Conclusion

In this paper, we observed that a neural language model with an attention mechanism that separates output vectors into a key, value, and predict part outperforms simpler attention mechanisms on a Wikipedia corpus and the Children's Book Test (CBT, Hill et al., 2016). However, we found that all attentive neural language models mainly utilize a memory of only the most recent history and fail to exploit long-range dependencies. In fact, a much simpler N-gram RNN model, which only uses a concatenation of output representations from the previous three time steps, is on par with more sophisticated memory-augmented neural language models. Training neural language models that take long-range dependencies into account seems notoriously hard and needs further investigation. Thus, for future work we want to investigate ways to encourage attending over a longer history, for instance by forcing the model to ignore the local context and only allowing attention over output representations beyond the local history.

Acknowledgments

This work was supported by Microsoft Research and the Engineering and Physical Sciences Research Council through PhD Scholarship Programmes, an Allen Distinguished Investigator Award, and a Marie Curie Career Integration Award.

References