Efficient Attention using a Fixed-Size Memory Representation

07/01/2017 · Denny Britz et al. · Google

The standard content-based attention mechanism typically used in sequence-to-sequence models is computationally expensive as it requires the comparison of large encoder and decoder states at each time step. In this work, we propose an alternative attention mechanism based on a fixed-size memory representation that is more efficient. Our technique predicts a compact set of K attention contexts during encoding and lets the decoder compute an efficient lookup that does not need to consult the memory. We show that our approach performs on-par with the standard attention mechanism while yielding inference speedups of 20% on real-world translation tasks, with larger gains on longer sequences. By visualizing attention scores we demonstrate that our models learn distinct, meaningful alignments.

1 Introduction

Sequence-to-sequence models (Sutskever et al., 2014; Cho et al., 2014) have achieved state-of-the-art results across a wide variety of tasks, including Neural Machine Translation (NMT) (Bahdanau et al., 2014; Wu et al., 2016), text summarization (Rush et al., 2015; Nallapati et al., 2016), speech recognition (Chan et al., 2015; Chorowski and Jaitly, 2016), image captioning (Xu et al., 2015), and conversational modeling (Vinyals and Le, 2015; Li et al., 2015).

The most popular approaches are based on an encoder-decoder architecture consisting of two recurrent neural networks (RNNs) and an attention mechanism that aligns target tokens to source tokens (Bahdanau et al., 2014; Luong et al., 2015). The typical attention mechanism used in these architectures computes a new attention context at each decoding step based on the current state of the decoder. Intuitively, this corresponds to looking at the source sequence after the output of every single target token.

Inspired by how humans process sentences, we believe it may be unnecessary to look back at the entire original source sequence at each step.[1] We thus propose an alternative attention mechanism (Section 3) that leads to smaller computational time complexity. Our method predicts K attention context vectors while reading the source, and learns to use a weighted average of these vectors at each step of decoding. Thus, we avoid looking back at the source sequence once it has been encoded. We show (Section 4) that this speeds up inference while performing on-par with the standard mechanism on both toy and real-world WMT translation datasets. We also show that our mechanism leads to larger speedups as sequences get longer. Finally, by visualizing the attention scores (Section 5), we verify that the proposed technique learns meaningful alignments, and that different attention context vectors specialize on different parts of the source.

[1] Eye-tracking and keystroke logging data from human translators show that translators generally do not reread previously translated source text words when producing target text (Carl et al., 2011).

2 Background

2.1 Sequence-to-Sequence Model with Attention

Our models are based on an encoder-decoder architecture with attention mechanism (Bahdanau et al., 2014; Luong et al., 2015). An encoder function takes as input a sequence of source tokens x = (x_1, ..., x_{|S|}) and produces a sequence of states h = (h_1, ..., h_{|S|}). The decoder is an RNN that predicts the probability of a target sequence y = (y_1, ..., y_{|T|}). The probability of each target token y_i is predicted based on the recurrent state in the decoder RNN, s_i, the previous words, y_{<i}, and a context vector c_i. The context vector c_i, also referred to as the attention vector, is calculated as a weighted average of the source states:

c_i = \sum_j \alpha_{ij} h_j    (1)
\alpha_{ij} = \frac{\hat{\alpha}_{ij}}{\sum_{j'} \hat{\alpha}_{ij'}}, \quad \hat{\alpha}_{ij} = f_{att}(s_i, h_j)    (2)

Here, f_att(s_i, h_j) is an attention function that calculates an unnormalized alignment score between the encoder state h_j and the decoder state s_i. Variants of f_att used in Bahdanau et al. (2014) and Luong et al. (2015) are:

f_{att}(s_i, h_j) = v_a^\top \tanh(W_a [s_i; h_j])    (Bahdanau et al., 2014)
f_{att}(s_i, h_j) = s_i^\top W_a h_j    (Luong et al., 2015)

where W_a and v_a are model parameters learned to predict alignment. Let |S| and |T| denote the lengths of the source and target sequences respectively, and let D denote the state size of the encoder and decoder RNN. Such content-based attention mechanisms result in inference times of O(D^2 |S| |T|),[2] as each context vector depends on the current decoder state and all encoder states, and requires an O(D^2) matrix multiplication.

[2] An exception is the dot-attention from Luong et al. (2015), which runs in O(D |S| |T|); we discuss it further in Section 3.

The decoder outputs a distribution over a vocabulary of fixed size V:

P(y_i \mid y_{<i}, x) = \text{softmax}(W [s_i; c_i] + b)    (3)

The model is trained end-to-end by minimizing the negative log likelihood of the target words using stochastic gradient descent.
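
To make the cost of the standard mechanism concrete, the following numpy sketch computes a single context vector with Bahdanau-style additive scoring, following Eqs. (1)-(2). It is an illustrative reimplementation, not the authors' code; the parameter names W_a and v_a simply mirror the notation above.

```python
# One decoding step of standard content-based attention (additive scoring).
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_context(decoder_state, encoder_states, W_a, v_a):
    """decoder_state: (D,), encoder_states: (S, D) -> context vector: (D,)"""
    S = encoder_states.shape[0]
    # Unnormalized alignment scores f_att(s_i, h_j) for every source position.
    scores = np.array([
        v_a @ np.tanh(W_a @ np.concatenate([decoder_state, encoder_states[j]]))
        for j in range(S)
    ])                       # O(D^2) work per source position, repeated |T| times
    alpha = softmax(scores)  # normalized attention weights, Eq. (2)
    return alpha @ encoder_states  # weighted average of source states, Eq. (1)

# Tiny example: D = 4 hidden units, S = 6 source tokens.
rng = np.random.default_rng(0)
D, S = 4, 6
h = rng.normal(size=(S, D))
s_i = rng.normal(size=D)
W_a = rng.normal(size=(D, 2 * D))
v_a = rng.normal(size=D)
print(attention_context(s_i, h, W_a, v_a).shape)  # (4,)
```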

3 Memory-Based Attention Model

Figure 1: Memory Attention model architecture. K attention vectors are predicted during encoding, and a linear combination of them is chosen during decoding.

Our proposed model is shown in Figure 1. During encoding, we compute an attention matrix C \in \mathbb{R}^{K \times D}, where K is the number of attention vectors and a hyperparameter of our method, and D is the dimensionality of the top-most encoder state. This matrix is computed by predicting a score vector \alpha_t \in \mathbb{R}^K at each encoding time step t. C is then a linear combination of the encoder states, weighted by \alpha_t:

C_k = \sum_{t=0}^{|S|} \alpha_{tk} h_t    (4)
\alpha_t = \text{softmax}(W_\alpha h_t)    (5)

where W_\alpha is a parameter matrix in \mathbb{R}^{K \times D}.

The computational time complexity for this operation is O(K D |S|). One can think of C as a compact fixed-length memory that the decoder will perform attention over. In contrast, standard approaches use a variable-length set of encoder states for attention. At each decoding step, we similarly predict K scores \beta \in \mathbb{R}^K. The final attention context c is a linear combination of the rows in C weighted by these scores. Intuitively, each decoder step predicts how important each of the K attention vectors is.

c = \sum_{k=0}^{K} \beta_k C_k    (6)
\beta = \text{softmax}(W_\beta s_i)    (7)

Here, s_i is the current state of the decoder, and W_\beta is a learned parameter matrix. Note that we do not access the encoder states at each decoder step. We simply take a linear combination of the attention matrix C pre-computed during encoding, a much cheaper operation that is independent of the length of the source sequence. The time complexity of this computation is O(K D |T|), as the multiplication with the K x D attention matrix needs to happen at each decoding step.

Summing O(K D |S|) from encoding and O(K D |T|) from decoding, we have a total linear computational complexity of O(K D (|S| + |T|)). As D is typically large (512 units in our NMT experiments), we expect our model to be faster than the standard attention mechanism running in O(D^2 |S| |T|). For long sequences (as in summarization, where |S| is large), we also expect our model to be faster than the cheaper dot-based attention mechanism, which needs O(D |S| |T|) computation time and requires the encoder and decoder state sizes to match.

We also experimented with using a sigmoid function instead of the softmax to compute the encoder and decoder attention scores, resulting in 4 possible combinations. We call this choice the scoring function. A softmax scoring function calculates normalized scores, while the sigmoid scoring function results in unnormalized scores that can be understood as gates.
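
The following numpy sketch illustrates the two halves of the proposed mechanism: building the K x D memory C in a single pass over the encoder states (Eqs. 4-5) and mixing its rows at each decoding step (Eqs. 6-7). It is a simplified illustration of the idea, not the released implementation; the parameter names W_alpha and W_beta and the pluggable sigmoid/softmax scoring function mirror the description above.

```python
# Minimal sketch of memory-based attention with K fixed attention contexts.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def encode_memory(encoder_states, W_alpha, score_fn=softmax):
    """encoder_states: (S, D) -> memory C: (K, D), Eqs. (4)-(5)."""
    K, D = W_alpha.shape[0], encoder_states.shape[1]
    C = np.zeros((K, D))
    for h_t in encoder_states:                 # one pass over the source
        alpha_t = score_fn(W_alpha @ h_t)      # K scores per encoder step
        C += np.outer(alpha_t, h_t)            # add weighted state to each row
    return C                                   # O(K * D * |S|) total

def decode_context(decoder_state, C, W_beta, score_fn=softmax):
    """decoder_state: (D,), C: (K, D) -> context: (D,), Eqs. (6)-(7)."""
    beta = score_fn(W_beta @ decoder_state)    # how important each row of C is
    return beta @ C                            # O(K * D), independent of |S|

# Example with S = 50 source states, D = 8, K = 4 attention contexts.
rng = np.random.default_rng(0)
S, D, K = 50, 8, 4
h = rng.normal(size=(S, D))
W_alpha, W_beta = rng.normal(size=(K, D)), rng.normal(size=(K, D))
C = encode_memory(h, W_alpha)
context = decode_context(rng.normal(size=D), C, W_beta, score_fn=sigmoid)
print(C.shape, context.shape)  # (4, 8) (8,)
```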

3.1 Model Interpretations

Our memory-based attention model can be understood intuitively in two ways. We can interpret it as "predicting" the set of attention contexts produced by a standard attention mechanism during encoding. To see this, assume we set K = |T|. In this case, we predict all attention contexts during the encoding stage and learn to choose the right one during decoding. This is cheaper than computing contexts one-by-one based on the decoder and encoder content. In fact, we could enforce this objective by first training a regular attention model and adding a regularization term to force the memory matrix C to be close to the context vectors computed by the standard attention. We leave it to future work to explore such an objective (a sketch of the idea follows below).
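
As a purely hypothetical sketch of that regularization idea (the paper does not train this objective), the penalty could take a form like the following, where standard_contexts denotes attention vectors obtained from a pre-trained standard attention model:

```python
# Hypothetical regularizer: keep the K memory rows C close to reference
# contexts produced by a pre-trained standard attention model. Illustrative
# only; the paper leaves this objective to future work.
import numpy as np

def memory_regularizer(C, standard_contexts, strength=0.1):
    """C: (K, D) predicted memory; standard_contexts: (K, D) reference contexts."""
    return strength * np.mean((C - standard_contexts) ** 2)
```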

Alternatively, we can interpret our mechanism as first predicting a compact memory matrix, a representation of the source sequence, and then performing location-based attention on that memory by picking which row of the matrix to attend to. The standard location-based attention mechanism, by contrast, predicts a location in the source sequence to focus on (Luong et al., 2015; Xu et al., 2015).

3.2 Position Encodings (PE)

In the above formulation, the predictions of attention contexts are symmetric. That is, no context vector C_k is forced to be different from the others. While we would hope for the model to learn to generate distinct attention contexts, we now present an extension that pushes the model in this direction. We add position encodings to the score matrix that force the first few context vectors to focus on the beginning of the sequence and the last few vectors to focus on the end (thereby encouraging in-between vectors to focus on the middle).

Explicitly, we multiply the score vector \alpha_t with position encodings l_t \in \mathbb{R}^K:

C_k^{PE} = \sum_{t=0}^{|S|} \alpha^{PE}_{tk} h_t    (8)
\alpha^{PE}_t = \text{softmax}(W_\alpha h_t) \circ l_t    (9)

To obtain l_t we first calculate a constant matrix L \in \mathbb{R}^{L_{max} \times K}, where we define each element as

L_{sk} = (1 - s / L_{max})(1 - k / K) + (s / L_{max})(k / K),    (10)

adapting a formula from Sukhbaatar et al. (2015). Here, k is the context vector index and L_{max} is the maximum sequence length across all source sequences. The manifold is shown graphically in Figure 2. We can see that earlier encoder states are upweighted in the first context vectors, and later states are upweighted in later vectors. The symmetry of the manifold and its stationary point having value 0.5 both follow from Eq. (10). The elements of the matrix that fall beyond the actual sequence lengths are then masked out, and the remaining elements are renormalized across the timestep dimension. This results in the jagged array of position encodings l.

Figure 2: Surface for the position encodings.
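
A small numpy sketch of how these position encodings could be computed, assuming the bilinear form given in Eq. (10) and applying the masking and renormalization as described above; the 0-based indexing is an assumption of this sketch.

```python
# Position encodings for one source sequence, per Section 3.2 (as reconstructed).
import numpy as np

def position_encodings(source_len, K, max_len):
    """Return a (source_len, K) matrix of position encodings."""
    s = np.arange(max_len)[:, None] / max_len   # source position, scaled to [0, 1)
    k = np.arange(K)[None, :] / max(K - 1, 1)   # context index,   scaled to [0, 1]
    L = (1 - s) * (1 - k) + s * k                # Eq. (10): early states weigh into
                                                 # early contexts, late into late ones
    L = L[:source_len]                           # mask positions past the sequence end
    return L / L.sum(axis=0, keepdims=True)      # renormalize over the timestep axis

pe = position_encodings(source_len=7, K=4, max_len=10)
print(pe.shape, pe.sum(axis=0))  # (7, 4) [1. 1. 1. 1.]
```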

4 Experiments

4.1 Toy Copying Experiment

Due to the reduction in computational time complexity, we expect our method to yield performance gains especially for longer sequences and for tasks where the source can be compactly represented in a fixed-size memory matrix. To investigate the trade-off between speed and performance, we compare our technique to standard models with and without attention on a Sequence Copy Task of varying length, as in Graves et al. (2014). We generated 4 training datasets of 100,000 examples each and a validation dataset of 1,000 examples. The vocabulary size was 20. For each dataset, the sequences had lengths randomly chosen between 0 and L, for L in {20, 50, 100, 200} unique to each dataset.
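
For concreteness, a minimal sketch of how such a copy dataset can be generated; the exact sampling details (e.g. how zero-length sequences and special tokens are handled) are assumptions, not the authors' data pipeline.

```python
# Generate (source, target) pairs for the sequence copy task: the target is
# simply an exact copy of the randomly sampled source sequence.
import numpy as np

def make_copy_dataset(num_examples, max_len, vocab_size=20, seed=0):
    rng = np.random.default_rng(seed)
    data = []
    for _ in range(num_examples):
        length = rng.integers(1, max_len + 1)            # length up to max_len
        seq = rng.integers(1, vocab_size + 1, size=length).tolist()
        data.append((seq, list(seq)))                    # target = copy of source
    return data

train = make_copy_dataset(100_000, max_len=200)
valid = make_copy_dataset(1_000, max_len=200, seed=1)
print(train[0])
```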

4.1.1 Training Setup

All models are implemented using TensorFlow based on the seq2seq implementation of Britz et al. (2017)[3] and trained on a single machine with an Nvidia K40m GPU. We use a 2-layer 256-unit bidirectional LSTM (Hochreiter and Schmidhuber, 1997) encoder, a 2-layer 256-unit LSTM decoder, and 256-dimensional embeddings. For the attention baseline, we use the standard parametrized attention (Bahdanau et al., 2014). Dropout of 0.2 (0.8 keep probability) is applied to the input of each cell, and we optimize using Adam (Kingma and Ba, 2014) at a learning rate of 0.0001 and batch size 128. We train for at most 200,000 steps (see Figure 3 for sample learning curves). BLEU scores are calculated on tokenized data using the multi-bleu.perl script in Moses.[4] We decode using beam search with a beam size of 10 (Wiseman and Rush, 2016).

[3] http://github.com/google/seq2seq
[4] http://github.com/moses-smt/mosesdecoder
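
For reference, the hyperparameters above collected into a single Python dictionary; the key names are illustrative and do not correspond to the configuration schema of the seq2seq framework.

```python
# Toy copy experiment hyperparameters, as listed in the text above.
toy_copy_config = {
    "encoder": {"type": "bidirectional_lstm", "layers": 2, "units": 256},
    "decoder": {"type": "lstm", "layers": 2, "units": 256},
    "embedding_dim": 256,
    "attention_baseline": "bahdanau_parametrized",
    "dropout_input_keep_prob": 0.8,        # i.e. dropout of 0.2 on cell inputs
    "optimizer": {"name": "adam", "learning_rate": 1e-4},
    "batch_size": 128,
    "max_train_steps": 200_000,
    "beam_width": 10,
}
```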

Length Model BLEU Time (s)
20 No Att 99.93 2.03
99.52 2.12
99.56 2.25
99.56 2.21
99.57 2.59
99.75 2.86
Att 99.98 2.86
50 No Att 97.37 3.90
98.86 4.33
99.95 4.48
99.96 4.58
99.96 5.35
99.97 5.84
Att 99.94 6.46
100 No Att 73.99 6.33
87.42 7.32
99.81 7.47
99.97 7.50
99.99 7.65
100.00 7.77
Att 100.00 11.00
200 No Att 32.64 9.10
44.22 9.30
98.54 9.49
99.98 9.53
100.00 9.59
100.00 9.78
Att 100.00 14.28
Table 1: BLEU scores and computation times of our memory attention model with varying K and sequence length, compared to baseline models with and without attention. Rows between the "No Att" and "Att" baselines correspond to our model at increasing values of K.


(a) Comparison of varying K for copying sequences of length 200 on evaluation data, showing that large K leads to faster convergence and small K performs similarly to the non-attentional baseline.
(b) Comparison of sigmoid and softmax functions for computing the encoder and decoder attention scores on evaluation data, showing that the choice of gating/normalization matters.
Figure 3: Training Curves for the Toy Copy task

4.1.2 Results

Table 1 shows the BLEU scores of our model on different sequence lengths while varying K. This is a study of the trade-off between computational time and representational power. A large K allows us to compute complex source representations, while a K of 1 limits the source representation to a single vector. We can see that performance consistently increases with K up to a point that depends on the data length, with longer sequences requiring more complex representations. The results with and without position encodings are almost identical on the toy data. Our technique learns to fit the data as well as the standard attention mechanism despite having less representational power. Both beat the non-attention baseline by a significant margin.

That we are able to represent the source sequence with a fixed-size matrix with fewer than |S| rows suggests that traditional attention mechanisms may be representing the source with redundancies and wasting computational resources. This makes intuitive sense for the toy task, which should require a relatively simple representation.

The last column shows that our technique significantly speeds up the inference process. The gap in inference speed increases as sequences become longer. We measured inference time on the full validation set of 1,000 examples, not including data loading or model construction times.

Figure 3(a) shows the learning curves for sequence length 200. We see that a model with small K is unable to fit the data distribution, while sufficiently large K fits the data almost as quickly as the attention-based model. Figure 3(b) shows the effect of varying the encoder and decoder scoring functions between softmax and sigmoid. All combinations manage to fit the data, but some converge faster than others. In Section 5 we show that different function combinations learn distinct alignments.

4.2 Machine Translation

(a) Training curves for en-fi
(b) Training curves for en-tr
Figure 4: Comparing training curves for en-fi and en-tr with sigmoid encoder scoring and softmax decoder scoring and position encoding. Note that en-tr curves converged very quickly.
Figure 5: Comparing training curves for en-fi for different encoder/decoder scoring functions for our memory attention models.

Next, we explore whether the memory-based attention mechanism is able to fit complex real-world datasets. For this purpose we use 4 large machine translation datasets from WMT'17[5] on the following language pairs: English-Czech (en-cs, 52M examples), English-German (en-de, 5.9M examples), English-Finnish (en-fi, 2.6M examples), and English-Turkish (en-tr, 207,373 examples). We used the newly available pre-processed datasets for the WMT'17 task.[6] Note that our scores may not be directly comparable to those of other work that performs its own data pre-processing. We learn shared vocabularies of 16,000 subword units using the BPE algorithm (Sennrich et al., 2016). We use newstest2015 as a validation set, and report BLEU on newstest2016.

[5] http://statmt.org/wmt17/translation-task.html
[6] http://data.statmt.org/wmt17/translation-task/preprocessed

4.2.1 Training Setup

We use a similar setup to the Toy Copy task, but use 512 RNN and embedding units, train using 8 distributed workers with 1 GPU each, and train for at most 1M steps. We save checkpoints every 30 minutes during training, and choose the best based on the validation BLEU score.

4.2.2 Results

Model Dataset K en-cs en-de en-fi en-tr
Memory Attention Test 32 19.37 28.82 15.87 -
64 19.65 29.53 16.49 -
Valid 32 19.20 26.20 15.90 12.94
64 19.63 26.39 16.35 13.06
Memory Attention + PE Test 32 19.45 29.53 15.86 -
64 20.36 30.61 17.03 -
Valid 32 19.35 26.22 16.31 12.97
64 19.73 27.31 16.91 13.25
Attention Test - 19.19 30.99 17.34 -
Valid - 18.61 28.13 17.16 13.76
Table 2: BLEU scores on WMT’17 translation datasets from the memory attention models and regular attention baselines. We picked the best out of the four scoring function combinations on the validation set. Note that en-tr does not have an official test set. Best test scores on each dataset are highlighted.
Model Decoding Time (s)
Memory Attention (K=32) 26.85
Memory Attention (K=64) 27.13
Attention 33.28
Table 3: Decoding time, averaged across 10 runs, for the en-de validation set (2169 examples) with average sequence length of 35. Results are similar for both PE and non-PE models.

Table 2 compares our approach, with and without position encodings and with varying values of the hyperparameter K, to baseline models with the regular attention mechanism. Learning curves are shown in Figure 4. We see that our memory attention model with sufficiently high K performs on-par with, or slightly better than, the attention-based baseline model despite its simpler nature. Across the board, models with K=64 performed better than the corresponding models with K=32, suggesting that a larger number of attention vectors can capture a richer understanding of source sequences. Position encodings also seem to consistently improve model performance.

Table 3 shows that our model results in faster decoding time even on a complex dataset with a large vocabulary of 16k. We measured decoding time over the full validation set, not including time used for model setup and data loading, averaged across 10 runs. The average sequence length for examples in this data was 35, and we expect more significant speedups for tasks with longer sequences, as suggested by our experiments on toy data. Note that in our NMT experiments the chosen K is comparable to the average sequence length, but we obtain computational savings from the fact that K is much smaller than D. We may be able to set K well below |S|, as in toy copying, and still get very good performance on other tasks. For instance, in summarization the source is complex, but the representation of the source required to perform the task is "simple" (i.e., only what is needed to generate the abstract).
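
A back-of-the-envelope comparison of the attention operation counts implied by the complexity expressions from Section 3, using the en-de setting (D = 512, K in {32, 64}, average sequence length 35). These are rough counts for the attention computation only, which is one reason the measured wall-clock gap in Table 3 is smaller.

```python
# Rough per-sentence operation counts for the attention computation alone,
# following the asymptotic terms discussed in Section 3 (not measured FLOPs).
D, S, T = 512, 35, 35

def standard_content_attention(D, S, T):
    return D * D * S * T          # O(D^2 |S| |T|)

def dot_attention(D, S, T):
    return D * S * T              # O(D |S| |T|)

def memory_attention(K, D, S, T):
    return K * D * (S + T)        # O(K D (|S| + |T|))

for K in (32, 64):
    print(f"K={K}:",
          standard_content_attention(D, S, T),
          dot_attention(D, S, T),
          memory_attention(K, D, S, T))
```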

Figure 5 shows the effect of using sigmoid and softmax functions in the encoder and decoder scoring. We found that softmax/softmax consistently performs badly, while all other combinations perform about equally well. We report results for the best combination only (as chosen on the validation set), but we found this choice to make only a minor difference.

5 Visualizing Attention

Figure 6: Attention scores at each step of decoding on a sample from the length-100 toy copy dataset. Individual attention vectors are highlighted in blue. (x-axis: source tokens; y-axis: target tokens)
Figure 7: Attention scores at each step of decoding on a sample with sequence length 11. The subfigure on the left color-codes each individual attention vector. (x-axis: source; y-axis: target)

Figure 8: Attention scores at each step of decoding for the en-de WMT translation task, using a model with sigmoid scoring functions. The left subfigure displays each individual attention vector separately, while the right subfigure displays the full combined attention. (x-axis: source; y-axis: target)

A useful property of the standard attention mechanism is that it produces meaningful alignment between source and target sequences. Often, the attention mechanism learns to progressively focus on the next source token as it decodes the target. These visualizations can be an important tool in debugging and evaluating seq2seq models and are often used for unknown token replacement.

This raises the question of whether our proposed memory attention mechanism also learns to generate meaningful alignments. Because the number of attention contexts K is generally smaller than the sequence length, it is not immediately obvious what each context would learn to focus on. Our hope was that the model would learn to focus on multiple alignments at the same time, within the same attention vector. For example, if the source sequence is of length 40 and we have K=10 attention contexts, we would hope that the first context roughly focuses on tokens 1 to 4, the second on tokens 5 to 8, and so on. Figures 6 and 7 show that this is indeed the case. To generate these visualizations we multiply the attention scores from the encoder and the decoder. Figure 8 shows a sample visualization for the translation task.
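
A sketch of how such a combined alignment map can be computed from the collected score matrices, assuming the encoder scores are stacked into an |S| x K matrix and the decoder scores into a |T| x K matrix.

```python
# Combine per-step encoder and decoder attention scores into a target-by-source
# alignment map by summing over the K attention contexts.
import numpy as np

def combined_attention_map(encoder_scores, decoder_scores):
    """encoder_scores: (S, K), decoder_scores: (T, K) -> map of shape (T, S)."""
    return decoder_scores @ encoder_scores.T

rng = np.random.default_rng(0)
S, T, K = 40, 40, 10
enc = rng.random((S, K))
dec = rng.random((T, K))
att = combined_attention_map(enc, dec)
print(att.shape)  # (40, 40); row i shows which source tokens target step i attends to
```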

Figure 6 suggests that our model learns distinct ways to use its memory depending on the encoder and decoder scoring functions. Interestingly, using softmax normalization results in attention maps typical of those derived from standard attention, i.e. a relatively linear mapping between source and target tokens. Meanwhile, using sigmoid gating results in what seems to be a distributed representation of the source sequences across encoder time steps, with multiple contiguous attention contexts being accessed at each decoding step.

6 Related Work

Our contributions build on previous work in making seq2seq models more computationally efficient. Luong et al. (2015) introduce various attention mechanisms that are computationally simpler and perform as well as or better than the original one presented in Bahdanau et al. (2014). However, these mechanisms typically still require computation that scales with the product of the source and target lengths, or lack the flexibility to look at the full source sequence. Efficient location-based attention (Xu et al., 2015) has also been explored in the image recognition domain.

Wu et al. (2016) present several enhancements to the standard seq2seq architecture that allow more efficient computation on GPUs, such as attending only over the bottom layer. Kalchbrenner et al. (2016) propose a linear-time architecture based on stacked convolutional neural networks. Gehring et al. (2016) also propose the use of convolutional encoders to speed up NMT. de Brébisson and Vincent (2016) propose a linear attention mechanism based on covariance matrices applied to information retrieval. Raffel et al. (2017) enable online linear-time attention calculation by enforcing that the alignment between input and output sequence elements be monotonic. Previously, monotonic attention was proposed for morphological inflection generation by Aharoni and Goldberg (2016).

7 Conclusion

In this work, we propose a novel memory-based attention mechanism that results in a linear computational time of O(K D |T|) during decoding in seq2seq models. Through a series of experiments, we demonstrate that our technique leads to consistent inference speedups as sequences get longer, and can fit complex data distributions such as those found in Neural Machine Translation. We show that our attention mechanism learns meaningful alignments despite being constrained to a fixed-size representation after encoding. We encourage future work that explores the optimal values of K for various language tasks, and that examines whether it is possible to predict K based on the task at hand. We also encourage evaluating our models on other tasks that must deal with long sequences but have compact representations, such as summarization and question answering, and further exploration of their effect on memory usage and training speed.

References