Neural Text Generation: A Practical Guide

11/27/2017 · Ziang Xie, et al. · Stanford University

Deep learning methods have recently achieved great empirical success on machine translation, dialogue response generation, summarization, and other text generation tasks. At a high level, the technique has been to train end-to-end neural network models consisting of an encoder model to produce a hidden representation of the source text, followed by a decoder model to generate the target. While such models have significantly fewer pieces than earlier systems, significant tuning is still required to achieve good performance. For text generation models in particular, the decoder can behave in undesired ways, such as by generating truncated or repetitive outputs, outputting bland and generic responses, or in some cases producing ungrammatical gibberish. This paper is intended as a practical guide for resolving such undesired behavior in text generation models, with the aim of helping enable real-world applications.


1 Introduction

Neural networks have recently attained state-of-the-art results on many tasks in machine learning, including natural language processing tasks such as sentiment understanding and machine translation. Within NLP, a number of core tasks involve generating text, conditioned on some input information. Prior to the last few years, the predominant techniques for text generation were either based on template or rule-based systems, or well-understood probabilistic models such as n-gram or log-linear models (Chen and Goodman, 1996; Koehn et al., 2003). These rule-based and statistical models, however, despite being fairly interpretable and well-behaved, require infeasible amounts of hand-engineering to scale—in the case of rule or template-based models—or tend to saturate in their performance with increasing training data (Jozefowicz et al., 2016). On the other hand, neural network models for text, despite their sweeping empirical success, are poorly understood and sometimes poorly behaved as well. Figure 1 illustrates the trade-offs between these two types of systems.

Figure 1: Figure illustrating the tradeoffs between using rule-based vs. neural text generation systems.

To help with the adoption of neural text generation (NTG) systems, we detail some practical suggestions for developing them. We include a brief overview of both the training and decoding procedures, as well as some suggestions for training NTG models. The primary focus, however, is advice for diagnosing and resolving pathological behavior during decoding. Since retraining a model can take a long time while tuning the decoding procedure is comparatively cheap, it's worth understanding how to tune decoding quickly before deciding whether or not to retrain.

Figure 2: Development cycle for NTG systems.

Figure 2 illustrates the feedback loops when improving different components of the model training and decoding procedures.

Despite a growing body of research, information on best practices tends to be scattered and often depends on specific model architectures. While some starting hyperparameters are suggested, the advice in this guide is intended to be as architecture-agnostic as possible, and error analysis is emphasized instead. It may be helpful to first read the background section, but the remaining sections can be read independently.

1.1 Focus of this guide

This guide focuses on advice for training and decoding of neural encoder-decoder models (with an attention mechanism) for text generation tasks. Roughly speaking, the source and target are assumed to be on the order of dozens of tokens. The primary focus of the guide is on the decoding procedure. Besides suggestions for improving the model training and decoding algorithms, we also touch briefly on preprocessing (Section 3) and deployment (Section 5.3).

1.1.1 Limitations: What will not be covered

Before continuing, we describe what this guide will not cover, as well as some of the current limitations of neural text generation models. This guide does not consider:

  • Natural language understanding and semantics. While impressive work has been done in learning word embeddings (Mikolov et al., 2013; Pennington et al., 2014), the goal of learning “thought vectors” for sentences has remained more elusive (Kiros et al., 2015). As previously mentioned, we also do not consider sequence labeling or classification tasks.

  • How to capture long-term dependencies (beyond a brief discussion of attention) or maintain global coherence. This remains a challenge due to the curse of dimensionality as well as neural networks failing to learn more abstract concepts from the predominant next-step prediction training objective.

  • How to interface models with a knowledge base, or other structured data that cannot be supplied in a short piece of text. Some recent work has used pointer mechanisms towards this end (Vinyals et al., 2015).

  • Consequently, while we focus on natural language, this guide does not, strictly speaking, cover natural language generation (NLG), which entails generating documents or longer descriptions from structured data. The primary focus is on tasks where the target is a single sentence—hence the term “text generation” as opposed to “language generation”.

Although the field is evolving quickly, there are still many tasks where older rule or template-based systems are the only reasonable option. Consider, for example, the seminal work on ELIZA (Weizenbaum, 1966)—a computer program intended to emulate a psychotherapist—that was based on pattern matching and rules for generating responses. In general, neural-based systems are unable to perform the dialogue state management required for such systems. Or consider the task of generating a summary of a large collection of documents. With the soft attention mechanisms used in neural systems, there is currently no direct way to condition on such an amount of text.

2 Background

Summary of Notation

Symbol | Shape | Description
V | scalar | size of the output vocabulary
x = (x_1, …, x_{T_x}) | T_x | input/source sequence of length T_x
y = (y_1, …, y_{T_y}) | T_y | output/target sequence of length T_y
h | T_x × d | encoder hidden states, where h_j denotes the representation at timestep j
s | T_y × d | decoder hidden states, where s_i denotes the representation at timestep i
A | T_y × T_x | attention matrix, where A_{ij} is the attention weight at the i-th decoder timestep on the j-th encoder state representation
ŷ | varies | hypothesis in the hypothesis set during decoding
score(ŷ) | scalar | score of hypothesis ŷ during beam search

The minibatch size dimension is omitted from the shape column.

2.1 Setting

We consider modeling discrete sequences of text tokens. Given a sequence w = (w_1, …, w_T) over the vocabulary V, we seek to model

p(w) = \prod_{t=1}^{T} p(w_t \mid w_{<t}),    (1)

where w_{<t} denotes (w_1, …, w_{t-1}), and the equality follows from the chain rule of probability. Depending on how we choose to tokenize the text, the vocabulary V can contain the set of characters, word-pieces/byte-pairs, words, or some other unit. For the tasks we consider in this paper, we divide the sequence w into an input or source sequence x (that is always provided in full) and an output or target sequence y. For example, for machine translation tasks x might be a sentence in English and y the translated sentence in Chinese. In this case, we model

p(y \mid x) = \prod_{t=1}^{T_y} p(y_t \mid x, y_{<t}).    (2)
Task | x (example) | y (example)
language modeling | none (empty sequence) | tokens from news corpus
machine translation | source sequence in English | target sequence in French
grammar correction | noisy, ungrammatical sentence | corrected sentence
summarization | body of news article | headline of article
dialogue | conversation history | next response in turn
Related tasks (may be outside the scope of this guide)
speech transcription | audio / speech features | text transcript
image captioning | image | caption describing image
question answering | supporting text + knowledge base + question | answer
Table 1: Example tasks we consider.

Note that (2) is a generalization of (1); we consider p(y | x) from here on. Besides machine translation, this also encompasses many other tasks in natural language processing—see Table 1 for a summary.

Beyond the tasks described in the first half of Table 1, many of the techniques described in this paper also extend to tasks at the intersection of text and other modalities. For example, in speech recognition, x may be a sequence of features computed on short snippets of audio, with y being the corresponding text transcript, and in image captioning x is an image (which is not so straightforward to express as a sequence) and y the corresponding text description. While we could also include sequence labeling (for example part-of-speech tagging) as another task, we instead consider tasks that do not have a clear one-to-one correspondence between source and target. The lack of such a correspondence leads to issues in decoding, which we focus on in Section 5. The same reasoning applies to sequence classification tasks such as sentiment analysis.

2.2 Encoder-decoder models

Encoder-decoder models, also referred to as sequence-to-sequence models, were developed for machine translation and have rapidly exceeded the performance of prior systems despite having comparatively simple architectures, trained end-to-end to map source directly to target.

Figure 3: Figure illustrating the generic encoder-decoder model architecture we assume for this guide. Several choices are possible for the encoder and decoder architectures as well as for the attention mechanism. Here we show the outputs for a single timestep.

Before neural network-based approaches, count-based methods (Chen and Goodman, 1996) and methods involving learning phrase-pair probabilities were used for language modeling and translation. Prior to more recent encoder-decoder models, feed-forward fully-connected neural networks were shown to work well for language modeling. Such models simply apply stacked affine transforms followed by nonlinearities to the input and each following hidden layer (Bengio et al., 2003). However, these networks have fallen out of favor for modeling sequence data, as they require defining a fixed context length when modeling p(y_t | y_{<t}), do not use parameter sharing across timesteps, and have been surpassed in performance by subsequent architectures.

At the time of this writing, several different architectures have demonstrated strong results.

  • Recurrent neural networks (RNNs) use shared parameter matrices across different time steps and combine the input at the current time step with the previous hidden state summarizing all previous time steps (Mikolov et al., 2010; Sutskever et al., 2014; Cho et al., 2014). Many different gating mechanisms have been developed for such architectures to try and ease optimization (Hochreiter and Schmidhuber, 1997; Cho et al., 2014).

  • Convolutional neural networks (CNNs). Convolutions with kernels reused across timesteps can also be used with masking to avoid peeking ahead at future inputs during training (see Section 2.3 for an overview of the training procedure) (Kalchbrenner et al., 2016). Using convolutions has the benefit during training of parallelizing across the time dimension instead of computing the next hidden state one step at a time.

  • Both recurrent and convolutional networks for modeling sequences typically rely on a per-timestep attention mechanism (Bahdanau et al., 2014) that acts as a shortcut connection between the target output prediction and the relevant source input hidden states. At a high level, at decoder timestep i, the decoder representation s_i is used to compute a weight A_{ij} for each encoder representation h_j. For example, this could be done by using the dot products s_i^\top h_j as the logits before applying the softmax function (see the sketch after this list). Hence

    A_{ij} = \frac{\exp(s_i^\top h_j)}{\sum_{j'} \exp(s_i^\top h_{j'})}.

    The weighted representation \sum_j A_{ij} h_j is then fed into the decoder along with the decoder state s_i and the previous target token y_{i-1}. More recent models which rely purely on attention mechanisms with masking have also been shown to obtain results as good as or better than RNN- or CNN-based models (Vaswani et al., 2017). We describe the attention mechanism in more detail in Section 2.5.
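As a concrete illustration of the dot-product attention described in the last item above, here is a minimal NumPy sketch for a single decoder timestep. The function and array names (s_i, enc_states) are ours, and a real implementation would batch this computation and run it inside the model's computation graph.

```python
import numpy as np

def dot_product_attention(s_i, enc_states):
    """Attention weights and context vector for a single decoder timestep.

    s_i:        decoder hidden state at timestep i, shape (d,)
    enc_states: encoder hidden states h_1..h_{T_x}, shape (T_x, d)
    Returns (alpha, context); alpha sums to one over the T_x source positions.
    """
    logits = enc_states @ s_i          # dot product s_i . h_j for each source position j
    logits = logits - logits.max()     # stabilize the softmax numerically
    alpha = np.exp(logits)
    alpha /= alpha.sum()               # attention weights A_{i,:}
    context = alpha @ enc_states       # weighted sum of encoder states, shape (d,)
    return alpha, context
```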

Unless otherwise indicated, the advice in this guide is intended to be agnostic of the model architecture, as long as the following two conditions hold:

  1. The model performs next-step prediction of the next target token conditioned on the source and previously predicted targets, i.e. it models p(y_t | x, y_{<t}).

  2. The model uses an attention mechanism (resulting in an attention matrix A), which eases training, is simple to implement and cheap to compute in most cases, and has become a standard component of encoder-decoder models. (Some recent architectures also make use of a self-attention mechanism where decoder outputs are conditioned on previous decoder hidden states; for simplicity we do not discuss this extension.)

Figure 3 illustrates the backbone architecture we use for this guide.

2.3 Training overview

During training, we optimize over the model parameters θ the sequence cross-entropy loss

L(θ) = -\sum_{t=1}^{T_y} \log p(y_t \mid x, y_{<t}; θ),    (3)

thus maximizing the log-likelihood of the training data. Previous ground truth inputs are given to the model when predicting the next index in the sequence, a training method sometimes referred to (unfortunately) as teacher forcing. Due to the inability to fit current datasets into memory, as well as for faster convergence, gradient updates are computed on minibatches of training sentences. Stochastic gradient descent (SGD) as well as optimizers such as Adam (Kingma and Ba, 2014) have been shown to work well empirically.
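To make the objective concrete, the following is a minimal NumPy sketch of the per-sequence cross-entropy in Equation (3), assuming the model has already produced its next-step distributions under teacher forcing. The function name and array layout are illustrative.

```python
import numpy as np

def sequence_cross_entropy(probs, targets):
    """Negative log-likelihood of one target sequence under next-step predictions.

    probs:   model output distributions with teacher forcing, shape (T_y, V);
             row t is p(y_t | x, y_{<t}) with the ground-truth prefix fed in.
    targets: ground-truth token indices, shape (T_y,)
    """
    eps = 1e-12  # numerical floor to avoid log(0)
    token_probs = probs[np.arange(len(targets)), targets]
    return -np.sum(np.log(token_probs + eps))
```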

Recent research has also explored other methods for training sequence models, such as using reinforcement learning or a separate adversarial loss (Goodfellow et al., 2014; Li et al., 2016b, 2017; Bahdanau et al., 2016; Arjovsky et al., 2017). As of this writing, however, the aforementioned training method is the primary workhorse for training such models.

2.4 Decoding overview

During decoding, we are given the source sequence x and seek to generate the target ŷ that maximizes some scoring function score(ŷ). (The scoring function may also take the attention matrix A or other tensors as input, but for simplicity we consider just ŷ.) In greedy decoding, we simply take the argmax over the softmax output distribution at each timestep, then feed that token as the input for the next timestep. Thus at any timestep we only have a single hypothesis. Although greedy decoding can work surprisingly well, note that it often does not result in the most probable output hypothesis, since there may be a path that is more probable overall despite including an output which was not the argmax at some timestep (this also holds true for most scoring functions we may choose).
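A minimal sketch of greedy decoding, assuming a hypothetical step_fn helper that wraps one decoder step (including attention over the encoded source) and returns the next-token distribution along with updated decoder state; the token IDs and maximum length are placeholders.

```python
def greedy_decode(step_fn, init_state, sos_id, eos_id, max_len=100):
    """Greedy decoding: at each timestep feed back the argmax token.

    step_fn(token_id, state) -> (probs, new_state) is assumed to run one
    decoder step; probs has one entry per vocabulary token.
    """
    tokens, state, prev = [], init_state, sos_id
    for _ in range(max_len):
        probs, state = step_fn(prev, state)
        prev = int(probs.argmax())        # pick the single most probable token
        if prev == eos_id:
            break
        tokens.append(prev)
    return tokens
```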

Since it’s usually intractable to consider every possible ŷ due to the branching factor and the number of timesteps, we instead perform beam search, where we iteratively expand each hypothesis one token at a time, and at the end of every search iteration we keep only the k best (in terms of score(·)) hypotheses, where k is the beam width or beam size. Here’s the full beam search procedure, in more detail:

  1. We begin the beam procedure with the start-of-sequence token <sos>. Thus our hypothesis set H consists of the single hypothesis (<sos>), a list with only the start token.

  2. Repeat, for each decoding timestep t = 1, 2, … (up to some maximum length):

    1. Repeat, for each hypothesis ŷ in H:

      1. Repeat, for every token w in the vocabulary with probability p(w | x, ŷ):

        1. Add the extended hypothesis ŷ + (w) to H.

        2. Compute and cache score(ŷ + (w)). For example, if score(·) simply computes the cumulative log-probability of a hypothesis, we have score(ŷ + (w)) = score(ŷ) + log p(w | x, ŷ).

    2. If any of the hypotheses end with the end-of-sequence token <eos>, move that hypothesis to the list of terminated hypotheses H_final.

    3. Keep the k best remaining hypotheses in H according to score(·). (Likewise, we can also store other information for each hypothesis, such as its attention weights.)

  3. Finally, we return H_final. Since we are now considering completed hypotheses, we may also wish to use a modified scoring function score_final(·).

Figure 4: Toy example of the beam search procedure with beam width k. The search has been run for 3 steps and no hypothesis has terminated with <eos> yet. Edges are annotated with token probabilities. Only the tokens remaining after pruning to the top k hypotheses are shown.
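The following is a simplified Python sketch of the procedure above, scoring hypotheses by cumulative log-probability. The step_fn helper is a hypothetical wrapper around one decoder step for a given prefix; a real implementation would batch the k hypotheses and prune candidates more efficiently than enumerating the full vocabulary per hypothesis.

```python
def beam_search(step_fn, init_state, sos_id, eos_id, k=4, max_len=100):
    """Beam search keeping the k best hypotheses by cumulative log-probability.

    step_fn(prefix, state) -> (log_probs, new_state) is assumed to run one
    decoder step for a hypothesis prefix; log_probs has one entry per token.
    """
    beam = [([sos_id], init_state, 0.0)]          # (tokens, state, score)
    finished = []
    for _ in range(max_len):
        candidates = []
        for tokens, state, score in beam:
            log_probs, new_state = step_fn(tokens, state)
            for token_id, lp in enumerate(log_probs):
                candidates.append((tokens + [token_id], new_state, score + lp))
        candidates.sort(key=lambda h: h[2], reverse=True)
        # Move terminated hypotheses (ending in <eos>) off the beam,
        # then keep the k best remaining ones.
        finished += [h for h in candidates[:k] if h[0][-1] == eos_id]
        beam = [h for h in candidates[:k] if h[0][-1] != eos_id]
        if not beam:
            break
    # A modified score_final(.) could re-rank `finished` here.
    return max(finished or beam, key=lambda h: h[2])[0]
```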

One surprising result with neural models is that relatively small beam sizes yield good results with rapidly diminishing returns. Further, larger beam sizes can even yield (slightly) worse results. For example, a beam size of 8 may only work marginally better than a beam size of 4, and a beam size of 16 may work worse than 8 (Koehn and Knowles, 2017).

Finally, oftentimes incorporating a language model (LM) in the scoring function can help improve performance. Since LMs only need to be trained on the target corpus, we can train language models on much larger corpora that do not have parallel data. Our objective in decoding is to maximize the joint probability

p(x, y) = p(x \mid y)\, p_{LM}(y).    (4)

However, since we are given a model of p(y | x) and not of p(x | y), it’s intractable to maximize (4) directly. In practice, we instead maximize the pseudo-objective

\log p(y \mid x) + \lambda \log p_{LM}(y).

If our original score is score(ŷ), which we assume involves a log p(y | x) term, then the LM-augmented score is

score'(ŷ) = score(ŷ) + \lambda \log p_{LM}(ŷ),

where λ is a hyperparameter to balance the LM and decoder scores.
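A minimal sketch of the LM-augmented score above; the function and argument names are ours, and the default value of λ is purely illustrative.

```python
def lm_augmented_score(log_p_decoder, log_p_lm, lam=0.3):
    """Combine decoder and language-model scores for one hypothesis.

    log_p_decoder: cumulative log p(y | x) under the encoder-decoder model
    log_p_lm:      cumulative log p(y) under the external language model
    lam:           hyperparameter balancing the two terms (value illustrative)
    """
    return log_p_decoder + lam * log_p_lm
```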

Despite these issues, this simple procedure works fairly well. However, there arise many cases where beam search, with or without a language model, can result in far from optimal outputs. This is due to inherent biases in the decoding (as well as the training) procedure. We describe how to diagnose and tackle these problems that can arise in Section 5.

2.5 Attention

Figure 5: Expected attention matrix when source and target are monotonically aligned (synthesized as illustrative example).

The basic attention mechanism used to “attend” to portions of the encoder hidden states during each decoder timestep has many extensions and applications. Attention can also be computed over previous decoder hidden states, in what is called self-attention (Vaswani et al., 2017), or over components separate from the encoder-decoder model instead of over the encoder hidden states (Grave et al., 2016). It can also be used to monitor training progress (by inspecting whether a clear alignment develops between encoder and decoder hidden states), as well as to inspect the correspondences between the input and output sequences that the network learns.

That last bit—that the attention matrix typically follows the correspondences between input and output—will be useful when we discuss methods for guiding the decoding procedure in Section 5. Hence we go into more detail on the attention matrix here.

The attention matrix A has T_x columns and T_y rows, where T_y is the number of output timesteps and T_x is the number of input timesteps. Every row A_i is a discrete probability distribution over the encoder hidden states, whose number we assume in this guide to be equal to the number of input timesteps T_x. If an encoder column A_{:,j} has no entry over some threshold, this suggests that the corresponding encoder input was ignored by the decoder. Likewise, if A_{:,j} has multiple values over the threshold, this suggests that encoder hidden state was used during multiple decoder timesteps. Figure 5 shows an attention matrix we might expect for a well-trained network where source and target are well aligned (e.g. English→French translation). For a great overview and visualizations of attention and RNN models, also see Olah and Carter (2016).

2.6 Evaluation

One of the key challenges for developing text generation systems is that there is no satisfying automated metric for evaluating the final output of the system. Unlike classification and sequence labeling tasks, we (as of now) cannot precisely measure the output quality of text generation systems barring human evaluation. Perplexity does not always correlate well with downstream metrics (Chen et al., 1998), automated or otherwise. Common metrics based on n-gram overlap such as ROUGE (Lin, 2004) and BLEU (Papineni et al., 2002) are only rough approximations, and often do not capture linguistic fluency and coherence (Conroy and Dang, 2008; Liu et al., 2016). Such metrics are especially problematic in more open-ended generation tasks such as summarization and dialogue.

Recent results have shown that though automated metrics are not great at distinguishing between systems once performance passes some baseline, they nonetheless are useful for finding examples where performance is poor, and can also be consistent for evaluating similar systems (Novikova et al., 2017). Thus, despite issues with current automated evaluation metrics, we assume their use for model development; however, manual human evaluation should be interspersed as well.

3 Preprocessing

With increasingly advanced libraries for building computation graphs and performing automatic differentiation, a more significant portion of the software development process is devoted to data preparation. (Suggestions for multilingual preprocessing are welcome.)

Broadly speaking, once the raw data has been collected, there remains cleaning, tokenization, and splitting into training and test data. An important consideration during cleaning is setting the character encoding—for example ASCII or UTF-8—for which libraries such as Python’s unidecode can save a lot of time. After cleaning come the less easily specified tasks of splitting the text into sentences and tokenization. At present, we recommend Stanford CoreNLP (https://github.com/stanfordnlp/CoreNLP) for its extensive options and better handling of sentence and word boundaries than other available libraries.

An alternative to performing tokenization (and later detokenization) is to avoid it altogether. Instead of working at the word level, we can operate at the character level or use intermediate subword units (Sennrich et al., 2015). Such models result in longer sequences overall, but empirically subword models tend to provide a good trade-off between sequence length (speed) and handling of rare words (Wu et al., 2016). Section 5.2.1 discusses the benefits of subword models in more detail. Ultimately, if using word tokens, it’s important to use a consistent tokenization scheme for all inputs to the system—this includes handling of contractions, punctuation marks such as quotes and hyphens, periods denoting abbreviations (nonbreaking prefixes) vs. sentence boundaries, character escaping, etc. (The Stanford Tokenizer page, https://nlp.stanford.edu/software/tokenizer.html, has a detailed list of options.)

4 Training

A few heuristics should be sufficient for handling many of the issues when training such models. Start by getting the model to overfit on a tiny subset of the data as a quick sanity check. If the loss explodes, keep reducing the learning rate until it doesn’t. If the model overfits, apply dropout (Srivastava et al., 2014; Zaremba et al., 2014) and weight decay until it doesn’t. Gradient clipping is often crucial to avoid the exploding gradient problem while using a reasonably large learning rate. For SGD and its variants, periodically annealing the learning rate when the validation loss fails to decrease typically helps significantly.

A few useful heuristics that should be robust to the hyperparameter settings and optimization settings you use:

  • Sort the next dozen or so batches of sentences by length so each batch has examples of roughly the same length, thus saving computation (Sutskever et al., 2014).

  • If the training set is small, tuning regularization will be key to performance (Melis et al., 2017). Noising (or “token dropout”) is also worth trying (Xie et al., 2017). Though we only touch on this issue briefly, the amount of training data will in most cases be the primary bottleneck in the performance of an NTG model.

  • Measure validation loss after each epoch and anneal the learning rate when the validation loss stops decreasing. Depending on how much the validation loss fluctuates (based on the validation set size and optimizer settings), you may wish to anneal with patience (wait for several epochs of non-decreasing validation loss before reducing the learning rate).

  • Periodically checkpoint model parameters and measure downstream performance (BLEU, etc.) using several of the last few model checkpoints. Validation cross-entropy loss and final performance may not correlate well, and there can be significant differences in final performance across checkpoints with similar validation losses.

  • Ensembling almost always improves performance. Averaging checkpoints is a cheap way to approximate the ensembling effect (Huang et al., 2017).

For a survey of model parameters to consider as well as suggested settings of hyperparameters, see Britz et al. (2017) or Melis et al. (2017).

5 Decoding

Suppose you’ve trained a neural network encoder-decoder model that achieves reasonable perplexity on the validation set. You then try running decoding or generation using this model. The simplest way is to run greedy decoding, as described in Section 2.4. From there, beam search decoding should yield some additional performance improvements. However, it’s rare that things simply work. This section is intended for use as a quick reference when encountering common issues during decoding. (Many examples are purely illustrative excerpts from Alice’s Adventures in Wonderland (Carroll, 1865).)

5.1 Diagnostics

First, besides manual inspection, it’s helpful to create some diagnostic metrics when debugging the different components of a text generation system. Despite training the encoder-decoder network to map source to target, during the decoding procedure we introduce two additional components:

  1. A scoring function score(·) that tells us how “good” a hypothesis on the beam is.

  2. Optionally, a language model trained on a large corpus which may or may not be similar to the target corpus.

It may not be clear which of these components we should prioritize when trying to improve the performance of the combined system; hence it can be very helpful to run ablative analysis (Ng, 2007).

For the language model, a few suggestions are: measuring performance for λ = 0 (no language model) and several other reasonably spaced values of λ, then plotting the performance trend; measuring the perplexity of the language model when trained on varying amounts of training data, to see if more data would be helpful or yields diminishing returns; and measuring performance when training the language model on several different domains (news data, Wikipedia, etc.) in cases where it’s difficult to obtain data close to the target domain.

When measuring the scoring function, computing metrics and inspecting the decoded outputs vs. the gold sentences often immediately yields insights. Useful metrics include:

  • Average length of decoded outputs ŷ vs. average length of reference targets y.

  • score(ŷ) vs. score(y), then inspecting the ratio score(ŷ)/score(y). If the average ratio is especially low, then there may be a bug in the beam search, or the beam size may need to be increased. If the average ratio is high, then the scoring function may not be appropriate.

  • For some applications, computing the edit distance (insertions, substitutions, deletions) between ŷ and y may also be useful, for example by looking at the most frequent edits or by examining cases where the length-normalized distances are highest.
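As an example of the first diagnostic, here is a small helper (names are ours) that computes the average decoded and reference lengths along with the mean length ratio over a decoded set; it assumes non-empty, parallel lists of token lists.

```python
def length_ratio_stats(decoded, references):
    """Average length of decoded outputs vs. references, plus the mean ratio."""
    ratios = [len(d) / max(len(r), 1) for d, r in zip(decoded, references)]
    avg_dec = sum(len(d) for d in decoded) / len(decoded)
    avg_ref = sum(len(r) for r in references) / len(references)
    return avg_dec, avg_ref, sum(ratios) / len(ratios)
```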

5.2 Common issues

5.2.1 Rare and out-of-vocabulary (OOV) words

Decoded: And as in <unk> thought he stood, The <unk>, with eyes of flame
Expected: And as in uffish thought he stood, The Jabberwock, with eyes of flame

For languages with very large vocabularies, especially languages with rich morphology, rare words become problematic when choosing a tokenization scheme that results in more token labels than it is feasible to model in the output softmax. One ad hoc approach first used to deal with this issue is simply to truncate the softmax output size (to say, 50K), then assign all the remaining token labels to the <unk> class (Luong et al., 2014). The box above illustrates the resulting output (after detokenization) when rare words are replaced with <unk>s. A more elegant approach is to use character or subword preprocessing (Sennrich et al., 2015; Wu et al., 2016) to avoid OOVs entirely, though this can slow down runtime for both training and decoding.

5.2.2 Decoded output short, truncated, or ignores portions of input

Decoded: It’s no use going back to yesterday.
Expected: It’s no use going back to yesterday, because I was a different person then.

During the decoding search procedure, hypotheses terminate with the <eos> token. The decoder network should learn to place very low probability on the <eos> token until the target is fully generated; however, sometimes <eos> does not have sufficiently low probability. This is because as the length of the hypothesis grows, the total log-probability only decreases. Thus, if we do not normalize the log-probability by the length of the hypothesis, shorter hypotheses will be favored. The box above illustrates an example where the hypothesis terminates early. This issue is exacerbated when incorporating a language model term. Two simple ways of resolving this issue are normalizing the log-probability score and adding a length bonus.

  • Length normalization: Replace the score score(ŷ) with the score normalized by the hypothesis length, score(ŷ) / |ŷ|.

  • Length bonus: Replace the score score(ŷ) with score(ŷ) + β|ŷ|, where β is a hyperparameter.

Note that normalizing the total log-probability by the length is equivalent to maximizing the |ŷ|-th root of the probability, while adding a length bonus is equivalent to multiplying the probability at every timestep by a baseline factor e^β.
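A minimal sketch of the two rescoring options above, operating on the cumulative log-probability of a hypothesis; the default value of β is illustrative only.

```python
def length_normalized_score(log_prob, length):
    """Length normalization: divide the cumulative log-probability by the hypothesis length."""
    return log_prob / max(length, 1)

def length_bonus_score(log_prob, length, beta=0.2):
    """Length bonus: add a per-token reward beta (hyperparameter, value illustrative)."""
    return log_prob + beta * length
```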

Figure 6: Example of an attention matrix when decoding terminates early with the <eos> token without having covered the input x.

Another method for avoiding this issue is a coverage penalty computed using the attention matrix A (Tu et al., 2016; Wu et al., 2016). As formulated here, the coverage penalty can only be applied once a hypothesis (with a corresponding attention matrix) has terminated; hence it can only be incorporated into score_final(·) to perform a final re-ranking of the hypotheses. For a given hypothesis with attention matrix A of shape T_y × T_x, the coverage penalty is computed as

cp(A) = \sum_{j=1}^{T_x} \log\left( \min\left( \sum_{i=1}^{T_y} A_{ij},\ 1.0 \right) \right).    (5)

Intuitively, for every source timestep, if the attention matrix places full probability on that source timestep when aggregated over all decoding timesteps, then the coverage penalty is zero. Otherwise a penalty is incurred for not attending to that source timestep.
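A small NumPy sketch of Equation (5) as reconstructed above, taking the (T_y × T_x) attention matrix of a terminated hypothesis; a small floor before the logarithm is our addition for numerical safety.

```python
import numpy as np

def coverage_penalty(A):
    """Coverage penalty from a (T_y, T_x) attention matrix, per Equation (5).

    Zero when every source position receives total attention mass >= 1
    aggregated over decoding timesteps; negative (a penalty) otherwise.
    """
    column_mass = A.sum(axis=0)                       # total attention on each source position
    return float(np.sum(np.log(np.clip(column_mass, 1e-12, 1.0))))
```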

Finally, when the source and target are expected to be of roughly equal lengths, one last trick is to simply constrain the target length T_y to be within some delta of the source length T_x, e.g. requiring α·T_x ≤ T_y ≤ β·T_x, with a small additive slack included for the case where T_x is very small; here α and β are hyperparameters.

5.2.3 Decoded output repeats

Decoded: I’m not myself, you see, you see, you see, you see, …
Expected: I’m not myself, you see.
Figure 7: Example of attention matrix when decoding exhibits repeating behavior.

Repeating outputs are a common issue that often distinguishes neural systems from template-based ones. Simple measures include adding a penalty when the model re-attends to previous source timesteps after the attention has shifted away. This is easily detected using the attention matrix A with some manually selected threshold.

Finally, a more fundamental issue to consider with repeating outputs is poor training of the model parameters. Passing in the attention vector as part of the decoder input when predicting y_t is another training-time method (See et al., 2017).
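One way to implement the re-attention check mentioned above is the following heuristic sketch; the threshold value and the notion of “shifted away” (the attention argmax has moved past a source position) are our assumptions rather than a prescribed method.

```python
import numpy as np

def reattention_count(A, threshold=0.5):
    """Count decoder timesteps that re-attend to a source position after
    attention has already moved past it.

    A: attention matrix of shape (T_y, T_x); threshold is illustrative.
    """
    count = 0
    passed = np.zeros(A.shape[1], dtype=bool)   # source positions already left behind
    for i in range(A.shape[0]):
        j = int(A[i].argmax())                  # source position this step attends to most
        if A[i, j] > threshold and passed[j]:
            count += 1                          # attention returned to a position it had passed
        passed[:j] = True                       # mark everything before the current focus
    return count
```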

5.2.4 Lack of diversity

“I don’t know” (Li et al., 2015)

In dialogue and QA, where there are often very common responses for many different conversation turns, generic responses such as “I don’t know” are a common problem. Similarly, in problems where many possible source inputs map to a much smaller set of possible target outputs, diversity of outputs can be an issue.

Increasing the temperature of the softmax is a simple method for trying to encourage more diversity in decoded outputs. In practice, however, a method penalizing low-ranked siblings during each step of the beam search decoding procedure has been shown to work well (Li et al., 2016a). Another, more sophisticated, method is to maximize the mutual information between the source and target, but this is significantly more difficult to implement and requires generating n-best lists (Li et al., 2015).
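A minimal sketch of temperature scaling applied to the output logits before sampling; a temperature of 1.0 recovers the ordinary softmax, and higher values flatten the distribution toward more diverse samples.

```python
import numpy as np

def softmax_with_temperature(logits, temperature=1.0):
    """Softmax over logits divided by a temperature; higher temperature = flatter distribution."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                          # numerical stability
    p = np.exp(z)
    return p / p.sum()
```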

5.3 Deployment

Although speed of decoding is not a huge concern when trying to achieve state-of-the-art results, it is a concern when deploying models in production, when real-time decoding is often a requirement. Beyond gains from using highly parallelized hardware such as GPUs or from using libraries with optimized matrix-vector operations, we now discuss some other techniques for improving the runtime of decoding.

Consider the factors which determine the runtime of the decoding algorithm. For the beam search algorithms we consider, the runtime should scale linearly with the beam size k (although in practice, batching hypotheses can lead to sublinear scaling). The runtime will often scale approximately quadratically with the hidden size d of the network and, finally, linearly with the number of timesteps T_y. Thus decoding might have a complexity on the order of O(k · T_y · d²).

Thus, (a jumbled collection of) possible methods for speeding up decoding include developing heuristics to prune the beam, finding the best trade-off between size of the vocabulary (softmax) and decoder timesteps, batching multiple examples together, caching previous computations (in the case of CNN models), and performing as much computation as possible within the compiled computation graph.

6 Conclusion

We describe techniques for training and dealing with undesired behavior in neural text generation models that use neural network decoders. Since training models tends to be far more time-consuming than decoding, it is worth making sure your decoder is fully debugged before committing to training additional models. Encoder-decoder models are evolving rapidly, but we hope these techniques will be useful for diagnosing a variety of issues when developing your NTG system.

7 Acknowledgements

We thank Arun Chaganty and Yingtao Tian for helpful discussions, and Dan Jurafsky for many helpful pointers.


References