Best-First Beam Search

07/08/2020 · by Clara Meister, et al. · ETH Zurich

Decoding for many NLP tasks requires a heuristic algorithm for approximating exact search, since the full search space is often intractable, if not simply too large, to traverse efficiently. The default algorithm for this job is beam search, a pruned version of breadth-first search, which in practice returns better results than exact inference due to a beneficial search bias. In this work, we show that standard beam search is a computationally inefficient choice for many decoding tasks; specifically, when the scoring function is a monotonic function in sequence length, other search algorithms can be used to reduce the number of calls to the scoring function (e.g., a neural network), which is often the bottleneck computation. We propose best-first beam search, an algorithm that provably returns the same set of results as standard beam search, albeit in the minimum number of scoring function calls to guarantee optimality (modulo beam size). We show that best-first beam search can be used with length normalization and mutual information decoding, among other rescoring functions. Lastly, we propose a memory-reduced variant of best-first beam search, which has a similar beneficial search bias in terms of downstream performance, but runs in a fraction of the time.

1 Introduction

Beam search is a common heuristic algorithm for decoding structured predictors, e.g., neural machine translation models and transition-based parsers. Due to the widespread adoption of recurrent neural networks and other non-Markov models, traditional dynamic programming solutions, such as the Viterbi algorithm Viterbi (1967), are prohibitively inefficient; this makes beam search a common component of many state-of-the-art NLP systems. Despite offering no formal guarantee of finding the highest-scoring hypothesis under the model, beam search yields impressive performance on a variety of tasks, unexpectedly providing a beneficial search bias over exact search for many tasks Stahlberg and Byrne (2019).

Within NLP, most research on beam search has focused on altering the standard log-probability scoring function to return improved results, e.g., higher bleu scores Wu et al. (2016); Murray and Chiang (2018); Shu and Nakayama (2018); Yang et al. (2018) or a more diverse set of outputs Vijayakumar et al. (2016). However, little work has been done to speed up beam search itself. Filling this gap, this paper focuses on reformulating beam search in order to make it faster. We propose best-first beam search, a prioritized version of traditional beam search that is up to an order of magnitude faster in practice while still returning the same set of results. We additionally discuss an even faster heuristic version of our algorithm that further limits the number of candidate solutions, leading to a smaller memory footprint while still finding good solutions.

Concretely, we offer a novel interpretation of beam search as an agenda-based algorithm where traditional beam search is recovered by employing a length-based prioritization scheme. We prove that a specific best-first prioritization scheme, as in classic A* search Hart et al. (1968), allows for the elimination of paths that will necessarily fall off the beam; for many scoring functions, including standard log-probability scoring, we can still guarantee that the same hypotheses as traditional beam search are returned. Indeed, our algorithm returns beam search’s top hypothesis the first time it encounters a complete hypothesis, allowing the program to stop early. Further, we discuss the application of best-first beam search to several popular scoring functions in the literature He et al. (2016); Li et al. (2016); this demonstrates that we have a general framework for adapting a variety of rescoring methods and alternate objectives to work with our algorithm.

Empirically, we compare best-first beam search to ordinary beam search on two NLP sequence-to-sequence tasks: neural machine translation (NMT) and abstractive summarization (AS). On NMT, we find that our algorithm achieves roughly a 30% speed-up over traditional beam search with increased gains for larger beams (e.g., nearly an order of magnitude for a beam of 500). We find similar results hold for AS. Finally, we show that our memory-reduced version, which limits the number of active hypotheses, leads to additional speed-ups over best-first beam search across beam sizes while maintaining similar bleu scores.

2 Sequence Transduction

A core operation in structured prediction models is the determination of the highest-scoring output for a given input under a learned scoring model:

    y* = argmax_{y ∈ Y(x)} score(x, y)    (1)

where x is an input and Y(x) is a set of well-formed outputs for the input. An important example of Eq. (1) is maximum a posteriori (MAP) decoding,

    y* = argmax_{y ∈ Y(x)} log p(y | x)    (2)

Our work focuses on sequence-to-sequence transduction: predicting an output sequence given an input sequence. One such task is machine translation, wherein a source-language sentence is mapped (“transduced”) to a target-language sentence. While our exposition focuses on sequence-to-sequence prediction, our algorithms are directly applicable to any sequential structured prediction model, such as transition-based parsers Nivre et al. (2008) and sequence taggers McCallum et al. (2000); Lafferty et al. (2001).

Notation.

Let x = ⟨x_1, …, x_{|x|}⟩ be an input sequence of length |x| and, likewise, let y = ⟨y_1, …, y_{N_y}⟩ be an output sequence of length N_y. Each y_t is an element of V, the set of output tokens. Finally, let Y(x) be the set of all valid output sequences (i.e., complete hypotheses). For the task of language generation, which we focus on experimentally, this set is defined as

    Y(x) = { bos ∘ v ∘ eos | v ∈ V^{≤ n_max(x)−1} }    (3)

where ∘ is string concatenation and V^{≤ n_max(x)−1} is the set of all token sequences over V of length at most n_max(x) − 1. In words, every valid sequence begins and ends with distinguished tokens (bos and eos, respectively); bos and eos are typically members of V, and eos often counts towards the length limit while bos does not, which is reflected in Eq. (3). Furthermore, each sequence has length at most n_max(x), which is typically dependent on x, a restriction we impose to ensure termination. Some applications may require a stronger coupling between Y(x) and x. We drop the dependence of Y and n_max on x when it is clear from context.

Scoring.

We consider a general additively decomposable scoring model of the form

    score(x, y) = Σ_{t=1}^{N_y} score(x, y_{<t} ∘ y_t)    (4)

This framework covers a variety of modeling methodologies, including probabilistic transducers (both globally and locally normalized) and non-probabilistic models such as maximum-margin techniques Taskar et al. (2004). Most importantly, Eq. (4) covers MAP decoding (Eq. (2)) of neural sequence-to-sequence models à la Sutskever et al. (2014); to see why, apply log (an order-preserving transformation):

    score(x, y) = log p(y | x) = Σ_{t=1}^{N_y} log p(y_t | x, y_{<t})    (5)

We note that Eq. (5) is the scoring function used for decoding many language generation models.
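To make the additive decomposition concrete, the sketch below computes Eq. (5) left to right; `log_p_step` is a hypothetical stand-in for a model's next-token log-probability, not an interface from the paper:

```python
def log_prob_score(log_p_step, x, y):
    """Score a hypothesis y under the decomposition in Eq. (5).

    log_p_step(x, prefix, token) is assumed to return
    log p(token | x, prefix) for a locally normalized model.
    """
    score = 0.0
    for t in range(len(y)):
        score += log_p_step(x, y[:t], y[t])  # each term is <= 0
    return score
```

Because every summand is a log-probability (hence non-positive), the running total can only decrease as the hypothesis grows, a property that becomes central in § 3.2.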

Beam search.

The worst-case running time of exactly computing Eq. (1) is exponential in n_max; namely, O(|V|^{n_max}). (This can be improved if, for example, score(·,·) admits a low-order Markov factorization (Viterbi, 1967; Vieira et al., 2016); we do not discuss that setting in this paper because it limits the scoring model’s expressive power.) Beam search is a commonly used approximation to Eq. (1) in NMT and language generation tasks. It is used in many (if not most) state-of-the-art NLP systems Wu et al. (2016); Serban et al. (2017); Edunov et al. (2018); Yang et al. (2019). Beam search may be understood as a pruned version of the classic path-search algorithm, breadth-first search (BFS), where the breadth is narrowed to the beam size k. Pseudocode is given in Alg. 1.

Although beam search does not solve Eq. (1) exactly, it is a surprisingly useful approximation for NLP models. In many settings, beam search outperforms exact methods in terms of downstream evaluation Koehn and Knowles (2017); Stahlberg and Byrne (2019). For the remainder of this paper, we pivot our attention away from exact solutions to Eq. (1) and toward exact computation of the beam search output.

Definition 2.1.

k-optimal hypothesis. We say that a hypothesis is k-optimal if it is the top hypothesis returned by beam search with beam size k.

Input: x: source sentence
          k: maximum beam size
          n_max: maximum hypothesis length
          score(·,·): scoring function

1:  B_0 ← {⟨0, bos⟩}
2:  for t ∈ {1, …, n_max − 1}:
3:     B ← ∅
4:     for ⟨s, y⟩ ∈ B_{t−1}:
5:        if y.last() = eos:
6:           B.add(⟨s, y⟩)
7:           continue
8:        for y′ ∈ V:
9:           s ← score(x, y ∘ y′)
10:          B.add(⟨s, y ∘ y′⟩)
11:    B_t ← B.top(k)
12: return B_{n_max−1}.max()

Algorithm 1: Standard beam search. (Often, the function score(·,·) is additively decomposable in t, as in Eq. (5); implementations can exploit this fact to make each score evaluation on line 9 O(1) rather than O(t). We did not make this implementation detail explicit in Alg. 1 or Alg. 2 for generality and simplicity.)
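For readers who prefer running code, the following is a minimal Python rendering of Alg. 1; `step_score` and `vocab` are hypothetical stand-ins for a trained model and its vocabulary (assumed to contain eos), not the paper's implementation:

```python
import heapq

BOS, EOS = "<s>", "</s>"

def beam_search(x, k, n_max, step_score, vocab):
    """Standard beam search (Alg. 1) over (score, hypothesis) pairs.

    step_score(x, y, v) is the incremental score of appending token v
    to prefix y; under Eq. (5) it is log p(v | x, y).
    """
    beam = [(0.0, (BOS,))]                    # B_0
    for _ in range(1, n_max):
        candidates = []
        for s, y in beam:
            if y[-1] == EOS:                  # finished hypotheses carry over
                candidates.append((s, y))
                continue
            for v in vocab:                   # expand y with every token
                candidates.append((s + step_score(x, y, v), y + (v,)))
        beam = heapq.nlargest(k, candidates)  # keep the top-k by score
    return max(beam)                          # highest-scoring hypothesis
```

Note the inefficiency the paper targets: every active prefix is extended and re-scored at every time step, regardless of how unpromising its score already is.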

3 A* Beam Search

We develop a meta-algorithm that is parameterized by several choice points. Our general search algorithm for decoding (Alg. 2) takes an arbitrary prioritization function, stopping criterion, and search heuristic. With certain values of these attributes, we recover many common search algorithms: greedy search, beam search, best-first search Dijkstra (1959), and A* search Hart et al. (1968). We propose an alternate prioritization function for beam search that allows for faster decoding while still returning the same k-optimal set of hypotheses.

Input: x: source sentence
          n_max: maximum hypothesis length
          score(·,·): scoring function
          ⊳: comparator                 ◁ choice point 1
          stop(·): stopping criterion   ◁ choice point 2
          k: maximum beam size          ◁ choice point 3
          h(·,·): heuristic function    ◁ choice point 4

1:  Q ← priority queue ordered by ⊳
2:  Q.push(⟨0, bos⟩)
3:  POPS[t] ← 0 for t ∈ {0, …, n_max}
4:  while not stop(Q) and not Q.empty():
5:     ⟨s_h, y⟩ ← Q.pop()
6:     if POPS[|y|] ≥ k or |y| > n_max:
7:        continue
8:     POPS[|y|] ← POPS[|y|] + 1
9:     if y.last() = eos:
10:       Q.push(⟨s_h, y ∘ eos⟩)
11:    else:
12:       for y′ ∈ V:
13:          s ← score(x, y ∘ y′)
14:          s_h ← s + h(x, y ∘ y′)
15:          Q.push(⟨s_h, y ∘ y′⟩)
16: return Q.pop() if not Q.empty() else null

Algorithm 2: General decoding scheme. The marked lines in the input are choice points whose values determine the search strategy; see § 3.1 for a detailed explanation. (If the last token of y is the end symbol eos, then y is not expanded any further. One can either regard y as any other hypothesis, albeit with score(x, y ∘ eos) = score(x, y), or keep appending eos (line 10) so that time step and length can be regarded as synonymous. We adopt the latter standard for comparability with subsequent algorithms.)

3.1 Choice Points of the Meta Algorithm

Here we review the components of our meta-algorithm (the choice points marked in Alg. 2) that can be varied to recover different search strategies; a minimal code sketch instantiating all four follows the list:

  • Comparator ⊳: A priority queue Q maintains the set of active hypotheses. Elements in this set are ordered according to a generic comparator ⊳. When Q's peek() (or pop()) method is called, the first element ordered by ⊳ is returned (or returned and removed).

  • Stopping criterion stop(·): The algorithm terminates according to a configurable stopping criterion based on the elements in Q.

  • Beam size k: Only k paths of a given length are considered. If the algorithm has already encountered k paths of a given length, subsequent paths of that length are not evaluated. If we take k = ∞, we recover unpruned search algorithms.

  • Heuristic h(·,·): A heuristic function can be used during search to change the priority in which paths are evaluated. We note that with pruning, a heuristic may change the value of the k-optimal hypothesis (see § 4.1).
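The following sketch (ours, not the paper's reference implementation) instantiates these four choice points in Python; `priority` realizes the comparator ⊳ by mapping each hypothesis to a sort key:

```python
import heapq
from collections import Counter

BOS, EOS = "<s>", "</s>"

def generic_decode(x, n_max, step_score, vocab, priority, stop, k, h):
    """Sketch of the general decoding scheme (Alg. 2).

    priority(s, y) -> sort key realizing the comparator  (choice point 1)
    stop(queue)    -> True when search should terminate  (choice point 2)
    k              -> max pops per hypothesis length     (choice point 3)
    h(x, y)        -> heuristic added to priorities only (choice point 4)
    """
    queue = [(priority(0.0, (BOS,)), 0.0, (BOS,))]
    pops = Counter()                          # pops per hypothesis length
    while queue and not stop(queue):
        _, s, y = heapq.heappop(queue)
        if pops[len(y)] >= k or len(y) > n_max:
            continue                          # y fell off the beam
        pops[len(y)] += 1
        if y[-1] == EOS:                      # keep complete hypotheses alive
            heapq.heappush(queue, (priority(s, y + (EOS,)), s, y + (EOS,)))
        else:
            for v in vocab:
                s2, y2 = s + step_score(x, y, v), y + (v,)
                heapq.heappush(queue, (priority(s2 + h(x, y2), y2), s2, y2))
    return queue[0][2] if queue else None

# Beam search: earlier time steps first, ties broken by score.
beam_priority = lambda s, y: (len(y), -s)
# Best-first beam search / A*: higher score first, ties broken by length.
bf_priority = lambda s, y: (-s, len(y))
# Best-first stopping: halt once the queue's front item is complete.
bf_stop = lambda queue: queue[0][2][-1] == EOS
no_heuristic = lambda x, y: 0.0
```

With `bf_priority`, `bf_stop`, a finite k, and `no_heuristic`, this is the best-first beam search of § 3.2; passing k = float('inf') removes pruning entirely, as in A*.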

Recovering Beam Search.

To recover beam search from Alg. 2, we use the choice points from Table 1. Explicitly, the comparator prioritizes hypotheses from earlier time steps first, but breaks ties with the hypotheses’ scores under the model. We note that while the standard algorithm for beam search does not prioritize by score within a time step, variations of the algorithm use this strategy so they can employ early-stopping strategies Klein et al. (2017); Huang et al. (2017). Beam search terminates once either all hypotheses end in eos or the queue Q is empty (i.e., when the beams have been extended n_max time steps but none end in eos); in the second case, no complete hypothesis is found. Finally, choosing the heuristic h(x, y) = 0 makes the algorithm a case of standard best-first search.

Note that, while standard beam search returns a set, Alg. 2 only returns the k-optimal hypothesis. This behavior is sufficient for the majority of use cases for beam search. However, if the full set of k hypotheses is desired, the stopping criterion can be changed to evaluate to true only when k hypotheses are complete. Under the other beam search settings, this provably returns the same set as beam search (see § 4.1).

Recovering A*.

To recover the traditional A* search algorithm, we use the comparator that prioritizes hypotheses with a higher score first; ties are broken by hypothesis length. The algorithm terminates when the first item of Q contains an eos. If we take k = ∞, best-first beam search recovers A*. Any admissible heuristic may be used for h.

Definition 3.1.

Admissible Heuristic. A heuristic h is admissible if it never overestimates the future cost (or underestimates the future reward) of continuing down a path.

3.2 Best-First Beam Search

In its original form, A* search may traverse the entire graph, which, as discussed earlier, is intractable for many decoding problems. While standard beam search addresses this problem by limiting the search space, it still has computational inefficiencies; namely, we must analyze all hypotheses of a given length (i.e., time step), regardless of how poor their scores may already be, before considering longer hypotheses. However, prioritization by length is not strictly necessary for finding a k-optimal hypothesis. As is done in A*, we can use score as the prioritization scheme and still guarantee optimality, or k-optimality, of the paths returned by the algorithm.

We define A* beam search as the A* algorithm where breadth is limited to size k. Further, we define best-first beam search as the case of A* beam search when no heuristic is used (see Table 1 for algorithm settings). This formulation has two large advantages over standard beam search: (1) we gain the ability to remove paths from the queue that are guaranteed to fall off the beam and (2) we can terminate the algorithm the first time a complete hypothesis is encountered. We can therefore reduce the computation required for decoding while still returning the same set of results.

To see why the above is true, note that the standard log-probability scoring function used by many sequential structured prediction models is monotonically decreasing in hypothesis length.

Definition 3.2.

Monotonicity. A scoring function score(·,·) is monotonic in t if, for all x, y_{<t}, and y_t,

    score(x, y_{<t}) ≥ score(x, y_{<t} ∘ y_t)

Clearly, Eq. (5) is a monotonic scoring function in t because log p(y_t | x, y_{<t}) ≤ 0, i.e., the score of a hypothesis can only decrease as it is extended. This implies we can order our search according to score without fear of overlooking a hypothesis whose score would increase over time. Furthermore, once k hypotheses of a given length have been evaluated, we no longer need to consider any shorter hypothesis still in the queue, since its extensions would necessarily fall off the beam. We can therefore remove such hypotheses from the queue and avoid wasting computational power on their evaluation. We prove this formally in § 4.1.
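A toy numeric check of this property (the numbers are invented for illustration):

```python
import math

# per-token log-probabilities of a toy three-token hypothesis
steps = [math.log(0.6), math.log(0.9), math.log(0.4)]

prefix_scores, total = [], 0.0
for lp in steps:
    total += lp
    prefix_scores.append(total)

# every extension adds a non-positive term, so prefix scores never
# increase: ordering the agenda by score cannot overlook a hypothesis
assert all(a >= b for a, b in zip(prefix_scores, prefix_scores[1:]))
```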

Another implication of the monotonicity property of score is that we may terminate best-first beam search once a hypothesis containing eos is encountered (i.e., the end state is found). If the full set of k complete hypotheses is desired, then we simply continue until k hypotheses have reached eos. We prove the k-optimality of these hypotheses under best-first beam search in § 4.1.

3.3 Implementation Details

Standard beam search forms a separate set of active hypotheses for each time step, i.e., each B_t is its own set. Once B_t has been narrowed down to the top k, the previous set B_{t−1} can be forgotten. However, in best-first beam search, since hypotheses are not evaluated in order of time step, we may need to keep B_t from several time steps at any given point.

A naive implementation of best-first beam search is to keep a single priority queue with all the active hypotheses ordered by current score. However, each push to the queue would then require time logarithmic in the total number of active hypotheses. We can reduce this runtime by instead keeping a priority queue of beams, where the priority queue is ordered by the highest-scoring hypothesis from each beam. Further, each beam B_t can be represented by a min-max queue Atkinson et al. (1986); this allows us to limit the size of B_t to k: we can check in O(1) time whether a hypothesis is in the top-k before adding it to B_t.

A potential inefficiency, which we avoid, comes from updating B_t, which we must do when evaluating a hypothesis from B_{t−1}. Since all beams are stored in a queue, there is no guarantee of the location of B_t in that queue. To avoid a linear-time lookup, we keep a pointer to each beam, indexed by time step t, making the lookup O(1). However, we acquire a logarithmic term to update the queue of beams, as B_t may change priority.

Memory-Reduced best-first beam search.

A major drawback of the A* algorithm is its memory usage, which in the worst case grows with the full breadth of the search times its maximum depth n_max. In the A* formulation of beam search, where the breadth is limited to the beam size, the queue may still hold hypotheses from up to n_max time steps at once, whereas standard beam search only ever stores hypotheses from two adjacent time steps. While in many settings this multiplicative factor may be insignificant, for neural sequence models it can be prohibitive; this is due to the large amount of memory required to store each hypothesis (e.g., prior hidden states needed to compute subsequent scores for scoring functions parameterized by neural networks).

We propose a variant of best-first beam search that limits memory usage, i.e., the queue capacity. Specifically, if we reach the chosen queue capacity, we remove the worst-scoring active hypothesis from the earliest active time step. This can be done efficiently given our pointer to each beam.
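A sketch of one way to realize this layout; the paper's implementation uses min-max queues (Atkinson et al., 1986), which we approximate here with sorted lists for brevity, so per-operation costs differ from those quoted above:

```python
import bisect

class BeamedAgenda:
    """Active hypotheses grouped by length, each group capped at k.

    When total size exceeds `capacity`, the worst hypothesis from the
    earliest active time step is evicted (memory-reduced variant).
    """
    def __init__(self, k, capacity):
        self.k, self.capacity, self.size = k, capacity, 0
        self.beams = {}                    # length -> ascending (score, y)

    def push(self, score, y):
        beam = self.beams.setdefault(len(y), [])
        if len(beam) == self.k and score <= beam[0][0]:
            return                         # cannot enter the top-k: skip
        bisect.insort(beam, (score, y))
        self.size += 1
        if len(beam) > self.k:             # keep only the top-k per length
            beam.pop(0)
            self.size -= 1
        while self.size > self.capacity:   # evict from earliest time step
            t = min(self.beams)
            self.beams[t].pop(0)
            self.size -= 1
            if not self.beams[t]:
                del self.beams[t]

    def pop_best(self):
        """Remove and return the globally highest-scoring hypothesis."""
        t = max(self.beams, key=lambda u: self.beams[u][-1][0])
        item = self.beams[t].pop()
        self.size -= 1
        if not self.beams[t]:
            del self.beams[t]
        return item
```

The design choice mirrors § 3.3: grouping hypotheses by length makes the top-k membership check against each group's minimum cheap, and keying eviction on the earliest active time step matches the memory-reduced policy described above.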

4 Algorithm Analysis

4.1 Correctness

We show the equivalence of the top hypothesis returned by beam search and best-first beam search when score(·,·) is monotonically decreasing in t, length-based prioritization is used, and the beam size k is the same for both algorithms. (Best-first beam search is in fact guaranteed to return the same set of k hypotheses as beam search; we include the proof for only the top hypothesis for simplicity, as the proof for set equality follows naturally.) Without loss of generality, we hold x constant in all the following proofs.

Note that we take the terms pop and push from queue terminology. Specifically, “popping a hypothesis” refers to making it past line 7 of Alg. 2, after which a hypothesis is expanded with every token in V. In path-search terminology, this would be equivalent to visiting a node and adding the edges from that node as potential paths to explore. Lastly, we refer to the priority queues used by beam search and best-first beam search as Q_BS and Q_BF, respectively.

Lemma 4.1.

best-first beam search evaluates all hypotheses of a given length in order of their score.

Proof.

We prove the lemma by induction. The lemma holds trivially for the base case of hypotheses of length 0 because the only hypothesis of length 0 is bos.

Now, by the inductive hypothesis, suppose Lemma 4.1 holds for all hypotheses of length at most t. We show it must also hold for hypotheses of length t + 1. Consider two competing hypotheses y = y_{<t} ∘ y_t and y′ = y′_{<t} ∘ y′_t. Note that |y| = |y′| = t + 1. Suppose score(x, y) > score(x, y′).

Case 1: score(x, y_{<t}) > score(x, y′_{<t}). Then by induction, y_{<t} is popped first and y is pushed to Q_BF before y′. Since score(x, y) > score(x, y′), y will be popped before y′.

Case 2: score(x, y_{<t}) < score(x, y′_{<t}). Then by induction, y′_{<t} is popped first and y′ is added to Q_BF before y. But, since score(x, y_{<t}) ≥ score(x, y) > score(x, y′) by monotonicity, y_{<t} will be popped before y′. Consequently, y will be pushed to Q_BF before y′ is evaluated. By the rules of the priority queue, y will be evaluated before y′.

Case 3: score(x, y_{<t}) = score(x, y′_{<t}). The lemma holds if either y_{<t} or y′_{<t} is popped first.

By the principle of induction, Lemma 4.1 holds for all hypothesis lengths. ∎

Lemma 4.2.

The first hypothesis that best-first beam search pops that ends in eos is the best hypothesis found by best-first beam search.

Proof.

Let ŷ be the first hypothesis popped by best-first beam search that ends in eos. By the rules of the priority queue, no other active hypothesis has a higher score than ŷ. Additionally, by monotonicity of the scoring function, no active hypothesis can subsequently achieve a score greater than score(x, ŷ). Therefore ŷ must be the best hypothesis found by best-first beam search. ∎

Lemma 4.3.

If best-first beam search pops a hypothesis, then beam search necessarily pops that same hypothesis.

Proof.

We prove the lemma by induction on hypothesis length. The base case holds trivially: for hypotheses of length 0, both best-first beam search and beam search must pop bos, as it is the only item in the queue after initialization.

By the inductive hypothesis, suppose Lemma 4.3 holds for hypotheses of length at most t. Suppose best-first beam search pops a hypothesis y of length t + 1.

Case 1: best-first beam search pops k hypotheses of length t before popping y, which is of length t + 1. The sets of hypotheses of length t that each algorithm pops are necessarily the same by the inductive hypothesis and the fact that they have the same cardinality. If best-first beam search pops y, then y must be among the top-k highest-scoring hypotheses of length t + 1 in Q_BF by the rules of the priority queue. Consequently, it must be among the top-k in Q_BS, so beam search also pops y.

Case 2: best-first beam search has popped fewer than k hypotheses of length t before popping y. Then, all remaining hypotheses of length t in Q_BF must have score less than score(x, y) by the rules of the priority queue. By the monotonicity of the scoring function, all extensions of those hypotheses will also have score less than score(x, y). It follows that fewer than k hypotheses of length t + 1 can outscore y, so y must also be popped by beam search. ∎

Corollary 4.3.1.

best-first beam search will never pop more hypotheses than beam search.

Theorem 4.4.

Once best-first beam search has popped k hypotheses of length t, hypotheses from earlier time steps do not need to be popped.

Proof.

This follows from Lemma 4.1. If k hypotheses of length t have been popped, then these must be the top-k hypotheses of length t. Therefore no hypothesis from an earlier time step that is still in Q_BF would be among the top-k at time step t. ∎

Theorem 4.5.

Upon stopping, beam search and best-first beam search return the same hypothesis.

Proof.

Let ŷ be the hypothesis that best-first beam search returns. By Lemma 4.3, beam search also pops ŷ. Because ŷ ends in eos, its padded continuations ŷ ∘ eos have the same score; it follows by Lemma 4.1 that ŷ is the highest-scoring hypothesis of its length and, by monotonicity, of all subsequent lengths. Thus, when beam search reaches the final time step, ŷ remains in the beam. Therefore, when beam search terminates, ŷ will be the highest-scoring hypothesis that ends in eos and thus the solution returned by beam search. ∎


Non-monotonic Scoring Functions.

Non-monotonic scoring functions (Definition 3.2) break the assumptions of § 4.1, in which case best-first beam search is not guaranteed to return a k-optimal hypothesis. However, when the scoring function is boundable from above, we can alter the original stopping criterion (choice point 2 in Alg. 2) such that k-optimality is again guaranteed.

Given our assumed restriction on the search space, namely that hypotheses have length at most n_max, we can upper-bound the maximal score of any hypothesis under the scoring function in use. Formally, the altered criterion stops the search only once

    score(x, ŷ) ≥ score(x, y) + U(x, y)    for all active y ∈ Q    (6)

where ŷ is the best complete hypothesis found so far and U(x, y) is the function-dependent upper bound on how much the score of y can increase as y is expanded further; for monotonic scoring functions, we have U(x, y) = 0. In this situation, best-first beam search only terminates once no other hypothesis in Q can have a score greater than the best finished hypothesis. We note that Huang et al. (2017) use a similar scheme for optimal stopping with bounded length normalization. We discuss examples of non-monotonic scoring functions in § 5.
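A sketch of this generalized stopping check, with `upper_bound` playing the role of U (it would return 0 everywhere for a monotonic scorer):

```python
def bounded_stop(active, best_finished_score, x, upper_bound):
    """True once no active hypothesis can overtake the best finished one.

    `active` holds (score, y) pairs still in the queue; upper_bound(x, y)
    is U in Eq. (6), the most the score of y can still increase.
    """
    return all(score + upper_bound(x, y) <= best_finished_score
               for score, y in active)
```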

A Note on Heuristics.

Our analysis shows the equivalence of beam search and best-first beam search only when h(x, y) = 0. The analysis does not hold for arbitrary admissible heuristics. A poor heuristic, e.g., one that grossly overestimates the future score of continuing down one path, may cause other items to be pruned from best-first beam search that otherwise would have remained on the beam in standard beam search.

4.2 Runtime

Theorem 4.6.

The runtime of best-first beam search is O(n_max · k · (|V| log k + log n_max)).

Proof.

We pop at most n_max · k items. Each pop requires us to push |V| items. Each push requires O(log k) time when the priority queue is implemented with a min-max heap Atkinson et al. (1986) and incrementally pruned so that it has no more than k items. After pushing those |V| items, we have to perform a percolation in the priority queue of priority queues, which requires O(log n_max) time. This yields O(n_max · k · (|V| log k + log n_max)) time. ∎

Theorem 4.7.

The runtime of standard beam search is O(n_max · k · |V| log k).

Proof.

The proof is the same as for Theorem 4.6, but we can forgo the percolation step in the queue of queues because standard beam search proceeds in order of hypothesis length. This yields O(n_max · k · |V| log k). ∎

While the theoretical bound of best-first beam search has an additional log factor compared to standard beam search, we find this to be negligible in practice. Rather, we find that the number of calls to score(·,·), the scoring function under our model (e.g., a neural network), is often the bottleneck operation when decoding neural networks (see § 6 for empirical evidence). In terms of this metric, the beam search algorithm makes O(k · n_max) calls to score(·,·), as score is called once for each active hypothesis in B_t, and B_t may evolve for n_max rounds. The worst-case number of calls to score(·,·) for best-first beam search is the same as for beam search, which follows from Lemma 4.3.
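Since the number of scoring-function calls is the cost metric used in § 6, a lightweight way to instrument any scorer when reproducing such measurements (a sketch, not the paper's tooling):

```python
import functools

def count_calls(score_fn):
    """Wrap a scoring function so experiments can read wrapped.calls."""
    @functools.wraps(score_fn)
    def wrapped(*args, **kwargs):
        wrapped.calls += 1
        return score_fn(*args, **kwargs)
    wrapped.calls = 0
    return wrapped
```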

5 Scoring Functions

Even before the findings of Stahlberg and Byrne (2019), it was well known that the best-scoring hypothesis with respect to the traditional likelihood objective can be far from ideal in practice Wu et al. (2016); Murray and Chiang (2018); Yang et al. (2018). For language generation tasks specifically, the results returned by neural models using the standard scoring function are often short and default to high-frequency words Vinyals and Le (2015); Shen et al. (2016).

To alleviate such problems, methods that revise hypothesis scores to incorporate preferences for longer, less repetitive, or more diverse options have been introduced and are often used in practice. While most such techniques change the scoring function such that it is no longer monotonic, we can still guarantee the k-optimality of the returned hypothesis for (upper) bounded scoring functions using the methods discussed in § 4.1. In the remainder of this section, we present alternate scoring schemes adapted to work with best-first beam search. Additionally, we present several heuristics which, while breaking the k-optimality guarantee, provide another set of decoding strategies worth exploring.

Length Normalization.

Length normalization is a widely-used hypothesis scoring method that aims to counteract the propensity for shorter sequences to have higher scores under neural models; this is done by normalizing scores by hypothesis length (see Murray and Chiang (2018) for more detail).

For early stopping in beam search with length normalization, Huang et al. (2017) propose bounding the additive length reward as the minimum of a predetermined optimal sequence-length ratio r and the final sequence length N_y:

    score_LN(x, y) = score(x, y) + β · min{ r · |x|, N_y }    (7)

where β is the scaling parameter for the reward. We note, however, that the same can be done with the maximum sequence length n_max, such that the traditional length reward used by He et al. (2016) is recovered:

    score_LN(x, y) = score(x, y) + β · min{ n_max, N_y }    (8)

We formally propose two methods for length normalization. We use the scoring functions in Eq. (7) or Eq. (8) with either (1) the following heuristic:

    h(x, y) = β · (b − N_y)    (9)

where b can be r · |x| or n_max; or (2) the stopping criterion as in Eq. (6), albeit with score_LN and the upper-bound function:

    U(x, y) = β · (b − N_y)    (10)

Despite their similarities, these two methods are not guaranteed to return the same results. While the second method will return the same k-optimal hypotheses as beam search, using a heuristic during pruned search means we can no longer guarantee the k-optimality of the results with respect to the scoring function, as the heuristic may push hypotheses off of the beam. We present experimental results for both methods in § 6.
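As a sketch, both quantities are one-liners; here `b` stands for r · |x| or n_max as in the text, and the function names are ours:

```python
def ln_score(base_score, n_y, beta, b):
    """Length-normalized score, Eq. (7)/(8): bounded additive reward."""
    return base_score + beta * min(b, n_y)

def ln_future_reward(n_y, beta, b):
    """Eq. (9)/(10): the most the length reward can still grow; used as
    the heuristic h or as the upper bound U in the stopping criterion."""
    return beta * max(0, b - n_y)
```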

|                  | IWSLT'14 De-En |           |            |             | MTTT Fr-En |            |            | CNN-DailyMail |           |            |
| k                | 5 (35.6)       | 10 (35.4) | 100 (34.7) | 500 (7.9)   | 10 (33.0)  | 100 (9.9)  | 500 (1.2)  | 5 (31.5)      | 10 (30.9) | 100 (29.1) |
| BF beam search   | 93 (24%)       | 169 (36%) | 1275 (79%) | 1168 (736%) | 184 (16%)  | 867 (138%) | 885 (836%) | 200 (33%)     | 305 (43%) | 2960 (92%) |
| Beam search (ES) | 107 (7%)       | 210 (9%)  | 2047 (12%) | 7685 (27%)  | 196 (9%)   | 1310 (58%) | 4182 (98%) | 224 (19%)     | 357 (22%) | 3942 (59%) |
| Beam search      | 115            | 229       | 2286       | 9770        | 214        | 2066       | 8281       | 266           | 435       | 5673       |

Table 2: Average number of calls (rounded to the nearest whole digit) to score(·,·), the sequence transduction model, per generated sequence under different decoding algorithms. Percentages are performance improvements over standard beam search; the evaluation metric at each beam size is shown in parentheses in the header. Beam search (ES) refers to the OpenNMT early-stopping method Klein et al. (2017). All methods provably return the same solution and thus evaluation metrics for a given beam size are identical.


Mutual Information.

Maximum mutual information decoding Li et al. (2016) aims to alleviate the inherent preference of neural models for high-frequency tokens when using the log-probability decoding objective. Rather than choosing the hypothesis y to maximize conditional probability with respect to the input x, we instead choose y to maximize pointwise mutual information (PMI):

    y* = argmax_{y ∈ Y(x)} log [ p(y | x) / p(y) ]    (11)

Note that Eq. (11) is equivalent to argmax_y ( log p(y | x) − log p(y) ), which can be rewritten as Σ_{t=1}^{N_y} ( log p(y_t | x, y_{<t}) − log p(y_t | y_{<t}) ), making the objective additive; thus Eq. (11) conforms to Eq. (4).

From this last form, we can see how mutual information decoding penalizes high-frequency and generic outputs; the negative term, as Li et al. (2016) point out, acts as an “anti-language model.” One unfortunate side effect of this objective is that ungrammatical and nonsensical outputs, which have probabilities close to 0 under a language model p(·), end up with high scores due to the second term in the score function. To address this problem, and to upper-bound the scoring function, we propose lower-bounding the language model term by a hyperparameter ε. We additionally use the strength hyperparameter λ employed by Li et al. (2016):

    score(x, y) = Σ_{t=1}^{N_y} ( log p(y_t | x, y_{<t}) − λ · log max{ p(y_t | y_{<t}), ε } )    (12)

Similarly to our methods for length normalization, we can use the scoring function in Eq. (12) either with the heuristic:

    h(x, y) = −λ · (n_max − N_y) · log ε    (13)

or with the stopping criterion as in Eq. (6), albeit with the upper-bound function:

    U(x, y) = −λ · (n_max − N_y) · log ε    (14)

Since −λ log ε is the largest amount by which the score can increase at any given time step, clearly we can bound the total increase in score by the above function. However, as with our length normalization strategy, we lose the k-optimality guarantee with the heuristic method for mutual information decoding. We present experimental results for both methods in § 6.
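A corresponding sketch for the bounded PMI objective; the names are ours, and `log_p_cond`/`log_p_lm` stand for log p(y_t | x, y_{<t}) and log p(y_t | y_{<t}):

```python
import math

def pmi_step_score(log_p_cond, log_p_lm, lam, eps):
    """One summand of Eq. (12): conditional log-probability minus the
    lambda-weighted, eps-floored language-model log-probability."""
    return log_p_cond - lam * max(log_p_lm, math.log(eps))

def pmi_future_bound(n_y, n_max, lam, eps):
    """Eqs. (13)/(14): each remaining step adds at most -lam*log(eps)."""
    return -lam * math.log(eps) * (n_max - n_y)
```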

6 Experiments

Figure 1: Number of calls to scoring function vs. total sequence generation time. Each point is a decoded sequence. Colors represent different model architectures and shapes signify the decoding algorithm used (beam sizes 3 and 10 are included for each). There is no notable difference in the overhead (time-wise) of best-first beam search and beam search.

We run our algorithm on several language-related tasks that typically use beam search for decoding: neural machine translation (NMT) and abstractive summarization (AS). Specifically, experiments are performed on IWSLT’14 De-En Cettolo et al. (2012), WMT’17 De-En Bojar et al. (2017), MTTT Fr-En Duh (2018), and CNN-DailyMail Hermann et al. (2015) using both Transformers Vaswani et al. (2017) and Convolutional sequence-to-sequence models Gehring et al. (2017).

For reproducibility, we use the data pre-processing scripts provided by fairseq Ott et al. (2019) and follow their methods for training sequence transduction models. Hyperparameters are set in accordance with previous work. Specifically, on the IWSLT'14 and MTTT tasks, we follow the recommended Transformer settings for IWSLT'14 in fairseq (https://github.com/pytorch/fairseq/tree/master/examples/translation), which are based on Vaswani et al. (2017) and Gehring et al. (2017). Hyperparameters for models trained on the WMT task are set following version 3 of the Tensor2Tensor toolkit Vaswani et al. (2018). We use byte-pair encoding (BPE; Sennrich et al. 2016) for all languages. Vocabulary sizes for WMT and IWSLT'14 are set from recommendations for the respective tasks in fairseq; for the MTTT tasks, vocabulary sizes are tuned on models with standard label-smoothing regularization. Similarly, the CNN/DailyMail dataset is pre-processed and uses BPE following the same steps as Lewis et al. (2019); hyperparameters are the same as for their model fine-tuned on CNN/DailyMail. Details are available on the fairseq website (https://github.com/pytorch/fairseq/blob/master/examples/bart/README.cnn.md).

We use bleu Papineni et al. (2002) (evaluated using SacreBLEU Post (2018)) as the MT metric and rouge-l Lin (2004) as the abstractive summarization metric. We build our decoding framework in SGNMT (https://github.com/ucam-smt/sgnmt).

6.1 Running Time

In Tab. 2, we report values as the average number of calls to the scoring function per input; we do not use wall-clock time, as this is heavily dependent on hardware. See Fig. 1 for empirical justification of the correlation between calls to the scoring function and runtime on the hardware our experiments were run on. For reference, in our experiments the scoring function accounted for the overwhelming majority of total computation time, even with larger beam sizes, when the overhead of the search algorithm is most significant.

We find that best-first (BF) beam search leads to significant speed-ups over both traditional beam search and beam search with early stopping, with a performance increase of more than 700% for a beam size of 500 (Tab. 2); performance increase here is measured as the relative reduction in calls to the scoring function with respect to standard beam search. We likewise find that best-first beam search offers speed-ups over early-stopping methods that are not guaranteed to return the same results as standard beam search (see Tab. 3).

IWSLT'14 De-En
| k   | method    | search error | bleu | # calls    |
| 10  | shrinking | 0%           | 35.4 | 229 (0%)   |
| 10  | early     | 0%           | 35.4 | 225 (2%)   |
| 10  | BF BS     | -            | 35.4 | 169 (36%)  |
| 100 | shrinking | 31.7%        | 13.2 | 2278 (0%)  |
| 100 | early     | 31.7%        | 13.2 | 1738 (31%) |
| 100 | BF BS     | -            | 34.7 | 1275 (79%) |

WMT'17 De-En
| k   | method    | search error | bleu | # calls    |
| 10  | shrinking | 0%           | 28.6 | 260 (0%)   |
| 10  | early     | 0%           | 28.6 | 252 (3%)   |
| 10  | BF BS     | -            | 28.6 | 230 (12%)  |
| 100 | shrinking | 1.7%         | 26.4 | 2587 (0%)  |
| 100 | early     | 1.7%         | 26.4 | 2402 (8%)  |
| 100 | BF BS     | -            | 26.9 | 2046 (26%) |

Table 3: bleu, search error, and average number of calls to score(·,·) for different stopping criteria. “Shrinking” refers to the shrinking-beam method of Bahdanau et al. (2015) and “early” refers to the stopping criterion of Huang et al. (2017). Note that neither method is guaranteed to return the same result as standard beam search. Search error and performance increases are with respect to standard beam search.

6.2 Length Normalization

We experiment with both forms of length normalization presented in § 5 and provide results in Tab. 4. We find that both methods, i.e., changing the stopping criterion and using a heuristic during search, provide improvements over baseline bleu scores albeit with different hyperparameter settings; increases are similar to improvements reported by Murray and Chiang (2018). Notably, using a heuristic causes a large percentage of search errors with respect to standard beam search using the same scoring function. However, the difference in results appears to be beneficial in terms of bleu.

IWSLT'14 De-En
| method             | k  | β   | # calls   | search error | bleu        |
| Heuristic          | 5  | 0.8 | 115 (0%)  | 40.6%        | 33.9 (+0.3) |
| Heuristic          | 10 | 1.2 | 229 (0%)  | 54.7%        | 33.8 (+0.5) |
| Stopping Criterion | 5  | 0.5 | 73 (58%)  | -            | 33.7 (+0.1) |
| Stopping Criterion | 10 | 0.5 | 130 (76%) | -            | 33.7 (+0.4) |

MTTT Fr-En
| method             | k  | β   | # calls   | search error | bleu        |
| Heuristic          | 5  | 0.8 | 100 (8%)  | 16.2%        | 33.5 (+0.2) |
| Heuristic          | 10 | 1.0 | 196 (9%)  | 25.2%        | 33.6 (+0.6) |
| Stopping Criterion | 5  | 1.0 | 65 (66%)  | -            | 34.1 (+0.8) |
| Stopping Criterion | 10 | 1.2 | 88 (143%) | -            | 34.1 (+1.1) |

Table 4: bleu, search error, and average number of calls to score(·,·) for output obtained with the length-normalization scoring function on the IWSLT'14 De-En and MTTT Fr-En test sets. The increase in bleu (in parentheses) is over the baseline with no length normalization. Search error and performance increases are with respect to standard beam search decoding using the same scoring function.

6.3 Mutual Information

We train a language model on the IWSLT dataset and use it to calculate p(y) in Eq. (12), as marginalizing over x is intractable (see Li et al. (2016) for further justification). We run experiments using both of the methods discussed in § 5 and present results in Tab. 5. We find that both methods provide results of equivalent bleu score compared with the baseline output, i.e., results obtained with the unbounded PMI objective and beam search. Again, despite the high search-error rate demonstrated by the heuristic method, evaluation metrics are still comparable.

6.4 Memory Usage

We conduct a set of experiments where we limit the total queue capacity to γ · k hypotheses for γ ∈ {2, 5}, as described in § 3.3, and report the bleu score of the resulting set of hypotheses.

As shown in Tab. 6, we find that restricting the queue capacity does not harm output quality and, additionally, leads to an even greater runtime performance increase. For example, runtime for decoding IWSLT'14 with a beam size of 10 can be improved by 374% while returning results with better evaluation metrics (γ = 2 in Tab. 6). We find that improvements are even more pronounced for larger beam sizes. Across beam widths and tasks, we find that search error (with respect to standard beam search) is quite low for γ = 5. Additionally, for smaller γ, the change in bleu score demonstrates that search error in this context does not necessarily hurt the quality of results.

| method             | k  | ε   | λ   | # calls  | search error | bleu |
| Baseline           | 5  | -   | .05 | 115      | -            | 33.2 |
| Baseline           | 10 | -   | .05 | 229      | -            | 33.0 |
| Heuristic          | 5  | .02 | .05 | 129 (0%) | 42.7%        | 33.2 |
| Heuristic          | 10 | .02 | .05 | 256 (0%) | 42.7%        | 33.0 |
| Stopping Criterion | 5  | .05 |     | 114 (1%) | 29.2%        | 33.2 |
| Stopping Criterion | 10 | .05 |     | 224 (2%) | 26.6%        | 33.0 |

Table 5: bleu scores with the mutual information scoring function on IWSLT'14 De-En. The baseline is PMI decoding with unbounded p(y), i.e., ε = 0. Search error is with respect to beam search decoding of the baseline with the same λ.
IWSLT'14 De-En
| k  | γ | search error | bleu        | # calls     |
| 5  | 2 | 22.7%        | 35.7 (+0.1) | 43.8 (163%) |
| 5  | 5 | 4.4%         | 35.8 (+0.2) | 79.8 (44%)  |
| 5  | - | -            | 35.6        | 93.0 (24%)  |
| 10 | 2 | 22.6%        | 35.7 (+0.3) | 48.4 (374%) |
| 10 | 5 | 4.5%         | 35.6 (+0.2) | 126.9 (81%) |
| 10 | - | -            | 35.4        | 169.0 (36%) |

WMT'17 De-En
| k  | γ | search error | bleu        | # calls     |
| 5  | 2 | 29.0%        | 29.7 (+0.2) | 77.5 (75%)  |
| 5  | 5 | 1.2%         | 29.5 (+0.0) | 115.8 (12%) |
| 5  | - | -            | 29.5        | 118.8 (10%) |
| 10 | 2 | 36.6%        | 29.5 (+0.2) | 97.3 (165%) |
| 10 | 5 | 2.6%         | 29.3 (+0.0) | 230.0 (12%) |
| 10 | - | -            | 29.3        | 230.2 (12%) |

Table 6: bleu scores and the number of calls to score(·,·) on the IWSLT'14 De-En validation set and WMT'17 De-En test set with queue capacity restricted to γ · k. Note that γ = - (no capacity restriction) corresponds to the standard best-first beam search algorithm. Performance increases are over standard beam search. Search error is with respect to beam search with the same beam width.

7 Related Work

Our work is most similar to that of Zhou and Hansen (2005), who propose beam-stack search. However, they are focused on exact inference and still evaluate hypotheses in breadth-first order. Additionally, their algorithm has large memory requirements; while best-first beam search has similar requirements in the worst case, we introduce effective methods for reducing them, namely memory-reduced best-first beam search.

Huang et al. (2017) propose and prove the optimality of an early-stopping criterion for beam search. The authors find in practice, though, that the reduction in computation from their algorithm was generally not significant. We build on this work and introduce additional methods for avoiding unnecessary computation. Our method leads to better performance, as shown in Tab. 2.

Klein and Manning (2003) use A* for PCFG parsing; however, they use the unpruned version for exact search, which is not applicable for NMT or AS as the memory requirements of the algorithm are far too large for these tasks. Subsequently, Pauls and Klein (2009) provide a method for pruning this search algorithm, albeit using a threshold rather than explicitly limiting the state space. Huang et al. (2012) also adapt A* for a k-best decoding algorithm. While their methods differ notably from ours, they likewise employ pruning techniques that allow for substantial speedups.

Stahlberg and Byrne (2019) create an exact inference algorithm for decoding and use it to analyze the output of neural NMT models. While they likewise employ the monotonicity of the scoring function to make their method tractable, they do not focus on speed or mimicking the results of standard beam search.

8 Conclusion

We propose best-first beam search, an algorithm that allows for faster decoding while still guaranteeing k-optimality. We provide results on several sequence-to-sequence transduction tasks that show the speed-ups our algorithm provides over standard beam search for decoding neural models. We adapt several popular alternate scoring functions to best-first beam search and provide a framework that can be used to adapt other scoring methods, such as coverage normalization Wu et al. (2016) or diverse beam search Vijayakumar et al. (2016). We also provide a memory-reduced version of our algorithm, which returns competitive results in a fraction of the time needed for standard beam search.

References