Learning Beam Search Policies via Imitation Learning

11/01/2018 · by Renato Negrinho et al., Carnegie Mellon University

Beam search is widely used for approximate decoding in structured prediction problems. Models often use a beam at test time but ignore its existence at train time, and therefore do not explicitly learn how to use the beam. We develop a unifying meta-algorithm for learning beam search policies using imitation learning. In our setting, the beam is part of the model, and not just an artifact of approximate decoding. Our meta-algorithm captures existing learning algorithms and suggests new ones. It also lets us show novel no-regret guarantees for learning beam search policies.




1 Introduction

Beam search is the dominant method for approximate decoding in structured prediction tasks such as machine translation [1], speech recognition [2], image captioning [3], and syntactic parsing [4]. Most models that use beam search at test time ignore the beam at train time and are instead learned via methods like likelihood maximization. They therefore suffer from two issues that we jointly address in this work: (1) learning ignores the existence of the beam, and (2) learning uses only oracle trajectories. These issues lead to mismatches between the train and test settings that negatively affect performance. We address both issues simultaneously by using imitation learning to develop novel beam-aware algorithms with no-regret guarantees. Our analysis is inspired by DAgger [5].

Beam-aware learning algorithms use beam search at both train and test time. They contrast with common two-stage learning algorithms that first, at train time, learn a probabilistic model via maximum likelihood, and then, at test time, use beam search for approximate decoding. The insight behind beam-aware algorithms is that if the model uses beam search at test time, then it should be learned using beam search at train time. Beam-aware methods therefore run beam search at train time (i.e., roll in) to collect losses that are then used to update the model parameters. The first beam-aware algorithms were perceptron-based, updating the parameters either when the best hypothesis does not score first in the beam [6], or when it falls out of the beam [7].

While there is substantial prior work on beam-aware algorithms, none of the existing algorithms expose the learned model to its own consecutive mistakes at train time. When rolling in with the learned model, if a transition leads to a beam without the correct hypothesis, existing algorithms either stop [6, 8, 9] or reset to a beam with the correct hypothesis [7, 10, 11]. (Goyal et al. [12] take a different approach: they train with a differentiable approximation of beam search, but decode with the standard, non-differentiable, search algorithm at test time.) Additionally, existing beam-aware algorithms either have no theoretical guarantees or only perceptron-style guarantees [10]. We are the first to prove no-regret guarantees for an algorithm that learns beam search policies.

Imitation learning algorithms such as DAgger [5] leverage the ability to query an oracle at train time to learn a model that is competitive (in the no-regret sense) with the best model in hindsight. Existing imitation learning algorithms such as SEARN [13], DAgger [5] (of which scheduled sampling [14] is an instantiation), AggreVaTe [15], and LOLS [16] execute the learned model at train time to collect data that is then labeled by the oracle and used for retraining. Nonetheless, these methods do not take the beam into account at train time, and therefore do not learn to use the beam effectively at test time.

We propose a new approach to learning beam search policies using imitation learning that addresses these two issues. We formulate the problem as learning a policy to traverse the combinatorial search space of beams. The learned policy is induced via a scoring function: the neighbors of the elements of a beam are scored, and the top k are used to form the successor beam. We learn a scoring function to match the ranking induced by the oracle costs of the neighbors. We introduce training losses that capture this insight, among which are variants of the weighted all pairs loss [17] and existing beam-aware losses. As the losses we propose are differentiable with respect to the scores, our scoring function can be learned using modern online optimization algorithms, e.g., Adam [18].

In some problems (e.g., sequence labeling and syntactic parsing) we have the ability to compute oracle completions and oracle completion costs for non-optimal partial outputs. Within our imitation learning framework, we can use this ability to compute oracle completion costs for the neighbors of the elements of a beam at train time to induce an oracle that allows us to continue collecting supervision after the best hypothesis falls out of the beam. Using this oracle information, we are able to propose a DAgger-like beam-aware algorithm with no-regret guarantees.

We describe our novel learning algorithm as an instantiation of a meta-algorithm for learning beam search policies. This meta-algorithm sheds light on key design decisions that lead to more performant algorithms, e.g., the introduction of better training losses. Our meta-algorithm captures much of the existing literature on beam-aware methods (e.g., [7, 8]), allowing a clearer understanding of, and comparison to, existing approaches, for example by emphasizing that they arise from specific choices of training loss function and data collection strategy, and by proving novel regret guarantees for them.

Our contributions are: an algorithm for learning beam search policies (Section 4.2) with accompanying regret guarantees (Section 5), a meta-algorithm that captures much of the existing literature (Section 4), and new theoretical results for the early update [6] and LaSO [7] algorithms (Section 5.3).

2 Preliminaries

Structured Prediction as Learning to Search

We consider structured prediction in the learning to search framework [5, 13]. Input-output training pairs (x, y) are drawn according to a data generating distribution D jointly over an input space X and an output space Y. For each input x ∈ X, there is an underlying search space encoded as a directed graph G = (V, E) with nodes V and edges E. Each output in Y_x is encoded as a terminal node in G, where Y_x ⊆ Y is the set of valid structured outputs for x.

In this paper, we deal with stochastic policies π : V → Δ(V), where Δ(V) is the set of probability distributions over the nodes in V. (For convenience and brevity of presentation, we make our policies deterministic later in the paper through the introduction of a tie-breaking total order over the elements of V, but our arguments and theoretical results hold more generally.) The goal is to learn a stochastic policy π_θ parametrized by θ ∈ Θ that traverses the induced search spaces, generating outputs with small expected cost; i.e., ideally, we would want to minimize

E_{(x,y)∼D} E_{ŷ∼π_θ} c(y, ŷ),    (1)

where c is the cost function comparing the ground-truth labeling y to the predicted labeling ŷ. We are not able to optimize the loss in Equation (1) directly, but we are able to find a mixture of policies π_{1:m}, where π_i ∈ Π for all i ∈ [m], that is competitive with the best policy in Π on the distribution of trajectories induced by the mixture of π_{1:m}. We use the notation ŷ ∼ π_θ to mean that ŷ is generated by sampling a trajectory on G by executing policy π_θ and returning the labeling associated with the terminal node reached. The search spaces, cost functions, and policies depend on x or (x, y); in the sequel, we omit indexing by example for conciseness.

Search Space, Cost, and Policies

Each example (x, y) induces a search space G = (V, E) and a cost function c. For each v ∈ V, we introduce its set of neighbors N(v) = {v' ∈ V : (v, v') ∈ E}. We identify a single initial node v_0 ∈ V. We define the set of terminal nodes V_T ⊆ V. We assume without loss of generality that all nodes are reachable from v_0 and that all nodes have paths to terminal nodes. For clarity of exposition, we assume that G is a tree-structured directed graph where all terminal nodes are at distance T from the root v_0. (We describe in Appendix A how to convert a directed graph search space to a tree-structured one with all terminals at the same depth.)

Each terminal node v ∈ V_T corresponds to a complete output y_v, which can be compared to the ground truth y via a cost function of interest c(y, y_v) (e.g., Hamming loss in sequence labeling or negative BLEU score [19] in machine translation). We define the optimal completion cost function c* : V → ℝ, which computes the cost of the best terminal node reachable from v as c*(v) = min_{v' ∈ V_T(v)} c(y, y_{v'}), where V_T(v) is the set of terminal nodes reachable from v.

The definition of c* naturally gives rise to an oracle policy π*. At v ∈ V \ V_T, π*(v) can be any fixed distribution (e.g., uniform or one-hot) over argmin_{v' ∈ N(v)} c*(v'). For any state v, executing π* until arriving at a terminal node achieves the lowest possible cost among completions of v.

At v ∈ V \ V_T, a greedy policy induced by a scoring function s : V → ℝ computes a fixed distribution over argmax_{v' ∈ N(v)} s(v'). When multiple elements are tied with the same highest score, we can choose an arbitrary distribution over them; if there is a single highest scoring element, the policy is deterministic. In this paper, we assume the existence of a total order over the elements of V that is used for breaking ties induced by a scoring function. The tie-breaking total order allows us to talk about a particular unique ordering, even when ties occur. The oracle policy π* can be thought of as being induced by the scoring function −c*.
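The greedy and oracle policies differ only in the scoring function they use. A minimal sketch in Python, with tie-breaking by a fixed total order; the callables `neighbors`, `score`, `opt_completion_cost`, and `tie_order` are illustrative names, not from the paper's implementation:

```python
def greedy_step(node, neighbors, score, tie_order):
    """One step of a greedy policy induced by a scoring function.

    Ties in the score are broken by a fixed total order over nodes,
    making the policy deterministic.
    """
    cands = neighbors(node)
    # Highest score first; among ties, smallest position in the total order.
    return max(cands, key=lambda v: (score(v), -tie_order(v)))

def oracle_step(node, neighbors, opt_completion_cost, tie_order):
    # The oracle is the greedy policy induced by the negated
    # optimal completion cost c*.
    return greedy_step(node, neighbors,
                       lambda v: -opt_completion_cost(v), tie_order)
```

For example, on a root with two children where one has strictly smaller optimal completion cost, `oracle_step` picks that child; with tied scores, `greedy_step` picks the child that comes first in the total order.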

3 Beam search

function BeamSearch(s, k)
    b ← {v_0}
    while b is not terminal do
        b ← Policy(b, s, k)
    return b
function Policy(b, s, k)
    Let A_b ← ∪_{v ∈ b} N(v)
    return Best(A_b, s, k)
function Best(A, s, k)
    Let a_1, …, a_{|A|} be the elements of A ordered
        such that s(a_1) ≥ … ≥ s(a_{|A|}) (ties broken by the total order on V)
    Let k' ← min(k, |A|)
    return {a_1, …, a_{k'}}
Algorithm 1 Beam Search
Beam Search Space

Given a search space G = (V, E), we construct its beam search space G_b = (V_b, E_b), where k is the maximum beam capacity. V_b is the set of possible beams that can be formed along the search process, and E_b is the set of possible beam transitions. Nodes b ∈ V_b correspond to nonempty sets of nodes of G with size upper bounded by k, i.e., b ⊆ V with 1 ≤ |b| ≤ k. The initial beam is the singleton set b_0 = {v_0} containing the initial state v_0. Terminal nodes in G_b are singleton sets {v} with a single terminal node v ∈ V_T. For b ∈ V_b, we define A_b = ∪_{v ∈ b} N(v), i.e., the union of the neighborhoods of the elements in b.

Algorithm 1 describes the beam search variant used in this paper: all elements in the beam are expanded simultaneously when transitioning. It is possible to define different beam search space variants, e.g., by considering different expansion strategies or by handling terminals differently (in the case where terminals can be at different depths). The arguments developed in this paper extend to those variants in a straightforward manner.
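Algorithm 1 can be sketched compactly in Python. This is a hedged illustration, not the paper's implementation: `neighbors`, `score`, `tie_order`, and `is_terminal` are hypothetical callables standing in for the search space, scoring function, total order, and terminal test:

```python
def best(candidates, score, k, tie_order):
    """Keep the (up to) k highest-scoring candidates; ties are broken
    deterministically by a fixed total order over nodes."""
    ranked = sorted(candidates, key=lambda v: (-score(v), tie_order(v)))
    return ranked[:min(k, len(ranked))]

def beam_policy(beam, neighbors, score, k, tie_order):
    """One beam transition: expand all beam elements, keep the top k."""
    expanded = [v for u in beam for v in neighbors(u)]
    return best(expanded, score, k, tie_order)

def beam_search(root, neighbors, score, k, tie_order, is_terminal):
    """Run beam transitions from the initial singleton beam until all
    beam elements are terminal (all terminals are at the same depth)."""
    beam = [root]
    while not all(is_terminal(v) for v in beam):
        beam = beam_policy(beam, neighbors, score, k, tie_order)
    return beam
```

Because all terminals are assumed to be at the same depth, every element of the beam becomes terminal on the same transition, so the stopping test is well defined.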

Beam Costs

We define the cost of a beam to be the cost of its lowest cost element, i.e., c*(b) = min_{v ∈ b} c*(v) and, for terminal beams b = {v}, c*(b) = c(y, y_v). We define the beam transition cost function to be c*(b, b') = c*(b') − c*(b), for (b, b') ∈ E_b, i.e., the difference in cost between the lowest cost element in b' and the lowest cost element in b.

A cost increase occurs on a transition (b, b') if c*(b, b') > 0, or equivalently, c*(b') > c*(b), i.e., b' dropped all the lowest cost neighbors of the elements of b. For all b ∈ V_b, we define the set of beams neighboring b that do not lead to cost increases. We significantly overload notation, but usage will be clear from context and argument types, e.g., when referring to c*(v) for nodes and c*(b) for beams.

function Learn(k, strategy)
    for i = 1, …, m do
        Sample an example (x, y) ∼ D
        Induce the search space G and cost function c using (x, y)
        Induce the optimal completion cost function c* using G and c
        Collect b_1, …, b_T ← BeamTrajectory(c*, s_{θ_i}, k, strategy)
        Incur losses ℓ(s_{θ_i}, b_1), …, ℓ(s_{θ_i}, b_{T−1})
        Compute θ_{i+1} using θ_i and the incurred losses, e.g., by SGD or Adam
    return best θ_i on validation
function BeamTrajectory(c*, s, k, strategy)
    b_1 ← {v_0}; t ← 1
    while b_t is not terminal do
        if strategy is oracle then
            b_{t+1} ← Best(A_{b_t}, −c*, k)
        else
            b_{t+1} ← Best(A_{b_t}, s, k)
            if c*(b_{t+1}) > c*(b_t) then
                if strategy is stop then
                    return b_1, …, b_{t+1}
                if strategy is reset then
                    b_{t+1} ← {argmin_{v ∈ A_{b_t}} c*(v)}
        t ← t + 1
    return b_1, …, b_t
Algorithm 2 Meta-algorithm
Beam Policies

Let π_s be a policy induced by a scoring function s. To sample a transition for a beam b, form A_b and compute the scores s(v) for all v ∈ A_b; let a_1, …, a_{|A_b|} be the elements of A_b ordered such that s(a_1) ≥ … ≥ s(a_{|A_b|}); if |A_b| ≤ k, set b' = A_b; if |A_b| > k, let b' pick the top k elements a_1, …, a_k. At b, if there are many orderings that sort the scores of the elements of A_b, we can choose a single one deterministically or sample one stochastically; if there is a single such ordering, the policy is deterministic at b.

For each x, at train time, we have access to the optimal completion cost function c*, which induces the oracle policy π*. At a beam b, a successor beam b' is optimal if c*(b') = c*(b), i.e., at least one neighbor with the smallest possible cost was included in b'. The oracle policy can be seen as using the scoring function −c* to transition in the beam search space G_b.

4 Meta-Algorithm

Our goal is to learn a policy induced by a scoring function that achieves small expected cumulative transition cost along the induced trajectories. Algorithm 2 presents our meta-algorithm in detail. Instantiating our meta-algorithm requires choosing both a surrogate training loss function (Section 4.1) and a data collection strategy (Section 4.2). Table 1 shows how existing algorithms can be obtained as instances of our meta-algorithm with specific choices of loss function, data collection strategy, and beam size.

4.1 Surrogate Losses


In the beam search space, a prediction ŷ for x is generated by running π_θ on G_b. This yields a beam trajectory b_1, …, b_T, where b_1 = {v_0} and b_{t+1} ∼ π_θ(b_t). We have

E_{ŷ∼π_θ} c(y, ŷ) = E_{b_{1:T}∼π_θ} c*(b_T).    (2)

The term c*(b_T) can be written in a telescoping manner as

c*(b_T) = c*(b_1) + Σ_{t=1}^{T−1} (c*(b_{t+1}) − c*(b_t)).    (3)

As c*(b_1) depends on an example (x, y), but not on the parameters θ, the set of minimizers of Equation (2) is the same as the set of minimizers of

Σ_{t=1}^{T−1} E_{b_{1:T}∼π_θ} [c*(b_{t+1}) − c*(b_t)].    (4)

It is not easy to minimize the cost function in Equation (4) as, for example, the space of beam trajectories is combinatorial. To address this issue, we observe the following by using linearity of expectation and the law of iterated expectations to decouple the term at each time step from the rest of the sum over the trajectory:

Σ_{t=1}^{T−1} E_{b∼d_t^{π_θ}} E_{b'∼π_θ(b)} [c*(b') − c*(b)],    (5)

where d_t^{π_θ} denotes the distribution over beams in V_b that results from following π_θ on G_b for t steps. We now replace the expected cost increase at each beam by a surrogate loss function ℓ(s_θ, b) that is differentiable with respect to the parameters θ, where ℓ(s_θ, b) is a surrogate for the expected cost increase incurred by following policy π_θ at beam b for one step.

Elements in A_b should be scored in a way that allows the best elements to be kept in the beam. Different surrogate losses arise from which elements we concern ourselves with, e.g., all the top k elements in A_b or simply the single best element in A_b. Surrogate losses are then large when the scores lead to discarding desired elements of A_b, and small when the scores lead to comfortably keeping the desired elements.

Surrogate Loss Functions

The following additional notation allows us to define losses precisely. Let a_1, …, a_n be an arbitrary ordering of the neighbors of the elements in b, i.e., of A_b. Let c_1, …, c_n be the corresponding costs, where c_j = c*(a_j) for all j ∈ [n], and s_1, …, s_n be the corresponding scores, where s_j = s(a_j) for all j ∈ [n]. Let σ be a permutation such that c_{σ(1)} ≤ … ≤ c_{σ(n)}, i.e., a_{σ(1)}, …, a_{σ(n)} are ordered in increasing order of cost. Similarly, let ρ be a permutation such that s_{ρ(1)} ≥ … ≥ s_{ρ(n)}, i.e., a_{ρ(1)}, …, a_{ρ(n)} are ordered in decreasing order of score. We assume unique σ and ρ to simplify the presentation of the loss functions (which can be guaranteed via the tie-breaking total order on V). In this case, at b, the successor beam b' is uniquely determined by the scores of the elements of A_b.
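The permutations σ and ρ can be computed directly from the cost and score vectors. A small sketch (the function name is ours; we assume ties have already been broken, e.g., by the total order):

```python
def cost_and_score_orderings(costs, scores):
    """Return permutations sigma and rho (as 0-based index lists) such
    that costs[sigma[0]] <= costs[sigma[1]] <= ...  (increasing cost)
    and scores[rho[0]] >= scores[rho[1]] >= ...     (decreasing score)."""
    n = len(costs)
    sigma = sorted(range(n), key=lambda j: costs[j])
    rho = sorted(range(n), key=lambda j: -scores[j])
    return sigma, rho
```

`sigma[0]` is then the index of the lowest-cost neighbor and `rho[:k]` are the indices the induced policy would keep in the successor beam.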

For each x, the corresponding cost function is independent of the parameters θ. We define a loss function at a beam b in terms of the oracle costs of the elements of A_b. We now introduce some well-motivated surrogate loss functions. Perceptron and large-margin inspired losses have been used in early update [6], LaSO [7], and BSO [11]. We also introduce two log losses.

perceptron (first)

ℓ(s, b) = s_{ρ(1)} − s_{σ(1)}    (6)

Penalizes the lowest cost element in A_b not being put at the top of the beam. When applied on the first cost increase, this is equivalent to an "early update" [6].

perceptron (last)

ℓ(s, b) = max(0, s_{ρ(k)} − s_{σ(1)})    (7)

Penalizes the lowest cost element in A_b falling out of the beam.

margin (last)

ℓ(s, b) = max(0, 1 + s_{ρ(k)} − s_{σ(1)})    (8)

Prefers the lowest cost element to be scored higher than the last element kept in the beam by a margin. This yields updates that are similar but not identical to the approximate large-margin variant of LaSO [7].

cost-sensitive margin (last)

ℓ(s, b) = (c_{ρ(k)} − c_{σ(1)}) max(0, 1 + s_{ρ(k)} − s_{σ(1)})    (9)

Weights the margin loss by the cost difference between the lowest cost element and the last element kept in the beam. When applied on a LaSO-style cost increase, this is equivalent to the BSO update of [11].

upper bound

Convex upper bound to the expected beam transition cost E_{b'∼π_s(b)}[c*(b') − c*(b)], where π_s is the policy induced by the scores s:

ℓ(s, b) = max_{j ∈ {k+1, …, n}} δ_j max(0, 1 + s_{σ(j)} − s_{σ(1)}),    (10)

where δ_j = c_{σ(j)} − c_{σ(1)} for j ∈ {k+1, …, n}. Intuitively, this loss imposes a cost-weighted margin between the best neighbor and the neighbors that ought not to be included in the best successor beam. We prove in Appendix B that this loss is a convex upper bound for the expected beam transition cost.

log loss (beam)

Normalizes only over the top k neighbors of a beam according to the scores:

ℓ(s, b) = −s_{σ(1)} + log Σ_{j ∈ B} exp(s_j),    (11)

where B = {ρ(1), …, ρ(k)} ∪ {σ(1)}. The normalization is only over the correct element and the elements included in the beam. The set of indices B encodes the fact that the score vector may not place a_{σ(1)} in the top k, and therefore σ(1) has to also be included in that case. This loss is used in [9], albeit introduced differently.

log loss (neighbors)

Normalizes over all elements in A_b:

ℓ(s, b) = −s_{σ(1)} + log Σ_{j=1}^{n} exp(s_j).    (12)
The losses presented here directly capture the purpose of using a beam for prediction: ensuring that the best hypothesis stays in the beam, i.e., that, at b, a_{σ(1)} is scored sufficiently high to be included in the successor beam b'. If full cost information is not accessible, i.e., if we are not able to evaluate c* for arbitrary elements of V, it is still possible to use a subset of these losses, provided that we are able to identify the lowest cost element among the neighbors of a beam, i.e., for all b, an element v ∈ A_b such that c*(v) = min_{v' ∈ A_b} c*(v').

While certain losses do not appear beam-aware (e.g., those in Equation (6) and Equation (12)), it is important to keep in mind that all losses are collected by executing a policy on the beam search space G_b. Given a beam b, the score vector and cost vector are defined for the elements of A_b. The losses incurred depend on the specific beams visited. The losses in Equations (6), (10), and (12) are convex; the remaining losses are non-convex. For k = 1, we recover well-known losses, e.g., the loss in Equation (12) becomes a simple log loss over the neighbors of a single node, which is precisely the loss used in typical log-likelihood maximization models, and the loss in Equation (7) becomes a perceptron loss. In Appendix C we discuss convexity considerations for different types of losses. In Appendix D, we present additional losses and expand on their connections to existing work.
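As a concrete illustration, the perceptron and margin losses above can be sketched in a few lines of Python, following the σ/ρ notation (index lists sorted by increasing cost and decreasing score, respectively). The function names and the margin value of 1 are our choices:

```python
def perceptron_first(scores, sigma, rho):
    # Penalizes the lowest-cost neighbor not being scored first.
    return max(0.0, scores[rho[0]] - scores[sigma[0]])

def perceptron_last(scores, sigma, rho, k):
    # Penalizes the lowest-cost neighbor falling out of the top k.
    last_in_beam = rho[min(k, len(rho)) - 1]
    return max(0.0, scores[last_in_beam] - scores[sigma[0]])

def margin_last(scores, sigma, rho, k, margin=1.0):
    # Prefers the lowest-cost neighbor to beat the last beam element
    # by a margin.
    last_in_beam = rho[min(k, len(rho)) - 1]
    return max(0.0, margin + scores[last_in_beam] - scores[sigma[0]])

def cost_sensitive_margin_last(scores, costs, sigma, rho, k, margin=1.0):
    # Margin loss weighted by the cost gap between the lowest-cost
    # neighbor and the last element kept in the beam.
    last_in_beam = rho[min(k, len(rho)) - 1]
    return (costs[last_in_beam] - costs[sigma[0]]) * \
           max(0.0, margin + scores[last_in_beam] - scores[sigma[0]])
```

All four are zero when the lowest-cost neighbor is safely inside the beam and grow as it approaches the cut-off, which is exactly the behavior the section motivates.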

4.2 Data Collection Strategy

Our meta-algorithm requires choosing a train time policy to traverse the beam search space to collect supervision. Sampling a trajectory to collect training supervision is done by BeamTrajectory in Algorithm 2.


Our simplest data collection policy follows the oracle policy π* induced by the optimal completion cost function c* (as in Section 3). Using the terminology of Algorithm 1, we can write π*(b) = Best(A_b, −c*, k). This policy transitions using the negated costs of the elements in A_b as scores.

The oracle policy does not address the distribution mismatch problem: at test time, the learned policy will make mistakes and visit beams for which it has not collected supervision at train time, leading to compounding errors. Imitation learning tells us that it is necessary to collect supervision at train time with the learned policy to avoid compounding errors at test time [5].

We now present data collection strategies that use the learned policy. For brevity, we only cover the case where the learned policy is always used (except when a transition leads to a cost increase), and leave the discussion of additional possibilities (e.g., probabilistic interpolation of the learned and oracle policies) to Appendix E.3. When an edge incurring a cost increase is traversed, different strategies are possible:


Stop: stop collecting the beam trajectory. The last beam in the trajectory is the beam arrived at in the transition that led to a cost increase. This data collection strategy is used in structured perceptron training with early update [6].


Reset: reset the beam to contain only the best neighbor as defined by the optimal completion cost function: b ← {argmin_{v ∈ A_b} c*(v)}. In the subsequent steps of the policy, the beam grows back to size k. LaSO [7] uses this data collection strategy. As with the oracle data collection strategy, rather than committing to a specific minimizer, we can sample from any distribution over the minimizers. The reset data collection strategy collects beam trajectories where the oracle policy is executed conditionally, i.e., when the roll-in policy would lead to a cost increase.


Continue: ignore the cost increase and continue following the learned policy. This is the strategy taken by DAgger [5]. The continue data collection strategy has not been considered in the beam-aware setting, and is therefore a novel contribution of our work. Our stronger theoretical guarantees apply to this case.
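The three strategies plug into BeamTrajectory (Algorithm 2) at the point where a cost increase is detected. A sketch under our own naming: `policy_step` produces the learned policy's successor beam, `neighbors` expands a node, and `opt_cost` stands for the optimal completion cost c*:

```python
def beam_trajectory(root, policy_step, neighbors, opt_cost,
                    is_terminal, strategy):
    """Collect a beam trajectory for training; on a cost increase,
    apply the chosen data collection strategy (stop/reset/continue)."""
    beam = [root]
    trajectory = [beam]
    while not all(is_terminal(v) for v in beam):
        next_beam = policy_step(beam)
        increase = (min(opt_cost(v) for v in next_beam) >
                    min(opt_cost(v) for v in beam))
        if increase:
            if strategy == "stop":
                trajectory.append(next_beam)  # keep the offending beam
                break
            if strategy == "reset":
                # Restart from the best neighbor under c*; the beam
                # grows back to full size in later steps.
                cands = [v for u in beam for v in neighbors(u)]
                next_beam = [min(cands, key=opt_cost)]
            # "continue": keep next_beam and keep following the policy.
        beam = next_beam
        trajectory.append(beam)
    return trajectory
```

On a toy tree where the learned policy prefers a costly branch, "stop" ends at the offending beam, "reset" recovers the zero-cost branch, and "continue" follows the mistake to a terminal.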

Algorithm                  | data collection | surrogate loss               | beam size
log-likelihood             | oracle          | log loss (neighbors)         | 1
DAgger [5]                 | continue        | log loss (neighbors)         | 1
early update [6]           | stop            | perceptron (first)           | k
LaSO (perceptron) [7]      | reset           | perceptron (first)           | k
LaSO (large-margin) [7]    | reset           | margin (last)                | k
BSO [11]                   | reset           | cost-sensitive margin (last) | k
globally normalized [9]    | stop            | log loss (beam)              | k
Ours                       | continue        | [choose a surrogate loss]    | k
Table 1: Existing and novel beam-aware algorithms as instances of our meta-algorithm. Our theoretical guarantees require the existence of a deterministic no-regret online learning algorithm for the resulting problem.

5 Theoretical Guarantees

We state regret guarantees for learning beam search policies using the continue, reset, or stop data collection strategies. One of the main contributions of our work is framing the problem of learning beam search policies in a way that allows us to obtain meaningful regret guarantees. Detailed proofs are provided in Appendix E. We begin by analyzing the continue data collection strategy. As we will see, regret guarantees are stronger for continue than for stop or reset.

No-regret online learning algorithms play an important role in the proofs of our guarantees. Let ℓ_1, ℓ_2, … be a sequence of loss functions with ℓ_i : Θ → ℝ for all i. Let θ_1, θ_2, … be a sequence of iterates with θ_i ∈ Θ for all i. The loss function ℓ_i can be chosen according to an arbitrary rule (e.g., adversarially). The online learning algorithm chooses the iterate θ_i. Both ℓ_i and θ_i are chosen online, as functions of the previously observed loss functions and iterates.

Definition 1.

An online learning algorithm is no-regret if for any sequence of loss functions chosen according to the conditions above we have

(1/m) Σ_{i=1}^m ℓ_i(θ_i) − min_{θ ∈ Θ} (1/m) Σ_{i=1}^m ℓ_i(θ) ≤ γ_m,

where γ_m goes to zero as m goes to infinity.

Many no-regret online learning algorithms, especially for convex loss functions, have been proposed in the literature, e.g., [20, 21, 22]. Our proofs of the theoretical guarantees require the no-regret online learning algorithm to be deterministic, i.e., each iterate θ_i must be a deterministic rule of the previously observed iterates and loss functions. Online gradient descent [20] is an example of such an algorithm.
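Online (projected) gradient descent is such a deterministic rule: the next iterate depends only on the current iterate and the observed gradient. A minimal sketch for convex losses over a Euclidean ball, with our own naming and the standard 1/√i step size:

```python
import math

def ogd(grads, radius=1.0, dim=2):
    """Online gradient descent: theta_{i+1} = proj(theta_i - eta_i * g_i)
    with step size eta_i = radius / sqrt(i). Deterministic in the observed
    gradients, and no-regret for convex losses over the ball."""
    theta = [0.0] * dim
    iterates = [list(theta)]
    for i, g in enumerate(grads, start=1):
        eta = radius / math.sqrt(i)
        theta = [t - eta * gj for t, gj in zip(theta, g)]
        norm = math.sqrt(sum(t * t for t in theta))
        if norm > radius:  # project back onto the ball of the given radius
            theta = [t * radius / norm for t in theta]
        iterates.append(list(theta))
    return iterates
```

Because the update is a deterministic function of the past, an adversary that knows the rule can anticipate θ_i, which is exactly the property used in the construction of Section 5.1.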

In Theorem 1, we prove no-regret guarantees for the case where the no-regret online algorithm is presented with explicit expectations of the loss incurred by a beam search policy. In Theorem 2, we upper bound the expected cost incurred by a beam search policy as a function of its expected surrogate loss; this result holds when, at each beam, the surrogate loss is an upper bound on the expected cost increase at that beam. In Theorem 3, we use Azuma-Hoeffding to prove high probability no-regret bounds for the case where we only have access to empirical estimates of the loss incurred by a policy, rather than explicit expectations. In Theorem 4, we extend Theorem 3 to the case where the data collection policy is different from the policy that we are evaluating. These results allow us to give regret guarantees that depend on how frequently the data collection policy differs from the policy that we are evaluating.

In this section we state the results of the theorems alongside some discussion; all proofs are presented in detail in Appendix E. Our analysis closely follows that of DAgger [5], although the results need to be interpreted in the beam search setting. Our regret guarantees for beam-aware algorithms with different data collection strategies are novel.

5.1 No-Regret Guarantees with Explicit Expectations

The sequence of loss functions ℓ_{1:m} can be chosen in a way that applying a no-regret online learning algorithm to generate the sequence of policies π_{1:m} leads to no-regret guarantees for the performance of the mixture of π_{1:m}. The adversary presents the no-regret online learning algorithm with ℓ_i at time i. The adversary is able to play ℓ_i because it can anticipate θ_i, as the adversary knows the deterministic rule used by the no-regret online learning algorithm to pick iterates. Paraphrasing Theorem 1: on the distribution of trajectories induced by the uniform stochastic mixture of π_{1:m}, the best policy in Π for this distribution performs as well (in the limit) as the uniform mixture of π_{1:m}.

Theorem 1.

Let ℓ_i(θ) denote the expected surrogate loss of the scoring function s_θ on the distribution of beams visited by π_{θ_i}, for i ∈ [m]. If the sequence θ_{1:m} is chosen by a deterministic no-regret online learning algorithm, we have (1/m) Σ_{i=1}^m ℓ_i(θ_i) ≤ min_{θ ∈ Θ} (1/m) Σ_{i=1}^m ℓ_i(θ) + γ_m, where γ_m goes to zero when m goes to infinity.

Furthermore, if for all θ ∈ Θ the surrogate loss is an upper bound on the expected cost increase at every beam, we can transform the surrogate loss no-regret guarantees into performance guarantees in terms of the cost c. Theorem 2 tells us that if the best policy in Π along the trajectories induced by the mixture of π_{1:m} incurs small surrogate loss, then the expected cost resulting from labeling examples sampled from D with the uniform mixture of π_{1:m} is also small. It is possible to transform the results about the uniform mixture of π_{1:m} into results about the best policy among π_{1:m}, e.g., following the arguments of Cesa-Bianchi et al. (2004), but for brevity we do not present them in this paper. Proofs of Theorem 1 and Theorem 2 are in Appendix E.1.

Theorem 2.

Let all the conditions in Definition 1 be satisfied, and additionally let the surrogate loss ℓ(s_θ, b) be an upper bound on the expected cost increase at b, for all θ ∈ Θ and all beams b. Then the expected cost incurred by the uniform mixture of π_{1:m} is upper bounded by its average surrogate loss plus a term γ_m, where γ_m goes to zero as m goes to infinity.

5.2 Finite Sample Analysis

Theorem 1 and Theorem 2 are for the case where the adversary presents explicit expectations, i.e., the loss function at time i is an explicit expectation over examples and beam trajectories. In practice, we most likely only have access to a sample estimator of the true expectation: we first sample an example (x, y) ∼ D, sample a trajectory according to π_{θ_i}, and accumulate the surrogate losses incurred along it. We prove high probability no-regret guarantees for this case. Theorem 3 tells us that the population surrogate loss of the mixture of policies π_{1:m} is, with high probability, not much larger than its empirical surrogate loss. Combining this result with Theorem 1 and Theorem 2 allows us to give finite sample high probability results for the performance of the mixture of policies π_{1:m}. The proof of Theorem 3 is found in Appendix E.2.

Theorem 3.

Let each empirical loss be generated by sampling an example (x, y) ∼ D (which induces the corresponding beam search space and cost functions), and sampling a beam trajectory using π_{θ_i}. Let the surrogate losses be bounded in absolute value by a constant u, for all θ ∈ Θ, beam trajectories, and beams. If the iterates θ_{1:m} are chosen by a no-regret online learning algorithm based on the sequence of empirical losses, then, with probability at least 1 − δ, the population surrogate loss of the mixture of π_{1:m} exceeds its empirical surrogate loss by at most γ_m + u√(2 log(1/δ)/m), where γ_m goes to zero as m goes to infinity.

5.3 Finite Sample Analysis for Arbitrary Data Collection Policies

All the results stated so far are for the continue data collection strategy where, at time i, the whole trajectory is collected using the current policy π_{θ_i}. The stop and reset data collection strategies do not necessarily collect the full trajectory under π_{θ_i}. If the data collection policy is other than the learned policy, the analysis can be adapted by accounting for the difference in distribution of trajectories induced by the learned policy and the data collection policy. The insight is that the collected losses depend only on the portion of the trajectory before the first cost increase, so if no cost increases occur in this portion of the trajectory, we are effectively sampling the trajectory using π_{θ_i} when using the stop and reset data collection strategies.

Prior work presented only perceptron-style results for these settings [6, 7]; we are the first to present regret guarantees. Our guarantee depends on the probability with which a trajectory is collected solely with π_{θ_i}. We state the finite sample analysis result for the case where these probabilities are not known explicitly, but we are able to estimate them. The proof of Theorem 4 is found in Appendix E.3.

Theorem 4.

Let π_i' be the data collection policy for example i, which uses either the stop or reset data collection strategy. Let p̂_i be the empirical estimate of the probability of incurring at least one cost increase on the trajectory for example i. Then the guarantee of Theorem 3 holds with an additional term proportional to u (1/m) Σ_{i=1}^m p̂_i, which accounts for the discrepancy between the distributions of trajectories induced by π_{θ_i} and π_i'.

If the probability of stopping or resetting goes to zero as m goes to infinity, then the term capturing the discrepancy between the distributions of trajectories induced by the learned and data collection policies vanishes, and we recover a guarantee similar to Theorem 3. If the probability of stopping or resetting does not go to zero, it is still possible to provide regret guarantees for the performance of this algorithm, but now with a term that does not vanish with increasing m. These regret guarantees for the different data collection strategies are novel.

6 Conclusion

We propose a framework for learning beam search policies using imitation learning. We provide regret guarantees for both new and existing algorithms for learning beam search policies. One of the main contributions is formulating learning beam search policies in the learning to search framework. Policies for beam search are induced via a scoring function. The intuition is that the best neighbors in a beam should be scored sufficiently high, allowing them to be kept in the beam when transitioning using these scores. Based on this insight, we motivate different surrogate loss functions for learning scoring functions. We recover existing algorithms in the literature through specific choices for the loss function and data collection strategy. Our work is the first to provide a beam-aware algorithm with no-regret guarantees.


The authors would like to thank Ruslan Salakhutdinov, Akshay Krishnamurthy, Wen Sun, Christoph Dann, and Kin Olivares for helpful discussions and detailed reviews.


  • (1) Ilya Sutskever, Oriol Vinyals, and Quoc Le. Sequence to sequence learning with neural networks. NIPS, 2014.
  • (2) Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. ICASSP, 2013.
  • (3) Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. CVPR, 2015.
  • (4) David Weiss, Chris Alberti, Michael Collins, and Slav Petrov. Structured training for neural network transition-based parsing. ACL, 2015.
  • (5) Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. AISTATS, 2011.
  • (6) Michael Collins and Brian Roark. Incremental parsing with the perceptron algorithm. ACL, 2004.
  • (7) Hal Daumé and Daniel Marcu. Learning as search optimization: Approximate large margin methods for structured prediction. ICML, 2005.
  • (8) Liang Huang, Suphan Fayong, and Yang Guo. Structured perceptron with inexact search. NAACL, 2012.
  • (9) Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. Globally normalized transition-based neural networks. ACL, 2016.
  • (10) Yuehua Xu and Alan Fern. On learning linear ranking functions for beam search. ICML, 2007.
  • (11) Sam Wiseman and Alexander Rush. Sequence-to-sequence learning as beam-search optimization. ACL, 2016.
  • (12) Kartik Goyal, Graham Neubig, Chris Dyer, and Taylor Berg-Kirkpatrick. A continuous relaxation of beam search for end-to-end training of neural sequence models. AAAI, 2018.
  • (13) Hal Daumé, John Langford, and Daniel Marcu. Search-based structured prediction. Machine Learning, 2009.
  • (14) Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. Scheduled sampling for sequence prediction with recurrent neural networks. NIPS, 2015.
  • (15) Stéphane Ross and Andrew Bagnell. Reinforcement and imitation learning via interactive no-regret learning. arXiv preprint arXiv:1406.5979, 2014.
  • (16) Kai-Wei Chang, Akshay Krishnamurthy, Alekh Agarwal, Hal Daumé, and John Langford. Learning to search better than your teacher. ICML, 2015.
  • (17) Alina Beygelzimer, John Langford, and Bianca Zadrozny. Machine learning techniques—reductions between prediction quality metrics. Performance Modeling and Engineering, 2008.
  • (18) Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR, 2015.
  • (19) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. ACL, 2002.
  • (20) Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. ICML, 2003.
  • (21) Adam Kalai and Santosh Vempala. Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 2005.
  • (22) Elad Hazan. Introduction to online convex optimization. Foundations and Trends® in Optimization, 2016.
  • (23) Nicolo Cesa-Bianchi, Alex Conconi, and Claudio Gentile. On the generalization ability of on-line learning algorithms. IEEE Transactions on Information Theory, 2004.
  • (24) Ben Taskar, Carlos Guestrin, and Daphne Koller. Max-margin Markov networks. NIPS, 2003.
  • (25) Kevin Gimpel and Noah Smith. Softmax-margin CRFs: Training log-linear models with cost functions. In ACL, 2010.

Appendix A Conversion to Tree-Structured Search Spaces

We define a search space as an arbitrary finite directed graph $G = (V, E)$, where $V$ is the set of nodes and $E$ is the set of directed edges. Every directed graph $G$ has an associated tree-structured directed graph $T_G$ encoding all possible paths through $G$. An important reason to perform this transformation is that, in practice, policies often incorporate history features, so they are functions of the whole path leading to a node in $G$, rather than just a single node in $G$. A policy then becomes a function of single nodes of $T_G$. If $G$ is tree-structured, $T_G$ is isomorphic to $G$, i.e., they are the same search space.

Let $v_0 \in V$ be the initial node and $V_T \subseteq V$ the set of terminal nodes of $G$. The nodes of $T_G$ are paths in $G$, and the set of terminal nodes of $T_G$ contains all paths from $v_0$ to terminal nodes in $V_T$. For a path $p$, we denote the length of the sequence of nodes encoding it by $|p|$, and we write $p_i$ for the $i$-th element of $p$. For all paths $p$, $p_1 = v_0$ and $(p_i, p_{i+1}) \in E$ for all $i < |p|$. The neighbor sets $N(p)$ in $T_G$ are defined analogously to the sets $N(v)$ in $G$: for paths $p$ and $p'$, $p' \in N(p)$ if $|p'| = |p| + 1$, $p'_i = p_i$ for all $i \leq |p|$, and $(p_{|p|}, p'_{|p|+1}) \in E$, i.e., a path neighbors $p$ if it can be written as $p$ followed by an additional node in $N(p_{|p|})$. For paths $p$ and $p'$, $p \preceq p'$ if $p$ is a prefix of $p'$, and $p \prec p'$ if $p$ is a prefix of $p'$ and $p \neq p'$. As $T_G$ is tree-structured, we can define the depth of a path as its length, i.e., $d(p) = |p|$. If $p$ is a path in $T_G$, then each prefix $p_{1:j}$, for $j \in \{1, \ldots, |p|\}$, is itself a path in $T_G$.

Tree-structured search spaces are common in practice. They often arise as write-only search spaces, where once an action is taken, its effects are irreversible. Typical search spaces for sequence tagging and machine translation are tree-structured: given a sequence to tag or translate, at each step we commit to a token and never get to change it. When the search space is not naturally seen as tree-structured, the construction described above yields an equivalent tree-structured search space $T_G$ of paths through $G$.
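The path-tree construction above can be sketched directly. This is a minimal illustration, not the paper's code: it assumes the graph is given as an adjacency dict and represents each node of the tree-structured space as the tuple of graph nodes along the path, with a length cap to keep the tree finite when the graph has cycles.

```python
from collections import defaultdict

def paths_tree(adj, v0, max_len):
    """Build the tree-structured search space T_G whose nodes are paths in G.

    adj: dict mapping each node to a list of successor nodes (illustrative format).
    v0: initial node of G.
    max_len: cap on path length, preventing infinite cycling when G has cycles.
    Returns a dict mapping each path (a tuple of nodes) to its child paths.
    """
    children = defaultdict(list)
    stack = [(v0,)]  # the root of T_G is the length-1 path (v0,)
    while stack:
        p = stack.pop()
        if len(p) >= max_len:
            continue  # enforce the maximum path length
        for u in adj.get(p[-1], []):
            q = p + (u,)  # a neighbor of p is p followed by a successor of its last node
            children[p].append(q)
            stack.append(q)
    return children
```

Note that when $G$ is already a tree, each path is determined by its last node, which is the sense in which $T_G$ is isomorphic to $G$.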

If $G$ has cycles, $T_G$ would be infinite. Infinite cycling in $T_G$ can be prevented by, for example, introducing a maximum path length or a maximum number of times that any given node can be visited. In this paper, we also assumed that all terminal nodes in $G$ have the same distance $T$ to the root. It is possible to transform $G$ into a new tree-structured graph $G'$ by padding shorter paths to length $T$. Let $T$ be the maximum distance of any terminal node in $G$ to the root. For each terminal node $v$ with distance $t < T$ to the root, we extend the path to length $T$ by appending a linear chain of $T - t$ additional nodes. Node $v$ is no longer a terminal node in $G'$, and all the nodes in $G'$ that resulted from extending the path are identified with $v$.

Appendix B Convex Upper Bound Surrogate for Expected Beam Transition Cost

In this appendix, we design a convex upper bound surrogate loss for the expected beam transition cost. Let $v_1, \ldots, v_n$ be an arbitrary ordering of the neighbors of the current beam, with corresponding costs $c_1, \ldots, c_n$, where $c_i \geq 0$ for all $i \in [n]$. Let $s_1, \ldots, s_n$ be the corresponding scores, with $s_i \in \mathbb{R}$ for all $i \in [n]$. Let $\sigma$ and $\pi$ be the unique permutations of $[n]$ such that $c_{\sigma_1} \leq \ldots \leq c_{\sigma_n}$ and $s_{\pi_1} \geq \ldots \geq s_{\pi_n}$, respectively, with ties broken according to the total order on the neighbors. Let $k$ be the maximum beam capacity. Let $b$ be the beam induced by the scores $s_1, \ldots, s_n$, i.e., $b = \{v_{\pi_1}, \ldots, v_{\pi_k}\}$, with ties broken according to the total order.
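The two permutations and the induced beam can be computed concretely. The sketch below is illustrative (names are not from the paper's code); it breaks ties by index order, standing in for the total order on neighbors, and relies on Python's stable sort.

```python
def induced_beam(neighbors, scores, costs, k):
    """Compute the score permutation pi (decreasing scores), the cost
    permutation sigma (increasing costs), and the beam of the k
    highest-scoring neighbors. Ties are broken by index order, standing
    in for the total order on neighbors."""
    n = len(neighbors)
    pi = sorted(range(n), key=lambda i: (-scores[i], i))   # decreasing scores
    sigma = sorted(range(n), key=lambda i: (costs[i], i))  # increasing costs
    beam = [neighbors[i] for i in pi[:k]]                  # top-k by score
    return beam, pi, sigma
```

A surrogate loss then compares the scores of the low-cost neighbors (given by $\sigma$) against those kept in the beam (given by $\pi$).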

Consider the upper bound loss function (repeated here from Equation (10))