A Continuous Relaxation of Beam Search for End-to-end Training of Neural Sequence Models

08/01/2017
by Kartik Goyal et al.

Beam search is a desirable choice of test-time decoding algorithm for neural sequence models because it potentially avoids search errors made by simpler greedy methods. However, typical cross-entropy training procedures for these models do not directly consider the behaviour of the final decoding method. As a result, for cross-entropy-trained models, beam decoding can sometimes yield reduced test performance when compared with greedy decoding. In order to train models that can more effectively make use of beam search, we propose a new training procedure that focuses on the final loss metric (e.g. Hamming loss) evaluated on the output of beam search. While well-defined, this "direct loss" objective is itself discontinuous and thus difficult to optimize. Hence, in our approach, we form a sub-differentiable surrogate objective by introducing a novel continuous approximation of the beam search decoding procedure. In experiments, we show that optimizing this new training objective yields substantially better results on two sequence tasks (Named Entity Recognition and CCG Supertagging) when compared with both cross-entropy-trained greedy decoding and cross-entropy-trained beam decoding baselines.
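To make the relaxation concrete, the sketch below illustrates the basic building block behind such continuous approximations: replacing a hard argmax with a peaked softmax, so that "selecting" a candidate becomes a differentiable weighted average. This is a minimal illustration of the idea rather than the paper's full top-k beam relaxation; the function name soft_argmax, the temperature value, and the toy tensors are all assumptions for demonstration.

```python
import torch
import torch.nn.functional as F

def soft_argmax(scores, temperature=0.05):
    # Peaked softmax: as temperature -> 0 the weights approach a hard
    # one-hot argmax, but they stay differentiable for any temperature > 0.
    return F.softmax(scores / temperature, dim=-1)

# Toy setup (illustrative values): scores and embeddings for 5 candidates.
scores = torch.randn(5, requires_grad=True)
embeddings = torch.randn(5, 8)

weights = soft_argmax(scores)               # near-one-hot selection weights
soft_pick = weights @ embeddings            # differentiable "selected" embedding
soft_max_score = (weights * scores).sum()   # differentiable stand-in for max(scores)

# Gradients flow through the soft selection, which a hard argmax would block.
soft_max_score.backward()
print(scores.grad)
```

In the paper's full procedure, a soft selection of this kind stands in for the hard top-k operation at every step of beam search, so the entire decoding trace, and hence the final loss computed on its output, admits (sub)gradients for end-to-end training.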


Related research

04/23/2017 · Differentiable Scheduled Sampling for Credit Assignment
We demonstrate that a continuous relaxation of the argmax operation can ...

11/01/2018 · Learning Beam Search Policies via Imitation Learning
Beam search is widely used for approximate decoding in structured predic...

09/17/2019 · BSDAR: Beam Search Decoding with Attention Reward in Neural Keyphrase Generation
This study mainly investigates two decoding problems in neural keyphrase...

02/08/2022 · Differentiable N-gram Objective on Abstractive Summarization
ROUGE is a standard automatic evaluation metric based on n-grams for seq...

03/22/2020 · A Better Variant of Self-Critical Sequence Training
In this work, we present a simple yet better variant of Self-Critical Se...

04/21/2018 · A Stable and Effective Learning Strategy for Trainable Greedy Decoding
As a widely used approximate search strategy for neural network decoders...

10/07/2016 · Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence Models
Neural sequence models are widely used to model time-series data in many...
