Differentiable N-gram Objective on Abstractive Summarization

02/08/2022
by Yunqi Zhu, et al.

ROUGE is the standard n-gram-based automatic evaluation metric for sequence-to-sequence tasks, whereas cross-entropy loss, the usual training objective of neural language models, optimizes at the unigram level. We present differentiable n-gram objectives that aim to alleviate this discrepancy between the training criterion and the evaluation criterion. The objective maximizes the probabilistic weight of matched sub-sequences; the novelty of our work is that the objective weights matched sub-sequences equally and does not cap the number of matched sub-sequences at the ground-truth count of n-grams in the reference sequence. We jointly optimize cross-entropy loss and the proposed objective, obtaining decent ROUGE score improvements on the abstractive summarization datasets CNN/DM and XSum and outperforming alternative n-gram objectives.
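The abstract only sketches the idea, so the code below is a rough, hypothetical illustration rather than the authors' released implementation: it treats the decoder's per-step token distributions as "soft" tokens, scores every distinct reference n-gram at every decoding position by the product of its token probabilities, weights all matched n-grams equally, does not cap the match count by the reference count, and combines the negated reward with cross-entropy. The function names soft_ngram_reward and joint_loss, the alpha weight, and the length normalization are assumptions introduced for this sketch.

# Minimal PyTorch sketch of a differentiable n-gram matching objective.
# NOT the paper's exact formulation; names and normalization are assumptions.
import torch

def soft_ngram_reward(probs: torch.Tensor, reference: torch.Tensor, n: int = 2) -> torch.Tensor:
    """
    probs:     (T, V) decoder output distributions (softmax already applied).
    reference: (L,)   reference token ids.
    Returns a scalar differentiable reward (higher = more soft n-gram overlap).
    """
    T, V = probs.shape
    # Distinct reference n-grams, each weighted equally (no frequency weighting).
    ref_ngrams = {tuple(reference[i:i + n].tolist()) for i in range(len(reference) - n + 1)}
    if T < n or not ref_ngrams:
        return probs.new_zeros(())

    reward = probs.new_zeros(())
    for gram in ref_ngrams:
        # Probability of emitting this n-gram starting at each position s:
        # product of the n token probabilities probs[s + t, gram[t]].
        # Summing over all start positions gives an *uncapped* expected match count.
        per_pos = torch.stack(
            [probs[t:T - n + 1 + t, gram[t]] for t in range(n)], dim=0
        )
        reward = reward + per_pos.prod(dim=0).sum()
    # Normalize by the number of candidate positions so the scale is comparable
    # across sequence lengths (an assumption, not taken from the paper).
    return reward / (T - n + 1)

def joint_loss(logits, target, reference, alpha: float = 0.5, n: int = 2):
    # Joint objective: token-level cross-entropy minus the weighted n-gram reward.
    ce = torch.nn.functional.cross_entropy(logits, target)
    reward = soft_ngram_reward(torch.softmax(logits, dim=-1), reference, n=n)
    return ce - alpha * reward

Because the reward is built from products of softmax probabilities, it is differentiable end to end and can be minimized with the same optimizer as the cross-entropy term; the bigram setting (n = 2) mirrors ROUGE-2 style overlap, but n is a free parameter here.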


Related research

04/24/2019  A Novel Re-weighting Method for Connectionist Temporal Classification
The connectionist temporal classification (CTC) enables end-to-end seque...

06/29/2021  Don't Take It Literally: An Edit-Invariant Sequence Loss for Text Generation
Neural text generation models are typically trained by maximizing log-li...

08/01/2017  A Continuous Relaxation of Beam Search for End-to-end Training of Neural Sequence Models
Beam search is a desirable choice of test-time decoding algorithm for ne...

03/01/2017  Gram-CTC: Automatic Unit Selection and Target Decomposition for Sequence Labelling
Most existing sequence labelling models rely on a fixed decomposition of...

06/20/2021  A Brief Study on the Effects of Training Generative Dialogue Models with a Semantic loss
Neural models trained for next utterance generation in dialogue task lea...

03/04/2019  Complement Objective Training
Learning with a primary objective, such as softmax cross entropy for cla...

04/14/2017  Optimizing Differentiable Relaxations of Coreference Evaluation Metrics
Coreference evaluation metrics are hard to optimize directly as they are...
