BARTScore: Evaluating Generated Text as Text Generation

06/22/2021
by Weizhe Yuan, et al.

A wide variety of NLP applications, such as machine translation, summarization, and dialog, involve text generation. One major challenge for these applications is how to evaluate whether such generated texts are actually fluent, accurate, or effective. In this work, we conceptualize the evaluation of generated text as a text generation problem, modeled using pre-trained sequence-to-sequence models. The general idea is that models trained to convert the generated text to/from a reference output or the source text will achieve higher scores when the generated text is better. We operationalize this idea using BART, an encoder-decoder based pre-trained model, and propose a metric, BARTScore, with a number of variants that can be flexibly applied in an unsupervised fashion to the evaluation of text from different perspectives (e.g., informativeness, fluency, or factuality). BARTScore is conceptually simple and empirically effective. It outperforms existing top-scoring metrics in 16 of 22 test settings, covering evaluation on 16 datasets (e.g., machine translation, text summarization) and 7 different perspectives (e.g., informativeness, factuality). Code to calculate BARTScore is available at https://github.com/neulab/BARTScore, and we have released an interactive leaderboard for meta-evaluation at http://explainaboard.nlpedia.ai/leaderboard/task-meval/ on the ExplainaBoard platform, which allows us to interactively understand the strengths, weaknesses, and complementarity of each metric.
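To make the idea concrete, below is a minimal sketch of a BARTScore-style computation using Hugging Face Transformers: the score of a hypothesis is the average log-probability that a pre-trained BART model assigns to the hypothesis tokens when conditioned on the source text (or reference). The specific checkpoint (facebook/bart-large-cnn) and the mean-over-tokens reduction here are illustrative assumptions, not the authors' exact configuration; the released implementation at https://github.com/neulab/BARTScore is the reference version.

    # Sketch of a BARTScore-style metric: score(x -> y) = mean log P(y_t | x, y_<t)
    # under a pre-trained seq2seq model. Checkpoint and reduction are assumptions.
    import torch
    from transformers import BartTokenizer, BartForConditionalGeneration

    device = "cuda" if torch.cuda.is_available() else "cpu"
    tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
    model = (BartForConditionalGeneration
             .from_pretrained("facebook/bart-large-cnn")
             .to(device)
             .eval())

    @torch.no_grad()
    def bart_score(source: str, target: str) -> float:
        """Mean token log-likelihood of `target` given `source` (higher is better)."""
        src = tokenizer(source, return_tensors="pt", truncation=True).to(device)
        tgt = tokenizer(target, return_tensors="pt", truncation=True).to(device)
        # Passing labels makes the model produce logits aligned with the target tokens.
        out = model(**src, labels=tgt.input_ids)
        log_probs = torch.log_softmax(out.logits, dim=-1)
        # Log-probability assigned to each target token, masked to real (non-pad) tokens.
        token_lp = log_probs.gather(-1, tgt.input_ids.unsqueeze(-1)).squeeze(-1)
        mask = tgt.attention_mask.bool()
        return token_lp[mask].mean().item()

    # Example: faithfulness-style scoring of a summary against its source document.
    print(bart_score("The cat sat on the mat in the sun.", "A cat was sitting on a mat."))

Swapping which text is used as source and which as target yields the different directions described above (source-to-hypothesis for faithfulness, hypothesis-to-reference or reference-to-hypothesis for precision- and recall-oriented views).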

