Global Explainability of BERT-Based Evaluation Metrics by Disentangling along Linguistic Factors

10/08/2021
by Marvin Kaster, et al.

Evaluation metrics are a key ingredient for progress in text generation systems. In recent years, several BERT-based evaluation metrics have been proposed (including BERTScore, MoverScore, and BLEURT) which correlate much better with human assessments of text generation quality than BLEU or ROUGE, invented two decades ago. However, little is known about what these metrics, which are based on black-box language model representations, actually capture (it is typically assumed they model semantic similarity). In this work, we use a simple regression-based global explainability technique to disentangle metric scores along linguistic factors, including semantics, syntax, morphology, and lexical overlap. We show that the different metrics capture all aspects to some degree, but that they are all substantially sensitive to lexical overlap, just like BLEU and ROUGE. This exposes limitations of these newly proposed metrics, which we also highlight in an adversarial test scenario.
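The regression-based disentanglement described in the abstract can be sketched as follows. This is an illustrative toy example, not the paper's exact setup: the factor scores and metric scores are synthetic, and the feature names are assumptions standing in for whatever linguistic factor measures are actually used. The idea is simply to regress a metric's scores on per-example linguistic factor scores and read off the fitted coefficients as a global explanation of what the metric is sensitive to.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500

# Hypothetical per-sentence-pair linguistic factor scores in [0, 1].
# Columns: semantics, syntax, morphology, lexical overlap (illustrative only).
X = rng.uniform(size=(n, 4))

# Synthetic "metric" scores, deliberately weighted toward lexical overlap
# plus noise, standing in for scores a BERT-based metric would assign.
w_true = np.array([0.3, 0.1, 0.1, 0.5])
y = X @ w_true + rng.normal(scale=0.05, size=n)

# Fit a linear regression; its coefficients serve as a global explanation
# of how strongly each linguistic factor drives the metric's scores.
reg = LinearRegression().fit(X, y)
factors = ["semantics", "syntax", "morphology", "lexical overlap"]
for name, coef in zip(factors, reg.coef_):
    print(f"{name}: {coef:.2f}")
```

In this toy setting the largest fitted coefficient falls on lexical overlap, mirroring the paper's finding that BERT-based metrics remain substantially sensitive to surface overlap.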


Related research

04/09/2020 · BLEURT: Learning Robust Metrics for Text Generation
Text generation has made significant advances in the last few years. Yet...

09/08/2022 · Towards explainable evaluation of language models on the semantic similarity of visual concepts
Recent breakthroughs in NLP research, such as the advent of Transformer ...

09/13/2021 · Perturbation CheckLists for Evaluating NLG Evaluation Metrics
Natural Language Generation (NLG) evaluation is a multifaceted task requ...

08/15/2022 · MENLI: Robust Evaluation Metrics from Natural Language Inference
Recently proposed BERT-based evaluation metrics perform well on standard...

10/25/2022 · DEMETR: Diagnosing Evaluation Metrics for Translation
While machine translation evaluation metrics based on string overlap (e....

10/13/2020 · Improving Text Generation Evaluation with Batch Centering and Tempered Word Mover Distance
Recent advances in automatic evaluation metrics for text have shown that...

07/18/2018 · Is it worth it? Budget-related evaluation metrics for model selection
Creating a linguistic resource is often done by using a machine learning...
