A Survey of Evaluation Metrics Used for NLG Systems

08/27/2020
by Ananya B. Sai, et al.

The success of Deep Learning has created a surge of interest in a wide range of Natural Language Generation (NLG) tasks. Deep Learning has not only pushed the state of the art in several existing NLG tasks but has also enabled researchers to explore newer NLG tasks such as image captioning. This rapid progress has necessitated accurate automatic evaluation metrics that allow us to track progress in the field. However, unlike classification tasks, automatically evaluating NLG systems is itself a huge challenge. Several works have shown that early heuristic-based metrics such as BLEU and ROUGE are inadequate for capturing the nuances of the different NLG tasks. The expanding number of NLG models and the shortcomings of current metrics have led to a rapid surge in the number of evaluation metrics proposed since 2014. Moreover, evaluation metrics have shifted from pre-determined, heuristic-based formulae to trained transformer models. This rapid change over a relatively short period has created the need for a survey of existing NLG metrics, to help both established and new researchers quickly come up to speed with the developments in NLG evaluation over the last few years. Through this survey, we first wish to highlight the challenges and difficulties in automatically evaluating NLG systems. Then, we provide a coherent taxonomy of the evaluation metrics to organize existing metrics and to better understand the developments in the field. We also describe the different metrics in detail and highlight their key contributions. Later, we discuss the main shortcomings identified in the existing metrics and describe the methodology used to evaluate evaluation metrics. Finally, we offer suggestions and recommendations on the next steps for improving automatic evaluation metrics.
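To make concrete what the abstract means by a "heuristic-based formula," the sketch below implements the core of BLEU (Papineni et al., 2002) for a single reference at the sentence level: clipped n-gram precision combined with a brevity penalty. It is a simplification for illustration only (no smoothing, one reference, no corpus-level aggregation); the function and variable names are ours, and real evaluations should rely on a vetted implementation such as sacrebleu.

import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Single-reference, sentence-level BLEU without smoothing."""
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        ref_counts = Counter(ngrams(reference, n))
        # Clip each candidate n-gram count by its count in the reference,
        # so repeating a matching word cannot inflate the score.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    # Geometric mean of the n-gram precisions.
    log_avg = sum(math.log(p) for p in precisions) / max_n
    # Brevity penalty discourages overly short candidates.
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(log_avg)

cand = "the cat sat on the mat".split()
ref = "the cat is on the mat".split()
print(f"BLEU-2: {bleu(cand, ref, max_n=2):.3f}")

On this toy pair the script prints a BLEU-2 of about 0.707, and it also illustrates the kind of nuance such surface-overlap formulae miss: a paraphrase with no shared n-grams would score zero even if it were a perfectly adequate generation.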
