Data-driven Natural Language Generation: Paving the Road to Success

06/28/2017
by   Jekaterina Novikova, et al.

We argue that there are currently two major bottlenecks to the commercial use of statistical machine learning approaches for natural language generation (NLG): (a) the lack of reliable automatic evaluation metrics for NLG, and (b) the scarcity of high-quality in-domain corpora. We address the first problem by thoroughly analysing current evaluation metrics and motivating the need for a new, more reliable metric. We address the second problem by presenting a novel framework for developing and evaluating a high-quality corpus for NLG training.
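To make concrete the kind of automatic metric under scrutiny: most standard NLG metrics (BLEU, ROUGE, and relatives) reduce to n-gram overlap between a system output and a human reference. The sketch below is a minimal, from-scratch illustration of modified n-gram precision, the core ingredient of BLEU; it is not code from the paper, and the example sentences are invented.

```python
from collections import Counter

def ngram_precision(ref, hyp, n):
    """Modified n-gram precision: clipped overlap of hypothesis n-grams
    with reference n-grams, divided by total hypothesis n-grams.
    This is the building block of overlap metrics such as BLEU."""
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
    # Clip each hypothesis n-gram count at its count in the reference.
    overlap = sum(min(count, ref_ngrams[gram]) for gram, count in hyp_ngrams.items())
    total = sum(hyp_ngrams.values())
    return overlap / total if total else 0.0

# Hypothetical restaurant-domain example (NLG systems in this line of work
# typically verbalise restaurant attributes).
ref = "the restaurant serves cheap italian food".split()
hyp = "the restaurant serves italian food".split()
print(ngram_precision(ref, hyp, 1))  # 1.0  -- every hypothesis word occurs in the reference
print(ngram_precision(ref, hyp, 2))  # 0.75 -- "serves italian" is not a reference bigram
```

The example also hints at why such metrics are unreliable for NLG: the hypothesis drops the attribute "cheap" entirely, yet scores perfect unigram precision, which is precisely the kind of mismatch with human judgement that motivates the search for better metrics.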


research
05/24/2023

Evaluating NLG Evaluation Metrics: A Measurement Theory Perspective

We address the fundamental challenge in Natural Language Generation (NLG...
research
02/02/2021

The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics

We introduce GEM, a living benchmark for natural language generation (NL...
research
04/12/2021

Plot-guided Adversarial Example Construction for Evaluating Open-domain Story Generation

With the recent advances of open-domain story generation, the lack of re...
research
10/16/2021

FrugalScore: Learning Cheaper, Lighter and Faster Evaluation Metrics for Automatic Text Generation

Fast and reliable evaluation metrics are key to R&D progress. While tr...
research
09/20/2022

Can we do that simpler? Simple, Efficient, High-Quality Evaluation Metrics for NLG

We explore efficient evaluation metrics for Natural Language Generation ...
research
11/28/2019

Towards Reliable Evaluation of Road Network Reconstructions

Existing performance measures rank delineation algorithms inconsistently...
research
10/28/2018

Learning Criteria and Evaluation Metrics for Textual Transfer between Non-Parallel Corpora

We consider the problem of automatically generating textual paraphrases ...
