Referenceless Quality Estimation for Natural Language Generation

08/05/2017
by Ondřej Dušek, et al.

Traditional automatic evaluation measures for natural language generation (NLG) use costly human-authored references to estimate the quality of a system output. In this paper, we propose a referenceless quality estimation (QE) approach based on recurrent neural networks, which predicts a quality score for an NLG system output by comparing it to the source meaning representation only. Our method outperforms traditional metrics and a constant baseline in most respects; we also show that synthetic data helps to increase correlation results by 21% compared to the base system. Our results are comparable to results obtained in similar QE tasks despite the more challenging setting.
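The abstract already outlines the general shape of the model: recurrent encoders read the source meaning representation (MR) and the system output, and a regressor maps their joint representation to a scalar quality score. The PyTorch sketch below illustrates that shape only; the GRU encoders, layer sizes, rating scale, and the ReferencelessQE name are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a referenceless QE scorer in the spirit of the abstract:
# two GRU encoders (one for the source MR, one for the system output) feed a
# small regressor that predicts a scalar quality score. All hyperparameters
# and the architecture details are assumptions, not the paper's setup.
import torch
import torch.nn as nn

class ReferencelessQE(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 64, hid_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.mr_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)   # encodes the meaning representation
        self.out_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)  # encodes the system output
        self.scorer = nn.Sequential(
            nn.Linear(2 * hid_dim, hid_dim),
            nn.Tanh(),
            nn.Linear(hid_dim, 1),  # scalar quality score
        )

    def forward(self, mr_ids: torch.Tensor, out_ids: torch.Tensor) -> torch.Tensor:
        _, h_mr = self.mr_enc(self.embed(mr_ids))     # final hidden state of the MR encoder
        _, h_out = self.out_enc(self.embed(out_ids))  # final hidden state of the output encoder
        joint = torch.cat([h_mr[-1], h_out[-1]], dim=-1)
        return self.scorer(joint).squeeze(-1)

# Training would regress predictions against human quality ratings; the
# tensors here are dummy data that only demonstrate the expected shapes.
model = ReferencelessQE(vocab_size=1000)
mr = torch.randint(1, 1000, (4, 12))    # batch of tokenized MRs
out = torch.randint(1, 1000, (4, 20))   # batch of tokenized system outputs
ratings = torch.rand(4) * 6             # hypothetical human ratings on a 0-6 scale
loss = nn.MSELoss()(model(mr, out), ratings)
loss.backward()
```

Because the score is computed from the MR and the output alone, no human-authored reference is needed at evaluation time, which is the core point of the referenceless setting.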


Related research

04/27/2020
BLEU Neighbors: A Reference-less Approach to Automatic Evaluation
Evaluation is a bottleneck in the development of natural language genera...

10/10/2019
Automatic Quality Estimation for Natural Language Generation: Ranting (Jointly Rating and Ranking)
We present a recurrent neural network based system for automatic quality...

05/25/2020
AMR quality rating with a lightweight CNN
Structured semantic sentence representations such as Abstract Meaning Re...

09/24/2018
Sentence-Level Fluency Evaluation: References Help, But Can Be Spared!
Motivated by recent findings on the probabilistic modeling of acceptabil...

05/24/2023
Not All Metrics Are Guilty: Improving NLG Evaluation with LLM Paraphrasing
Most research about natural language generation (NLG) relies on evaluati...

11/12/2016
Semi-automatic Simultaneous Interpreting Quality Evaluation
Increasing interpreting needs a more objective and automatic measurement...

10/09/2020
Mark-Evaluate: Assessing Language Generation using Population Estimation Methods
We propose a family of metrics to assess language generation derived fro...
