RoMe: A Robust Metric for Evaluating Natural Language Generation

03/17/2022
by   Md Rashad Al Hasan Rony, et al.

Evaluating Natural Language Generation (NLG) systems is a challenging task. First, a metric should ensure that the generated hypothesis reflects the reference's semantics. Second, it should consider the grammatical quality of the generated sentence. Third, it should be robust enough to handle various surface forms of the generated sentence. An effective evaluation metric therefore has to be multifaceted. In this paper, we propose an automatic evaluation metric that incorporates several core aspects of natural language understanding (language competence, syntactic variation, and semantic variation). Our proposed metric, RoMe, is trained on language features such as semantic similarity combined with tree edit distance and grammatical acceptability, using a self-supervised neural network to assess the overall quality of the generated sentence. Moreover, we perform an extensive robustness analysis of RoMe and state-of-the-art methods. Empirical results suggest that RoMe correlates more strongly with human judgment than state-of-the-art metrics when evaluating system-generated sentences across several NLG tasks.
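The abstract describes combining several linguistic features (semantic similarity, tree edit distance, grammatical acceptability) into a single quality score. The toy sketch below illustrates only the general idea of such a weighted feature combination; it is not the RoMe implementation. The stand-in features (bag-of-words cosine for semantic similarity, a character-level ratio as a crude proxy for tree edit distance) and the fixed weights are assumptions for illustration — the actual metric uses learned embeddings, syntactic trees, and a trained self-supervised network.

```python
# Toy sketch of a multi-feature NLG score (NOT the RoMe implementation).
# Stand-ins: bag-of-words cosine for semantic similarity, difflib ratio
# as a crude proxy for tree edit distance; weights are arbitrary.
import math
from collections import Counter
from difflib import SequenceMatcher


def semantic_similarity(a: str, b: str) -> float:
    """Bag-of-words cosine similarity (stand-in for embedding similarity)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0


def surface_similarity(a: str, b: str) -> float:
    """Character-level similarity (stand-in for tree edit distance)."""
    return SequenceMatcher(None, a, b).ratio()


def toy_score(hypothesis: str, reference: str,
              w_sem: float = 0.7, w_surf: float = 0.3) -> float:
    """Fixed weighted combination; RoMe instead learns the combination
    with a self-supervised neural network."""
    return (w_sem * semantic_similarity(hypothesis, reference)
            + w_surf * surface_similarity(hypothesis, reference))


print(toy_score("the cat sat on the mat", "a cat is on the mat"))
```

A score near 1.0 indicates a hypothesis close to the reference under both stand-in features; RoMe's learned combination replaces the hand-set weights here.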

