
RoMe: A Robust Metric for Evaluating Natural Language Generation

by   Md Rashad Al Hasan Rony, et al.
University of Hamburg
University of Bonn

Evaluating Natural Language Generation (NLG) systems is a challenging task. Firstly, the metric should ensure that the generated hypothesis reflects the reference's semantics. Secondly, it should consider the grammatical quality of the generated sentence. Thirdly, it should be robust enough to handle various surface forms of the generated sentence. Thus, an effective evaluation metric has to be multifaceted. In this paper, we propose an automatic evaluation metric incorporating several core aspects of natural language understanding (language competence, syntactic and semantic variation). Our proposed metric, RoMe, is trained on language features such as semantic similarity combined with tree edit distance and grammatical acceptability, using a self-supervised neural network to assess the overall quality of the generated sentence. Moreover, we perform an extensive robustness analysis of the state-of-the-art methods and RoMe. Empirical results suggest that RoMe correlates more strongly with human judgment than state-of-the-art metrics when evaluating system-generated sentences across several NLG tasks.
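The abstract describes fusing three sentence-level features (semantic similarity, tree edit distance, grammatical acceptability) into a single quality score. The following is a minimal, purely illustrative sketch of such a fusion step: in the paper the combination is learned by a self-supervised neural network, whereas here the function, its weights, and the logistic squash are all hypothetical stand-ins chosen only to show the shape of the computation.

```python
import math

def combined_quality_score(semantic_sim, tree_edit_sim, grammar_acceptability,
                           weights=(0.5, 0.25, 0.25)):
    """Toy fusion of three features into one quality score.

    All three inputs are assumed to be normalized to [0, 1]. The weights
    and the squashing function are illustrative, not taken from RoMe.
    """
    # Weighted combination of the three feature scores.
    z = sum(w * f for w, f in zip(weights, (semantic_sim,
                                            tree_edit_sim,
                                            grammar_acceptability)))
    # Logistic squash keeps the final score in (0, 1).
    return 1.0 / (1.0 + math.exp(-4.0 * (z - 0.5)))

# A fluent, semantically faithful hypothesis should outscore a poor one.
good = combined_quality_score(0.9, 0.8, 0.95)
bad = combined_quality_score(0.2, 0.3, 0.4)
```

In the actual metric, the learned network replaces the fixed weights, so the relative importance of semantics, syntax, and grammaticality is estimated from data rather than set by hand.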



