MOCHA: A Dataset for Training and Evaluating Generative Reading Comprehension Metrics

10/07/2020
by   Anthony Chen, et al.

Posing reading comprehension as a generation problem provides a great deal of flexibility, allowing for open-ended questions with few restrictions on possible answers. However, progress is impeded by existing generation metrics, which rely on token overlap and are agnostic to the nuances of reading comprehension. To address this, we introduce a benchmark for training and evaluating generative reading comprehension metrics: MOdeling Correctness with Human Annotations (MOCHA). MOCHA contains 40K human judgement scores on model outputs from 6 diverse question answering datasets and an additional set of minimal pairs for evaluation. Using MOCHA, we train a Learned Evaluation metric for Reading Comprehension, LERC, to mimic human judgement scores. LERC outperforms baseline metrics by 10 to 36 absolute Pearson points on held-out annotations. When we evaluate robustness on minimal pairs, LERC achieves 80% accuracy, outperforming baselines by 14 to 26 absolute percentage points while leaving significant room for improvement. MOCHA presents a challenging problem for developing accurate and robust generative reading comprehension metrics.
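The abstract describes two evaluation protocols: correlating a metric's scores with held-out human judgements (Pearson correlation) and checking robustness on minimal pairs (whether the metric ranks the better answer above the corrupted one). Below is a minimal sketch of how such an evaluation could be wired up. It is not the authors' code: `score_candidate` is a hypothetical placeholder (here a simple token-overlap baseline), and the field names in the example dictionaries are assumptions; only `scipy.stats.pearsonr` is a real library call.

```python
# Sketch: evaluating a generative-QA metric against MOCHA-style human judgements.
# `score_candidate` is a stand-in baseline, not LERC; field names are illustrative.
from scipy.stats import pearsonr


def score_candidate(passage: str, question: str, reference: str, candidate: str) -> float:
    """Hypothetical metric: token-overlap F1 between reference and candidate answers."""
    ref_tokens = set(reference.lower().split())
    cand_tokens = set(candidate.lower().split())
    if not ref_tokens or not cand_tokens:
        return 0.0
    overlap = len(ref_tokens & cand_tokens)
    precision = overlap / len(cand_tokens)
    recall = overlap / len(ref_tokens)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


def pearson_vs_human(examples):
    """Correlate metric scores with human judgement scores (held-out evaluation)."""
    metric_scores = [
        score_candidate(ex["passage"], ex["question"], ex["reference"], ex["candidate"])
        for ex in examples
    ]
    human_scores = [ex["human_score"] for ex in examples]
    r, _ = pearsonr(metric_scores, human_scores)
    return r


def minimal_pair_accuracy(pairs):
    """Fraction of minimal pairs where the metric scores the better answer higher."""
    correct = sum(
        score_candidate(p["passage"], p["question"], p["reference"], p["better"])
        > score_candidate(p["passage"], p["question"], p["reference"], p["worse"])
        for p in pairs
    )
    return correct / len(pairs)
```

A learned metric like LERC would replace `score_candidate` with a model that conditions on the passage, question, reference, and candidate answer; the two evaluation functions stay the same.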


