Break, Perturb, Build: Automatic Perturbation of Reasoning Paths through Question Decomposition

07/29/2021
by Mor Geva, et al.

Recent efforts to create challenge benchmarks that test the abilities of natural language understanding models have largely depended on human annotations. In this work, we introduce the "Break, Perturb, Build" (BPB) framework for automatic reasoning-oriented perturbation of question-answer pairs. BPB represents a question by decomposing it into the reasoning steps required to answer it, symbolically perturbs the decomposition, and then generates new question-answer pairs. We demonstrate the effectiveness of BPB by creating evaluation sets for three reading comprehension (RC) benchmarks, generating thousands of high-quality examples without human intervention. Evaluating a range of RC models on these sets reveals large performance gaps on the generated examples compared to the original data. Moreover, symbolic perturbations enable fine-grained analysis of the strengths and limitations of models. Finally, augmenting the training data with examples generated by BPB helps close these performance gaps without any performance drop on the original data distribution.
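To make the three-stage pipeline concrete, the following is a minimal, self-contained sketch of the break/perturb/build idea on a single toy question. It is not the authors' implementation: all names (Step, decompose, perturb, build) and the example question are hypothetical, and the actual system uses a learned QDMR-style decomposition parser, a library of symbolic perturbation rules, and a question-generation model rather than the hard-coded rules shown here.

```python
# Illustrative sketch of the three BPB stages on a toy example.
# All function and type names here are hypothetical; this schematic
# stands in for a learned parser, symbolic rules, and a generator.

from dataclasses import dataclass


@dataclass
class Step:
    """One reasoning step in a QDMR-style decomposition."""
    operator: str    # e.g. "select", "comparison"
    arguments: list


def decompose(question: str) -> list[Step]:
    """Break: map a question to its reasoning steps.

    In the paper this is a learned decomposition parser; here the
    output for one toy question is hard-coded.
    """
    # "Which city is larger, Oslo or Bergen?"
    return [
        Step("select", ["population of Oslo"]),
        Step("select", ["population of Bergen"]),
        Step("comparison", ["max", "#1", "#2"]),
    ]


def perturb(steps: list[Step]) -> list[Step]:
    """Perturb: apply a symbolic rule to the decomposition.

    One BPB-style rule flips a comparison operator (max -> min),
    turning "larger" into "smaller" and changing the gold answer.
    """
    out = []
    for s in steps:
        if s.operator == "comparison" and s.arguments[0] == "max":
            s = Step("comparison", ["min", *s.arguments[1:]])
        out.append(s)
    return out


def build(steps: list[Step]) -> str:
    """Build: regenerate a natural-language question.

    A template stands in for the paper's question-generation model.
    """
    last = steps[-1]
    if last.operator == "comparison" and last.arguments[0] == "min":
        return "Which city is smaller, Oslo or Bergen?"
    return "Which city is larger, Oslo or Bergen?"


if __name__ == "__main__":
    steps = decompose("Which city is larger, Oslo or Bergen?")
    print(build(perturb(steps)))
    # -> "Which city is smaller, Oslo or Bergen?"
```

Because the perturbation is applied to the symbolic decomposition rather than to the surface text, the new answer can be recomputed step by step, which is what lets BPB generate question-answer pairs without human annotation.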

Related research

11/28/2022
Automatically generating question-answer pairs for assessing basic reading comprehension in Swedish
This paper presents an evaluation of the quality of automatically genera...

01/27/2021
VisualMRC: Machine Reading Comprehension on Document Images
Recent studies on machine reading comprehension have focused on text-lev...

11/01/2022
CONDAQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation
The full power of human language-based communication cannot be realized ...

04/05/2021
Discrete Reasoning Templates for Natural Language Understanding
Reasoning about information from multiple parts of a passage to derive a...

05/25/2022
Is a Question Decomposition Unit All We Need?
Large Language Models (LMs) have achieved state-of-the-art performance o...

04/06/2020
Evaluating NLP Models via Contrast Sets
Standard test sets for supervised learning evaluate in-distribution gene...

04/09/2020
Natural Perturbation for Robust Question Answering
While recent models have achieved human-level scores on many NLP dataset...
