
Why Machine Reading Comprehension Models Learn Shortcuts?

by   Yuxuan Lai, et al.

Recent studies report that many machine reading comprehension (MRC) models can perform close to or even better than humans on benchmark datasets. However, existing work indicates that many MRC models learn shortcuts to outwit these benchmarks, yet their performance in real-world applications remains unsatisfactory. In this work, we attempt to explore why these models learn shortcuts instead of the expected comprehension skills. Based on the observation that a large portion of questions in current datasets have shortcut solutions, we argue that the high proportion of shortcut questions in training data makes models rely excessively on shortcut tricks. To investigate this hypothesis, we carefully design two synthetic datasets with annotations indicating whether a question can be answered using a shortcut solution. We further propose two new methods to quantitatively analyze the learning difficulty of shortcut versus challenging questions, and to reveal the inherent learning mechanism behind the performance gap between the two kinds of questions. A thorough empirical analysis shows that MRC models tend to learn shortcut questions earlier than challenging questions, and that high proportions of shortcut questions in training sets hinder models from exploring sophisticated reasoning skills in the later stages of training.
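To make the notion of a "shortcut solution" concrete, here is a minimal sketch (not the paper's code; all names and the toy data are illustrative assumptions) of a lexical-overlap heuristic: it answers a question by simply picking the passage sentence that shares the most words with the question, with no actual reasoning involved. Questions that such a heuristic solves are shortcut questions in the sense discussed above.

```python
# Hypothetical illustration of a "shortcut" solver: pick the passage
# sentence with the greatest word overlap with the question.
def shortcut_answer(passage: str, question: str) -> str:
    """Return the passage sentence sharing the most words with the question."""
    q_words = set(question.lower().rstrip("?").split())
    best_sentence, best_overlap = "", -1
    for sentence in passage.split(". "):
        overlap = len(q_words & set(sentence.lower().rstrip(".").split()))
        if overlap > best_overlap:
            best_sentence, best_overlap = sentence, overlap
    return best_sentence

passage = ("Marie Curie won the Nobel Prize in 1903. "
           "She was born in Warsaw. "
           "Her daughter also became a scientist.")
question = "When did Marie Curie win the Nobel Prize?"
print(shortcut_answer(passage, question))
# The sentence about 1903 overlaps most with the question words.
```

A question like this is answerable without comprehension, whereas a challenging question (e.g. one requiring coreference or multi-step inference across sentences) would defeat such word-matching tricks.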

