
Towards Inference-Oriented Reading Comprehension: ParallelQA

by Soumya Wadhwa et al.
Carnegie Mellon University

In this paper, we investigate the tendency of end-to-end neural Machine Reading Comprehension (MRC) models to match shallow patterns rather than perform inference-oriented reasoning on RC benchmarks. We aim to test the ability of these systems to answer questions that focus on referential inference. We propose ParallelQA, a strategy for formulating such questions using parallel passages, and demonstrate that existing neural models fail to generalize well to this setting.

