Two-Turn Debate Doesn't Help Humans Answer Hard Reading Comprehension Questions

10/19/2022
by Alicia Parrish, et al.

The use of language-model-based question-answering systems to aid humans in completing difficult tasks is limited, in part, by the unreliability of the text these systems generate. Using hard multiple-choice reading comprehension questions as a testbed, we assess whether presenting humans with arguments for two competing answer options, where one is correct and the other is incorrect, allows human judges to perform more accurately, even when one of the arguments is unreliable and deceptive. If this is helpful, we may be able to increase our justified trust in language-model-based systems by asking them to produce these arguments where needed. Previous research has shown that just a single turn of arguments in this format is not helpful to humans. However, as debate settings are characterized by a back-and-forth dialogue, we follow up on previous results to test whether adding a second round of counter-arguments is helpful to humans. We find that, regardless of whether they have access to arguments or not, humans perform similarly on our task. These findings suggest that, in the case of answering reading comprehension questions, debate is not a helpful format.

