Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering

by Todor Mihaylov, et al.

We present a new kind of question answering dataset, OpenBookQA, modeled after open book exams for assessing human understanding of a subject. The open book that comes with our questions is a set of 1329 elementary-level science facts. Roughly 6000 questions probe an understanding of these facts and their application to novel situations. This requires combining an open book fact (e.g., metals conduct electricity) with broad common knowledge (e.g., a suit of armor is made of metal) obtained from other sources. While existing QA datasets over documents or knowledge bases, being generally self-contained, focus on linguistic understanding, OpenBookQA probes a deeper understanding of both the topic---in the context of common knowledge---and the language it is expressed in. Human performance on OpenBookQA is close to 92%, but many pre-trained QA methods perform surprisingly poorly, worse than several simple neural baselines we develop. Our oracle experiments designed to circumvent the knowledge retrieval bottleneck demonstrate the value of both the open book and additional facts. We leave it as a challenge to solve the retrieval problem in this multi-hop setting and to close the large gap to human performance.
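The two-hop structure the abstract describes (link an answer choice to a common-knowledge category, then link that category to an open-book fact) can be sketched in a few lines. This is purely illustrative, not the authors' system: the fact list, the common-knowledge table, and the word-overlap scoring below are hypothetical stand-ins for real retrieval.

```python
# Toy sketch of OpenBookQA-style two-hop reasoning (hypothetical data/scoring,
# not the paper's method): choice -> common-knowledge category -> book fact.

OPEN_BOOK = [
    "metal conducts electricity",
    "plants need water to grow",
]

# Common knowledge assumed to come from an external source.
COMMON_KNOWLEDGE = {
    "suit of armor": "metal",
    "copper wire": "metal",
}

def answer(question, choices):
    """Pick the choice whose common-knowledge category appears in a book fact
    that also shares a word with the question (a crude two-hop heuristic)."""
    q_words = set(question.lower().replace("?", "").split())
    best = None
    for choice in choices:
        category = COMMON_KNOWLEDGE.get(choice.lower())  # hop 1: choice -> category
        if category is None:
            continue
        for fact in OPEN_BOOK:                           # hop 2: category -> fact
            fact_words = set(fact.split())
            if category in fact_words and q_words & fact_words:
                best = choice
    return best

print(answer("Which of these can conduct electricity?",
             ["wooden spoon", "suit of armor"]))
# -> suit of armor
```

Note that neither hop alone suffices: the book fact never mentions armor, and common knowledge alone never mentions electricity, which is exactly the retrieval bottleneck the paper's oracle experiments probe.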



Careful Selection of Knowledge to solve Open Book Question Answering

Open book question answering is a type of natural language based QA (NLQ...

Can Generative Pre-trained Language Models Serve as Knowledge Bases for Closed-book QA?

Recent work has investigated the interesting question using pre-trained ...

Narrative Question Answering with Cutting-Edge Open-Domain QA Techniques: A Comprehensive Study

Recent advancements in open-domain question answering (ODQA), i.e., find...

Understanding and Improving Zero-shot Multi-hop Reasoning in Generative Question Answering

Generative question answering (QA) models generate answers to questions ...

OPERA: Harmonizing Task-Oriented Dialogs and Information Seeking Experience

Existing studies in conversational AI mostly treat task-oriented dialog ...

Closed-book Question Generation via Contrastive Learning

Question Generation (QG) is a fundamental NLP task for many downstream a...

Writing your own book: A method for going from closed to open book QA to improve robustness and performance of smaller LLMs

We introduce two novel methods, Tree-Search and Self-contextualizing QA,...
