PAQ: 65 Million Probably-Asked Questions and What You Can Do With Them

by Patrick Lewis, et al.

Open-domain Question Answering models which directly leverage question-answer (QA) pairs, such as closed-book QA (CBQA) models and QA-pair retrievers, show promise in terms of speed and memory compared to conventional models which retrieve and read from text corpora. QA-pair retrievers also offer interpretable answers, a high degree of control, and are trivial to update at test time with new knowledge. However, these models lack the accuracy of retrieve-and-read systems, as substantially less knowledge is covered by the available QA-pairs relative to text corpora like Wikipedia. To facilitate improved QA-pair models, we introduce Probably Asked Questions (PAQ), a very large resource of 65M automatically-generated QA-pairs. We introduce a new QA-pair retriever, RePAQ, to complement PAQ. We find that PAQ preempts and caches test questions, enabling RePAQ to match the accuracy of recent retrieve-and-read models, whilst being significantly faster. Using PAQ, we train CBQA models which outperform comparable baselines by 5%, but trail RePAQ by over 15%, perhaps due to the limited capacity of CBQA models. RePAQ can be configured for size (under 500MB) or speed (over 1K questions per second) whilst retaining high accuracy. Lastly, we demonstrate RePAQ's strength at selective QA, abstaining from answering when it is likely to be incorrect. This enables RePAQ to "back-off" to a more expensive state-of-the-art model, leading to a combined system which is both more accurate and 2x faster than the state-of-the-art model alone.
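The mechanism the abstract describes, answering a test question by looking up the nearest cached QA-pair and backing off to a more expensive model when confidence is low, can be sketched in a few lines. This is a minimal illustration only: it substitutes a toy bag-of-words cosine similarity for RePAQ's learned dense retriever, and the class name, threshold, and example QA-pairs are invented for the sketch.

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words "embedding"; RePAQ uses trained dense question encoders.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class QAPairRetriever:
    """Caches QA-pairs and answers a test question by nearest-neighbor lookup.

    Adding new knowledge at test time is just appending more QA-pairs,
    which is why QA-pair retrievers are trivial to update.
    """

    def __init__(self, qa_pairs, threshold=0.5):
        self.pairs = [(embed(q), a) for q, a in qa_pairs]
        self.threshold = threshold

    def answer(self, question, fallback=None):
        qv = embed(question)
        # Find the cached question most similar to the test question.
        score, best_answer = max((cosine(qv, e), a) for e, a in self.pairs)
        if score >= self.threshold:
            return best_answer            # confident: serve the cached answer
        if fallback is not None:
            return fallback(question)     # selective QA: back off to a bigger model
        return None                       # otherwise abstain
```

A small usage example: with pairs like `("capital of france", "Paris")` cached, an exact or near-duplicate question is answered from the cache, while an unrelated question either returns `None` (abstention) or is routed to whatever `fallback` callable is supplied, mirroring the combined RePAQ-plus-expensive-model system.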




Designing a Minimal Retrieve-and-Read System for Open-Domain Question Answering

In open-domain question answering (QA), retrieve-and-read mechanism has ...

Selective Question Answering under Domain Shift

To avoid giving wrong answers, question answering (QA) models need to kn...

Leveraging Term Banks for Answering Complex Questions: A Case for Sparse Vectors

While open-domain question answering (QA) systems have proven effective ...

Generate rather than Retrieve: Large Language Models are Strong Context Generators

Knowledge-intensive tasks, such as open-domain question answering (QA), ...

What Does My QA Model Know? Devising Controlled Probes using Expert Knowledge

Open-domain question answering (QA) is known to involve several underlyi...

ReadTwice: Reading Very Large Documents with Memories

Knowledge-intensive tasks such as question answering often require assim...

Key-Value Memory Networks for Directly Reading Documents

Directly reading documents and being able to answer questions from them ...

Code Repositories


Code and data to support the paper "PAQ: 65 Million Probably-Asked Questions and What You Can Do With Them"
