Can Open-Domain QA Reader Utilize External Knowledge Efficiently like Humans?

11/23/2022
by Neeraj Varshney, et al.

Recent state-of-the-art open-domain QA models are typically based on a two-stage retriever-reader approach: the retriever first finds the relevant knowledge/passages, and the reader then leverages them to predict the answer. Prior work has shown that reader performance usually improves as the number of passages increases, so state-of-the-art models use a large number of passages (e.g., 100) at inference. While the reader in this approach achieves high prediction performance, its inference is computationally very expensive. We humans, on the other hand, use a more efficient strategy when answering: if we can confidently answer a question using our already acquired knowledge, we do not consult external knowledge at all; and when we do require external knowledge, we do not read all of it at once but only as much as is sufficient to find the answer. Motivated by this procedure, we ask a research question: "Can the open-domain QA reader utilize external knowledge efficiently like humans without sacrificing prediction performance?" Driven by this question, we explore an approach that utilizes both 'closed-book' inference (leveraging knowledge already present in the model parameters) and 'open-book' inference (leveraging external knowledge). Furthermore, instead of using a large fixed number of passages for open-book inference, we dynamically read the external knowledge in multiple 'knowledge iterations'. Through comprehensive experiments on the NQ and TriviaQA datasets, we demonstrate that this dynamic reading approach improves both the 'inference efficiency' and the 'prediction accuracy' of the reader. Compared with the FiD reader, this approach matches its accuracy while utilizing just 18.32% of its inference cost and also outperforms it by achieving up to 55.10% accuracy.
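The abstract describes the dynamic reading procedure only in prose. Below is a minimal Python sketch of that control flow, assuming a hypothetical `reader` with a calibrated `predict` method and a hypothetical `retriever` with a paginated `retrieve` method; the names, threshold, and batch sizes are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch (all interfaces hypothetical): try closed-book inference
# first, then grow the evidence set in small batches ("knowledge
# iterations") until the answer confidence clears a threshold.

from typing import List

CONF_THRESHOLD = 0.8    # assumed calibration threshold
PASSAGES_PER_ITER = 10  # passages added per knowledge iteration
MAX_PASSAGES = 100      # fixed budget used by readers like FiD


def answer_question(question: str, reader, retriever) -> str:
    """Answer a question, paying retrieval/reading cost only when needed."""
    # Step 1: closed-book attempt using only parametric knowledge.
    answer, confidence = reader.predict(question, passages=[])
    if confidence >= CONF_THRESHOLD:
        return answer

    # Step 2: open-book inference, reading the external knowledge
    # iteratively instead of all 100 passages at once.
    passages: List[str] = []
    while len(passages) < MAX_PASSAGES:
        batch = retriever.retrieve(
            question, k=PASSAGES_PER_ITER, offset=len(passages)
        )
        if not batch:
            break  # retriever exhausted; return best answer so far
        passages += batch
        answer, confidence = reader.predict(question, passages=passages)
        if confidence >= CONF_THRESHOLD:
            break  # sufficient knowledge read; stop early
    return answer
```

The point of this structure is that cost is paid incrementally: easy questions exit at the closed-book step with zero retrieval, while harder ones stop reading as soon as the confidence threshold is cleared rather than always consuming the full fixed passage budget.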


Related research:

10/12/2022 · Context Generation Improves Open Domain Question Answering
Closed-book question answering (QA) requires a model to directly answer ...

09/06/2019 · Incorporating External Knowledge into Machine Reading for Generative Question Answering
Commonsense and background knowledge is required for a QA model to answe...

02/13/2021 · PAQ: 65 Million Probably-Asked Questions and What You Can Do With Them
Open-domain Question Answering models which directly leverage question-a...

11/10/2020 · Don't Read Too Much into It: Adaptive Computation for Open-Domain Question Answering
Most approaches to Open-Domain Question Answering consist of a light-wei...

04/15/2021 · Designing a Minimal Retrieve-and-Read System for Open-Domain Question Answering
In open-domain question answering (QA), retrieve-and-read mechanism has ...

02/21/2021 · Pruning the Index Contents for Memory Efficient Open-Domain QA
This work presents a novel pipeline that demonstrates what is achievable...

10/04/2021 · Perhaps PTLMs Should Go to School – A Task to Assess Open Book and Closed Book QA
Our goal is to deliver a new task and leaderboard to stimulate research ...
