Pruning the Index Contents for Memory Efficient Open-Domain QA

02/21/2021
by Martin Fajcik et al.

This work presents a novel pipeline that demonstrates what is achievable with a combined effort of state-of-the-art approaches, surpassing 50% exact match on the NaturalQuestions and EfficientQA datasets. Specifically, it proposes the novel R2-D2 (Rank twice, reaD twice) pipeline composed of a retriever, a reranker, an extractive reader, a generative reader, and a simple way to combine them. Furthermore, previous work often comes with a massive index of external documents that scales to tens of GiB. This work presents a simple approach for pruning the contents of such an index so that the open-domain QA system, together with the index, OS, and library components, fits into a 6 GiB Docker image while retaining only 8% of the original index contents and losing only 3% exact match accuracy.

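The abstract only names the pipeline stages, so below is a minimal, hypothetical Python sketch of how a retrieve-rerank-read pipeline of this kind can be wired together. The class and method names (retrieve, rerank, extract, generate) and the score-combination rule are illustrative assumptions for this sketch, not the paper's released API or its exact aggregation scheme.

    # Illustrative sketch of a four-stage retrieve/rerank/read pipeline in the
    # spirit of R2-D2. All names are hypothetical placeholders; the combination
    # step below is one simple fusion rule, not necessarily the paper's.

    from dataclasses import dataclass


    @dataclass
    class ScoredAnswer:
        text: str
        score: float


    def answer_question(question, retriever, reranker, extractive_reader,
                        generative_reader, top_k=100, rerank_k=20):
        # 1) Retrieve candidate passages from the (possibly pruned) index.
        passages = retriever.retrieve(question, k=top_k)

        # 2) Rerank the retrieved passages and keep the most relevant ones.
        passages = reranker.rerank(question, passages)[:rerank_k]

        # 3) Read twice: extract span candidates and generate an answer.
        extractive = [ScoredAnswer(a.text, a.score)
                      for a in extractive_reader.extract(question, passages)]
        generated = generative_reader.generate(question, passages)

        # 4) Combine the readers: boost extractive candidates that agree with
        #    the generative answer, else fall back to the generated string.
        best = max(
            extractive,
            key=lambda a: a.score + (1.0 if a.text.lower() == generated.lower() else 0.0),
            default=ScoredAnswer(generated, 0.0),
        )
        return best.text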

Related research

09/08/2021 · R2-D2: A Modular Baseline for Open-Domain Question Answering
This work presents a novel four-stage open-domain QA pipeline R2-D2 (Ran...

01/06/2021 · SF-QA: Simple and Fair Evaluation Library for Open-domain Question Answering
Although open-domain question answering (QA) draws great attention in re...

10/16/2021 · Open Domain Question Answering over Virtual Documents: A Unified Approach for Data and Text
Due to its potential for a universal interface over both data and text, ...

09/21/2022 · Generate rather than Retrieve: Large Language Models are Strong Context Generators
Knowledge-intensive tasks, such as open-domain question answering (QA), ...

11/23/2022 · Can Open-Domain QA Reader Utilize External Knowledge Efficiently like Humans?
Recent state-of-the-art open-domain QA models are typically based on a t...

06/07/2023 · CFDP: Common Frequency Domain Pruning
As the saying goes, sometimes less is more – and when it comes to neural...
