Augmenting Pre-trained Language Models with QA-Memory for Open-Domain Question Answering

04/10/2022
by   Wenhu Chen, et al.

Retrieval-augmented language models have recently become the standard for knowledge-intensive tasks. Rather than relying purely on latent semantics within the parameters of large neural models, these methods enlist a semi-parametric memory to encode an index of knowledge for the model to retrieve over. Most prior work has employed text passages as the unit of knowledge, which offers high coverage at the cost of interpretability, controllability, and efficiency. The opposite properties arise in other methods that instead rely on knowledge base (KB) facts. At the same time, more recent work has demonstrated the effectiveness of storing and retrieving from an index of Q-A pairs derived from text <cit.>. This approach yields a high-coverage knowledge representation that maintains KB-like properties, because its entries are more atomic units of information. In this work we push this line of research further by proposing a question-answer augmented encoder-decoder model and an accompanying pretraining strategy. This yields an end-to-end system that not only outperforms prior QA-retrieval methods on single-hop QA tasks but also enables compositional reasoning, as demonstrated by strong performance on two multi-hop QA datasets. Together, these methods improve the ability to interpret and control the model while narrowing the performance gap with passage-retrieval systems.


