Efficient Retrieval Optimized Multi-task Learning

04/20/2021
by Hengxin Fun, et al.

Recently, there have been significant advances in neural methods for tackling knowledge-intensive tasks such as open-domain question answering (QA). These advances are fueled by combining large pre-trained language models with learnable retrieval of documents. The majority of these models use separate encoders to learn the query representation and the passage representation for the retriever, plus an additional encoder for the downstream task. Using separate encoders for each stage/task consumes a large amount of memory and makes it difficult to scale to a large number of tasks. In this paper, we propose a novel Retrieval Optimized Multi-task (ROM) framework for jointly training self-supervised tasks, knowledge retrieval, and extractive question answering. Our ROM approach presents a unified and generalizable framework that scales efficiently to multiple tasks, varying levels of supervision, and optimization choices such as different learning schedules, all without changing the model architecture. It also provides the flexibility to change the encoders without changing the architecture of the system. Using our framework, we achieve comparable or better performance than recent methods on QA while drastically reducing the number of parameters.
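
To make the shared-encoder idea concrete, here is a minimal PyTorch-style sketch (an illustration under assumptions, not the authors' implementation): a single encoder backs both the retriever's query/passage vectors and the extractive-QA reader, with only small task-specific heads on top. All class names, dimensions, and pooling choices below are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions, not the authors' code):
# one shared encoder produces (a) dense query/passage vectors for
# retrieval and (b) per-token start/end logits for extractive QA.
import torch
import torch.nn as nn

class SharedEncoderROM(nn.Module):
    def __init__(self, vocab_size=30522, d_model=256, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        # One encoder shared across retrieval and QA, so its parameters
        # are stored once rather than once per stage/task.
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.qa_head = nn.Linear(d_model, 2)  # start/end logits per token

    def encode(self, token_ids):
        return self.encoder(self.embed(token_ids))

    def retrieval_vector(self, token_ids):
        # The same weights embed both queries and passages;
        # pool the first token as the dense representation.
        return self.encode(token_ids)[:, 0]

    def qa_logits(self, token_ids):
        # Extractive QA: score every token as a span start or span end.
        start, end = self.qa_head(self.encode(token_ids)).unbind(dim=-1)
        return start, end

model = SharedEncoderROM()
query = torch.randint(0, 30522, (1, 16))    # toy token ids
passage = torch.randint(0, 30522, (1, 64))
score = model.retrieval_vector(query) @ model.retrieval_vector(passage).T
start_logits, end_logits = model.qa_logits(passage)
```

Because the encoder weights are stored once rather than once per stage, adding another task in this sketch costs only another lightweight head, which is the scaling property the abstract describes.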

Related research

04/14/2022
Exploring Dual Encoder Architectures for Question Answering
Dual encoders have been used for question-answering (QA) and information...

10/11/2022
Task-Aware Specialization for Efficient and Robust Dense Retrieval for Open-Domain Question Answering
Given its effectiveness on knowledge-intensive natural language processi...

05/04/2023
Chain-of-Skills: A Configurable Model for Open-domain Question Answering
The retrieval model is an indispensable component for real-world knowled...

01/01/2021
Multi-task Retrieval for Knowledge-Intensive Tasks
Retrieving relevant contexts from a large corpus is a crucial step for t...

06/17/2023
GLIMMER: generalized late-interaction memory reranker
Memory-augmentation is a powerful approach for efficiently incorporating...

07/03/2020
Eliminating Catastrophic Interference with Biased Competition
We present here a model to take advantage of the multi-task nature of co...

04/10/2022
Augmenting Pre-trained Language Models with QA-Memory for Open-Domain Question Answering
Retrieval augmented language models have recently become the standard fo...
