Distilling Knowledge from Reader to Retriever for Question Answering

12/08/2020
by Gautier Izacard, et al.

Information retrieval is an important component of many natural language processing systems, such as open-domain question answering. While traditional methods were based on hand-crafted features, continuous representations based on neural networks have recently obtained competitive results. A challenge with such methods is obtaining supervised data to train the retriever model, in the form of pairs of queries and support documents. In this paper, we propose a technique, inspired by knowledge distillation, to learn retriever models for downstream tasks without annotated pairs of queries and documents. Our approach leverages the attention scores of a reader model, which solves the task from the retrieved documents, to obtain synthetic labels for the retriever. We evaluate our method on question answering, obtaining state-of-the-art results.
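The core idea above, using the reader's aggregated cross-attention over each retrieved document as a synthetic relevance target for the retriever, can be sketched as a simple distillation loss. The sketch below is illustrative, not the paper's exact implementation: the function names and the toy scores are assumptions, and it minimizes the KL divergence between the retriever's document distribution and the attention-derived target distribution for one query.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def kl_distillation_loss(retriever_scores, reader_attention):
    """KL(target || predicted), where the target distribution is built
    from the reader's aggregated attention scores (synthetic labels)."""
    target = softmax(reader_attention)   # synthetic relevance labels from the reader
    pred = softmax(retriever_scores)     # retriever's distribution over documents
    return float(np.sum(target * (np.log(target) - np.log(pred))))

# Toy example: 4 documents retrieved for one query (scores are made up).
retriever_scores = np.array([0.2, 1.5, 0.3, 0.1])   # e.g. query-document dot products
reader_attention = np.array([0.1, 3.0, 0.2, 0.05])  # e.g. aggregated cross-attention
loss = kl_distillation_loss(retriever_scores, reader_attention)
```

In training, the gradient of this loss would flow only into the retriever's scores, pulling its document distribution toward the reader's attention pattern without any human-annotated query-document pairs.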

Related research

12/16/2021  Towards Unsupervised Dense Information Retrieval with Contrastive Learning
Information retrieval is an important component in natural language proc...

04/15/2022  Improving Passage Retrieval with Zero-Shot Question Generation
We propose a simple and effective re-ranking method for improving passag...

10/22/2018  A Fully Attention-Based Information Retriever
Recurrent neural networks are now the state-of-the-art in natural langua...

06/14/2019  Microsoft AI Challenge India 2018: Learning to Rank Passages for Web Question Answering with Deep Attention Networks
This paper describes our system for The Microsoft AI Challenge India 201...

12/06/2014  Practice in Synonym Extraction at Large Scale
Synonym extraction is an important task in natural language processing a...

03/06/2019  Bidirectional Attentive Memory Networks for Question Answering over Knowledge Bases
When answering natural language questions over knowledge bases (KB), dif...

09/02/2019  Answering questions by learning to rank -- Learning to rank by answering questions
Answering multiple-choice questions in a setting in which no supporting ...
