Weakly Supervised Pre-Training for Multi-Hop Retriever

by Yeon Seonwoo, et al.

In multi-hop QA, answering complex questions entails iterative document retrieval to find the entity missing from the question. The main steps of this process are sub-question detection, document retrieval for the sub-question, and generation of a new query for the final document retrieval. However, building a dataset that contains complex questions with sub-questions and their corresponding documents requires costly human annotation. To address this issue, we propose a new method for weakly supervised multi-hop retriever pre-training that requires no human effort. Our method includes 1) a pre-training task for generating vector representations of complex questions, 2) a scalable data generation method that produces the nested structure of question and sub-question as weak supervision for pre-training, and 3) a pre-training model structure based on dense encoders. We conduct experiments comparing the performance of our pre-trained retriever with several state-of-the-art models on end-to-end multi-hop QA as well as document retrieval. The experimental results show that our pre-trained retriever is effective and also robust under limited data and computational resources.
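The iterative retrieve-and-reformulate loop described in the abstract can be sketched with a toy dense retriever. Everything below is a stand-in rather than the paper's method: `encode` is a deterministic hash-based embedding (not the pre-trained encoder), and the concatenation-based query reformulation in `multi_hop_retrieve` is only one simple instantiation of the sub-question → retrieval → new-query cycle.

```python
import zlib
import numpy as np

DIM = 256  # embedding dimension for the toy encoder

def encode(text: str) -> np.ndarray:
    """Toy stand-in for a dense encoder: a deterministic random vector per
    token (seeded by CRC32 of the token), summed and L2-normalized."""
    vec = np.zeros(DIM)
    for tok in text.lower().split():
        rng = np.random.default_rng(zlib.crc32(tok.encode()))
        vec += rng.standard_normal(DIM)
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def retrieve(query_vec: np.ndarray, doc_vecs: np.ndarray, exclude=()) -> int:
    """Return the index of the highest-scoring document by inner product,
    skipping documents already retrieved on earlier hops."""
    scores = doc_vecs @ query_vec
    for i in exclude:
        scores[i] = -np.inf
    return int(np.argmax(scores))

def multi_hop_retrieve(question: str, docs: list[str], hops: int = 2) -> list[int]:
    """Iterative retrieval: each hop retrieves one document, then the new
    query is the question concatenated with the retrieved evidence."""
    doc_vecs = np.stack([encode(d) for d in docs])
    query, trace = question, []
    for _ in range(hops):
        idx = retrieve(encode(query), doc_vecs, exclude=trace)
        trace.append(idx)
        query = question + " " + docs[idx]  # naive query reformulation
    return trace
```

For a two-hop question such as "In which state was Barack Obama born?", the first hop would ideally fetch a document stating he was born in Honolulu, and the reformulated query then steers the second hop toward a document placing Honolulu in Hawaii.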



Related papers:

Do Multi-Hop Question Answering Systems Know How to Answer the Single-Hop Sub-Questions?

Analysing Dense Passage Retrieval for Multi-hop Question Answering

Questions Are All You Need to Train a Dense Passage Retriever

From Easy to Hard: Two-stage Selector and Reader for Multi-hop Question Answering

LEPUS: Prompt-based Unsupervised Multi-hop Reranking for Open-domain QA

End-to-End Training of Neural Retrievers for Open-Domain Question Answering

ReasonBERT: Pre-trained to Reason with Distant Supervision
