Self-Prompting Large Language Models for Open-Domain QA

12/16/2022
by Junlong Li, et al.

Open-Domain Question Answering (ODQA) requires models to answer factoid questions without any given context. The common approach is to train models on a large-scale annotated dataset to retrieve related documents and then generate answers from those documents. In this paper, we show that the ODQA architecture can be dramatically simplified by treating Large Language Models (LLMs) as a knowledge corpus, and we propose a Self-Prompting framework that lets LLMs perform ODQA without training data or an external knowledge corpus. Concretely, we first prompt the LLM step by step to generate multiple pseudo QA pairs, each with a background passage and a one-sentence explanation, and then leverage the generated QA pairs for in-context learning. Experimental results show our method surpasses previous state-of-the-art methods by an average of +8.8 EM on three widely used ODQA datasets, and even achieves performance comparable to several retrieval-augmented fine-tuned models.
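
The two-stage pipeline described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the `llm` completion callable, the prompt wording, and the naive demonstration selection are all assumptions made for the sake of the sketch.

```python
# Sketch of the Self-Prompting pipeline: (1) have the LLM generate pseudo QA
# pairs with background passages and one-sentence explanations, (2) reuse those
# pairs as in-context demonstrations when answering a new question.
# NOTE: `llm` is a user-supplied completion function (prompt -> text); the
# prompt templates and the "take the first k demos" selection are illustrative
# assumptions, not the paper's exact method.

from typing import Callable, List, Dict


def self_prompt_qa_pool(llm: Callable[[str], str],
                        topics: List[str],
                        pairs_per_topic: int = 2) -> List[Dict[str, str]]:
    """Step 1: prompt the LLM step by step to build a pool of pseudo QA pairs."""
    pool = []
    for topic in topics:
        passage = llm(f"Write a short factual passage about {topic}.")
        for _ in range(pairs_per_topic):
            question = llm(
                f"Passage: {passage}\n"
                "Write a factoid question that can be answered from the passage."
            )
            answer = llm(
                f"Passage: {passage}\nQuestion: {question}\n"
                "Answer the question in a few words."
            )
            explanation = llm(
                f"Passage: {passage}\nQuestion: {question}\nAnswer: {answer}\n"
                "Explain the answer in one sentence."
            )
            pool.append({"passage": passage, "question": question,
                         "answer": answer, "explanation": explanation})
    return pool


def answer_with_in_context_learning(llm: Callable[[str], str],
                                    pool: List[Dict[str, str]],
                                    question: str,
                                    k: int = 4) -> str:
    """Step 2: build a few-shot prompt from k pseudo QA pairs and answer."""
    demos = pool[:k]  # naive selection; the paper uses a selection strategy
    prompt = ""
    for d in demos:
        prompt += (f"Question: {d['question']}\n"
                   f"Answer: {d['answer']} ({d['explanation']})\n\n")
    prompt += f"Question: {question}\nAnswer:"
    return llm(prompt)
```

In this sketch no external retriever or annotated data is used: the demonstration pool comes entirely from the LLM itself, and the final answer is produced by ordinary few-shot prompting over that pool.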

Related research

09/21/2022  Generate rather than Retrieve: Large Language Models are Strong Context Generators
Knowledge-intensive tasks, such as open-domain question answering (QA), ...

07/21/2023  Generator-Retriever-Generator: A Novel Approach to Open-domain Question Answering
Open-domain question answering (QA) tasks usually require the retrieval ...

05/25/2023  Self-contradictory Hallucinations of Large Language Models: Evaluation, Detection and Mitigation
Large language models (large LMs) are susceptible to producing text with...

05/23/2023  Query Rewriting for Retrieval-Augmented Large Language Models
Large Language Models (LLMs) play a powerful Reader of the Retrieve-then...

06/21/2022  Questions Are All You Need to Train a Dense Passage Retriever
We introduce ART, a new corpus-level autoencoding approach for training ...

01/01/2021  UnitedQA: A Hybrid Approach for Open Domain Question Answering
To date, most of recent work under the retrieval-reader framework for op...

04/27/2022  Plug-and-Play Adaptation for Continuously-updated QA
Language models (LMs) have shown great potential as implicit knowledge b...
