Retrieving Supporting Evidence for Generative Question Answering

09/20/2023
by Siqing Huo, et al.

Current large language models (LLMs) can exhibit near-human levels of performance on many natural language-based tasks, including open-domain question answering. Unfortunately, at this time, they also convincingly hallucinate incorrect answers, so that responses to questions must be verified against external sources before they can be accepted at face value. In this paper, we report two simple experiments to automatically validate generated answers against a corpus. We base our experiments on questions and passages from the MS MARCO (V1) test collection, and a retrieval pipeline consisting of sparse retrieval, dense retrieval, and neural rerankers. In the first experiment, we validate the generated answer in its entirety. After presenting a question to an LLM and receiving a generated answer, we query the corpus with the combination of the question + generated answer. We then present the LLM with the combination of the question + generated answer + retrieved answer, prompting it to indicate whether the generated answer can be supported by the retrieved answer. In the second experiment, we consider the generated answer at a more granular level, prompting the LLM to extract a list of factual statements from the answer and verifying each statement separately. We query the corpus with each factual statement and then present the LLM with the statement and the corresponding retrieved evidence. The LLM is prompted to indicate whether the statement can be supported and to make any necessary edits using the retrieved material. With an accuracy of over 80%, we find that an LLM is capable of verifying its generated answer when a corpus of supporting material is provided. However, manual assessment of a random sample of questions reveals that some incorrect generated answers are missed by this verification process. While this verification process can reduce hallucinations, it cannot entirely eliminate them.
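The two verification procedures described in the abstract amount to a short pipeline. The Python sketch below is a minimal illustration under stated assumptions, not the authors' code: retrieve and llm are hypothetical stand-ins for the paper's sparse/dense/reranker retrieval stack and for the LLM API, and the prompt wording is ours.

```python
from typing import Callable, List

# Hypothetical interfaces (assumptions, not the paper's actual code):
#   retrieve(query) -> best passage from the corpus (sparse + dense + reranking)
#   llm(prompt)     -> the language model's text response
Retrieve = Callable[[str], str]
LLM = Callable[[str], str]

def verify_whole_answer(question: str, retrieve: Retrieve, llm: LLM) -> bool:
    """Experiment 1: validate the generated answer in its entirety."""
    answer = llm(question)
    # Query the corpus with the question + generated answer combined.
    evidence = retrieve(f"{question} {answer}")
    verdict = llm(
        f"Question: {question}\nGenerated answer: {answer}\n"
        f"Retrieved passage: {evidence}\n"
        "Can the generated answer be supported by the retrieved passage? "
        "Answer yes or no."
    )
    return verdict.strip().lower().startswith("yes")

def verify_statements(question: str, retrieve: Retrieve, llm: LLM) -> List[str]:
    """Experiment 2: extract factual statements and verify each separately."""
    answer = llm(question)
    statements = llm(
        f"List the factual statements made in the following answer, "
        f"one per line:\n{answer}"
    ).splitlines()
    revised: List[str] = []
    for stmt in filter(None, (s.strip() for s in statements)):
        evidence = retrieve(stmt)  # query the corpus with the statement itself
        revised.append(llm(
            f"Statement: {stmt}\nRetrieved evidence: {evidence}\n"
            "If the evidence supports the statement, repeat it unchanged; "
            "otherwise, edit the statement using the retrieved material."
        ))
    return revised
```

Statement-level checking trades extra retrieval and LLM calls for finer localization: a single unsupported claim can be edited or flagged without discarding an otherwise correct answer.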


Related research

06/23/2023 · Retrieving Supporting Evidence for LLMs Generated Answers
Current large language models (LLMs) can exhibit near-human levels of pe...

07/12/2017 · Quasar: Datasets for Question Answering by Search and Reading
We present two new large-scale datasets aimed at evaluating systems desi...

05/01/2022 · ELQA: A Corpus of Questions and Answers about the English Language
We introduce a community-sourced dataset for English Language Question A...

06/02/2020 · Open-Domain Question Answering with Pre-Constructed Question Spaces
Open-domain question answering aims at solving the task of locating the ...

03/07/2018 · Translating Questions into Answers using DBPedia n-triples
In this paper we present a question answering system using a neural netw...

10/02/2019 · BookQA: Stories of Challenges and Opportunities
We present a system for answering questions based on the full text of bo...

09/17/2023 · ChatGPT Hallucinates when Attributing Answers
Can ChatGPT provide evidence to support its answers? Does the evidence i...
