Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback

02/24/2023
by Baolin Peng, et al.

Large language models (LLMs), such as ChatGPT, can generate human-like, fluent responses for many downstream tasks, e.g., task-oriented dialog and question answering. However, applying LLMs to real-world, mission-critical applications remains challenging, mainly because of their tendency to hallucinate and their inability to use external knowledge. This paper proposes LLM-Augmenter, a system that augments a black-box LLM with a set of plug-and-play modules. Our system makes the LLM generate responses grounded in consolidated external knowledge, e.g., stored in task-specific databases. It also iteratively revises LLM prompts to improve model responses using feedback generated by utility functions, e.g., the factuality score of an LLM-generated response. The effectiveness of LLM-Augmenter is empirically validated on two types of mission-critical scenarios: task-oriented dialog and open-domain question answering. LLM-Augmenter significantly reduces ChatGPT's hallucinations without sacrificing the fluency and informativeness of its responses. We make the source code and models publicly available.
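The loop the abstract describes (consolidate external evidence, generate a response, score it with a utility function, revise the prompt, and try again) can be sketched as below. This is a hypothetical toy, not the authors' implementation: `call_llm`, `retrieve_evidence`, `factuality_score`, and the substring-matching heuristic are all illustrative assumptions.

```python
def call_llm(prompt):
    # Stand-in for a black-box LLM call (e.g., ChatGPT).
    # Here it simply echoes the prompt so the sketch is runnable.
    return "candidate response to: " + prompt

def retrieve_evidence(query, knowledge_store):
    # Consolidate external knowledge relevant to the query,
    # e.g., from a task-specific database (toy substring match here).
    return [doc for doc in knowledge_store if query.lower() in doc.lower()]

def factuality_score(response, evidence):
    # Toy utility function: fraction of evidence snippets echoed
    # verbatim in the response. A real system would use a trained scorer.
    if not evidence:
        return 0.0
    hits = sum(1 for doc in evidence if doc.lower() in response.lower())
    return hits / len(evidence)

def llm_augmenter(query, knowledge_store, threshold=0.5, max_rounds=3):
    # Ground the initial prompt in retrieved evidence.
    evidence = retrieve_evidence(query, knowledge_store)
    prompt = f"Answer using this evidence: {evidence}\nQuestion: {query}"
    response = call_llm(prompt)
    # Iteratively revise the prompt with utility feedback until the
    # response is deemed factual enough or the round budget is spent.
    for _ in range(max_rounds):
        score = factuality_score(response, evidence)
        if score >= threshold:
            break
        prompt += (f"\nFeedback: factuality score {score:.2f}; "
                   "ground your answer in the evidence.")
        response = call_llm(prompt)
    return response
```

The key design point this illustrates is that the LLM itself stays a black box: grounding and correction happen entirely in the prompt and in the external feedback loop.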

Related research:

- Augmenting Black-box LLMs with Medical Textbooks for Clinical Question Answering (09/05/2023)
- SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models (03/15/2023)
- "Merge Conflicts!" Exploring the Impacts of External Distractors to Parametric Knowledge Graphs (09/15/2023)
- An Ensemble Dialogue System for Facts-Based Sentence Generation (02/05/2019)
- Joint Reasoning on Hybrid-knowledge sources for Task-Oriented Dialog (10/13/2022)
- Knowledge Sanitization of Large Language Models (09/21/2023)
- Achieving Fluency and Coherency in Task-oriented Dialog (04/11/2018)
