Zero-Resource Hallucination Prevention for Large Language Models

09/06/2023
by Junyu Luo, et al.

The prevalent use of large language models (LLMs) in various domains has drawn attention to the issue of "hallucination," which refers to instances where LLMs generate factually inaccurate or ungrounded information. Existing techniques for hallucination detection in language assistants rely on intricate, fuzzy, free-language-based chain-of-thought (CoT) prompting or on parameter-based methods that suffer from interpretability issues. Moreover, methods that identify hallucinations after generation cannot prevent their occurrence and show inconsistent performance because they are sensitive to instruction format and model style. In this paper, we introduce a novel pre-detection self-evaluation technique, referred to as SELF-FAMILIARITY, which evaluates the model's familiarity with the concepts present in the input instruction and withholds response generation when unfamiliar concepts are detected. This approach emulates the human ability to refrain from responding to unfamiliar topics, thus reducing hallucinations. We validate SELF-FAMILIARITY across four different large language models, demonstrating consistently superior performance compared to existing techniques. Our findings point to a significant shift towards preemptive strategies for hallucination mitigation in LLM assistants, promising improvements in reliability, applicability, and interpretability.
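A rough way to picture the pre-detection idea: extract the concepts mentioned in the instruction, ask the model itself how well it knows each one, and refuse to answer when any concept looks unfamiliar. The sketch below follows only that outline; the generic llm callable, the prompts, the familiarity_score self-rating, and the 0.5 threshold are illustrative assumptions, not the paper's actual concept-extraction or scoring procedure.

    # Minimal sketch of a pre-detection "self-familiarity" check, assuming a
    # generic llm(prompt) -> str callable. All names and prompts here are
    # illustrative, not the paper's implementation.
    from typing import Callable, List

    THRESHOLD = 0.5  # illustrative cut-off for "unfamiliar"

    def extract_concepts(llm: Callable[[str], str], instruction: str) -> List[str]:
        """Ask the model to list the key concepts/entities in the instruction."""
        reply = llm(
            "List the key concepts or named entities in the following instruction, "
            f"one per line:\n{instruction}"
        )
        return [line.strip() for line in reply.splitlines() if line.strip()]

    def familiarity_score(llm: Callable[[str], str], concept: str) -> float:
        """Crude proxy: ask the model to self-rate its knowledge of the concept.

        The paper scores familiarity more carefully; this stand-in simply parses
        a self-reported number between 0 and 1.
        """
        reply = llm(
            f"On a scale from 0 to 1, how confident are you that you can accurately "
            f"explain '{concept}' without guessing? Answer with a single number."
        )
        try:
            return max(0.0, min(1.0, float(reply.strip().split()[0])))
        except (ValueError, IndexError):
            return 0.0  # unparseable self-rating -> treat as unfamiliar

    def guarded_generate(llm: Callable[[str], str], instruction: str) -> str:
        """Withhold generation when any extracted concept looks unfamiliar."""
        concepts = extract_concepts(llm, instruction)
        unfamiliar = [c for c in concepts if familiarity_score(llm, c) < THRESHOLD]
        if unfamiliar:
            return ("I'm not confident I know enough about "
                    + ", ".join(unfamiliar)
                    + " to answer reliably.")
        return llm(instruction)

The key design point is that the check happens before any answer is produced, so unreliable responses are prevented rather than detected and filtered after the fact.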

