Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models

09/03/2023
by Yue Zhang, et al.

While large language models (LLMs) have demonstrated remarkable capabilities across a range of downstream tasks, a significant concern revolves around their propensity to exhibit hallucinations: LLMs occasionally generate content that diverges from the user input, contradicts previously generated context, or misaligns with established world knowledge. This phenomenon poses a substantial challenge to the reliability of LLMs in real-world scenarios. In this paper, we survey recent efforts on the detection, explanation, and mitigation of hallucination, with an emphasis on the unique challenges posed by LLMs. We present taxonomies of the LLM hallucination phenomena and evaluation benchmarks, analyze existing approaches aimed at mitigating LLM hallucination, and discuss potential directions for future research.
