
A Token-level Reference-free Hallucination Detection Benchmark for Free-form Text Generation

by Tianyu Liu, et al.

Large pretrained generative models like GPT-3 often hallucinate non-existent or incorrect content, which undermines their value in real applications. Existing work usually attempts to detect these hallucinations against a corresponding oracle reference at the sentence or document level. However, ground-truth references may not be readily available for many free-form text generation applications, and sentence- or document-level detection may fail to provide the fine-grained signals needed to prevent fallacious content in real time. As a first step toward addressing these issues, we propose a novel token-level, reference-free hallucination detection task and an associated annotated dataset named HaDes (HAllucination DEtection dataSet). To create this dataset, we first perturb a large number of text segments extracted from English-language Wikipedia, and then verify these with crowd-sourced annotations. To mitigate label imbalance during annotation, we adopt an iterative model-in-the-loop strategy. We conduct comprehensive data analyses and create multiple baseline models.
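The dataset construction described above pairs perturbed text with token-level labels. A minimal sketch of that data format follows; the `perturb` function and its random word-swap are illustrative stand-ins for the paper's actual (contextual, model-based) perturbation procedure, not its implementation:

```python
import random

def perturb(tokens, vocab, rng):
    """Toy stand-in for the paper's perturbation step: swap one token
    for a random out-of-context word, yielding a text segment plus
    token-level labels (1 = hallucinated, 0 = faithful)."""
    idx = rng.randrange(len(tokens))
    perturbed = tokens.copy()
    perturbed[idx] = rng.choice([w for w in vocab if w != tokens[idx]])
    labels = [0] * len(tokens)
    labels[idx] = 1  # fine-grained signal: which token is hallucinated
    return perturbed, labels

rng = random.Random(0)
tokens = ["Paris", "is", "the", "capital", "of", "France"]
vocab = ["Berlin", "river", "Spain", "city"]
perturbed, labels = perturb(tokens, vocab, rng)
print(perturbed)
print(labels)
```

A token-level detector for this task would then be a per-token binary classifier over such (tokens, labels) pairs, rather than a single sentence- or document-level judgment.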
