Smaller Language Models are Better Black-box Machine-Generated Text Detectors

With the advent of fluent generative language models that can produce convincing utterances very similar to those written by humans, distinguishing whether a piece of text is machine-generated or human-written has become both more challenging and more important, as such models can be used to spread misinformation, fake news, and fake reviews, and to mimic particular authors and public figures. To this end, a slew of methods have been proposed to detect machine-generated text. Most of these methods need access to the logits of the target model or the ability to sample from it. One detection method that can operate in the black-box setting relies on the observation that generated text is locally optimal under the likelihood function of its generator, while human-written text is not; when the generator itself is unavailable, a separate proxy model's likelihood is used in its place. We find that, overall, smaller and partially-trained models make better universal text detectors: they can more precisely detect text generated by both small and larger models. Interestingly, we find that whether the detector and generator were trained on the same data is not critically important to detection success. For instance, the OPT-125M model achieves an AUC of 0.81 in detecting ChatGPT generations, whereas GPT-J-6B, a larger model from the GPT family, achieves an AUC of only 0.45.
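To make the local-optimality criterion concrete, the sketch below scores a passage with a small surrogate model and compares its likelihood to that of perturbed rewrites, in the spirit of DetectGPT-style curvature tests. This is a minimal illustration under stated assumptions, not the paper's implementation: the choice of GPT-2 as the scorer, the hand-written perturbations, and the use of the Hugging Face transformers API are all assumptions made here; in practice, perturbations are typically produced by a mask-and-fill model such as T5.

```python
# Minimal sketch of curvature-based (DetectGPT-style) detection with a small
# surrogate scorer. Model names and example strings are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def avg_log_likelihood(text, model, tokenizer):
    """Mean token log-likelihood of `text` under the surrogate model."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    # `out.loss` is the mean negative log-likelihood per token.
    return -out.loss.item()

def curvature_score(text, perturbations, model, tokenizer):
    """Likelihood of the original text minus the average likelihood of its
    perturbed variants. Machine-generated text tends to sit near a local
    optimum of the likelihood, so this gap tends to be larger for it."""
    orig = avg_log_likelihood(text, model, tokenizer)
    perturbed = [avg_log_likelihood(p, model, tokenizer) for p in perturbations]
    return orig - sum(perturbed) / len(perturbed)

if __name__ == "__main__":
    # Small surrogate detector; the generator that produced the text is
    # never queried, which is what makes the setting black-box.
    tok = AutoTokenizer.from_pretrained("gpt2")
    lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    candidate = "The committee announced its decision after a lengthy review."
    # Hand-written paraphrases stand in for mask-and-fill perturbations here.
    perturbed = [
        "After a lengthy review, the committee announced its decision.",
        "The committee revealed its decision following a long review.",
    ]
    print(curvature_score(candidate, perturbed, lm, tok))
```

A passage whose score sits well above the scores observed on known human-written text would be flagged as machine-generated; the paper's finding is that smaller or partially-trained scorers give better separation than larger ones.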


