LLMDet: A Large Language Models Detection Tool

05/24/2023
by Kangxi Wu et al.

With the advancement of generative language models, the generated text has come remarkably close to high-quality human-authored text in terms of fluency and diversity. This calls for a highly practical detection tool that can identify the source of text, preferably pinpointing the language model it originates from. However, existing detection tools typically require access to language models and can only differentiate between machine-generated and human-authored text, failing to meet the requirements of rapid detection and text tracing. Therefore, in this paper, we propose an efficient, secure, and scalable detection tool called LLMDet, which calculates the proxy perplexity of text by utilizing the prior information of the model's next-token probabilities, obtained through pre-training. Subsequently, we use the self-watermarking information of the model, as measured by proxy perplexity, to detect the source of the text. We found that our method demonstrates impressive detection performance while ensuring speed and security, particularly achieving a recognition accuracy of 97.97% for human-authored text. Furthermore, our detection tool also shows promising results in identifying the large language model (e.g., GPT-2, OPT, LLaMA, Vicuna...) responsible for the text. We release the code and processed data at <https://github.com/TrustedLLM/LLMDet>.
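To make the core idea concrete, the following is a minimal sketch of a proxy-perplexity computation: a pre-sampled table mapping each (n-1)-gram context to next-token probabilities stands in for the live model, and a text is scored against one table per candidate model. The function names (proxy_perplexity, score_text), the nested-dict table layout, and the fallback_prob smoothing constant are illustrative assumptions, not the interface of the released LLMDet code.

```python
import math

def proxy_perplexity(tokens, ngram_probs, n=2, fallback_prob=1e-6):
    """Approximate perplexity of a token sequence using a pre-sampled
    n-gram -> next-token probability table instead of querying the model.

    ngram_probs: dict mapping an (n-1)-token context tuple to a dict of
    next-token probabilities (assumed layout, gathered offline per model).
    """
    log_prob_sum, count = 0.0, 0
    for i in range(n - 1, len(tokens)):
        context = tuple(tokens[i - n + 1:i])            # preceding (n-1)-gram
        next_token = tokens[i]
        prob = ngram_probs.get(context, {}).get(next_token, fallback_prob)
        log_prob_sum += math.log(prob)
        count += 1
    # Perplexity = exp(-average token log-likelihood)
    return math.exp(-log_prob_sum / max(count, 1))

def score_text(tokens, tables):
    """Score a tokenized text against each candidate model's table.

    The resulting per-model proxy perplexities can then be used as
    features for a classifier that attributes the text to its source.
    """
    return {name: proxy_perplexity(tokens, table) for name, table in tables.items()}
```

In this sketch, the model whose table yields the lowest proxy perplexity (or the classifier trained on the full feature vector) is taken as the likely source; human-authored text tends to score poorly against every table.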


Related research

Limits of Detecting Text Generated by Large-Scale Language Models (02/09/2020)
Some consider large-scale language models that can generate long and coh...

ChatGPT as a Text Simplification Tool to Remove Bias (05/09/2023)
The presence of specific linguistic signals particular to a certain sub-...

G3Detector: General GPT-Generated Text Detector (05/22/2023)
The burgeoning progress in the field of Large Language Models (LLMs) her...

Provable Robust Watermarking for AI-Generated Text (06/30/2023)
As AI-generated text increasingly resembles human-written content, the a...

Advancing Beyond Identification: Multi-bit Watermark for Language Models (08/01/2023)
This study aims to proactively tackle misuse of large language models be...

Ghostbuster: Detecting Text Ghostwritten by Large Language Models (05/24/2023)
We introduce Ghostbuster, a state-of-the-art system for detecting AI-gen...

Towards Codable Text Watermarking for Large Language Models (07/29/2023)
As large language models (LLMs) generate texts with increasing fluency a...
