G3Detector: General GPT-Generated Text Detector

05/22/2023
by Haolan Zhan, et al.

Rapid progress in large language models (LLMs) brings substantial benefits, but it also raises the risk of misuse and a range of social and ethical concerns. Despite numerous prior efforts to distinguish synthetic text, most existing detection systems fail to identify data generated by the latest LLMs, such as ChatGPT and GPT-4. In response, we introduce a simple yet effective detection approach that identifies synthetic text across a wide range of domains. Our detector also performs consistently well across different model architectures and decoding strategies, and it can identify text generated with a strong detection-evasion technique. This work aims to improve the robustness and efficiency of machine-generated text detection in the face of rapidly advancing and increasingly adaptive AI technologies.
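
The abstract does not describe the detector's internals. Purely as an illustrative sketch, one common baseline for machine-generated-text detection is to fine-tune a pretrained transformer as a binary classifier (label 0 for human-written text, 1 for machine-generated text). The choice of roberta-base, the hyperparameters, and the toy data below are assumptions made for illustration, not details taken from the paper.

```python
# Illustrative sketch only: a generic human-vs-machine text classifier,
# not G3Detector's confirmed architecture or training recipe.
import torch
from torch.utils.data import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

class PairDataset(Dataset):
    """Wraps (text, label) pairs as tokenized examples for the Trainer."""
    def __init__(self, texts, labels, tokenizer):
        self.enc = tokenizer(texts, truncation=True,
                             padding="max_length", max_length=256)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[idx])  # 0 = human, 1 = machine
        return item

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base",
                                                           num_labels=2)

# Toy training data; a real detector would use large human/LLM text corpora.
texts = ["An example of human-written prose.", "An example of LLM-generated prose."]
labels = [0, 1]
train_set = PairDataset(texts, labels, tokenizer)

args = TrainingArguments(output_dir="detector-sketch", num_train_epochs=1,
                         per_device_train_batch_size=2, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=train_set).train()

# Inference: estimated probability that a new text is machine-generated.
inputs = tokenizer("Some text to check.", return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(f"P(machine-generated) = {probs[0, 1]:.3f}")
```

A detector of this kind would in practice be trained on large corpora of human-written and LLM-generated text spanning many domains, model architectures, and decoding strategies; its robustness largely depends on the breadth of that training data.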
