Stochastic Parrots Looking for Stochastic Parrots: LLMs are Easy to Fine-Tune and Hard to Detect with other LLMs

The self-attention revolution allowed generative language models to scale and achieve increasingly impressive abilities. Such models, commonly referred to as Large Language Models (LLMs), have recently gained prominence with the general public thanks to conversational fine-tuning, which brought their behavior in line with public expectations of AI. This prominence amplified prior concerns about the misuse of LLMs and led to the emergence of numerous tools for detecting LLM-generated text in the wild. Unfortunately, most such tools are critically flawed. While major publications in the LLM-detectability field suggested that LLMs were easy to detect with fine-tuned autoencoders, the limitations of their results are easy to overlook: they assumed publicly available generative models, used without fine-tuning or non-trivial prompts. While the importance of these assumptions has been demonstrated, it has remained unclear until now how well such detection could be countered. Here, we show that an attacker with access to a detector's reference human texts and outputs not only evades detection but can fully frustrate the detector's training, on a reasonable budget and even with all of the attacker's outputs labeled as machine-generated. Achieving this required combining a common "reinforcement from critic" loss-function modification with the AdamW optimizer, which led to surprisingly good fine-tuning generalization. Finally, we warn against the temptation to transpose conclusions obtained on RNN-driven text GANs to LLMs, given the latter's greater representational capacity. These results have critical implications for the detection and prevention of malicious use of generative language models, and we hope they will aid the designers of both generative models and detectors.
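The abstract gives only a high-level description of the attack. The sketch below illustrates one plausible reading of the "reinforcement from critic" loss combined with AdamW, assuming a PyTorch/Hugging Face setup with a GPT-2 generator and a frozen RoBERTa-style detector serving as the critic; the model names, the reward definition, and the constant baseline are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch: adversarial fine-tuning of a generator against a frozen
# detector-critic. Model names ("gpt2", "roberta-base"), the reward
# definition, and the constant baseline are assumptions for illustration.
import torch
from torch.optim import AdamW
from transformers import (AutoModelForCausalLM,
                          AutoModelForSequenceClassification, AutoTokenizer)

gen_tok = AutoTokenizer.from_pretrained("gpt2")
generator = AutoModelForCausalLM.from_pretrained("gpt2")
generator.train()

det_tok = AutoTokenizer.from_pretrained("roberta-base")
detector = AutoModelForSequenceClassification.from_pretrained("roberta-base")
detector.eval()  # the critic stays frozen; only the generator is updated

optimizer = AdamW(generator.parameters(), lr=1e-5)
baseline = 0.5  # constant baseline to reduce gradient variance

prompt = gen_tok("Write a short news item:", return_tensors="pt")

for step in range(100):
    # Sample a continuation from the current generator policy
    # (generate() runs without gradients in transformers).
    sample = generator.generate(**prompt, do_sample=True, max_new_tokens=64,
                                pad_token_id=gen_tok.eos_token_id)
    text = gen_tok.decode(sample[0], skip_special_tokens=True)

    # Critic reward: probability the detector assigns to the "human" class.
    # (Class index 0 meaning "human" is an assumption about the label map.)
    with torch.no_grad():
        logits = detector(**det_tok(text, return_tensors="pt",
                                    truncation=True)).logits
        reward = logits.softmax(-1)[0, 0].item()

    # REINFORCE-style update: out.loss is the mean token NLL of the sample,
    # so minimizing (reward - baseline) * NLL raises the likelihood of
    # samples the detector mistakes for human text and lowers it otherwise.
    out = generator(sample, labels=sample)
    loss = (reward - baseline) * out.loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The design choice suggested by the abstract is that the detector's own score, rather than a likelihood objective alone, drives the update, steering the generator directly toward text the detector misclassifies as human.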


