Three Bricks to Consolidate Watermarks for Large Language Models

07/26/2023
by Pierre Fernandez, et al.

The task of discerning between generated and natural texts is increasingly challenging. In this context, watermarking emerges as a promising technique for ascribing generated text to a specific model. It alters the sampling generation process so as to leave an invisible trace in the generated output, facilitating later detection. This research consolidates watermarks for large language models based on three theoretical and empirical considerations. First, we introduce new statistical tests that offer robust theoretical guarantees which remain valid even at low false-positive rates (less than 10^-6). Second, we compare the effectiveness of watermarks using classical benchmarks in the field of natural language processing, gaining insights into their real-world applicability. Third, we develop advanced detection schemes for scenarios where access to the LLM is available, as well as multi-bit watermarking.
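To make the low false-positive-rate claim concrete, here is a minimal sketch of the kind of statistical test involved, assuming a Kirchenbauer-style "green-list" watermark in which each generated token falls in a secret green list with probability gamma under the null (unwatermarked) hypothesis. The function name and the choice of an exact binomial tail are illustrative, not the paper's exact scheme; the point is that an exact tail probability, unlike a Gaussian z-approximation, keeps the false-positive guarantee valid even at rates below 10^-6.

```python
import math

def greenlist_pvalue(num_green: int, num_tokens: int, gamma: float = 0.25) -> float:
    """Exact p-value for observing at least `num_green` green-list tokens
    out of `num_tokens`, under the null hypothesis that each token is
    green independently with probability `gamma` (i.e. no watermark).

    Summing the exact binomial tail avoids the Gaussian approximation,
    whose error dominates precisely in the low false-positive regime.
    """
    return sum(
        math.comb(num_tokens, k) * gamma**k * (1 - gamma) ** (num_tokens - k)
        for k in range(num_green, num_tokens + 1)
    )

# Unwatermarked text hits the green list at roughly the chance rate:
p_plain = greenlist_pvalue(num_green=30, num_tokens=100)   # not significant
# Watermarked text over-represents green tokens, driving p far below 1e-6:
p_marked = greenlist_pvalue(num_green=60, num_tokens=100)
```

Detection then reduces to comparing the p-value against the target false-positive rate: flagging a text only when the p-value is below 10^-6 guarantees, by construction, that unwatermarked texts are flagged with at most that probability.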


Related research

05/31/2021 · Language Model Evaluation Beyond Perplexity
We propose an alternate approach to quantifying how well language models...

06/29/2023 · Benchmarking Large Language Model Capabilities for Conditional Generation
Pre-trained large language models (PLMs) underlie most new developments ...

11/10/2022 · Can Transformers Reason in Fragments of Natural Language?
State-of-the-art deep-learning-based approaches to Natural Language Proc...

07/01/2023 · Personality Traits in Large Language Models
The advent of large language models (LLMs) has revolutionized natural la...

12/30/2022 · Black-box language model explanation by context length probing
The increasingly widespread adoption of large language models has highli...

05/11/2023 · Autocorrelations Decay in Texts and Applicability Limits of Language Models
We show that the laws of autocorrelations decay in texts are closely rel...

08/01/2023 · Advancing Beyond Identification: Multi-bit Watermark for Language Models
This study aims to proactively tackle misuse of large language models be...
