Robust Distortion-free Watermarks for Language Models

07/28/2023
by   Rohith Kuditipudi, et al.

We propose a methodology for planting watermarks in text from an autoregressive language model that are robust to perturbations without changing the distribution over text up to a certain maximum generation budget. We generate watermarked text by mapping a sequence of random numbers – which we compute using a randomized watermark key – to a sample from the language model. To detect watermarked text, any party who knows the key can align the text to the random number sequence. We instantiate our watermark methodology with two sampling schemes: inverse transform sampling and exponential minimum sampling. We apply these watermarks to three language models – OPT-1.3B, LLaMA-7B and Alpaca-7B – to experimentally validate their statistical power and robustness to various paraphrasing attacks. Notably, for both the OPT-1.3B and LLaMA-7B models, we find we can reliably detect watermarked text (p ≤ 0.01) from 35 tokens even after corrupting between 40-50% of the tokens via random edits (i.e., substitutions, insertions or deletions). For the Alpaca-7B model, we conduct a case study on the feasibility of watermarking responses to typical user instructions. Due to the lower entropy of the responses, detection is more difficult: around 25% of the responses – whose median length is around 100 tokens – are detectable with p ≤ 0.01, and the watermark is also less robust to certain automated paraphrasing attacks we implement.
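To make the mechanism concrete, here is a minimal sketch of the exponential minimum sampling scheme described above, in Python. The helper `_key_uniforms` and its seed-mixing scheme are hypothetical stand-ins for however the watermark key is expanded into a random number sequence, and `detection_score` uses a simplified unaligned statistic rather than the paper's alignment-based detector:

```python
import math
import random


def _key_uniforms(key, step, vocab_size):
    """One uniform per vocabulary token at each step, derived from the key.

    The mixing scheme (key * prime + step) is a hypothetical stand-in for
    how the watermark key would actually be expanded into randomness.
    """
    rng = random.Random(key * 1_000_003 + step)
    return [rng.random() for _ in range(vocab_size)]


def watermarked_sample(probs, key, step):
    """Exponential minimum sampling: pick argmin_i -log(u_i) / p_i.

    Since P(argmin = i) = p_i exactly, each sample follows the model's
    next-token distribution, i.e. the watermark is distortion-free.
    """
    us = _key_uniforms(key, step, len(probs))
    costs = [
        -math.log(u) / p if p > 0 else math.inf
        for u, p in zip(us, probs)
    ]
    return min(range(len(probs)), key=costs.__getitem__)


def detection_score(tokens, key, vocab_size):
    """Sum of -log(1 - u_{t, x_t}) over the observed tokens.

    If the text is independent of the key, each term is Exp(1), so the sum
    concentrates near len(tokens); watermarked text pushes the chosen u's
    toward 1 and inflates the score. The paper's detector additionally
    aligns the text against the key sequence, which is what makes it
    robust to insertions, deletions, and substitutions.
    """
    score = 0.0
    for step, tok in enumerate(tokens):
        u = _key_uniforms(key, step, vocab_size)[tok]
        score += -math.log(1.0 - u)
    return score
```

A party holding the key recomputes the same uniforms and scores candidate text; anyone without the key sees ordinary samples from the model.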


