Memorization for Good: Encryption with Autoregressive Language Models

05/15/2023
by Samuel Stevens et al.

Over-parameterized neural language models (LMs) can memorize and recite long sequences of training data. While such memorization is normally associated with undesired properties such as overfitting and information leaking, our work casts memorization as an unexplored capability of LMs. We propose the first symmetric encryption algorithm with autoregressive language models (SELM). We show that autoregressive LMs can encode arbitrary data into a compact real-valued vector (i.e., encryption) and then losslessly decode the vector to the original message (i.e., decryption) via random subspace optimization and greedy decoding. While SELM is not amenable to conventional cryptanalysis, we investigate its security through a novel empirical variant of the classic IND-CPA (indistinguishability under chosen-plaintext attack) game. Our code and datasets are available at https://github.com/OSU-NLP-Group/SELM.
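The mechanism the abstract describes — optimizing a compact real-valued vector inside a key-derived random subspace until the model memorizes the message, then recovering the message by greedy decoding — can be illustrated with a toy sketch. Everything below is a hypothetical simplification, not the paper's implementation: a tiny bigram next-token logit table stands in for an over-parameterized autoregressive LM, and the names `encrypt`/`decrypt`, the seed-based key, and all dimensions are illustrative choices.

```python
import numpy as np

# Hypothetical toy sketch of the idea (NOT the paper's implementation):
# a bigram next-token logit table theta memorizes the plaintext, with
# theta constrained to a secret random subspace, theta = A @ z. The
# projection A is derived from the key; the optimized low-dimensional
# vector z plays the role of the ciphertext.

VOCAB = "^abcde"                     # '^' serves as the start token
V = len(VOCAB)
TOK = {c: i for i, c in enumerate(VOCAB)}

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def key_to_subspace(key_seed, dim):
    # Both parties derive the same secret random subspace from the key.
    rng = np.random.default_rng(key_seed)
    return rng.standard_normal((V * V, dim))

def encrypt(plaintext, key_seed, dim=24, steps=3000, lr=0.02):
    """Gradient descent on z until theta = A @ z memorizes the plaintext."""
    A = key_to_subspace(key_seed, dim)
    ids = [TOK["^"]] + [TOK[c] for c in plaintext]
    z = np.zeros(dim)
    for _ in range(steps):
        theta = (A @ z).reshape(V, V)            # bigram logit table
        grad = np.zeros((V, V))
        for prev, nxt in zip(ids, ids[1:]):
            p = softmax(theta[prev])
            p[nxt] -= 1.0                        # d(cross-entropy)/d(logits)
            grad[prev] += p
        z -= lr * (A.T @ grad.ravel())           # chain rule through A
    return z                                     # compact "ciphertext" vector

def decrypt(z, key_seed, length, dim=24):
    """Greedy decoding under the same secret subspace recovers the text."""
    A = key_to_subspace(key_seed, dim)
    theta = (A @ z).reshape(V, V)
    cur, out = TOK["^"], []
    for _ in range(length):
        cur = int(np.argmax(theta[cur]))         # greedy next-token choice
        out.append(VOCAB[cur])
    return "".join(out)

message = "abcde"
ciphertext = encrypt(message, key_seed=42)
print(decrypt(ciphertext, key_seed=42, length=len(message)))
```

A bigram table can only memorize sequences where each context has a single successor, which is why the demo message is `"abcde"`; in the paper the memorizing model is a full autoregressive LM, the subspace parameterizes an update to its weights (intrinsic-dimension-style fine-tuning), and security is assessed empirically through the IND-CPA-style game rather than by conventional cryptanalysis.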

