Large Language Models Sometimes Generate Purely Negatively-Reinforced Text

06/13/2023
by Fabien Roger, et al.

When using adversarial training, it is common practice to train against the most egregious failures. However, this might imply using examples with sensitive information (such as leaked passwords or security vulnerabilities) as training data. One might assume that language models trained with gradient descent never generate text snippets which were only present in examples associated with the lowest possible reward. In this paper, we show that this assumption is wrong: in some situations, large language models do learn from such negatively-reinforced examples. We present a specific training setup that enables Pythia-160M to guess passwords 13% more often than it would by guessing randomly, despite only being shown these passwords on examples where the model is incentivized not to output them. Our code is available at www.github.com/FabienRoger/Learning-From-Negative-Examples
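The core idea of negatively reinforcing specific strings can be illustrated, as a minimal sketch and not the paper's actual training setup, with an unlikelihood-style loss that pushes down the probability of a "password" token. The toy model, vocabulary size, and token choices below are all assumptions introduced for illustration:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
VOCAB = 16

# Toy stand-in for a language model: embedding -> next-token logits.
class TinyLM(torch.nn.Module):
    def __init__(self, vocab=VOCAB, dim=32):
        super().__init__()
        self.emb = torch.nn.Embedding(vocab, dim)
        self.out = torch.nn.Linear(dim, vocab)

    def forward(self, x):
        return self.out(self.emb(x))

model = TinyLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

prompt = torch.tensor([3])    # hypothetical context token
password = 7                  # hypothetical "password" token to suppress

for _ in range(200):
    logits = model(prompt)
    # Unlikelihood loss: minimize -log(1 - p(password)), which
    # drives the password token's probability toward zero.
    p_bad = F.softmax(logits, dim=-1)[0, password]
    loss = -torch.log(1 - p_bad + 1e-9)
    opt.zero_grad()
    loss.backward()
    opt.step()

final_p = F.softmax(model(prompt), dim=-1)[0, password].item()
print(final_p)
```

After training, the suppressed token's probability ends up well below the uniform baseline of 1/VOCAB. The paper's point is subtler: even though gradients only ever push the model *away* from these strings, information about them can still end up recoverable from the trained model.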


research
07/14/2021

Deduplicating Training Data Makes Language Models Better

We find that existing language modeling datasets contain many near-dupli...
research
11/28/2022

CoNAL: Anticipating Outliers with Large Language Models

In many task settings, text classification models are likely to encounte...
research
07/06/2023

Training Models to Generate, Recognize, and Reframe Unhelpful Thoughts

Many cognitive approaches to well-being, such as recognizing and reframi...
research
04/03/2023

Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling

How do large language models (LLMs) develop and evolve over the course o...
research
06/17/2022

Evolution through Large Models

This paper pursues the insight that large language models (LLMs) trained...
research
05/15/2023

Memorization for Good: Encryption with Autoregressive Language Models

Over-parameterized neural language models (LMs) can memorize and recite ...
research
05/21/2022

Scaling Laws and Interpretability of Learning from Repeated Data

Recent large language models have been trained on vast datasets, but als...
