Shattering the Agent-Environment Interface for Fine-Tuning Inclusive Language Models

05/19/2023
by Wanqiao Xu, et al.

A centerpiece of the ever-popular reinforcement learning from human feedback (RLHF) approach to fine-tuning autoregressive language models is the explicit training of a reward model to emulate human feedback, distinct from the language model itself. This reward model is then coupled with policy-gradient methods to dramatically improve the alignment between language model outputs and desired responses. In this work, we adopt a novel perspective wherein a pre-trained language model is itself simultaneously a policy, reward function, and transition function. An immediate consequence of this is that reward learning and language model fine-tuning can be performed jointly and directly, without requiring any further downstream policy optimization. While this perspective does indeed break the traditional agent-environment interface, we nevertheless maintain that bringing traditional algorithmic concepts from reinforcement learning to bear can afford enormous statistical benefits. Our experiments demonstrate one concrete instance of this through efficient exploration based on the representation and resolution of epistemic uncertainty. In order to illustrate these ideas in a transparent manner, we restrict attention to a simple, didactic data-generating process and leave the extension to systems of practical scale for future work.
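To make the unified view concrete, below is a minimal, self-contained sketch (not the paper's implementation) of how a single autoregressive model can supply all three MDP ingredients over token sequences: the state is the token prefix, the policy is the model's next-token distribution, the transition deterministically appends the sampled token, and the reward is read off the same model. The toy vocabulary, the random bigram table standing in for a pre-trained model, and the "thumbs-up" feedback token GOOD are all illustrative assumptions.

```python
# Sketch: one model acting as policy, transition function, and reward function.
# The fixed random bigram table below is a stand-in for a pre-trained LM.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 16                      # toy vocabulary size (assumption)
GOOD = VOCAB - 1                # hypothetical "thumbs-up" feedback token
EOS = 0                         # end-of-sequence token

# Fixed random logits: row i gives next-token scores after token i.
LOGITS = rng.normal(size=(VOCAB, VOCAB))


def policy(prefix: list[int]) -> np.ndarray:
    """The LM as policy: next-token distribution given the prefix (the state)."""
    last = prefix[-1] if prefix else EOS
    z = LOGITS[last]
    p = np.exp(z - z.max())
    return p / p.sum()


def transition(prefix: list[int], token: int) -> list[int]:
    """The token-level transition: deterministically append the chosen action."""
    return prefix + [token]


def reward(prefix: list[int]) -> float:
    """The LM as reward model: probability it assigns to the GOOD feedback
    token at the end of the completed sequence (an illustrative choice)."""
    return float(policy(prefix)[GOOD])


def rollout(max_len: int = 8) -> tuple[list[int], float]:
    """Generate a completion and score it, all with the same model."""
    state: list[int] = []
    for _ in range(max_len):
        token = int(rng.choice(VOCAB, p=policy(state)))
        state = transition(state, token)
        if token == EOS:
            break
    return state, reward(state)


if __name__ == "__main__":
    tokens, r = rollout()
    print("completion:", tokens, "reward:", round(r, 3))
```

The exploration result mentioned in the abstract would layer on top of such a setup, for example by maintaining an ensemble of reward estimates whose disagreement represents epistemic uncertainty; that component is not sketched here.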


Related research

DPOK: Reinforcement Learning for Fine-tuning Text-to-Image Diffusion Models (05/25/2023)
Learning from human feedback has been shown to improve text-to-image mod...

Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization (08/04/2023)
Recent months have seen the emergence of a powerful new trend in which l...

SLiC-HF: Sequence Likelihood Calibration with Human Feedback (05/17/2023)
Learning from human feedback has been shown to be effective at aligning ...

On Reinforcement Learning and Distribution Matching for Fine-Tuning Language Models with no Catastrophic Forgetting (06/01/2022)
The availability of large pre-trained models is changing the landscape o...

Let the Models Respond: Interpreting Language Model Detoxification Through the Lens of Prompt Dependence (09/01/2023)
Due to language models' propensity to generate toxic or hateful response...

Guiding Pretraining in Reinforcement Learning with Large Language Models (02/13/2023)
Reinforcement learning algorithms typically struggle in the absence of a...

Efficient RLHF: Reducing the Memory Usage of PPO (09/01/2023)
Reinforcement Learning with Human Feedback (RLHF) has revolutionized lan...
