Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback

04/12/2022
by   Yuntao Bai, et al.
We apply preference modeling and reinforcement learning from human feedback (RLHF) to finetune language models to act as helpful and harmless assistants. We find this alignment training improves performance on almost all NLP evaluations, and is fully compatible with training for specialized skills such as Python coding and summarization. We explore an iterated online mode of training, where preference models and RL policies are updated on a weekly cadence with fresh human feedback data, efficiently improving our datasets and models. Finally, we investigate the robustness of RLHF training, and identify a roughly linear relation between the RL reward and the square root of the KL divergence between the policy and its initialization. Alongside our main results, we perform peripheral analyses on calibration, competing objectives, and the use of OOD detection, compare our models with human writers, and provide samples from our models using prompts appearing in recent related work.
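
The reported relation, reward growing roughly linearly in the square root of the policy's KL divergence from its initialization, is straightforward to check against one's own training logs. The sketch below is not taken from the paper: the (KL, reward) pairs are hypothetical placeholders for quantities logged during an RLHF run, and the fit simply regresses reward on sqrt(KL).

    # Minimal sketch (hypothetical data): test whether RL reward is roughly
    # linear in sqrt(KL(policy || initial policy)), as the abstract reports.
    import numpy as np

    kl = np.array([1.0, 4.0, 9.0, 16.0, 25.0])     # KL from initialization, hypothetical values
    reward = np.array([0.9, 2.1, 2.9, 4.2, 5.0])   # mean preference-model reward, hypothetical values

    sqrt_kl = np.sqrt(kl)
    slope, intercept = np.polyfit(sqrt_kl, reward, 1)   # fit: reward ~ slope * sqrt(KL) + intercept
    residuals = reward - (slope * sqrt_kl + intercept)

    print(f"reward ~ {slope:.2f} * sqrt(KL) + {intercept:.2f}")
    print("max abs residual:", np.abs(residuals).max())

A small maximum residual on real logs would indicate the linear-in-sqrt(KL) trend described in the abstract; a large one would suggest the relation does not hold for that run.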

Related research

05/24/2022 · Reward Uncertainty for Exploration in Preference-based Reinforcement Learning
Conveying complex objectives to reinforcement learning (RL) agents often...

05/17/2023 · SLiC-HF: Sequence Likelihood Calibration with Human Feedback
Learning from human feedback has been shown to be effective at aligning ...

05/29/2023 · How to Query Human Feedback Efficiently in RL?
Reinforcement Learning with Human Feedback (RLHF) is a paradigm in which...

10/12/2020 · Human-centric Dialog Training via Offline Reinforcement Learning
How can we train a dialog model to produce better conversations by learn...

09/13/2023 · Statistical Rejection Sampling Improves Preference Optimization
Improving the alignment of language models with human preferences remain...

12/01/2021 · A General Language Assistant as a Laboratory for Alignment
Given the broad capabilities of large language models, it should be poss...

04/12/2022 · Make The Most of Prior Data: A Solution for Interactive Text Summarization with Preference Feedback
For summarization, human preference is critical to tame outputs of the s...