Training Language Models with Language Feedback at Scale

03/28/2023
by Jérémy Scheurer, et al.

Pretrained language models often generate outputs that are not in line with human preferences, such as harmful text or factually incorrect summaries. Recent work addresses these issues by learning from a simple form of human feedback: comparisons between pairs of model-generated outputs. However, comparison feedback conveys only limited information about human preferences. In this paper, we introduce Imitation learning from Language Feedback (ILF), a new approach that utilizes more informative language feedback. ILF consists of three steps that are applied iteratively: first, conditioning the language model on the input, an initial LM output, and the feedback to generate refinements; second, selecting the refinement that incorporates the most feedback; third, finetuning the language model to maximize the likelihood of the chosen refinement given the input. We show theoretically that ILF can be viewed as Bayesian inference, similar to reinforcement learning from human feedback. We evaluate ILF's effectiveness on a carefully controlled toy task and a realistic summarization task. Our experiments demonstrate that large language models accurately incorporate feedback and that finetuning with ILF scales well with dataset size, even outperforming finetuning on human summaries. Learning from both language and comparison feedback outperforms learning from each alone, achieving human-level summarization performance.
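To make the three steps concrete, here is a minimal Python sketch of one ILF iteration, based only on the description in the abstract rather than the paper's actual implementation. The helper callables (generate, finetune, score_refinement) and the prompt format are hypothetical placeholders standing in for the underlying language model, the supervised finetuning routine, and whatever criterion is used to judge how well a refinement incorporates the feedback.

```python
# Hypothetical sketch of one ILF iteration; helper names are placeholders,
# not the paper's API.

from typing import Callable, List, Tuple


def ilf_iteration(
    generate: Callable[[str], str],                  # LM sampling: prompt -> text
    finetune: Callable[[List[Tuple[str, str]]], None],  # supervised finetuning on (input, target) pairs
    score_refinement: Callable[[str, str, str], float],  # (input, feedback, candidate) -> quality score
    inputs: List[str],
    initial_outputs: List[str],
    feedback: List[str],                             # human-written language feedback per output
    n_refinements: int = 4,
) -> None:
    """One round of Imitation learning from Language Feedback (ILF):
    1) condition the LM on the input, the initial output, and the feedback to sample refinements;
    2) select the refinement that incorporates the most feedback;
    3) finetune the LM to maximize the likelihood of the chosen refinement given the input."""
    training_pairs: List[Tuple[str, str]] = []
    for x, y0, fb in zip(inputs, initial_outputs, feedback):
        # Step 1: sample several candidate refinements conditioned on input, output, and feedback.
        prompt = f"Input: {x}\nInitial output: {y0}\nFeedback: {fb}\nRefined output:"
        candidates = [generate(prompt) for _ in range(n_refinements)]
        # Step 2: pick the candidate that best incorporates the feedback.
        best = max(candidates, key=lambda c: score_refinement(x, fb, c))
        training_pairs.append((x, best))
    # Step 3: finetune on (input, chosen refinement) pairs.
    finetune(training_pairs)
```

The selection criterion is abstracted into score_refinement here; the abstract only specifies that the refinement incorporating the most feedback is chosen, and the loop can be repeated with the finetuned model as the new generator.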

Related Research

04/29/2022
Training Language Models with Natural Language Feedback
Pretrained language models often do not perform tasks in ways that are i...

09/01/2023
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
Reinforcement learning from human feedback (RLHF) is effective at aligni...

05/23/2023
Improving Language Models via Plug-and-Play Retrieval Feedback
Large language models (LLMs) exhibit remarkable performance across vario...

03/30/2023
Self-Refine: Iterative Refinement with Self-Feedback
Like people, LLMs do not always generate the best text for a given gener...

08/08/2023
Shepherd: A Critic for Language Model Generation
As large language models improve, there is increasing interest in techni...

09/19/2023
Large language models can accurately predict searcher preferences
Relevance labels, which indicate whether a search result is valuable to ...

05/04/2023
ChatGPT-steered Editing Instructor for Customization of Abstractive Summarization
Tailoring outputs of large language models, such as ChatGPT, to specific...
