Languages are Rewards: Chain of Hindsight Finetuning using Human Feedback

02/06/2023
by Hao Liu et al.

Learning from human preferences is important for language models to be helpful and useful to humans, and to align with human and social values. Existing works focus on supervised finetuning of pretrained models on curated model generations that are preferred by human labelers. Such works have achieved remarkable success in understanding and following instructions (e.g., InstructGPT, ChatGPT). However, to date, a key limitation of supervised finetuning is that it cannot learn from negative ratings; models are trained only on positively rated data, which makes it data-inefficient. Because collecting human feedback is both time-consuming and expensive, it is vital for the model to learn from all feedback, akin to the remarkable ability of humans to learn from diverse feedback. In this work, we propose Chain of Hindsight Finetuning, a technique for making language models learn from diverse human feedback. Our idea is motivated by how humans learn from hindsight experience. We condition the model on a sequence of model generations paired with hindsight feedback, and finetune the model to predict the most preferred output. By doing so, models learn to identify and correct negative attributes or errors. Applying the method to GPT-J, we observe that it significantly improves results on summarization and dialogue tasks using the same amount of human feedback.
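To make the conditioning idea concrete, here is a minimal sketch of how one such training example could be built and used for a gradient step with Hugging Face transformers. The feedback templates, masking choices, example data, and hyperparameters below are illustrative assumptions for exposition, not the paper's exact recipe.

```python
# Sketch of chain-of-hindsight-style finetuning (illustrative assumptions,
# not the paper's exact format). We build one training sequence that pairs
# a less-preferred and a more-preferred generation with natural-language
# hindsight feedback, and compute the LM loss only on the response tokens.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# The paper applies the method to GPT-J; substitute a smaller causal LM
# (e.g. "gpt2") to try this sketch on modest hardware.
MODEL_NAME = "EleutherAI/gpt-j-6B"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

def build_example(prompt, worse, better):
    """Concatenate prompt, hindsight feedback phrases, and both generations.

    Returns input_ids and labels; everything except the two model
    generations is masked out (-100) so only response tokens are predicted.
    """
    segments = [
        (prompt + "\n", False),
        ("A less preferred summary: ", False),   # assumed feedback template
        (worse + "\n", True),
        ("A more preferred summary: ", False),   # assumed feedback template
        (better, True),
    ]
    input_ids, labels = [], []
    for text, is_response in segments:
        ids = tokenizer(text, add_special_tokens=False).input_ids
        input_ids += ids
        labels += ids if is_response else [-100] * len(ids)
    return torch.tensor([input_ids]), torch.tensor([labels])

# One gradient step on a single (hypothetical) human-feedback comparison.
input_ids, labels = build_example(
    prompt="Summarize: The city council voted on Tuesday to ...",
    worse="The council met.",
    better="The city council approved the new transit budget on Tuesday.",
)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
optimizer.zero_grad()
loss = model(input_ids=input_ids, labels=labels).loss
loss.backward()
optimizer.step()

# At inference time, condition on the positive feedback phrase to elicit
# the preferred style of output.
query = "Summarize: ...\nA more preferred summary: "
out = model.generate(**tokenizer(query, return_tensors="pt"), max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Because the feedback phrases are masked out of the loss but kept in the context, the model learns to use them as a control signal rather than to reproduce them; prompting with the positive phrase at inference then steers generation toward the preferred behavior.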

Related research

Training Language Models with Natural Language Feedback (04/29/2022)
Pretrained language models often do not perform tasks in ways that are i...

Training language models to follow instructions with human feedback (03/04/2022)
Making language models bigger does not inherently make them better at fo...

SLiC-HF: Sequence Likelihood Calibration with Human Feedback (05/17/2023)
Learning from human feedback has been shown to be effective at aligning ...

Peering Through Preferences: Unraveling Feedback Acquisition for Aligning Large Language Models (08/30/2023)
Aligning large language models (LLMs) with human values and intents crit...

When Life Gives You Lemons, Make Cherryade: Converting Feedback from Bad Responses into Good Labels (10/28/2022)
Deployed dialogue agents have the potential to integrate human feedback ...

Task Ambiguity in Humans and Language Models (12/20/2022)
Language models have recently achieved strong performance across a wide ...

Fine-tuning language models to find agreement among humans with diverse preferences (11/28/2022)
Recent work in large language modeling (LLMs) has used fine-tuning to al...
