Training Language Models with Natural Language Feedback

04/29/2022
by Jérémy Scheurer, et al.

Pretrained language models often do not perform tasks in ways that are in line with our preferences, e.g., generating offensive text or factually incorrect summaries. Recent work approaches the above issue by learning from a simple form of human evaluation: comparisons between pairs of model-generated task outputs. Comparison feedback conveys limited information about human preferences per human evaluation. Here, we propose to learn from natural language feedback, which conveys more information per human evaluation. We learn from language feedback on model outputs using a three-step learning algorithm. First, we condition the language model on the initial output and feedback to generate many refinements. Second, we choose the refinement with the highest similarity to the feedback. Third, we finetune a language model to maximize the likelihood of the chosen refinement given the input. In synthetic experiments, we first evaluate whether language models accurately incorporate feedback to produce refinements, finding that only large language models (175B parameters) do so. Using only 100 samples of human-written feedback, our learning algorithm finetunes a GPT-3 model to roughly human-level summarization ability.
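
The snippet below is a minimal sketch of the three-step procedure described in the abstract, not the authors' released code. The `generate_fn` callable, the prompt template, and the `build_finetuning_data` helper are hypothetical stand-ins, and since the abstract does not name a similarity metric, a simple string matcher is used here in place of whatever scorer the paper employs.

```python
# Sketch of the three-step language-feedback algorithm described above.
# Hypothetical stand-ins: generate_fn (wraps a large language model),
# the prompt template, and the string-overlap similarity metric.
from difflib import SequenceMatcher
from typing import Callable, List


def similarity(a: str, b: str) -> float:
    """Crude text similarity; stands in for the paper's refinement-vs-feedback scorer."""
    return SequenceMatcher(None, a, b).ratio()


def refine_and_select(
    task_input: str,
    initial_output: str,
    feedback: str,
    generate_fn: Callable[[str, int], List[str]],
    num_refinements: int = 8,
) -> str:
    # Step 1: condition the model on the input, its initial output, and the
    # human-written feedback to sample many candidate refinements.
    prompt = (
        f"Input: {task_input}\n"
        f"Output: {initial_output}\n"
        f"Feedback: {feedback}\n"
        f"Refined output:"
    )
    refinements = generate_fn(prompt, num_refinements)

    # Step 2: keep the refinement most similar to the feedback.
    return max(refinements, key=lambda r: similarity(r, feedback))


def build_finetuning_data(examples, generate_fn):
    # Step 3: build (input -> chosen refinement) pairs; a standard supervised
    # finetuning run on these pairs maximizes the likelihood of the chosen
    # refinement given the original task input.
    return [
        {
            "prompt": ex["input"],
            "completion": refine_and_select(
                ex["input"], ex["output"], ex["feedback"], generate_fn
            ),
        }
        for ex in examples
    ]
```

In practice, `generate_fn` would wrap the 175B model used in the paper, and the resulting prompt/completion pairs would feed an ordinary supervised finetuning run.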

Related research

03/28/2023 · Training Language Models with Language Feedback at Scale
Pretrained language models often generate outputs that are not in line w...

05/15/2023 · RL4F: Generating Natural Language Feedback with Reinforcement Learning for Repairing Model Outputs
Despite their unprecedented success, even the largest language models ma...

08/08/2023 · Shepherd: A Critic for Language Model Generation
As large language models improve, there is increasing interest in techni...

02/06/2023 · Languages are Rewards: Chain of Hindsight Finetuning using Human Feedback
Learning from human preferences is important for language models to be h...

09/21/2023 · Reranking for Natural Language Generation from Logical Forms: A Study based on Large Language Models
Large language models (LLMs) have demonstrated impressive capabilities i...

11/10/2022 · Nano: Nested Human-in-the-Loop Reward Learning for Few-shot Language Model Control
Pretrained language models have demonstrated extraordinary capabilities ...

09/19/2023 · Large language models can accurately predict searcher preferences
Relevance labels, which indicate whether a search result is valuable to ...
