
Learning to summarize from human feedback

09/02/2020
by Nisan Stiennon, et al.

As language models become more powerful, training and evaluation are increasingly bottlenecked by the data and metrics used for a particular task. For example, summarization models are often trained to predict human reference summaries and evaluated using ROUGE, but both of these metrics are rough proxies for what we really care about—summary quality. In this work, we show that it is possible to significantly improve summary quality by training a model to optimize for human preferences. We collect a large, high-quality dataset of human comparisons between summaries, train a model to predict the human-preferred summary, and use that model as a reward function to fine-tune a summarization policy using reinforcement learning. We apply our method to a version of the TL;DR dataset of Reddit posts and find that our models significantly outperform both human reference summaries and much larger models fine-tuned with supervised learning alone. Our models also transfer to CNN/DM news articles, producing summaries nearly as good as the human reference without any news-specific fine-tuning. We conduct extensive analyses to understand our human feedback dataset and fine-tuned models. We establish that our reward model generalizes to new datasets, and that optimizing our reward model results in better summaries than optimizing ROUGE according to humans. We hope the evidence from our paper motivates machine learning researchers to pay closer attention to how their training loss affects the model behavior they actually want.
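As a concrete illustration of the method described above, the sketch below shows the two training signals in PyTorch: the pairwise comparison loss that trains the reward model to score the human-preferred summary above the rejected one, and the KL-penalized reward that keeps the RL fine-tuned policy close to the supervised baseline. The function names, example tensors, and the beta coefficient are illustrative assumptions, not the authors' code.

import torch
import torch.nn.functional as F

def reward_model_loss(r_preferred: torch.Tensor, r_other: torch.Tensor) -> torch.Tensor:
    # Pairwise comparison loss: maximize the log-probability that the
    # human-preferred summary receives the higher score, i.e. minimize
    # -log sigmoid(r_preferred - r_other), averaged over the batch.
    return -F.logsigmoid(r_preferred - r_other).mean()

def rl_reward(r_score: torch.Tensor, logp_policy: torch.Tensor,
              logp_sft: torch.Tensor, beta: float = 0.05) -> torch.Tensor:
    # Per-summary RL reward: the reward model's score minus a KL penalty
    # toward the supervised (SFT) policy. beta = 0.05 is a hypothetical
    # setting, not a value taken from the paper.
    return r_score - beta * (logp_policy - logp_sft)

# Illustrative usage on a batch of three comparisons scored by a
# hypothetical reward model with a scalar head:
r_preferred = torch.tensor([1.3, 0.2, 0.9])
r_other = torch.tensor([0.4, 0.5, -0.1])
print(reward_model_loss(r_preferred, r_other))  # shrinks as preferred scores win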

Related Research

05/24/2023

Neural Summarization of Electronic Health Records

Hospital discharge documentation is among the most essential, yet time-c...
05/23/2023

On Learning to Summarize with Large Language Models as References

Recent studies have found that summaries generated by large language mod...
12/19/2022

Human-in-the-loop Abstractive Dialogue Summarization

Abstractive dialogue summarization has received increasing attention rec...
09/18/2019

Fine-Tuning Language Models from Human Preferences

Reward learning enables the application of reinforcement learning (RL) t...
02/23/2023

Aligning Text-to-Image Models using Human Feedback

Deep generative models have shown impressive results in text-to-image sy...
12/04/2020

CUED_speech at TREC 2020 Podcast Summarisation Track

In this paper, we describe our approach for the Podcast Summarisation ch...
09/22/2021

Recursively Summarizing Books with Human Feedback

A major challenge for scaling machine learning is training models to per...

Code Repositories

Transformer-RL

Experiments to train a transformer network to master reinforcement learning environments.

learning-from-human-feedback

Reimplementation of OpenAI's "Learning to summarize from human feedback".