Reward Collapse in Aligning Large Language Models

05/28/2023
by Ziang Song, et al.

The extraordinary capabilities of large language models (LLMs) such as ChatGPT and GPT-4 are in part unleashed by aligning them with reward models trained on human preferences, which are often represented as rankings of responses to prompts. In this paper, we document the phenomenon of reward collapse, an empirical observation that the prevailing ranking-based approach results in an identical reward distribution regardless of the prompt during the terminal phase of training. This outcome is undesirable: open-ended prompts like “write a short story about your best friend” should yield a continuous range of rewards for their completions, while specific prompts like “what is the capital of New Zealand” should generate either high or low rewards. Our theoretical investigation reveals that reward collapse is primarily due to the insufficiency of the ranking-based objective function to incorporate prompt-related information during optimization. This insight allows us to derive closed-form expressions for the reward distribution associated with a set of utility functions in an asymptotic regime. To overcome reward collapse, we introduce a prompt-aware optimization scheme that provably admits a prompt-dependent reward distribution within the interpolating regime. Our experimental results suggest that the proposed prompt-aware utility functions significantly alleviate reward collapse during the training of reward models.
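For readers unfamiliar with the setup, the sketch below illustrates the kind of objective at issue. It is not the paper's code: it shows a standard Bradley-Terry-style pairwise ranking loss commonly used to train reward models from preference rankings, whose value depends only on reward differences (the property behind collapse), together with a hypothetical prompt-aware variant in the spirit of the paper's proposal. The function names and the per-prompt openness weight are illustrative assumptions, not the paper's exact utility functions.

import torch
import torch.nn.functional as F

def ranking_loss(reward_preferred, reward_dispreferred):
    # Standard prompt-agnostic pairwise objective:
    # -log sigmoid(r_preferred - r_dispreferred).
    # Because the loss depends only on reward differences, the reward
    # distribution it induces at convergence can be identical across
    # prompts -- the collapse documented in the paper.
    return -F.logsigmoid(reward_preferred - reward_dispreferred).mean()

def prompt_aware_ranking_loss(reward_preferred, reward_dispreferred, openness):
    # Hypothetical prompt-aware variant (an assumption for illustration,
    # not the paper's method): rescaling the margin by a per-prompt
    # `openness` score weakens the pressure to drive rewards apart for
    # open-ended prompts relative to specific ones.
    return -F.logsigmoid((reward_preferred - reward_dispreferred) / openness).mean()

# Toy usage with scores for two prompts (openness: open-ended vs. specific):
r_pref = torch.tensor([2.0, 1.2])
r_disp = torch.tensor([0.5, 1.0])
openness = torch.tensor([2.0, 0.5])
print(ranking_loss(r_pref, r_disp))
print(prompt_aware_ranking_loss(r_pref, r_disp, openness))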



Related research

11/27/2015 · Shaping Proto-Value Functions via Rewards
In this paper, we combine task-dependent reward shaping and task-indepen...

05/04/2023 · Language, Time Preferences, and Consumer Behavior: Evidence from Large Language Models
Language has a strong influence on our perceptions of time and rewards. ...

09/18/2023 · Stabilizing RLHF through Advantage Model and Selective Rehearsal
Large Language Models (LLMs) have revolutionized natural language proces...

09/27/2018 · Controllable Neural Story Generation via Reinforcement Learning
Open story generation is the problem of automatically creating a story f...

06/30/2023 · Preference Ranking Optimization for Human Alignment
Large language models (LLMs) often contain misleading content, emphasizi...

09/27/2021 · Learning Multimodal Rewards from Rankings
Learning from human feedback has shown to be a useful approach in acquir...

03/22/2021 · Combining Reward Information from Multiple Sources
Given two sources of evidence about a latent variable, one can combine t...
