Aligning Language Models with Preferences through f-divergence Minimization

02/16/2023
by Dongyoung Go, et al.

Aligning language models with preferences can be posed as approximating a target distribution representing some desired behavior. Existing approaches differ both in the functional form of the target distribution and the algorithm used to approximate it. For instance, Reinforcement Learning from Human Feedback (RLHF) corresponds to minimizing a reverse KL from an implicit target distribution arising from a KL penalty in the objective. On the other hand, Generative Distributional Control (GDC) has an explicit target distribution and minimizes a forward KL from it using the Distributional Policy Gradient (DPG) algorithm. In this paper, we propose a new approach, f-DPG, which allows the use of any f-divergence to approximate any target distribution. f-DPG unifies both frameworks (RLHF, GDC) and the approximation methods (DPG, RL with KL penalties). We show the practical benefits of various choices of divergence objectives and demonstrate that there is no universally optimal objective but that different divergences are good for approximating different targets. For instance, we discover that for GDC, the Jensen-Shannon divergence frequently outperforms forward KL divergence by a wide margin, leading to significant improvements over prior work.
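
The unifying idea admits a compact implementation. As a minimal sketch (not the authors' released code), the standard policy-gradient identity grad_theta D_f(p || pi_theta) = E_{x ~ pi_theta}[(f(r) - r*f'(r)) * grad_theta log pi_theta(x)], with importance ratio r = p(x)/pi_theta(x), can be written in PyTorch as below. Here `log_p_target` and `log_pi` are assumed helpers computed elsewhere, and the target is assumed normalized (in practice, f-DPG-style methods must estimate the target's partition function):

```python
import torch

# Per-sample gradient weights f(r) - r*f'(r) for common f-divergences
# D_f(p || pi), where r = p(x) / pi(x). With these weights,
#   grad D_f(p || pi_theta) = E_{x ~ pi_theta}[ weight(r) * grad log pi_theta(x) ].

def weight_forward_kl(r):
    # f(t) = t*log(t)  ->  f(r) - r*f'(r) = -r   (recovers DPG)
    return -r

def weight_reverse_kl(r):
    # f(t) = -log(t)  ->  f(r) - r*f'(r) = 1 - log(r)
    # (the constant 1 is a baseline, since E_pi[grad log pi] = 0)
    return 1.0 - torch.log(r)

def weight_js(r):
    # f(t) = t*log(2t/(1+t)) + log(2/(1+t))  ->  f(r) - r*f'(r) = log(2/(1+r))
    # (this choice of f yields twice the Jensen-Shannon divergence)
    return torch.log(2.0 / (1.0 + r))

def f_dpg_surrogate_loss(log_p_target, log_pi, weight_fn):
    """Surrogate loss whose gradient is a REINFORCE-style estimate of
    grad D_f(p || pi_theta), built from samples x ~ pi_theta.

    log_p_target: log p(x) under the target (assumed normalized here)
    log_pi:       log pi_theta(x), differentiable w.r.t. theta
    """
    r = torch.exp(log_p_target - log_pi).detach()  # importance ratio, no gradient
    return (weight_fn(r) * log_pi).mean()
```

With `weight_forward_kl` this reduces to the DPG update used by GDC, and with `weight_reverse_kl` it matches the reverse-KL objective implicit in RLHF's KL-penalized reward, so the two frameworks appear as special cases of a single estimator; swapping in `weight_js` gives the Jensen-Shannon objective the paper finds effective for GDC targets.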

Related research

- 01/27/2023, "Generalized Munchausen Reinforcement Learning using Tsallis KL Divergence": Many policy optimization approaches in reinforcement learning incorporat...
- 07/17/2021, "Greedification Operators for Policy Optimization: Investigating Forward and Reverse KL Divergences": Approximate Policy Iteration (API) algorithms alternate between (approxi...
- 12/21/2020, "A Distributional Approach to Controlled Text Generation": We propose a Distributional Approach to address Controlled Text Generati...
- 05/27/2021, "Optimistic Reinforcement Learning by Forward Kullback-Leibler Divergence Optimization": This paper addresses a new interpretation of reinforcement learning (RL)...
- 05/23/2022, "RL with KL penalties is better viewed as Bayesian inference": Reinforcement learning (RL) is frequently employed in fine-tuning large ...
- 05/01/2015, "Volumetric Bias in Segmentation and Reconstruction: Secrets and Solutions": Many standard optimization methods for segmentation and reconstruction c...
- 07/31/2020, "Deep Direct Likelihood Knockoffs": Predictive modeling often uses black box machine learning methods, such ...
