Rewarded soups: towards Pareto-optimal alignment by interpolating weights fine-tuned on diverse rewards

06/07/2023
by Alexandre Ramé et al.

Foundation models are first pre-trained on vast unsupervised datasets and then fine-tuned on labeled data. Reinforcement learning, notably from human feedback (RLHF), can further align the network with its intended usage. Yet imperfections in the proxy reward may hinder training and lead to suboptimal results; the diversity of objectives in real-world tasks and of human opinions exacerbates the issue. This paper proposes embracing the heterogeneity of diverse rewards by following a multi-policy strategy. Rather than focusing on a single a priori reward, we aim for Pareto-optimal generalization across the entire space of preferences. To this end, we propose rewarded soup: first specializing multiple networks independently (one per proxy reward) and then interpolating their weights linearly. This succeeds empirically because, as we show, the weights remain linearly connected when fine-tuned on diverse rewards from a shared pre-trained initialization. We demonstrate the effectiveness of our approach on text-to-text (summarization, Q&A, helpful assistant, review), text-image (image captioning, text-to-image generation, visual grounding, VQA), and control (locomotion) tasks. We hope to improve the alignment of deep models and, in turn, how they interact with the world in all its diversity.
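As an illustration of the interpolation step, here is a minimal PyTorch-style sketch. The function name `rewarded_soup` and the two policy objects are hypothetical placeholders; the example covers only two rewards, whereas the paper's formulation extends to any number of rewards via a convex combination of the fine-tuned weights.

```python
# Minimal sketch of the weight-interpolation step, assuming two policies that
# were fine-tuned on different proxy rewards from the SAME pre-trained
# initialization (the setting in which the weights stay linearly connected).
import copy

import torch


def rewarded_soup(policy_a: torch.nn.Module,
                  policy_b: torch.nn.Module,
                  lam: float) -> torch.nn.Module:
    """Return a policy whose weights are (1 - lam) * policy_a + lam * policy_b."""
    assert 0.0 <= lam <= 1.0
    state_a = policy_a.state_dict()
    state_b = policy_b.state_dict()

    soup_state = {}
    for name, tensor_a in state_a.items():
        tensor_b = state_b[name]
        if torch.is_floating_point(tensor_a):
            # Parameter-wise linear interpolation of the fine-tuned weights.
            soup_state[name] = (1.0 - lam) * tensor_a + lam * tensor_b
        else:
            # Keep integer buffers (e.g. batch-norm counters) from the first policy.
            soup_state[name] = tensor_a

    soup = copy.deepcopy(policy_a)
    soup.load_state_dict(soup_state)
    return soup
```

Sweeping `lam` over [0, 1] at inference time yields a family of interpolated policies that approximates the Pareto front between the two proxy rewards, with no additional training.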


