Uncertainty Estimation for Language Reward Models

03/14/2022
by Adam Gleave et al.

Language models can learn a range of capabilities from unsupervised training on text corpora. However, to solve a particular problem (such as text summarization) it is typically necessary to fine-tune them on a task-specific dataset. It is often easier for humans to choose between options than to provide labeled data, and prior work has achieved state-of-the-art performance by training a reward model from such preference comparisons. However, collecting a large preference comparison dataset is still expensive – and the learned reward models are unreliable out-of-distribution. We seek to address these problems via uncertainty estimation, which can improve sample efficiency and robustness using active learning and risk-averse reinforcement learning (RL). Specifically, we use bootstrap aggregating (bagging) to train an ensemble of reward models differing in the initialization of their final layer. Ensembles have proved successful in prior applications of active learning, but we find that in our setting ensemble active learning does not outperform random sampling. Further experiments show that while the aggregate predictions are well-calibrated, the ensemble's estimated epistemic uncertainty is only weakly correlated with model error. We suspect this is because the ensemble members are fine-tuned from a single model and so are similar to one another. This suggests current pre-training methods will need to be modified to support uncertainty estimation, e.g. by training multiple language models.
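The core recipe is concise enough to sketch. The snippet below is a minimal illustration, not the authors' code: each ensemble member copies the same pretrained backbone, gets a freshly initialized scalar reward head, and is trained on a bootstrap resample of the preference comparisons; epistemic uncertainty is then read off as the spread of the members' reward predictions. The class names, the toy encoder, and all hyperparameters are illustrative assumptions.

```python
# Sketch: bagged reward-model ensemble with per-member reward heads,
# plus a disagreement-based epistemic uncertainty estimate.
# The encoder is a stand-in for a pretrained language model; names and
# hyperparameters are illustrative, not the paper's exact setup.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class RewardModel(nn.Module):
    def __init__(self, encoder: nn.Module, hidden_dim: int):
        super().__init__()
        self.encoder = encoder                 # copy of the pretrained backbone
        self.head = nn.Linear(hidden_dim, 1)   # freshly initialized scalar reward head

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(x)).squeeze(-1)  # one scalar reward per input


def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry-style comparison loss: the preferred completion should score higher.
    return -F.logsigmoid(r_chosen - r_rejected).mean()


def train_ensemble(pretrained_encoder, hidden_dim, chosen, rejected,
                   n_members=5, epochs=3, lr=1e-4):
    """Bagging: each member gets its own head init and a bootstrap resample."""
    members = []
    n = chosen.shape[0]
    for _ in range(n_members):
        model = RewardModel(copy.deepcopy(pretrained_encoder), hidden_dim)
        idx = torch.randint(0, n, (n,))  # bootstrap sample of the comparisons
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            loss = preference_loss(model(chosen[idx]), model(rejected[idx]))
            opt.zero_grad()
            loss.backward()
            opt.step()
        members.append(model)
    return members


@torch.no_grad()
def epistemic_uncertainty(members, x):
    """Std. dev. of member rewards; candidates with high spread would be queried first."""
    rewards = torch.stack([m(x) for m in members])  # (n_members, batch)
    return rewards.mean(0), rewards.std(0)


if __name__ == "__main__":
    # Toy stand-in for a pretrained LM: inputs are already pooled feature vectors.
    hidden_dim = 16
    encoder = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.Tanh())
    chosen = torch.randn(64, hidden_dim)    # features of preferred completions
    rejected = torch.randn(64, hidden_dim)  # features of rejected completions
    ensemble = train_ensemble(encoder, hidden_dim, chosen, rejected)
    mean_r, std_r = epistemic_uncertainty(ensemble, torch.randn(8, hidden_dim))
    print(mean_r, std_r)
```

In an active-learning loop, the standard deviation above would rank unlabeled comparison pairs for annotation. The paper's negative result is that this disagreement signal adds little over random sampling, plausibly because members fine-tuned from one shared pretrained model end up too similar for their spread to track true epistemic uncertainty.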
