Combining Reward Information from Multiple Sources

03/22/2021
by Dmitrii Krasheninnikov, et al.

Given two sources of evidence about a latent variable, one can combine the information from both by multiplying the likelihoods of each piece of evidence. However, when one or both of the observation models are misspecified, the distributions will conflict. We study this problem in the setting with two conflicting reward functions learned from different sources. In such a setting, we would like to retreat to a broader distribution over reward functions, in order to mitigate the effects of misspecification. We assume that an agent will maximize expected reward given this distribution over reward functions, and identify four desiderata for this setting. We propose a novel algorithm, Multitask Inverse Reward Design (MIRD), and compare it to a range of simple baselines. While all methods must trade off between conservatism and informativeness, through a combination of theory and empirical results on a toy environment, we find that MIRD and its variant MIRD-IF strike a good balance between the two.
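The combination-by-multiplying-likelihoods step the abstract starts from is standard Bayesian evidence aggregation. Below is a minimal sketch of that baseline and of one naive "broader distribution" fallback (a mixture of the single-source posteriors); it is not the MIRD algorithm itself. The discrete candidate reward set, the Gaussian observation model, and the 50/50 mixture weights are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Hypothetical discrete set of candidate reward functions,
# each a weight vector over 3 features (illustrative only).
candidate_rewards = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.5, 0.5, 0.0],
    [0.3, 0.3, 0.4],
])

def likelihood(observed_reward, candidates, noise=0.5):
    """Assumed Gaussian observation model: p(observation | candidate reward)."""
    sq_dist = np.sum((candidates - observed_reward) ** 2, axis=1)
    return np.exp(-sq_dist / (2.0 * noise ** 2))

# Two reward observations from different (possibly misspecified) sources.
obs_a = np.array([1.0, 0.0, 0.0])
obs_b = np.array([0.0, 1.0, 0.0])

# Standard Bayesian combination: multiply the likelihoods, then normalize.
posterior = likelihood(obs_a, candidate_rewards) * likelihood(obs_b, candidate_rewards)
posterior /= posterior.sum()

# One simple way to "retreat to a broader distribution" when the sources
# conflict: mix the two single-source posteriors instead of multiplying them.
post_a = likelihood(obs_a, candidate_rewards)
post_a /= post_a.sum()
post_b = likelihood(obs_b, candidate_rewards)
post_b /= post_b.sum()
broad = 0.5 * post_a + 0.5 * post_b

# The agent then maximizes expected reward under whichever distribution it uses;
# here we just compute the expected reward weights under the product posterior.
expected_weights = posterior @ candidate_rewards
print("product posterior:", posterior)
print("broader mixture:  ", broad)
print("expected weights: ", expected_weights)
```

Note how the product posterior can stay sharply peaked even when the two observations conflict, which is exactly the misspecification failure the abstract describes; the mixture is only one crude way to broaden the distribution, and the paper's MIRD, MIRD-IF, and baselines are more principled choices evaluated against the four desiderata.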

Related research

Inverse Reward Design (11/08/2017)
Autonomous agents optimize the reward function we give them. What they d...

Bayesian and Dempster-Shafer models for combining multiple sources of evidence in a fraud detection system (04/15/2021)
Combining evidence from different sources can be achieved with Bayesian ...

Invariance in Policy Optimisation and Partial Identifiability in Reward Learning (03/14/2022)
It's challenging to design reward functions for complex, real-world task...

Learning Independently-Obtainable Reward Functions (01/24/2019)
We present a novel method for learning a set of disentangled reward func...

A belief combination rule for a large number of sources (09/28/2018)
The theory of belief functions is widely used for data from multiple sou...

Information-Gathering in Latent Bandits (07/08/2022)
In the latent bandit problem, the learner has access to reward distribut...

Reward Collapse in Aligning Large Language Models (05/28/2023)
The extraordinary capabilities of large language models (LLMs) such as C...
