Combining Reward Information from Multiple Sources

Dmitrii Krasheninnikov, et al.
UC Berkeley

Given two sources of evidence about a latent variable, one can combine the information from both by multiplying the likelihoods of each piece of evidence. However, when one or both of the observation models are misspecified, the distributions will conflict. We study this problem in the setting with two conflicting reward functions learned from different sources. In such a setting, we would like to retreat to a broader distribution over reward functions, in order to mitigate the effects of misspecification. We assume that an agent will maximize expected reward given this distribution over reward functions, and identify four desiderata for this setting. We propose a novel algorithm, Multitask Inverse Reward Design (MIRD), and compare it to a range of simple baselines. While all methods must trade off between conservatism and informativeness, through a combination of theory and empirical results on a toy environment, we find that MIRD and its variant MIRD-IF strike a good balance between the two.



1 Introduction

While deep reinforcement learning (RL) has led to considerable success when given an accurate reward function (Silver et al., 2016; OpenAI, 2018), specifying a reward function that captures human preferences in real-world tasks is challenging (Amodei et al., 2016; Christiano et al., 2017). Value learning seeks to sidestep this difficulty by learning rewards from various types of data, such as natural language (Radlinski et al., 2019), demonstrations (Ziebart, 2010; Fu et al., 2018), comparisons (Christiano et al., 2017), ratings (Daniel et al., 2014), human reinforcement (MacGlashan et al., 2017), proxy rewards (Hadfield-Menell et al., 2017), the state of the world (Shah et al., 2019b), etc. Many approaches to value learning aim to create a distribution over reward functions (Hadfield-Menell et al., 2016, 2017; Ramachandran and Amir, 2007; Sadigh et al., 2017).

Figure 1: Alice says she wants avocado toast, but her past actions imply she prefers cake to toast. What should the agent believe about the true reward $\theta$?

A natural idea is to combine the information from these diverse sources. However, the sources may conflict. For example, in Figure 1, Alice might say that she wants the healthy avocado toast, but empirically she almost always chooses to eat the tasty cake. A reward learning method based on natural language may confidently infer that Alice would prefer the toast, whereas a method based on revealed preferences might confidently infer that she wants cake. Presumably one of the algorithms made a poor assumption: perhaps the natural language algorithm failed to model that Alice only said she wants avocado toast to impress her friends. Alternatively, maybe Alice wants to start eating healthy, but chooses the cake in moments of weakness, and the revealed preferences algorithm failed to model this bias. While it is unclear what the “true” reward is, we do know that at least one of the methods is confidently wrong. We would like to defer on the question of what the “true” reward is, and instead retreat to a position of uncertainty. Formally, we want a distribution over the parameters $\theta$ of the true reward function given the inferred rewards $\tilde{\theta}_1$ and $\tilde{\theta}_2$.

We could apply Bayes' rule: $P(\theta \mid \tilde{\theta}_1, \tilde{\theta}_2) \propto P(\tilde{\theta}_1 \mid \theta)\, P(\tilde{\theta}_2 \mid \theta)\, P(\theta)$, under the assumption that $\tilde{\theta}_1$ and $\tilde{\theta}_2$ are conditionally independent given $\theta$. However, when rewards conflict due to misspecification, for every $\theta$ one of the likelihoods will be very low, and the posterior will be low everywhere, likely leading to garbage after normalization (Frazier et al., 2017). While it is possible to mitigate this by making the assumptions more realistic, e.g. by modeling human biases (Shah et al., 2019a; Majumdar et al., 2017; Evans et al., 2016), it is very hard to get a perfect model (Armstrong and Mindermann, 2018; Steinhardt and Evans, 2017). So, we aim for a method that is robust to misspecification.
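To see this failure mode concretely, here is a minimal numeric sketch (with hypothetical Gaussian observation models, not any model from this paper) of what happens when two confident but conflicting likelihoods are multiplied:

```python
import math

def gauss(x, mu, sigma):
    # Gaussian density; stands in for a confident observation model P(obs | theta).
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Two observation models that confidently disagree about a scalar reward weight.
grid = [i / 100 for i in range(-300, 301)]
unnormalized = [gauss(t, -1.0, 0.1) * gauss(t, 1.0, 0.1) for t in grid]

# The product is vanishingly small everywhere: no theta makes both likelihoods
# large at once, so the normalized posterior is dominated by numerical noise.
print(max(unnormalized))  # on the order of 1e-43
```

Normalizing such a product amplifies whatever numerical noise survives, which is exactly the "garbage after normalization" failure described above.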

What makes for a good reward combination method? To answer this question, we must know how the resulting distribution will be used. As in Hadfield-Menell et al. (2016), we assume that an agent maximizes expected reward given its uncertainty, and can gather more information about the reward $\theta$. In this framework, an agent must balance three concerns: actively learning about the reward (Sadigh et al., 2017; Mindermann et al., 2019), preserving its ability to pursue the true reward in the future when it knows more (Turner et al., 2019; Krakovna et al., 2018), and acting to gain immediate reward. Each concern suggests different desiderata for a reward combination method, which we formalize for reward functions that are linear in features of the state.

There are several reasonable methods for combining reward functions based on the intuition that we should put weight on any parameter values “in between” the parameters of the given reward functions. These reward-space methods operate in the space of parameters of the reward. However, this ignores any details about the environment and its features. We introduce a method called Multitask Inverse Reward Design (MIRD) that leverages the environment to infer which reward functions are compatible with the behaviors “in between” the behaviors incentivized by the two input rewards, making it a behavior-space method. Through a combination of theory and empirical results on a simple environment, we evaluate how well the proposed methods meet our desiderata, and conclude that MIRD is a good option when the reward distribution must be used to act, while its variant MIRD-IF is better for increased robustness and a higher chance of support on the true reward.

2 Background

Markov Decision Process (MDP). An MDP is a tuple $M = \langle S, A, T, r, H \rangle$, where $S$ is the set of states, $A$ is the set of actions, $T(s' \mid s, a)$ is the transition probability function, $r$ is the reward function, and $H$ is the finite planning horizon. We consider MDPs where the reward is linear in features, and does not depend on action: $r(s; \theta) = \theta^\top f(s)$, where $\theta$ are the parameters defining the reward function and $f$ computes features of a given state. A policy $\pi(a \mid s)$ specifies how to act in the MDP. A trajectory $\tau = (s_1, a_1, \dots, s_H)$ is a sequence of states and actions, where the actions are sampled from a policy $\pi$ and the states are sampled from the transition function $T$. We abuse notation and write $f(\tau)$ to denote $\sum_{t=1}^{H} f(s_t)$. The feature expectations (FE) of policy $\pi$ are the expected feature counts when acting according to $\pi$: $F(\pi) = \mathbb{E}_{\tau \sim \pi}[f(\tau)]$. For conciseness we denote the feature expectations arising from optimizing reward $\theta$ as $F_{\theta}$.
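Feature expectations can be estimated by simple Monte Carlo rollouts. The sketch below is purely illustrative (the tiny environment, `stay_policy`, and all names are hypothetical):

```python
import random

def feature_expectations(policy, transition, features, start, horizon, n_rollouts=500, seed=0):
    """Monte Carlo estimate of F(pi) = E_{tau ~ pi}[ sum_t f(s_t) ]."""
    rng = random.Random(seed)
    d = len(features[start])
    total = [0.0] * d
    for _ in range(n_rollouts):
        s = start
        for _ in range(horizon):
            for i in range(d):
                total[i] += features[s][i]
            a = policy(s, rng)
            next_states = transition[s][a]
            s = rng.choices(list(next_states), weights=list(next_states.values()))[0]
    return [x / n_rollouts for x in total]

# Tiny deterministic chain: staying in state 0 accumulates the first feature.
features = {0: [1.0, 0.0], 1: [0.0, 1.0]}
transition = {0: {"stay": {0: 1.0}, "go": {1: 1.0}},
              1: {"stay": {1: 1.0}, "go": {1: 1.0}}}
stay_policy = lambda s, rng: "stay"
print(feature_expectations(stay_policy, transition, features, start=0, horizon=3))  # [3.0, 0.0]
```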

Inverse Reinforcement Learning (IRL). In IRL the goal is to infer the reward parameters $\theta$ given an MDP without reward and expert demonstrations $\mathcal{D} = \{\tau_1, \dots, \tau_n\}$.

Maximum causal entropy inverse RL (MCEIRL). As human demonstrations are rarely optimal, Ziebart (2010) models the expert as a noisily rational agent that acts close to randomly when the difference in the actions’ expected returns is small, but nearly always chooses the best action when it leads to a much higher expected return. Formally, MCEIRL models the expert as using soft value iteration to optimize its reward function: $\pi(a \mid s) = \exp\big(\beta\,(Q(s, a) - V(s))\big)$, where $\beta$ is the "rationality" parameter, and $V(s) = \frac{1}{\beta} \log \sum_{a} \exp\big(\beta\, Q(s, a)\big)$ plays the role of a normalizing constant. The state-action value function is computed as $Q(s, a) = r(s) + \mathbb{E}_{s' \sim T(\cdot \mid s, a)}[V(s')]$.
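A compact sketch of finite-horizon soft value iteration for a tabular, state-based reward (all names hypothetical; the backup follows the definitions above):

```python
import math

def soft_value_iteration(states, actions, T, r, horizon, beta):
    """Soft value iteration: Q(s,a) = r(s) + E_{s'~T}[V(s')],
    V(s) = (1/beta) log sum_a exp(beta Q(s,a)), pi(a|s) = exp(beta (Q - V))."""
    V = {s: 0.0 for s in states}
    for _ in range(horizon):
        Q = {s: {a: r[s] + sum(p * V[s2] for s2, p in T[s][a].items())
                 for a in actions} for s in states}
        V = {s: (1 / beta) * math.log(sum(math.exp(beta * q) for q in Q[s].values()))
             for s in states}
    return {s: {a: math.exp(beta * (Q[s][a] - V[s])) for a in actions} for s in states}

# Two states; action "b" leads to the rewarding state 1.
T = {0: {"a": {0: 1.0}, "b": {1: 1.0}}, 1: {"a": {1: 1.0}, "b": {1: 1.0}}}
pi = soft_value_iteration([0, 1], ["a", "b"], T, r={0: 0.0, 1: 1.0}, horizon=5, beta=5.0)
print(pi[0]["b"] > pi[0]["a"])  # the near-optimal action dominates: True
```

With large `beta` the policy approaches the greedy optimal policy; with small `beta` it approaches uniform random, matching the noisy-rationality model.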

Reward learning by simulating the past (RLSP).  Shah et al. (2019b) note that environments where humans have acted are already optimized for human preferences, and so contain reward information. The RLSP algorithm considers the current state $s_T$ as the final state in a human trajectory generated by the human with preferences $\theta$. The RLSP likelihood $P(s_T \mid \theta)$ is obtained by marginalizing out the earlier states and actions of the trajectory, since only $s_T$ is actually observed.

Inverse Reward Design (IRD).  Inverse reward design (IRD) notes that since reward designers often produce rewards by checking what the reward does in the training environment using an iterative trial-and-error process, the final reward they produce is likely to produce good behavior in the environments that they used during the reward design process. So, the specified proxy reward $\tilde{\theta}$ need not be identical to the true reward $\theta^*$, but merely provides evidence about it. Formally, IRD assumes that the probability of specifying a given $\tilde{\theta}$ is proportional to the exponentiated expected true return in the training environment $\tilde{M}$: $P(\tilde{\theta} \mid \theta^*, \tilde{M}) \propto \exp\big(\beta\, \mathbb{E}_{\tau \sim \pi_{\tilde{\theta}}}[\theta^{*\top} f(\tau)]\big)$. IRD inverts this generative model of reward design to sample from the posterior of the true reward, $P(\theta^* \mid \tilde{\theta}, \tilde{M})$.
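The shape of this likelihood can be sketched over a discrete candidate set of true rewards. The toy version below ignores IRD's normalizing constant over possible proxies, which is the computationally hard part of the real method; all names are illustrative:

```python
import math

def ird_posterior(candidates, F_proxy, beta=1.0):
    """Unnormalized-over-proxies sketch: weight each candidate true reward
    theta* by exp(beta * theta* . F(pi_proxy)), then normalize over candidates."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    weights = [math.exp(beta * dot(th, F_proxy)) for th in candidates]
    Z = sum(weights)
    return [w / Z for w in weights]

# The proxy's behavior mostly collects the first feature, so candidate true
# rewards that value the first feature get more posterior mass.
post = ird_posterior(candidates=[(1.0, 0.0), (0.0, 1.0)], F_proxy=(3.0, 0.5))
print(post[0] > post[1])  # True
```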

3 Uses of the reward distribution

So far, we have argued that we would like to combine the information from two different sources into a single distribution over reward functions. But what makes such a distribution good? We take a pragmatic approach: we identify downstream tasks for such a distribution, find desiderata for a reward distribution for these tasks, and use these desiderata to guide our search for methods.

3.1 Optimizing the reward

We follow the framework of Hadfield-Menell et al. (2016) in which an agent maximizes expected return under reward uncertainty. In this setting, maximizing the return causes the agent to use its actions both to obtain (expected) reward, and to gather information about the true reward in order to better obtain reward in the future (Hadfield-Menell et al., 2016; Woodward et al., 2019). We might expect that an agent could first learn the reward and then act; however, this is unrealistic: we would be unhappy with a household robot that had to learn how we want walls to be painted before it would bake a cake. The agent must act given reward uncertainty, but maintain its ability to optimize any of the potential rewards in the future when it will know more about the reward. Put simply, it needs to preserve its option value while still obtaining as much reward as it can currently get.

We would like to evaluate our reward distribution by plugging it into an algorithm that maximizes expected reward given reward uncertainty, but unfortunately current methods (Hadfield-Menell et al., 2016; Woodward et al., 2019) are computationally expensive and only work in environments where option value preservation is not a concern. So, we analyze reward learning and option value preservation separately (Figure 2).

Figure 2: In our setting the agent maximizes expected return under reward uncertainty, and can gather more information about the reward. This leads to balancing three considerations: maximizing the reward, learning about the true reward, and preserving option value.

Active reward learning. An agent that learns rewards while acting will need to select queries with care. Active reward learning aims to pick queries that maximize the information gained about the reward, or minimize the expected regret (Sadigh et al., 2017; Mindermann et al., 2019). (We speculate that future active reward learning methods will also have to ensure that the queries are relevant and timely: in our household robot example, this would prevent the robot from asking about wall paint color when it is meant to be baking a cake.) We analyze how the reward distributions resulting from different reward combination methods fit as priors for the active reward learning methods.

Option-value preservation.   Low impact agents (Turner et al., 2019; Krakovna et al., 2018; Armstrong and Levinstein, 2017) take fewer irreversible actions, and so preserve their ability to optimize multiple reward functions. However, work in this area does not consider the possibility of learning more information about the reward function, and so these methods penalize entire plans that would have a high impact. In contrast, we would like our agent to pursue impactful plans, but pause when option value would be destroyed, and get more reward information.

We start with the attainable utility preservation (AUP) method introduced by Turner et al. (2019). AUP formalizes the impact of an action $a$ on a reward function $\theta$ as the change in the Q-value $Q_{\theta}$ as a result of taking that action (relative to a no-op action $\varnothing$). For our purposes, we only care about cases where we are no longer able to optimize the reward function, as opposed to ones where we are better able to optimize the reward function. So, we use the truncated difference summary function proposed by Krakovna et al. (2018) that only counts decreases in Q-values and not increases:

$$\text{Penalty}(s, a) = \mathbb{E}_{\theta \sim P(\theta)}\big[\max\big(Q_{\theta}(s, \varnothing) - Q_{\theta}(s, a),\; 0\big)\big]$$
Turner et al. (2019) penalize the agent whenever it would cause too much impact, using a hyperparameter $\lambda$ to trade off task reward with impact. They plan using the reward $r_{\text{AUP}}(s, a) = r(s) - \lambda\, \text{Penalty}(s, a)$. However, by penalizing impact in the reward function, the agent is penalized for any plan that would cause impact, whereas we want our agent to start on potentially impactful plans, but stop once a particular action would destroy option value. So, we only penalize impact during action selection:

$$\pi(s) = \arg\max_{a} \big( Q_{r}(s, a) - \lambda\, \text{Penalty}(s, a) \big)$$
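An action-selection sketch of this modification, with hand-made Q-tables standing in for planned values (the state, actions, and numbers are all hypothetical):

```python
def truncated_penalty(Q_set, s, a, noop="noop"):
    """Impact of a on auxiliary rewards: only count decreases in attainable
    Q-value relative to the no-op (the truncated difference of Krakovna et al.)."""
    return sum(max(Q[s][noop] - Q[s][a], 0.0) for Q in Q_set) / len(Q_set)

def select_action(Q_task, Q_set, s, actions, lam):
    """Penalize impact at action-selection time only, so the agent can start
    impactful plans and pause right before an option-destroying step."""
    return max(actions, key=lambda a: Q_task[s][a] - lam * truncated_penalty(Q_set, s, a))

# "smash" gains slightly more task reward but destroys an auxiliary option;
# "step" makes progress while keeping the option intact.
Q_task = {"s": {"noop": 0.0, "step": 1.0, "smash": 1.2}}
Q_aux = [{"s": {"noop": 5.0, "step": 5.0, "smash": 0.0}}]
print(select_action(Q_task, Q_aux, "s", ["noop", "step", "smash"], lam=1.0))  # step
```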
3.2 Acting in a new environment

Often, we wish to use the reward function in the same environment as the one in which it was specified or inferred. For example, a human expert may have demonstrated how to bake a cake in a particular kitchen. Since we know at training time that the agent will be deployed in the same kitchen, the only guarantee we need is that the agent’s behavior is correct. If two rewards lead to the same behavior in this kitchen, we don’t care which one we end up using.

However, we may instead want to apply a reward function learned in one environment to a different environment. In this case, there may be new options in the test environment that we never saw during training. As a result, even if two rewards led to identical behavior in the training environment, they need not do so in the test environment. Hence when the goal is to transfer the reward distribution to a new environment, it is crucial to ensure that the distribution contains the true reward, instead of just containing a reward that gives rise to the same behavior as the true one in the training environment.

4 Desiderata for the reward posterior

We discuss properties of the posterior that inform the choice of a reward combination algorithm for different applications of the resulting reward distribution. We focus on the two applications identified above: option-value preserving planning and active reward learning.

4.1 Robust to plausible modes of misspecification

All else equal, we want the true reward to have high probability in the reward distribution. Below we introduce two plausible ways of misspecifying reward functions linear in features of the state, and formulate two desiderata that give a degree of robustness against these types of misspecification.

Independent per-feature corruption. One model for misspecification is that the weight for each feature has some chance of being “corrupted”, and the chance of a feature being corrupted in one input reward is independent from that of the other reward. If the fraction of misspecified values is small, then when the two reward vectors' values differ at a particular index, it is likely that one of the two values is correct. Given this, it is desirable for the reward distribution to assign significant probability to every reward function such that the weight for each feature takes one of that feature's weights in the input rewards. Formally, we denote the set of such reward parameters $\Theta_{\text{ind}} = \{\theta : \forall i,\ \theta_i \in \{\tilde{\theta}_{1,i}, \tilde{\theta}_{2,i}\}\}$.

Desideratum 1 (Support on $\Theta_{\text{ind}}$).

All $\theta \in \Theta_{\text{ind}}$ should be in the support of the reward posterior $P(\theta \mid \tilde{\theta}_1, \tilde{\theta}_2)$.

Note that when we do not need to transfer the reward function to a new environment, Desideratum 1 can be made weaker. We only require that the distribution have support on rewards that give rise to the same behavior as the rewards in $\Theta_{\text{ind}}$, and not necessarily on $\Theta_{\text{ind}}$ itself.

Des. 1 is helpful for both option value preservation and active reward learning. If several features are in fact corrupted independently, having significant probability mass on rewards in $\Theta_{\text{ind}}$ ensures that the agent preserves its ability to optimize them, and can easily learn the true reward with active learning.
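Enumerating this set of per-feature mixtures of the two input rewards is straightforward; a minimal sketch (the helper name and example vectors are illustrative):

```python
from itertools import product

def theta_ind(theta1, theta2):
    """All reward vectors whose i-th weight is taken from either input reward."""
    return sorted(set(product(*zip(theta1, theta2))))

combos = theta_ind((1.0, 0.0, 0.5), (1.0, 1.0, -0.5))
print(len(combos))  # first weights agree, so 1 * 2 * 2 = 4 distinct vectors
```

With $d$ features the set has up to $2^d$ elements, which is why methods with support on it trade off against informativeness.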

Misspecified relative importance of different behaviors. Given a reward vector $\theta$, we define the tradeoff point between the $i$-th and $j$-th features of the state as the ratio between the corresponding components of $\theta$: $\theta_i / \theta_j$. The true reward’s tradeoff point may be in between the tradeoff points of the input rewards, and so we want our distribution to support all such tradeoff points.

Desideratum 2 (Support on intermediate feature tradeoffs).

The reward posterior should have support on rewards that prescribe all intermediate tradeoff points between the states’ features.

In practice tradeoff points only affect the behavior to the extent allowed by the environment. We only need to maintain all intermediate tradeoff points if we want to transfer to a new environment.

Des. 2 is especially important for active reward learning. Consider a scenario where a personal assistant has to book a flight for its owner Alice, and chooses between options with different costs and durations. The available flights are ($1, 9h), ($2, 5h), ($6, 2h) and ($9, 1h). Alice said she wants the shortest flight, giving rise to a reward $\tilde{\theta}_1$ that heavily weights short duration. In addition, the assistant observed that previously Alice usually preferred cheaper flights, resulting in a second reward $\tilde{\theta}_2$ that heavily weights low cost. Here the assistant is uncertain about Alice’s true preference about the tradeoff between cost and time, and so it is sensible to have probability mass on reward functions that would result in choosing any one of the flights, and to clarify this with Alice before making the decision.
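The flight example can be checked directly: sweeping the cost/time tradeoff point selects every one of the four flights, so a posterior that covers intermediate tradeoffs keeps all options live. A small sketch using the numbers from the example (the weight sweep itself is illustrative):

```python
# Flights as (cost in $, duration in h), from the example above.
flights = [(1, 9), (2, 5), (6, 2), (9, 1)]

def best_flight(theta_cost, theta_time):
    # Reward linear in features: theta_cost * cost + theta_time * duration.
    return max(flights, key=lambda f: theta_cost * f[0] + theta_time * f[1])

# Sweeping the tradeoff theta_cost / theta_time selects every flight:
choices = {best_flight(-w, -(1 - w)) for w in [0.05, 0.35, 0.65, 0.95]}
print(len(choices))  # 4: each weighting picks a different flight
```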

4.2 Informative about desirable behavior

In order for the agent to act, the reward distribution must provide information about useful behavior:

Desideratum 3 (Informative about desirable behavior).

If and give rise to the same behavior, most of the reward functions in the distribution should give rise to this behavior.

This desideratum can be formalized in different degrees of strength. The strong version states that if the FEs of $\tilde{\theta}_1$ and $\tilde{\theta}_2$ are equal on some feature $i$, the FEs on this feature of any reward in the support of the posterior equal those of $\tilde{\theta}_1$:

$$F^{(i)}_{\tilde{\theta}_1} = F^{(i)}_{\tilde{\theta}_2} \implies \forall \theta \in \operatorname{supp}(P),\ F^{(i)}_{\theta} = F^{(i)}_{\tilde{\theta}_1}$$
The medium version considers the FEs as a whole instead of per-feature:

$$F_{\tilde{\theta}_1} = F_{\tilde{\theta}_2} \implies \forall \theta \in \operatorname{supp}(P),\ F_{\theta} = F_{\tilde{\theta}_1}$$
The weak version assumes that the FEs of all rewards in $\Theta_{\text{ind}}$ are the same, rather than just those of $\tilde{\theta}_1$ and $\tilde{\theta}_2$:

$$\forall \theta', \theta'' \in \Theta_{\text{ind}},\ F_{\theta'} = F_{\theta''} \implies \forall \theta \in \operatorname{supp}(P),\ F_{\theta} = F_{\tilde{\theta}_1}$$
Des. 3 allows option-value preserving planning to do useful things instead of performing the no-op action all the time: more certainty about the desirable behavior allows it to care less about preserving the expected return attainable by behaviors that are not desirable. The strong version of Des. 3 is especially useful, as it is easy to imagine scenarios where the two input rewards agree on some but not all aspects of the desired behavior. On the other hand, the medium and weak versions of Des. 3 require the behaviors arising from the input rewards to be exactly the same for all features – a circumstance we expect to be uncommon in real situations.

Furthermore, by restricting possible behaviors, Des. 3 simplifies the job of approaches that actively learn the reward function. If the reward distribution used by our household robot always incentivizes using soap while washing dishes (because both input rewards did so), the robot’s active learning algorithm does not need to query the human about whether or not to use soap.

Des. 3 might be problematic if both input rewards agree on an undesirable behavior w.r.t. a particular feature. This presents a natural tradeoff between informativeness and the abilities to preserve option value and actively learn arbitrary reward functions: the more informative we assume the input rewards are about the desired behavior, the less robust we are to their misspecification.

4.3 Behavior-space balanced

Suppose we would like to preserve the option value of pursuing two different behaviors. Option-value preservation is more robust when the distribution over FEs induced by the reward distribution places significant and approximately equal weight on the two behaviors. Intuitively, the “weight” AUP puts on preserving expected return attainable by a particular behavior is proportional to the probability of that behavior in the reward distribution. So we would like the FEs of the input rewards to be balanced across the reward distribution. Concretely:

Desideratum 4 (Behavior-space balance).

The FEs of the rewards sampled from the distribution should correspond to behaviors arising from the two input reward functions with similar frequency:

$$P_{\theta \sim P}\big(F_{\theta} = F_{\tilde{\theta}_1}\big) \approx P_{\theta \sim P}\big(F_{\theta} = F_{\tilde{\theta}_2}\big)$$
5 Multitask Inverse Reward Design

Inspired by Inverse Reward Design (Hadfield-Menell et al., 2017), we introduce a reward combination method, Multitask Inverse Reward Design (MIRD). We assume that the two input rewards meet the IRD assumption for two different rewards, namely that the input rewards generate nearly optimal behavior in the training environment according to the corresponding task rewards. A straightforward way to construct the distribution for the combined reward function is to require the FEs of the rewards in the distribution to be convex combinations of the FEs of the input rewards:

$$\forall \theta \in \operatorname{supp}(P),\ \exists w \in [0, 1]:\ F_{\theta} = w F_{\tilde{\theta}_1} + (1 - w) F_{\tilde{\theta}_2}$$

To satisfy this requirement, we propose a distribution that depends on the behavior (a set of trajectories $\mathcal{D}$) resulting from agents optimizing $\tilde{\theta}_1$ and $\tilde{\theta}_2$:

$$P(\theta \mid \tilde{\theta}_1, \tilde{\theta}_2) = \sum_{\mathcal{D}} P(\theta \mid \mathcal{D})\, P(\mathcal{D} \mid \tilde{\theta}_1, \tilde{\theta}_2)$$
Each of the reward functions used to generate the trajectories in $\mathcal{D}$ is sampled from a Bernoulli mixture over the input reward functions. We introduce a random variable $w \sim \mathrm{Beta}(\alpha_1, \alpha_2)$ that determines the probability of sampling $\tilde{\theta}_1$ to generate trajectory $\tau_j$:

$$P(\theta_j = \tilde{\theta}_1 \mid w) = w, \qquad P(\theta_j = \tilde{\theta}_2 \mid w) = 1 - w$$

We sample $\tau_j$ from $\theta_j$ using policies arising from soft value iteration. For $P(\theta \mid \mathcal{D})$, we set $\theta$ to the reward learned by Maximum Causal Entropy Inverse RL (MCEIRL) (Ziebart, 2010). Denoting the Dirac delta as $\delta$, we have $P(\theta \mid \mathcal{D}) = \delta\big(\theta - \mathrm{MCEIRL}(\mathcal{D})\big)$. Having defined all relevant conditional distributions, we can sample from the joint $P(\theta, \mathcal{D}, w)$ and thus the marginal $P(\theta \mid \tilde{\theta}_1, \tilde{\theta}_2)$.

In summary, to generate a sample from $P(\theta \mid \tilde{\theta}_1, \tilde{\theta}_2)$, we (1) simulate $\mathcal{D}$, a set of trajectories in which a fraction $w$ of the trajectories arise from $\tilde{\theta}_1$ and the rest from $\tilde{\theta}_2$, and (2) run MCEIRL on $\mathcal{D}$.
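The generative half of this procedure can be sketched as follows, with the MCEIRL step omitted and `rollout` a stand-in for sampling a trajectory via soft value iteration (all names hypothetical):

```python
import random

def sample_mird_dataset(theta1, theta2, n_traj, rollout, alpha=1.0, rng=None):
    """One MIRD sample of the trajectory set D: draw w ~ Beta(alpha, alpha),
    pick each trajectory's generating reward from the Bernoulli mixture,
    then roll it out. A full implementation would follow with MCEIRL on D."""
    rng = rng or random.Random(0)
    w = rng.betavariate(alpha, alpha)
    D = []
    for _ in range(n_traj):
        theta = theta1 if rng.random() < w else theta2
        D.append(rollout(theta, rng))
    return w, D

# Stub rollout that simply tags which reward produced the trajectory.
w, D = sample_mird_dataset((1, 0), (0, 1), n_traj=10, rollout=lambda th, rng: th)
print(len(D))  # 10
```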

Theorem 1.

Given the rationality parameter $\beta$ used in MCEIRL, soft value iteration with rationality $\beta$ applied to the rewards in the support of the MIRD posterior results in FEs that are convex combinations of the FEs of the input rewards $\tilde{\theta}_1$ and $\tilde{\theta}_2$:

$$\forall \theta \in \operatorname{supp}(P),\ \exists w \in [0, 1]:\ F_{\theta} = w F_{\tilde{\theta}_1} + (1 - w) F_{\tilde{\theta}_2}$$
Proof. See Appendix A for the full proof. Key to the proof is the fact that MCEIRL finds $\theta$ such that its FEs match those of the soft value iteration expert who generated the trajectories (Ziebart, 2010). In our case, the FEs of $\theta$ match those of $\mathcal{D}$, which contains trajectories arising from $\tilde{\theta}_1$ and $\tilde{\theta}_2$ in various proportions.

While Theorem 1 considers soft value iteration, note that this can be made arbitrarily close to regular value iteration by making the rationality parameter arbitrarily large.

Corollary 1.1.

The average FEs of the reward functions in the posterior are:

$$\mathbb{E}_{\theta \sim P}[F_{\theta}] = \mathbb{E}[w]\, F_{\tilde{\theta}_1} + (1 - \mathbb{E}[w])\, F_{\tilde{\theta}_2}$$
Corollary 1.2.

Theorem 1 implies that if the FEs of $\tilde{\theta}_1$ and $\tilde{\theta}_2$ w.r.t. a given feature are equal, then the FEs for that feature of all rewards in the distribution equal the FEs of $\tilde{\theta}_1$ and $\tilde{\theta}_2$ for that feature. Thus MIRD satisfies the strong version of Des. 3 (informative about desirable behavior).

Corollary 1.3.

When $\mathbb{E}[w] = \tfrac{1}{2}$, the probability of sampling $\tilde{\theta}_1$ equals the probability of sampling $\tilde{\theta}_2$, and hence the probabilities of observing the behaviors $F_{\tilde{\theta}_1}$ and $F_{\tilde{\theta}_2}$ when optimizing a reward function from the posterior are equal. Hence Des. 4 (behavior balance) is satisfied.

Theorem 1 allows us to bound the expected return $\theta^{*\top} F_{\theta}$ of the true reward $\theta^*$ in terms of the expected true returns of $\tilde{\theta}_1$ and $\tilde{\theta}_2$:

Theorem 2.

The minimum expected true return achieved by optimizing any of the rewards in the MIRD posterior with soft value iteration with rationality $\beta$ equals the minimum true return from optimizing one of the input rewards:

$$\min_{\theta \in \operatorname{supp}(P)} \theta^{*\top} F_{\theta} = \min_{i \in \{1, 2\}} \theta^{*\top} F_{\tilde{\theta}_i}$$
Proof. See Appendix B.

Theorem 2 is the best possible regret bound for following rewards in a distribution resulting from reward combination: the worst the agent can do when following a reward from the distribution is no worse than when just randomly choosing to follow one of the input reward functions.

Generally, the FEs arising from rewards in $\Theta_{\text{ind}}$ are not convex combinations of $F_{\tilde{\theta}_1}$ and $F_{\tilde{\theta}_2}$, so MIRD does not satisfy Des. 1 (support on $\Theta_{\text{ind}}$). When $\tilde{\theta}_1$ and $\tilde{\theta}_2$ conflict on more than one feature, Des. 2 (support on intermediate feature tradeoffs) is not satisfied either.

Independent features formulation. Since both Des. 1 and Des. 2 aim to ensure robustness to reward misspecification, one of our primary goals, we would like a variant of MIRD that satisfies them. A simple modification of MIRD supports rewards that give rise to all potential behaviors arising from reward functions in $\Theta_{\text{ind}}$. The only change required to the original formulation of MIRD is sampling the reward vectors giving rise to the trajectories in $\mathcal{D}$ from $\Theta_{\text{ind}}$ instead of just $\{\tilde{\theta}_1, \tilde{\theta}_2\}$. Now $w$ is a vector of dimensionality $|\Theta_{\text{ind}}|$, sampled from $\mathrm{Dirichlet}(\alpha)$. Each $\theta_j$ is sampled from a Multinoulli distribution over $\Theta_{\text{ind}}$, parameterized by $w$. We refer to this version of MIRD as MIRD independent features, or MIRD-IF.
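The modified sampling step can be sketched as follows (a hypothetical helper; the MCEIRL step that would follow on the resulting trajectories is omitted, and the Dirichlet draw is built from Gamma variates):

```python
import random
from itertools import product

def sample_mird_if_rewards(theta1, theta2, n_traj, alpha=1.0, rng=None):
    """MIRD-IF sketch: per-trajectory rewards drawn from a Multinoulli over
    the per-feature mixture set, with weights w ~ Dirichlet(alpha, ..., alpha)."""
    rng = rng or random.Random(0)
    support = list(product(*zip(theta1, theta2)))  # up to 2^d mixture vectors
    gammas = [rng.gammavariate(alpha, 1.0) for _ in support]
    w = [g / sum(gammas) for g in gammas]  # normalized Gammas ~ Dirichlet
    return [rng.choices(support, weights=w)[0] for _ in range(n_traj)]

rewards = sample_mird_if_rewards((1.0, 0.0), (0.0, 1.0), n_traj=5)
print(len(rewards))  # 5
```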

Table 1: Performance of reward combination algorithms on the desiderata outlined in Section 4 (informative about desirable behavior in its strong, medium, and weak versions; support on $\Theta_{\text{ind}}$; support on intermediate feature tradeoffs).

6 Analysis

We first introduce several baselines each of which does not meet some of the outlined desiderata, and then analyze all reward combination methods in the context of our modified version of AUP.

6.1 Reward-space baselines

Additive. The additive reward combination method is commonly used in previous work utilizing reward combination (Kolter and Ng, 2009; Desai et al., 2018; Shah et al., 2019b). The reward posterior is simply a point mass on a weighted sum of the inputs, $P(\theta) = \delta\big(\theta - (\lambda \tilde{\theta}_1 + (1 - \lambda) \tilde{\theta}_2)\big)$, where $\lambda$ is a hyperparameter used to balance the two input reward functions.

Gaussian. Here, we use the standard Bayesian framework, and assume that $\tilde{\theta}_1$ and $\tilde{\theta}_2$ are generated from $\theta$ with Gaussian noise:

$$P(\tilde{\theta}_i \mid \theta) = \mathcal{N}\big(\tilde{\theta}_i;\ \theta,\ \sigma_i^2 I\big)$$

where $I$ is the identity matrix, $\mathcal{N}$ is the multivariate Gaussian probability density function, and $\sigma_1, \sigma_2$ are hyperparameters controlling the standard deviations. One can express higher trust in a given input reward by lowering its $\sigma_i$. For our experiments, we use an uninformative (flat) Gaussian prior and set $\sigma_1 = \sigma_2 = \sigma$. The posterior is then $P(\theta \mid \tilde{\theta}_1, \tilde{\theta}_2) = \mathcal{N}\big(\theta;\ \tfrac{1}{2}(\tilde{\theta}_1 + \tilde{\theta}_2),\ \tfrac{\sigma^2}{2} I\big)$.
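With a flat prior and equal noise scales, the posterior mean is just the per-feature average of the inputs; a one-line sketch:

```python
def gaussian_posterior(theta1, theta2):
    """Posterior mean under a flat prior and equal noise scales sigma on both
    observations; the posterior covariance would be (sigma^2 / 2) * I."""
    return [(a + b) / 2 for a, b in zip(theta1, theta2)]

print(gaussian_posterior([1.0, 0.0], [0.0, 1.0]))  # [0.5, 0.5]
```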

Convex combinations of input reward vectors (CC-in). Consider a scenario where the input reward functions are correctly specified for two different tasks. A natural reward-space analogue to MIRD is to consider a distribution over all convex combinations of the input reward vectors:

$$\theta = w\, \tilde{\theta}_1 + (1 - w)\, \tilde{\theta}_2$$

As before, the weight $w$ is sampled from $\mathrm{Beta}(\alpha_1, \alpha_2)$.
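Sampling from CC-in is a one-liner; a sketch assuming a Beta-distributed weight as in MIRD:

```python
import random

def sample_cc_in(theta1, theta2, alpha1=1.0, alpha2=1.0, rng=None):
    """Draw one reward from the CC-in posterior: a Beta-weighted
    convex combination of the two input reward vectors."""
    rng = rng or random.Random(0)
    w = rng.betavariate(alpha1, alpha2)
    return [w * a + (1 - w) * b for a, b in zip(theta1, theta2)]

theta = sample_cc_in([1.0, 0.0], [0.0, 1.0])
print(abs(sum(theta) - 1.0) < 1e-9)  # components of this convex combo sum to 1
```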

Theorem 3.

If $F_{\tilde{\theta}_1} = F_{\tilde{\theta}_2}$, the FEs of any reward in the support of the CC-in posterior equal $F_{\tilde{\theta}_1}$. Hence CC-in satisfies the medium version of Des. 3 (informative about desirable behavior).

Proof. See Appendix C.

We provide an example showing that CC-in does not satisfy the strong version of Des. 3 in Appendix E. Furthermore, CC-in does not satisfy Des. 1 (support on $\Theta_{\text{ind}}$): for instance, when combining rewards $\tilde{\theta}_1 = (1, 0)$ and $\tilde{\theta}_2 = (0, 1)$, the vector $(1, 1)$ is in $\Theta_{\text{ind}}$, but it is not a convex combination of the input rewards.

Convex combinations of $\Theta_{\text{ind}}$ (CC-$\Theta_{\text{ind}}$). Instead of only considering convex combinations of the input rewards, we consider convex combinations of all of $\Theta_{\text{ind}}$:

$$\theta = \sum_{k} w_k\, \theta^{(k)}, \qquad \theta^{(k)} \in \Theta_{\text{ind}}$$

The vector of weights $w$ for each convex combination is sampled from the Dirichlet distribution: $w \sim \mathrm{Dirichlet}(\alpha)$.
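A sampling sketch (hypothetical helper; the Dirichlet draw is built from normalized Gamma variates):

```python
import random
from itertools import product

def sample_cc_theta_ind(theta1, theta2, alpha=1.0, rng=None):
    """Draw one reward: a Dirichlet-weighted convex combination of every
    per-feature mixture of the two input reward vectors."""
    rng = rng or random.Random(0)
    support = list(product(*zip(theta1, theta2)))  # the per-feature mixture set
    gammas = [rng.gammavariate(alpha, 1.0) for _ in support]
    w = [g / sum(gammas) for g in gammas]  # normalized Gammas ~ Dirichlet
    d = len(theta1)
    return [sum(wk * th[i] for wk, th in zip(w, support)) for i in range(d)]

theta = sample_cc_theta_ind([1.0, 0.0], [0.0, 1.0])
print(all(0.0 <= x <= 1.0 for x in theta))  # convex combo of {0,1} weights: True
```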

Theorem 4.

If the FEs of all rewards in $\Theta_{\text{ind}}$ are equal, the FEs of any reward in the CC-$\Theta_{\text{ind}}$ posterior equal the FEs of the rewards in $\Theta_{\text{ind}}$. Hence CC-$\Theta_{\text{ind}}$ satisfies Des. 3 (weak).

Proof. See Appendix D.

Uniform points. Note that as $\alpha \to 0$, the Dirichlet distribution concentrates its mass on the vertices of the simplex, and CC-$\Theta_{\text{ind}}$ essentially becomes a uniform distribution over the points of $\Theta_{\text{ind}}$. It is helpful to analyze the properties of this uniform points distribution separately.

Table 1 summarizes the properties of MIRD, MIRD-IF, and the reward-space baselines.

6.2 Analysis on a toy environment

To demonstrate the use of the desiderata and each method’s performance on them, we developed a toy environment for testing reward combination methods. Due to space limitations, we defer the full qualitative and quantitative explanations to Appendix F and report the broad results here.

Setup. We use the setting of AUP planning to demonstrate the importance of the support on $\Theta_{\text{ind}}$, informativeness about the desired behavior, and behavior balance desiderata for option value preservation. We do not evaluate Des. 2 (support on intermediate feature tradeoffs), as it is not helpful for preserving option value: it is primarily useful for active learning, which we do not test. We obtain the reward posterior by combining the input rewards using each of our seven reward combination methods, and evaluate the behavior of AUP planning w.r.t. each posterior over five seeds. Our implementation of AUP uses state-action value functions computed with traditional value iteration; the impact penalty weight $\lambda$ is set to 1 unless specified otherwise. We use the same concentration parameters for the Beta and the Dirichlet distributions used in CC-in, CC-$\Theta_{\text{ind}}$, MIRD, and MIRD-IF. The input rewards are either both specified by hand, or $\tilde{\theta}_1$ is learned with RLSP while $\tilde{\theta}_2$ is hand-specified. We use the Cooking environment described in Figure 3.

Figure 3: The Cooking environment. Left: the gridworld layout and the current state. The states’ features indicate the number of jars of flour, pieces of dough, cakes, avocado toasts, and servings of pasta. Right: irreversible actions for cooking. At each state the agent can move, cook, or do nothing. Cooking is possible when the agent is to the left of flour or dough. Making cake or avocado toast involves first making dough from the flour, and then making cake or toast; making pasta uses flour directly.

Support on $\Theta_{\text{ind}}$. We specify both input rewards to strongly incentivise either cake or toast, and to mildly incentivise pasta. The desired behavior is to preserve the flour, as any one of the foods might be desirable. This requires the distribution to place weight on rewards that would make pasta. As expected, methods that have support on $\Theta_{\text{ind}}$ behave as desired, and the other methods do not.

Informativeness about desirable behavior (strong). In the setting above, the two input rewards lead the agent to make either cake or toast, but not pasta. So, a reward distribution meeting the strong version of Des. 3 would only make cake or toast, and so the agent should make dough (which is useful for both). Note that despite the same setup, this is exactly the opposite of the behavior desired previously, showing the tradeoff between informativeness and support on $\Theta_{\text{ind}}$. As such, our experimental results are reversed.

Informativeness about desirable behavior (weak). We hand-specify one of the reward functions to incentivise the agent to make cakes. The other reward is inferred by RLSP with a uniform prior over $\theta$. Since the current state contains a cake, RLSP infers a positive reward for cakes and near-zero rewards for the other features. Hence we have two rewards that both incentivise the agent to make another cake. The desired behavior is to make the cake, and not worry about preserving the ability to make pasta or toast, as neither input reward cares about it. All but the Gaussian method satisfy the weak informativeness desideratum, and so they all succeed. The Gaussian method fails since it assigns probability to the cake reward being negative.

Behavior balance. $\tilde{\theta}_1$ is specified to only reward toast, and $\tilde{\theta}_2$ to only reward cake. So we have two disagreeing rewards: one incentivises cake, while the other incentivises toast. The desired behavior from AUP here is to make dough, as it is desirable for both the cake and the toast, but then to stop, as the next step is unclear. Ensuring that the reward for toast does not overwhelm the reward for cake requires behavior balance. We vary the relative magnitude of the two rewards between 0.01 and 0.7 to obtain many degrees of imbalance between toast and cake, and vary the AUP weight $\lambda$ between 0.01 and 5 to see the effects of behavior imbalance on AUP. We plot our results in Appendix F. MIRD and MIRD-IF have the lowest frequency of the toast reward overwhelming the cake reward, which indicates that they are the most behavior-balanced. However, occasionally these methods do nothing instead of making dough.

7 Related Work

Utility aggregation. Combining the specified reward functions is in many ways similar to combining utility functions of different agents. Harsanyi’s aggregation theorem Harsanyi (1955) suggests that maximizing a fixed linear combination of utility functions results in Pareto-optimal outcomes for correctly specified utility functions. This corresponds to our Additive baseline.
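For linear reward functions, the Additive baseline is simply the equal-weight case of such a fixed linear aggregation. A minimal sketch (the weight vectors and feature counts below are hypothetical, not those used in our experiments):

```python
import numpy as np

# Hypothetical linear reward weights over three features
# (cake, toast, pasta) -- illustrative values only.
theta_1 = np.array([1.0, 0.0, 0.1])
theta_2 = np.array([0.0, 1.0, 0.1])

# Additive baseline: a single combined reward, the equal-weight case of
# Harsanyi-style linear aggregation.
theta_add = theta_1 + theta_2

def convex_combination(alpha):
    """A general fixed linear combination with mixing weight alpha."""
    return alpha * theta_1 + (1.0 - alpha) * theta_2

# Return of a trajectory with the given feature counts under the
# combined reward.
features = np.array([0.5, 0.5, 1.0])
reward_under_add = float(theta_add @ features)
```

Maximizing the single reward `theta_add` yields a Pareto-optimal outcome by Harsanyi's theorem, but commits to one fixed tradeoff between the two inputs.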

Multi-task and meta inverse RL. Multi-task IRL assumes that the demonstrations originate from multiple experts performing different tasks, and seeks to recover reward functions that explain the preferences of each of the experts (Babes et al., 2011; Choi and Kim, 2012). The IRL part of MIRD and MIRD-IF is conceptually similar to these approaches: the trajectories in each sampled demonstration set can be seen as arising from an expert executing different tasks. However, MIRD seeks to explain each sampled demonstration set as if its trajectories arose from a single reward function. This resembles meta IRL: Li and Burdick (2017) explain the preferences of an expert who demonstrated several different tasks by learning a reward shared between the tasks, while Xu et al. (2018) and Gleave and Habryka (2018) learn an initialization for the reward function parameters that is helpful for learning the rewards of new tasks. MIRD differs from these methods in that it (1) uses reward functions instead of expert demonstrations as inputs, (2) recovers the full distribution over the underlying reward function, and (3) explicitly accounts for misspecification.

8 Limitations and future work

Summary. We analyze the problem of combining two potentially misspecified reward functions into a distribution over rewards. We identify active reward learning and option value preservation as two key applications for such a distribution, and determine four properties that the distribution would ideally satisfy for these applications. We suggest some simple reward-space methods as well as a behavior-space method, Multitask Inverse Reward Design (MIRD), which is grounded in the behavior arising from the input rewards. MIRD works well for most applications, and MIRD-IF can be used for greater robustness and increased likelihood that the true reward is in the distribution. If the only application is to preserve option value, then the uniform points method also works well.

Active reward learning and refined desiderata. The primary avenue for future work is to evaluate the reward combination methods in the context of active reward learning, and to formalize Des. 2 (support on intermediate tradeoffs) in a way most helpful for active learning. Furthermore, it would be helpful to introduce a soft version of Des. 3 (informativeness), as the current formulation only permits the reward distribution to have support on a subspace of instead of supporting the entire , which would be preferable for active learning.

Realistic environments. Another avenue for future work is scaling up MIRD to realistic environments in which the dynamics are not known, the state space is not enumerable, and the reward function may be nonlinear. While it would be straightforward to use an existing deep IRL algorithm Fu et al. (2018); Finn et al. (2016) in place of MCEIRL, running a maximum likelihood IRL algorithm to generate a single sample from the reward distribution could be prohibitively expensive. Instead, it might be helpful to look into using Bayesian Neural Networks MacKay (1992) to model the reward posterior.

Other sources of preference information. Future work could use MIRD to not only combine reward functions, but to combine the preference information encoded in policies or even trajectories: for example, it would be straightforward to use an input policy (obtained e.g. with imitation learning) in place of one of the input reward functions to generate the trajectories in


  • D. Amodei, C. Olah, J. Steinhardt, P. Christiano, J. Schulman, and D. Mané (2016) Concrete problems in AI safety. arXiv preprint arXiv:1606.06565. Cited by: §1.
  • S. Armstrong and B. Levinstein (2017) Low impact artificial intelligences. arXiv preprint arXiv:1705.10720. Cited by: §3.1.
  • S. Armstrong and S. Mindermann (2018) Occam’s razor is insufficient to infer the preferences of irrational agents. In Advances in Neural Information Processing Systems, pp. 5598–5609. Cited by: §1.
  • M. Babes, V. Marivate, K. Subramanian, and M. L. Littman (2011) Apprenticeship learning about multiple intentions. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 897–904. Cited by: §7.
  • J. Choi and K. Kim (2012) Nonparametric Bayesian inverse reinforcement learning for multiple reward functions. In Advances in Neural Information Processing Systems, pp. 305–313. Cited by: §7.
  • P. F. Christiano, J. Leike, T. Brown, M. Martic, S. Legg, and D. Amodei (2017) Deep reinforcement learning from human preferences. In Advances in Neural Information Processing Systems, pp. 4299–4307. Cited by: §1.
  • C. Daniel, M. Viering, J. Metz, O. Kroemer, and J. Peters (2014) Active reward learning. In Robotics: Science and Systems, Cited by: §1.
  • N. Desai, A. Critch, and S. J. Russell (2018) Negotiable reinforcement learning for pareto optimal sequential decision-making. In Advances in Neural Information Processing Systems, pp. 4713–4721. Cited by: §6.1.
  • O. Evans, A. Stuhlmüller, and N. D. Goodman (2016) Learning the preferences of ignorant, inconsistent agents. In AAAI, pp. 323–329. Cited by: §1.
  • C. Finn, S. Levine, and P. Abbeel (2016) Guided cost learning: deep inverse optimal control via policy optimization. In International Conference on Machine Learning, pp. 49–58. Cited by: §8.
  • D. T. Frazier, C. P. Robert, and J. Rousseau (2017) Model misspecification in abc: consequences and diagnostics. arXiv preprint arXiv:1708.01974. Cited by: §1.
  • J. Fu, K. Luo, and S. Levine (2018) Learning robust rewards with adversarial inverse reinforcement learning. In International Conference on Learning Representations, Cited by: §1, §8.
  • A. Gleave and O. Habryka (2018) Multi-task maximum entropy inverse reinforcement learning. arXiv preprint arXiv:1805.08882. Cited by: §7.
  • D. Hadfield-Menell, S. Milli, P. Abbeel, S. J. Russell, and A. Dragan (2017) Inverse reward design. In Advances in Neural Information Processing Systems, pp. 6765–6774. Cited by: §1, §5.
  • D. Hadfield-Menell, S. J. Russell, P. Abbeel, and A. Dragan (2016) Cooperative inverse reinforcement learning. In Advances in neural information processing systems, pp. 3909–3917. Cited by: §1, §1, §3.1, §3.1.
  • J. C. Harsanyi (1955) Cardinal welfare, individualistic ethics, and interpersonal comparisons of utility. Journal of political economy 63 (4), pp. 309–321. Cited by: §7.
  • J. Z. Kolter and A. Y. Ng (2009) Near-bayesian exploration in polynomial time. In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 513–520. Cited by: §6.1.
  • V. Krakovna, L. Orseau, R. Kumar, M. Martic, and S. Legg (2018) Penalizing side effects using stepwise relative reachability. arXiv preprint arXiv:1806.01186. Cited by: §1, §3.1, §3.1.
  • K. Li and J. W. Burdick (2017) Meta inverse reinforcement learning via maximum reward sharing for human motion analysis. arXiv preprint arXiv:1710.03592. Cited by: §7.
  • J. MacGlashan, M. K. Ho, R. Loftin, B. Peng, G. Wang, D. L. Roberts, M. E. Taylor, and M. L. Littman (2017) Interactive learning from policy-dependent human feedback. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 2285–2294. Cited by: §1.
  • D. J. MacKay (1992) A practical Bayesian framework for backpropagation networks. Neural Computation 4 (3), pp. 448–472. Cited by: §8.
  • A. Majumdar, S. Singh, A. Mandlekar, and M. Pavone (2017) Risk-sensitive inverse reinforcement learning via coherent risk models. In Robotics: Science and Systems, Cited by: §1.
  • S. Mindermann, R. Shah, A. Gleave, and D. Hadfield-Menell (2019) Active inverse reward design. In International Conference on Machine Learning, Cited by: §1, §3.1.
  • OpenAI (2018) OpenAI Five. Cited by: §1.
  • F. Radlinski, K. Balog, B. Byrne, and K. Krishnamoorthi (2019) Coached conversational preference elicitation: a case study in understanding movie preferences. In Proceedings of the Annual SIGdial Meeting on Discourse and Dialogue, Cited by: §1.
  • D. Ramachandran and E. Amir (2007) Bayesian inverse reinforcement learning. In IJCAI, Vol. 7, pp. 2586–2591. Cited by: §1.
  • D. Sadigh, A. D. Dragan, S. Sastry, and S. A. Seshia (2017) Active preference-based learning of reward functions. In Robotics Science and Systems, Cited by: §1, §1, §3.1.
  • R. Shah, N. Gundotra, P. Abbeel, and A. Dragan (2019a) On the feasibility of learning, rather than assuming, human biases for reward inference. In International Conference on Machine Learning, pp. 5670–5679. Cited by: §1.
  • R. Shah, D. Krasheninnikov, J. Alexander, A. Dragan, and P. Abbeel (2019b) Preferences implicit in the state of the world. In International Conference on Learning Representations, Cited by: §1, §2, §6.1.
  • D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis (2016) Mastering the game of go with deep neural networks and tree search. Nature 529, pp. 484–503. Cited by: §1.
  • J. Steinhardt and O. Evans (2017) Model mis-specification and inverse reinforcement learning. Cited by: §1.
  • A. M. Turner, D. Hadfield-Menell, and P. Tadepalli (2019) Conservative agency via attainable utility preservation. arXiv preprint arXiv:1902.09725. Cited by: §1, §3.1, §3.1.
  • M. Woodward, C. Finn, and K. Hausman (2019) Learning to interactively learn and assist. arXiv preprint arXiv:1906.10187. Cited by: §3.1, §3.1.
  • K. Xu, E. Ratner, A. Dragan, S. Levine, and C. Finn (2018) Learning a prior over intent via meta-inverse reinforcement learning. arXiv preprint arXiv:1805.12573. Cited by: §7.
  • B. D. Ziebart (2010) Modeling purposeful adaptive behavior with the principle of maximum causal entropy. PhD thesis, Carnegie Mellon University. Cited by: Appendix A, §1, §2, §5, §5.

Appendix A Proof of Theorem 1

is a set of trajectories each of which arises either from optimizing or optimizing with soft value iteration with rationality . The fraction of trajectories in that result from is , and the fraction of trajectories arising from is . Hence the feature expectations (FEs) of are


entails that each sample is the MCEIRL reward . MCEIRL with rationality finds a reward vector such that its FEs match those of the soft value iteration expert (with rationality ) who generated the trajectories Ziebart (2010). Hence each sampled gives rise to FEs matching . This and Eq. 14 together imply that the FEs arising from optimizing any with soft value iteration are a convex combination of the FEs of the input rewards:
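The chain of equalities can be sketched in our own notation (the symbols below are introduced for this sketch): write $F(\theta)$ for the feature expectations of the soft value iteration policy for reward $\theta$, $\tilde{\theta}_1, \tilde{\theta}_2$ for the input rewards, and $\alpha$ for the fraction of trajectories in the sampled set $\tilde{D}$ drawn from the first expert. Then

```latex
% FEs of the sampled demonstration set are a mixture of the experts' FEs:
F(\tilde{D}) \;=\; \alpha\, F(\tilde{\theta}_1) + (1 - \alpha)\, F(\tilde{\theta}_2).
% MCEIRL returns a reward whose soft-optimal policy matches these FEs,
% so for every sampled reward \theta:
F(\theta) \;=\; F(\tilde{D}) \;=\; \alpha\, F(\tilde{\theta}_1) + (1 - \alpha)\, F(\tilde{\theta}_2).
```

That is, the feature expectations of every sampled reward are a convex combination of the feature expectations of the input rewards.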


Appendix B Proof of Theorem 2

By Theorem 1, the FEs arising from any in the posterior are a convex combination of the FEs of the input rewards. Hence finding in the support of that minimizes is equivalent to finding that minimizes

Since the true returns when following policies arising from the input reward functions and are scalars, the term above is minimized with when , and otherwise.

Appendix C Proof of Theorem 3

Since the FEs of the input rewards are equal, the return of any mixture reward for those FEs is . Suppose there exist FEs that result in a higher expected return for the mixture reward:


For this to be true must result in a higher return than for either or . This is a contradiction, since already results in the highest return for both input rewards. Hence corresponds to the optimal behavior for any mixture reward .

Appendix D Proof of Theorem 4

Analogous to the proof of Theorem 3 (Appendix C).

Appendix E Additive and CC-in do not satisfy the strong version of Des. 3 (informativeness)

Additive. Consider the reward combination scenario in Figure 4. Maximizing or , an agent would correspondingly end up in or , both of which have the third feature equal to 1. However, maximizing would lead the agent to , where the value of the third feature is 0.

CC-in. Again consider the reward combination task in Figure 4. Maximizing and separately would correspondingly lead to choosing and , both states having the third feature equal to 1. However, maximizing , a convex combination of the input rewards, would lead the agent to choose , where the value of the third feature equals 0.

As the Additive and CC-in reward posteriors have support on rewards giving rise to behaviors different from the behaviors of the input rewards w.r.t. the third feature of the toy MDP, neither method meets the strong version of Des. 3 (informative about desirable behavior).

Figure 4: Toy reward combination problem. The aim is to combine reward vectors and . The MDP is as follows. The starting state is . From the agent must irreversibly and deterministically transition to , , or , and stay there indefinitely. The features of each state are given by .
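The failure can be reproduced numerically with vectors of the same shape as those in Figure 4; the numbers below are our own hypothetical values, chosen only to exhibit the effect:

```python
import numpy as np

# Hypothetical feature vectors for states s_A, s_B, s_C and the two input
# reward vectors; illustrative values, not copied from Figure 4.
features = {"s_A": np.array([1.0, 0.0, 1.0]),
            "s_B": np.array([0.0, 1.0, 1.0]),
            "s_C": np.array([0.8, 0.8, 0.0])}
theta_1 = np.array([1.0, 0.0, 0.1])
theta_2 = np.array([0.0, 1.0, 0.1])

def best_state(theta):
    """State the agent ends up in when maximizing the linear reward theta."""
    return max(features, key=lambda s: float(theta @ features[s]))

# Each input reward alone leads to a state whose third feature is 1 ...
assert best_state(theta_1) == "s_A" and best_state(theta_2) == "s_B"
# ... but both the Additive reward and the alpha = 0.5 convex combination
# prefer s_C, whose third feature is 0.
assert best_state(theta_1 + theta_2) == "s_C"
assert best_state(0.5 * theta_1 + 0.5 * theta_2) == "s_C"
```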

Appendix F Analysis details

In this appendix we provide more details and explanations regarding the behavior of the various reward combination methods in the Cooking environment. Every qualitative result reported below is validated over five seeds.

F.1 Support on

We hand-specify both input reward functions to strongly incentivise either cake or toast by setting ; , and to mildly incentivise pasta: . The desired behavior from AUP here is to preserve the flour, as any one of the foods requiring it might be desirable.

The Additive method returns a single reward function that incentivises the agent to make both cake and toast. AUP applied to one reward function simply optimizes it without needing to preserve option value, so here it makes either cake or toast (and does not preserve the flour).

The CC-in method does not preserve the flour, choosing to make dough instead. Because of the high values of and , optimizing any convex combination of leads the agent to either make cake or toast, and so AUP infers that it does not need to preserve the ability to make pasta. MIRD chooses to make dough for a similar reason: since all trajectories in arise from optimizing or , each trajectory demonstrates the agent making either cake or toast, and no trajectory demonstrates it making pasta. So no reward inferred from incentivises the agent to make pasta, and hence AUP does not preserve the ability to make pasta.
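The sampling mechanism behind this behavior can be sketched in a few lines of Python. This is an illustrative simplification: each trajectory is summarized by its feature counts, the two experts are stood in for by fixed feature-expectation vectors with hypothetical values, and the MCEIRL fit is abstracted to matching the empirical feature expectations of the sampled set (as in Theorem 1).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature expectations (cake, toast, dough) of the behavior
# arising from each input reward; not the values used in the paper.
fe_1 = np.array([1.0, 0.0, 1.0])
fe_2 = np.array([0.0, 1.0, 1.0])

def sample_feature_expectations(n_traj=10):
    """One MIRD sample: each trajectory in the demonstration set comes from
    one input reward or the other with equal probability; the set is then
    summarized by its empirical feature expectations, standing in for the
    MCEIRL reward whose soft-optimal behavior matches them."""
    from_first = rng.random(n_traj) < 0.5
    trajectories = np.where(from_first[:, None], fe_1, fe_2)
    return trajectories.mean(axis=0)

samples = [sample_feature_expectations() for _ in range(1000)]
# Every sample is a convex combination of fe_1 and fe_2 (Theorem 1), so no
# sample ever demonstrates a behavior -- like making pasta -- that neither
# input reward gives rise to. The mixing weight (recoverable here from the
# first coordinate) concentrates around 1/2, reflecting behavior balance.
alphas = [s[0] for s in samples]
```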

All methods whose reward samples sometimes give rise to the behaviors arising from preserve the flour. This is because contains s.t. optimizing it leads the agent to make pasta: ; . As MIRD-IF, Gaussian, uniform points, and CC- methods all have substantial probability mass on rewards that incentivise making pasta, AUP applied to the reward samples from these methods preserves the ability to make pasta.

F.2 Informativeness about desirable behavior (strong)

In the setting above the two input rewards lead the agent to make either cake or toast, but not pasta. Here a reward posterior meeting the strong version of Des. 3 would only give rise to reward functions that incentivise the agent to make either cake or toast. Note that despite the identical setup, this is exactly the opposite of the behavior desired previously, illustrating the tradeoff between informativeness and support on . As such, the results are reversed.

We see that the MIRD-IF, Gaussian, uniform points, and CC- methods do not meet Des. 3 (strong), since they all place substantial probability mass on rewards that incentivise making pasta. Because of this, AUP preserves the ability to make pasta and does not lead the agent to make dough. On the other hand, the MIRD, Additive, and CC-in methods never give rise to rewards incentivising the agent to make pasta, and AUP applied to these reward distributions results in making dough. Although Additive and CC-in do not support reward functions that lead the agent to make pasta in this particular case, they do not meet the strong version of Des. 3 in general (shown in Appendix E).

F.3 Informativeness about desirable behavior (medium & weak)

We hand-specify one of the reward functions to incentivise the agent to make cakes: . is inferred by RLSP with a uniform prior over . The current state contains a cake which was likely made by a human, so RLSP infers a positive reward for cakes, , and near-zero rewards for the other features. So we have two agreeing rewards that both incentivise the agent to make another cake. The desired behavior from AUP planning here is to make the cake, without worrying about preserving the ability to make pasta or toast, as neither input reward cares about them.

The Gaussian posterior fails to make the cake. The distribution over here is a Gaussian with and . The posterior assigns substantial probability to being negative, and AUP planning avoids making the cake to preserve the ability to optimize rewards that punish cakes.
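The effect can be illustrated numerically. This is a hedged sketch assuming the relevant marginal of the Gaussian posterior over the cake weight is a univariate normal; the mean and standard deviation below are hypothetical, chosen only to show how a small positive mean still leaves substantial mass below zero.

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    """CDF of a univariate Gaussian, via the error function."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

# Hypothetical posterior over the cake weight: small positive mean,
# non-trivial spread (illustrative values only).
mu, sigma = 0.4, 0.7
p_negative = normal_cdf(0.0, mu, sigma)  # roughly 0.28
# With this much posterior mass on a negative cake reward, AUP avoids
# making the cake in order to preserve the ability to optimize rewards
# that punish cakes.
```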

Here each reward vector in leads the agent to make cake. Since every method except Gaussian satisfies at least the weak version of Des. 3, all reward functions in the support of their posteriors lead the agent to make cake. Because of this, AUP infers that it is not important to preserve the ability to make pasta or toast, and successfully makes cakes when applied to rewards sampled from these posteriors.

F.4 Behavior balance

To better understand the robustness of the reward combination methods to imbalanced input rewards, we analyze their performance while varying both the extent of reward imbalance and the degree to which AUP seeks to preserve option value.
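For reference, the penalty whose weight we vary can be sketched in a simplified form, following the attainable utility preservation formulation of Turner et al. (2019) (notation ours; $\varnothing$ denotes the no-op action and $\{\theta_i\}_{i=1}^N$ the auxiliary rewards sampled from the combined posterior):

```latex
R_{\mathrm{AUP}}(s, a)
\;=\;
R(s, a)
\;-\;
\frac{\lambda}{N} \sum_{i=1}^{N}
\bigl|\, Q_{\theta_i}(s, a) - Q_{\theta_i}(s, \varnothing) \,\bigr|
```

Larger values of the weight $\lambda$ make the agent more reluctant to change how well it could optimize the sampled rewards, which is why a high penalty weight combined with an imbalanced reward distribution can result in the agent doing nothing.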

The second input reward is hand-specified to only reward toast, , . The first reward is hand-specified as well: . We vary between 0.01 and 0.7 to obtain many degrees of imbalance between toast and cake. So the input rewards disagree about whether to make toast or cake, and agree that pasta is irrelevant. Hence the desirable action for AUP here is to preserve the ability to choose between cake and toast, as either could be desirable. In addition to varying , we vary the weight of the AUP penalty .

Figure 5: Most common outcomes of AUP applied to the six reward posteriors over 5 seeds. Pink corresponds to AUP failure for more than half of the seeds: overwhelms and the agent makes toast. Purple corresponds to AUP success (AUP makes dough) for more than half of the seeds. Yellow corresponds to the middle-ground outcome of doing nothing in more than half of the seeds. The hyperparameters of the and the distributions are set to 0.5.

Additive always fails and makes toast as it has support on a single reward function that prioritizes toast. Our results for the other six reward combination methods are shown in Figure 5. We observe that MIRD and MIRD-IF are significantly more robust to imbalanced input rewards than any reward-space method. However, sometimes MIRD and MIRD-IF lead to AUP doing nothing instead of making dough. This is explained by the fact that MIRD and MIRD-IF use MCEIRL, which sometimes outputs negative rewards. For example, consider the case where in the process of generating a MIRD sample we generate 10 trajectories all of which demonstrate making toast. MCEIRL with these trajectories might output a positive reward on toast and a negative reward on cake and dough (instead of positive reward for toast and zero for cake and dough), even if a zero-centered Gaussian prior is used. These negative rewards on dough sometimes lead AUP to avoid making it.

Furthermore, we observe that MIRD is more robust than MIRD-IF, and CC-in is more robust than CC-. This is because in this case support on rewards giving rise to behaviors arising from makes the reward distributions less balanced. In particular, denoting the value of we vary as ,

Here only the second reward vector incentivises the agent to make cake, while both the third and the fourth rewards incentivise the agent to make toast. This slight imbalance of in favor of making toast leads to the CC- and MIRD-IF posteriors being more likely to cause AUP to make toast instead of preserving the ability to choose between cake and toast.

Finally, we observe that the Gaussian method is roughly equally robust to reward imbalance across the whole range of . This is because, unlike for the other methods, the standard deviation of the Gaussian posterior does not depend on and , but only on the hyperparameters and , which are fixed to 1.