Interpretable Multi-Objective Reinforcement Learning through Policy Orchestration

by   Ritesh Noothigattu, et al.
Carnegie Mellon University

Autonomous cyber-physical agents and systems play an increasingly large role in our lives. To ensure that agents behave in ways aligned with the values of the societies in which they operate, we must develop techniques that allow these agents not only to maximize their reward in an environment, but also to learn and follow the implicit constraints of society. These constraints and norms can come from any number of sources, including regulations, business process guidelines, laws, ethical principles, social norms, and moral values. We detail a novel approach that uses inverse reinforcement learning to learn a set of unspecified constraints from demonstrations of the task, and reinforcement learning to learn to maximize environment rewards. More precisely, we assume that an agent can observe traces of behavior of members of the society but has no access to the explicit set of constraints that give rise to the observed behavior. Inverse reinforcement learning is used to learn such constraints, which are then combined with a possibly orthogonal value function through the use of a contextual-bandit-based orchestrator that picks a contextually appropriate choice between the two policies (constraint-based and environment reward-based) when taking actions. The contextual bandit orchestrator allows the agent to mix policies in novel ways, taking the best actions from either a reward-maximizing or constrained policy. In addition, the orchestrator is transparent about which policy is being employed at each time step. We test our algorithms in a Pac-Man domain and show that the agent is able to learn to act optimally, act within the demonstrated constraints, and mix these two functions in complex ways.




1 Introduction

Concerns about the ways in which cyber-physical and/or autonomous decision-making systems behave when deployed in the real world are growing. What various stakeholders worry about is that a system may achieve its goal in ways that are not considered acceptable according to the values and norms of the impacted community, so-called "specification gaming" behaviors. Thus, there is a growing need to understand how to constrain the actions of an AI system by providing boundaries within which the system must operate.

To tackle this problem, we may take inspiration from humans, who often constrain the decisions and actions they take according to a number of exogenous priorities, be they moral, ethical, religious, or business values [Sen1974], and we may want the systems we build to be restricted in their actions by similar principles [Arnold et al.2017]. The overriding concern is that the autonomous agents we construct may not obey these values on their way to maximizing some objective function [Simonite2018].

The idea of teaching machines right from wrong has become an important research topic both in AI [Yu et al.2018] and farther afield [Wallach and Allen2008]. Much of the research at the intersection of artificial intelligence and ethics falls under the heading of machine ethics, i.e., adding ethics and/or constraints to a particular system's decision making process [Anderson and Anderson2011]. One popular technique to handle these issues is called value alignment, i.e., the idea that an agent can only pursue goals that follow values aligned to human values and are thus beneficial to humans [Russell, Dewey, and Tegmark2015].

Another important notion for these autonomous decision-making systems is the idea of transparency or interpretability, i.e., being able to see why the system made the choices it did. Theodorou, Wortham, and Bryson [2016] observe that the Engineering and Physical Sciences Research Council (EPSRC) Principles of Robotics dictate the implementation of transparency in robotic systems. The authors go on to define transparency in a robotic or autonomous decision-making system as "… a mechanism to expose the decision making of the robot".

This still leaves open the question of how to provide the behavioral constraints to the agent. A popular technique is called the bottom-up approach, i.e., teaching a machine what is right and wrong by example [Allen, Smit, and Wallach2005]. In this paper, we adopt this approach as we consider the case where only examples of the correct behavior are available to the agent, and it must therefore learn from only these examples.

We propose a framework which enables an agent to learn two policies: (1) π_R, a reward-maximizing policy obtained through direct interaction with the world, and (2) π_C, obtained via inverse reinforcement learning over demonstrations, by humans or other agents, of how to obey a set of behavioral constraints in the domain. Our agent then uses a contextual-bandit-based orchestrator to learn to blend the policies in a way that maximizes a convex combination of the rewards and constraints. Within the RL community this can be seen as a particular type of apprenticeship learning [Abbeel and Ng2004] in which the agent is learning how to be safe, rather than only maximizing reward [Leike et al.2017].

One may argue that we should employ π_C for all decisions, as it will be "safer" than employing π_R. Indeed, although one could use only π_C for the agent, there are a number of reasons to employ the orchestrator. First, the humans or other demonstrators may be good at demonstrating what not to do in a domain, but may not provide examples of how best to maximize reward. Second, the demonstrators may not be as creative as the agent when mixing the two policies [Ventura and Gates2018]. By allowing the orchestrator to learn when to apply which policy, the agent may be able to devise better ways to blend the policies, leading to behavior which both follows the constraints and achieves higher reward than any of the human demonstrations. Third, we may not want to obtain demonstrations of what to do in all parts of the domain, e.g., there may be dangerous or hard-to-model regions, or there may be mundane parts of the domain in which human demonstrations are too costly to obtain. In this case, having the agent learn through RL what to do in the non-demonstrated parts is valuable. Finally, as we have argued, interpretability is an important feature of our system. Although the policies themselves may not be directly interpretable (though there is recent work in this area [Verma et al.2018, Liu et al.2018]), our system does capture the notion of transparency and interpretability, as we can see which policy is being applied in real time.

Contributions.  We propose and test a novel approach to teach machines to act in ways that balance multiple objectives in a given environment. One objective is the desired goal and the other is a set of behavioral constraints, learned from examples. Our technique uses aspects of both traditional reinforcement learning and inverse reinforcement learning to identify policies that both maximize rewards and follow particular constraints within an environment. Our agent then blends these policies in novel and interpretable ways using an orchestrator based on the contextual bandits framework. We demonstrate the effectiveness of these techniques on the Pac-Man domain, where the agent is able to learn both a reward-maximizing and a constrained policy, and to select between these policies in a transparent way based on context, so as to employ a policy that achieves high reward and obeys the demonstrated constraints.

2 Related Work

Ensuring that our autonomous systems act in line with our values while achieving their objectives is a major research topic in AI. These topics have gained popularity among a broad community including philosophers [Wallach and Allen2008] and non-profits [Russell, Dewey, and Tegmark2015]. Yu et al. [2018] provide an overview of much of the recent research at major AI conferences on ethics in artificial intelligence.

Agents may need to balance objectives and feedback from multiple sources when making decisions. One prominent example is the case of autonomous cars. There is extensive research from multidisciplinary groups into the questions of when autonomous cars should make lethal decisions [Bonnefon, Shariff, and Rahwan2016], how to aggregate societal preferences to make these decisions [Noothigattu et al.2017], and how to measure distances between these notions [Loreggia et al.2018a, Loreggia et al.2018b]. In a recommender systems setting, a parent or guardian may want the agent not to recommend certain types of movies to children, even if this recommendation could lead to a high reward [Balakrishnan et al.2018a, Balakrishnan et al.2018b]. Recently, as a complement to their concrete problems in AI safety, which include reward hacking and unintended side effects [Amodei et al.2016], a DeepMind study has compiled a list of specification gaming examples, where very different agents game the given specification by behaving in unexpected (and undesired) ways.¹

¹38 AI "specification gaming" examples are available at:

Within the field of reinforcement learning there has been specific work on ethical and interpretable RL. Wu and Lin [2017] detail a system that is able to augment an existing RL system to behave ethically. In their framework, the assumption is that, given a set of examples, most of the examples follow ethical guidelines. The system updates the overall policy to obey the ethical guidelines learned from demonstrations using IRL. However, in this system only one policy is maintained, so it offers no transparency. Laroche and Féraud [2017] introduce a system that is capable of selecting among a set of RL policies depending on context. They demonstrate an orchestrator that, given a set of policies for a particular domain, is able to assign a policy to control the next episode. However, this approach uses the classical multi-armed bandit, so the state context is not considered in the choice of the policy.

Interpretable RL has received significant attention in recent years. Luss and Petrik [2016] introduce action constraints over states to enhance the interpretability of policies. Verma et al. [2018] present a reinforcement learning framework, called Programmatically Interpretable Reinforcement Learning (PIRL), that is designed to generate interpretable and verifiable agent policies. PIRL represents policies using a high-level, domain-specific programming language. Such programmatic policies have the benefit of being more easily interpreted than neural networks, and of being amenable to verification by symbolic methods. Additionally, Liu et al. [2018] introduce Linear Model U-trees (LMUTs) to approximate neural network predictions. An LMUT is learned using a novel on-line algorithm that is well-suited to an active play setting, where the mimic learner observes an ongoing interaction between the neural net and the environment. Empirical evaluation shows that an LMUT mimics a Q function substantially better than five baseline methods. The transparent tree structure of an LMUT facilitates understanding the learned knowledge by analyzing feature influence, extracting rules, and highlighting the super-pixels in image inputs.

3 Background

3.1 Reinforcement Learning

Reinforcement learning defines a class of algorithms for solving problems modeled as a Markov decision process (MDP) [Sutton and Barto1998]. A Markov decision problem is usually denoted by the tuple (S, A, T, R, γ), where

  • S is a set of possible states;

  • A is a set of actions;

  • T is a transition function defined by T(s, a, s′) = Pr(s′ | s, a), where s, s′ ∈ S and a ∈ A;

  • R : S × A × S → ℝ is a reward function;

  • γ is a discount factor that specifies how much long-term reward is kept.

The goal in an MDP is to maximize the discounted long-term reward received. Usually the infinite-horizon objective is considered:

  max E[ Σ_{t=0}^{∞} γ^t R(s_t, a_t, s_{t+1}) ].    (1)
Solutions come in the form of policies π : S → A, which specify what action the agent should take in any given state, deterministically or stochastically. One way to solve this problem is through Q-learning with function approximation [Bertsekas and Tsitsiklis1996]. The Q-value of a state-action pair, Q(s, a), is the expected future discounted reward for taking action a ∈ A in state s ∈ S. A common method to handle very large state spaces is to approximate the Q function as a linear function of some features. Let f(s, a) denote relevant features of the state-action pair (s, a). Then, we assume Q(s, a) = θᵀ f(s, a), where θ is an unknown vector to be learned by interacting with the environment. Every time the reinforcement learning agent takes action a from state s, obtains immediate reward r and reaches new state s′, the parameter θ is updated using

  θ ← θ + α · [ r + γ max_{a′} Q(s′, a′) − Q(s, a) ] · f(s, a),    (2)

where α is the learning rate and the bracketed term is the temporal difference.

ϵ-greedy is a common strategy used for exploration. That is, during the training phase, a random action is played with probability ϵ, and the action with maximum Q-value is played otherwise. The agent follows this strategy and updates the parameter θ according to Equation (2) until the Q-values converge or for a large number of time-steps.
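As a hedged illustration, Q-learning with linear function approximation and ϵ-greedy exploration can be sketched as below; the tiny two-state chain environment, the one-hot feature map, and all hyperparameter values are our own toy assumptions, not details from the paper.

```python
import random

# Hypothetical two-state chain: action 1 from state 0 yields reward 1
# and moves to state 1; everything else yields reward 0 and state 0.
def step(s, a):
    if s == 0 and a == 1:
        return 1, 1.0  # (next state, reward)
    return 0, 0.0

def features(s, a):
    # One-hot feature vector per (state, action) pair.
    f = [0.0] * 4
    f[2 * s + a] = 1.0
    return f

def q_value(theta, s, a):
    # Q(s, a) = theta^T f(s, a)
    return sum(t * x for t, x in zip(theta, features(s, a)))

def train(steps=500, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    theta = [0.0] * 4
    s = 0
    for _ in range(steps):
        # epsilon-greedy action selection
        if rng.random() < eps:
            a = rng.choice([0, 1])
        else:
            a = max([0, 1], key=lambda b: q_value(theta, s, b))
        s_next, r = step(s, a)
        # Equation (2): theta <- theta + alpha * difference * f(s, a)
        diff = (r + gamma * max(q_value(theta, s_next, b) for b in [0, 1])
                - q_value(theta, s, a))
        theta = [t + alpha * diff * x for t, x in zip(theta, features(s, a))]
        s = s_next
    return theta

theta = train()
```

After training, the learned weights rank the rewarding action above the alternative in state 0, as the TD updates propagate the reward back through the chain.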

3.2 Inverse Reinforcement Learning

IRL seeks to find the most likely reward function R under which an expert is acting [Abbeel and Ng2004, Ng and Russell2000]. IRL methods assume the presence of an expert that solves an MDP, where the MDP is fully known and observable by the learner except for the reward function. Since the states and actions of the expert are fully observable by the learner, it has access to trajectories executed by the expert. A trajectory consists of a sequence of state-action pairs (s_1, a_1), (s_2, a_2), …, (s_T, a_T), where s_t is the state of the environment at time t, a_t is the action played by the expert at the corresponding time, and T is the length of this trajectory. The learner is given access to m such trajectories to learn the reward function. Since the space of all possible reward functions is extremely large, it is common to represent the reward function as a linear combination of features: R(s, a, s′) = wᵀ φ(s, a, s′), where w is a weight vector to be learned and φ is a feature function that maps a state-action-state tuple to a vector of real values, each denoting the value of a specific feature of this tuple [Abbeel and Ng2004]. Current state-of-the-art IRL algorithms utilize feature expectations as a way of evaluating the quality of the learned reward function [Abbeel and Ng2004]. For a policy π, the feature expectations starting from state s_0 are defined as

  μ(π) = E[ Σ_{t=0}^{∞} γ^t φ(s_t, a_t, s_{t+1}) | π ],

where the expectation is taken with respect to the state sequence achieved on taking actions according to π starting from s_0. One can compute an empirical estimate of the feature expectations of the expert's policy with the help of the m trajectories, using

  μ̂_E = (1/m) Σ_{i=1}^{m} Σ_{t} γ^t φ(s_t, a_t, s_{t+1}).    (3)
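To make Equation (3) concrete, here is a minimal sketch of the empirical feature-expectation estimate; the two toy trajectories and the two-dimensional feature map φ are our own assumptions, not the paper's.

```python
# Hypothetical 2-dim feature map phi(s, a, s'): [dot eaten?, ghost eaten?].
def phi(s, a, s_next):
    return [1.0 if s_next == "dot" else 0.0,
            1.0 if s_next == "ghost" else 0.0]

def empirical_feature_expectations(trajectories, gamma=0.9):
    # mu_hat = (1/m) * sum_i sum_t gamma^t * phi(s_t, a_t, s_{t+1})
    m = len(trajectories)
    mu = [0.0, 0.0]
    for traj in trajectories:
        for t, (s, a, s_next) in enumerate(traj):
            for j, x in enumerate(phi(s, a, s_next)):
                mu[j] += (gamma ** t) * x / m
    return mu

trajs = [[("start", "east", "dot"), ("dot", "east", "dot")],
         [("start", "west", "ghost")]]
mu_hat = empirical_feature_expectations(trajs)  # [0.95, 0.5]
```

The first trajectory contributes a discounted dot-feature mass of 1 + 0.9, the second a ghost-feature mass of 1, and averaging over the m = 2 trajectories gives [0.95, 0.5].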

3.3 Contextual Bandits

Following the standard formulation, the contextual bandit problem is defined as follows. At each time t ∈ {1, …, T}, the player is presented with a context vector x_t ∈ ℝ^d and must choose an arm k ∈ [K] = {1, …, K}. Let r = (r_1(t), …, r_K(t)) denote a reward vector, where r_k(t) is the reward at time t associated with the arm k ∈ [K]. We assume that the expected reward is a linear function of the context, i.e., E[r_k(t) | x_t] = μ_kᵀ x_t, where μ_k ∈ ℝ^d is an unknown weight vector (to be learned from the data) associated with the arm k.

The purpose of a contextual bandit algorithm is to minimize the cumulative regret. Let H : X → [K], where X is the set of possible contexts and x_t is the context at time t, with h_t ∈ H the hypothesis computed by the algorithm at time t and h_t* = argmax_{h ∈ H} r_{h(x_t)}(t) the optimal hypothesis at the same round. The cumulative regret is R(T) = Σ_{t=1}^{T} ( r_{h_t*(x_t)}(t) − r_{h_t(x_t)}(t) ).

One widely used way to solve the contextual bandit problem is the Contextual Thompson Sampling (CTS) algorithm [Agrawal and Goyal2013], given as Algorithm 1. In CTS, the reward r_k(t) for choosing arm k at time t follows a parametric likelihood function Pr(r(t) | μ̃). Following Agrawal and Goyal [2013], the posterior distribution at time t + 1, Pr(μ̃ | r(t)) ∝ Pr(r(t) | μ̃) Pr(μ̃), is given by a multivariate Gaussian distribution N(μ̂_k(t + 1), v² B_k(t + 1)^{-1}), where B_k(t) = I_d + Σ_{τ=1}^{t−1} x_τ x_τᵀ, d is the size of the context vectors x, v = R √((24/ϵ) d ln(1/δ)) with R ≥ 0, ϵ ∈ (0, 1], δ ∈ (0, 1] constants, and μ̂_k(t) = B_k(t)^{-1} Σ_{τ=1}^{t−1} x_τ r_k(τ).

1:  Initialize: B_k = I_d, μ̂_k = 0_d, f_k = 0_d, for k ∈ [K].
2:  Foreach t = 1, 2, …, T do
3:   Sample μ̃_k from N(μ̂_k, v² B_k^{-1}) for each arm k.
4:   Play arm k_t = argmax_{k ∈ [K]} x_tᵀ μ̃_k
5:   Observe r_{k_t}(t)
6:   B_{k_t} ← B_{k_t} + x_t x_tᵀ,  f_{k_t} ← f_{k_t} + x_t r_{k_t}(t),  μ̂_{k_t} = B_{k_t}^{-1} f_{k_t}
7:  End
Algorithm 1 Contextual Thompson Sampling Algorithm

Every step consists of generating a d-dimensional sample μ̃_k from N(μ̂_k, v² B_k^{-1}) for each arm. We then decide which arm to pull by solving for argmax_{k ∈ [K]} x_tᵀ μ̃_k. This means that at each time step we are selecting the arm that we expect to maximize the observed reward given a sample of our current beliefs over the distribution of rewards. We then observe the actual reward of pulling arm k_t and update our beliefs.
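A compact sketch of CTS on a toy two-armed linear bandit may help; the context distribution, the true arm weights, the noise level, and the fixed value of v are all our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, v = 2, 2, 0.25
true_mu = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]  # hidden arm weights

B = [np.eye(d) for _ in range(K)]       # B_k = I_d + running sum of x x^T
f = [np.zeros(d) for _ in range(K)]     # f_k = running sum of x * r
mu_hat = [np.zeros(d) for _ in range(K)]

for t in range(2000):
    x = rng.random(d)                                   # observe context x_t
    sampled = [rng.multivariate_normal(mu_hat[k], v**2 * np.linalg.inv(B[k]))
               for k in range(K)]                       # sample mu~_k per arm
    k = int(np.argmax([x @ s for s in sampled]))        # play argmax_k x^T mu~_k
    r = true_mu[k] @ x + 0.01 * rng.standard_normal()   # observe noisy reward
    B[k] += np.outer(x, x)                              # posterior update
    f[k] += x * r
    mu_hat[k] = np.linalg.inv(B[k]) @ f[k]
```

After enough rounds, each posterior mean μ̂_k recovers the direction in which its arm pays off, so the greedy choice against sampled parameters concentrates on the better arm per context.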

3.4 Problem Setting

In our setting, the agent operates in a multi-objective Markov decision process (MOMDP): instead of the usual scalar reward function R(s, a, s′), a reward vector R⃗(s, a, s′) is present. The vector consists of K dimensions or components representing the different objectives, i.e., R⃗(s, a, s′) = (R_1(s, a, s′), …, R_K(s, a, s′)). However, not all components of the reward vector are observed in our setting. There is an objective that is hidden, and the agent is only allowed to observe expert demonstrations to learn this objective. These demonstrations are given in the form of trajectories (s_1, a_1), …, (s_T, a_T). To summarize, for some objectives the agent has rewards observed from interaction with the environment, and for others the agent has only expert demonstrations. The aim is still the same as in single-objective reinforcement learning: to maximize the discounted long-term reward E[ Σ_t γ^t R_i(s_t, a_t, s_{t+1}) ] for each i.

4 Approach

4.1 Domain

We demonstrate the applicability of our approach using the classic game of Pac-Man. The layout of Pac-Man we use is given in Figure 1, and the following are the rules used for the environment (adopted from the Berkeley AI Pac-Man project²). The goal of the agent (which controls Pac-Man's motion) is to eat all the dots in the maze, known as Pac-Dots, as soon as possible while simultaneously avoiding collision with ghosts. On eating a Pac-Dot, the agent obtains a reward of +10, and on successfully winning the game (which happens upon eating all the Pac-Dots), the agent obtains a reward of +500. In the meantime, the ghosts in the game roam the maze trying to kill Pac-Man. On collision with a ghost, Pac-Man loses the game and gets a reward of −500. The game also has two special dots called capsules or Power Pellets in the corners of the maze, which, on consumption, give Pac-Man the temporary ability of "eating" ghosts. During this phase, the ghosts are in a "scared" state for 40 frames and move at half their speed. On eating a ghost, the agent gets a reward of +200, and the ghost returns to the center box in its normal "unscared" state. Finally, there is a constant time penalty of −1 for every step taken.

Figure 1: Layout of Pac-Man

For the sake of demonstrating our approach, we define not eating ghosts as the desirable constraint in the game of Pac-Man. Recall, however, that this constraint is not given explicitly to the agent, but only through examples. To play optimally in the original game one should eat ghosts to earn bonus points, but doing so is demonstrated as undesirable. Hence, the agent has to combine the goal of collecting the most points with not eating ghosts if possible.

4.2 Overall Approach

The overall approach we follow is depicted in Figure 2. It has three main components. The first is the inverse reinforcement learning component, which learns the desirable constraints (depicted in green in Figure 2). We apply inverse reinforcement learning to the demonstrations depicting desirable behavior, to learn the underlying constraint rewards being optimized by the demonstrations. We then apply reinforcement learning on these learned rewards to learn a strongly constraint-satisfying policy π_C.

Next, we augment this with a pure reinforcement learning component (depicted in red in Figure 2). For this, we directly apply reinforcement learning to the original environment rewards (i.e., Pac-Man's unmodified game) to learn a domain-reward-maximizing policy π_R. To recall, the reason we have this second component is that the inverse reinforcement learning component may not pick up the original environment rewards very well, since the demonstrations were intended mainly to depict desirable behavior. Further, since these demonstrations are given by humans, they are prone to error, amplifying this issue. Hence, the constraint-obeying policy π_C is likely to exhibit strong constraint-satisfying behavior, but may not be optimal in terms of maximizing environment rewards. Augmenting with the reward-maximizing policy π_R helps the system in this regard.

So now we have two policies: the constraint-obeying policy π_C and the reward-maximizing policy π_R. To combine these two, we use the third component, the orchestrator (depicted in blue in Figure 2). This is a contextual bandit algorithm that orchestrates the two policies, picking one of them to play at each point of time. The context is the state of the environment (the state of the Pac-Man game); the bandit decides which arm (policy) to play at the corresponding point of time.

Figure 2: Overview of our system. At each time step the Orchestrator selects between the two policies π_C and π_R depending on the observations from the Environment. The two policies are learned before engaging with the environment. π_C is obtained using IRL on the demonstrations to learn a reward function that captures the particular constraints demonstrated. The second, π_R, is obtained by the agent through RL on the environment directly.

4.3 Alternative Approaches

Observe that in our approach, we combine or “aggregate” the two objectives (environment rewards and desired constraints) at the policy stage. Alternative approaches to doing this are combining the two objectives at the reward stage or the demonstrations stage itself:

  • Aggregation at reward phase. As before, we can perform inverse reinforcement learning to learn the underlying rewards capturing the desired constraints. Now, instead of learning a policy for each of the two reward functions (environment rewards and constraint rewards) and then aggregating them, we could combine the reward functions themselves, and then learn a policy on this "aggregated" reward function that performs well on both objectives, environment reward and favorable constraints. (This captures the intuitive idea of "incorporating the constraints into the environment rewards", as if we were explicitly given the penalty for violating constraints.)

  • Aggregation at data phase. Moving another step backward, we could aggregate the two objectives at the data phase. This can be performed as follows. We perform pure reinforcement learning as in the original approach given in Figure 2 (depicted in red). Once we have our reward-maximizing policy π_R, we use it to generate numerous reward-maximizing demonstrations. Then, we combine these environment-reward trajectories with the original constrained demonstrations, aggregating the two objectives in the process. Once we have the combined data, we can perform inverse reinforcement learning to learn the appropriate rewards, followed by reinforcement learning to learn the corresponding policy.

Aggregating at the policy phase is where we go all the way to the end of the pipeline, learning a policy for each of the objectives and then aggregating them. This is the approach we follow, as described in Section 4.2. Note that we have a parameter λ (described in more detail in Section 5.3) that trades off environment rewards against rewards capturing constraints. A similar parameter can be used by the reward-aggregation and data-aggregation approaches to decide how to weigh the two objectives while performing the corresponding aggregation.
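For contrast with our policy-phase approach, the reward-phase alternative can be sketched in one line; the weighting parameter lam and the example reward values are illustrative assumptions only.

```python
# Reward-phase aggregation: blend the environment reward with the
# IRL-learned constraint reward before any policy is trained.
def aggregated_reward(r_env, r_constraint, lam):
    # lam = 0 recovers the pure environment reward; lam = 1 the pure constraint reward.
    return lam * r_constraint + (1 - lam) * r_env

r = aggregated_reward(r_env=200.0, r_constraint=-150.0, lam=0.5)  # 25.0
```

A single policy trained by RL on this blended signal would trade off the two objectives, but it would not expose which objective drives each action, which is the transparency the policy-phase orchestrator provides.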

The question now is: which of these aggregation procedures is the most useful? The reason we aggregate at the policy stage is to gain interpretability. Using an orchestrator to pick a policy at each point of time lets us identify which policy is being played at each point of time, and also the reason it is chosen (when the orchestrator is interpretable, as it is in our case). More details on this are given in Section 6.

5 Concretizing Our Approach

Here we describe the exact algorithms we use for each of the components of our approach.

5.1 Details of the Pure RL

For the reinforcement learning component, we use Q-learning with linear function approximation, as described in Section 3.1. For Pac-Man, some of the features we use for a state-action pair (s, a) (for the function f(s, a)) are: "whether food will be eaten", "distance of the next closest food", "whether a scared (unscared) ghost collision is possible" and "distance of the closest scared (unscared) ghost".

For the layout of Pac-Man we use (shown in Figure 1), an upper bound on the maximum score achievable in the game can be computed from the facts that there is a fixed number of Pac-Dots, each ghost can be eaten at most twice (because of the two capsules in the layout), Pac-Man can win the game only once, and every step incurs the time penalty. Over a large number of test games, our reinforcement learning algorithm (the reward-maximizing policy π_R) achieves a consistently high average game score. We mention this here so that the results in Section 6 can be seen in the appropriate light.

5.2 Details of the IRL

For inverse reinforcement learning, we use the linear IRL algorithm described in Section 3.2. For Pac-Man, observe that the original reward function depends only on the following factors: "number of Pac-Dots eaten in this step", "whether Pac-Man has won in this step", "number of ghosts eaten in this step" and "whether Pac-Man has lost in this step". For our IRL algorithm, we use exactly these as the features φ. As a sanity check, when IRL is run on environment-reward-optimal trajectories (generated from our policy π_R), we recover something very similar to the original reward function R: the learned feature weights, when scaled, are almost equivalent to the true weights in terms of their optimal policies.

Ideally, we would prefer to have the constrained demonstrations given to us by humans, but for our domain of Pac-Man we generate them synthetically as follows. We learn a policy π_demo by training it on the game with the original reward function augmented with a very high negative reward for eating ghosts.³ This causes π_demo to play well in the game while avoiding eating ghosts as much as possible. Then, to emulate erroneous human behavior, we follow π_demo with a small error probability: at every time step, with this probability we pick a completely random action, and otherwise we follow π_demo. This gives us our constrained demonstrations, on which we perform inverse reinforcement learning to learn the rewards capturing the constraints. The learned reward weights make it evident that the demonstrations have taught the agent that eating ghosts strongly violates the favorable constraints. We scale these weights to have a similar norm as the original reward weights, and denote the corresponding reward function by R_C.

³We do this only for generating demonstrations. In real domains we would not have access to the exact constraints that we want satisfied, and hence a policy like π_demo could not be learned; learning from human demonstrations would then be essential.

Finally, running reinforcement learning on these rewards R_C gives us our constraint policy π_C. Over a large number of test games, π_C achieves a reasonable average game score while eating very few ghosts on average. Note that, when eating ghosts is prohibited in the domain, the upper bound on the maximum achievable score is correspondingly lower.

5.3 Orchestration with Contextual Bandits

We use contextual bandits to pick one of the two policies (π_C and π_R) to play at each point of time. These two policies act as the two arms of the bandit, and we use a modified CTS algorithm to train it. The context of the bandit is given by features of the current state s (for which we want to decide which policy to choose), i.e., x(s). For the game of Pac-Man, the state features we use for the context are: (i) a constant to represent the bias term, and (ii) the distance of Pac-Man from the closest scared ghost in s. One could use a more sophisticated context with many more features, but we use this restricted context to demonstrate a very interesting behavior (shown in Section 6).
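A hedged sketch of this two-feature context x(s) follows; the dictionary-based state encoding and the convention of returning distance 0 when no ghost is scared are our own assumptions, not details from the paper.

```python
def context(state):
    # x(s) = [bias term, distance to the closest scared ghost in s]
    scared_dists = [g["dist"] for g in state["ghosts"] if g["scared"]]
    closest = min(scared_dists) if scared_dists else 0.0
    return [1.0, float(closest)]

x = context({"ghosts": [{"dist": 3, "scared": True},
                        {"dist": 1, "scared": False}]})  # [1.0, 3.0]
```

Only scared ghosts enter the second feature, so the context changes sharply when a capsule is eaten, which is what lets the bandit separate the two regimes discussed in Section 6.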

The exact algorithm used to train the orchestrator is given in Algorithm 2. Apart from the fact that arms are policies (instead of atomic actions), the main difference from the CTS algorithm is the way rewards are fed into the bandit. For simplicity, we refer to the constraint policy π_C and the reward policy π_R as the two arms. We now go over Algorithm 2. First, all the parameters are initialized as in the CTS algorithm (Line 1). For each time-step t in the training phase (Line 3), we do the following. Pick an arm k_t according to the Thompson Sampling algorithm and the context x(s_t) (Lines 4 and 5). Play the action according to the chosen policy (Line 6). This takes us to the next state s_{t+1}. We also observe two rewards (Line 7): (i) the original reward in the environment, r(t) = R(s_t, a_t, s_{t+1}), and (ii) the constraint reward according to the rewards learned by inverse reinforcement learning, i.e., r_C(t) = R_C(s_t, a_t, s_{t+1}). Intuitively, r_C(t) can be seen as the predicted reward (or penalty) for any constraint satisfaction (or violation) in this step.

1:  Initialize: B_k = I_d, μ̂_k = 0_d, f_k = 0_d, for each arm k.
2:  Observe start state s_0.
3:  Foreach t = 0, 1, 2, … do
4:   Sample μ̃_k from N(μ̂_k, v² B_k^{-1}) for each arm k.
5:   Pick arm k_t = argmax_k x(s_t)ᵀ μ̃_k.
6:   Play the corresponding action a_t = π_{k_t}(s_t).
7:   Observe rewards r(t) and r_C(t), and the next state s_{t+1}.
8:   Define r_{k_t}(t) = λ [ r_C(t) + γ V_C(s_{t+1}) ] + (1 − λ) [ r(t) + γ V_R(s_{t+1}) ]
9:   Update B_{k_t} ← B_{k_t} + x(s_t) x(s_t)ᵀ,  f_{k_t} ← f_{k_t} + x(s_t) r_{k_t}(t),  μ̂_{k_t} = B_{k_t}^{-1} f_{k_t}
10:  End
Algorithm 2 Orchestrator Based Algorithm

To train the contextual bandit to choose arms that perform well on both metrics (environment rewards and constraints), we feed it a reward that is a linear combination of $r_R$ and $r_C$ (Line 8). Another important point to note is that $r_R$ and $r_C$ are the immediate rewards achieved on taking action $\pi_{a_t}(s_t)$ from $s_t$; they do not capture the long-term effects of this action. In particular, since we are in the sequential decision-making setting, it is important to also look at the “value” of the next state reached. Precisely for this reason, we also incorporate the value function of the next state $s_{t+1}$ according to both the reward-maximizing component and the constraint component (which encapsulate the long-term reward and constraint satisfaction achievable from $s_{t+1}$). This gives exactly Line 8, where $V_C$ is the value function of the constraint policy $\pi_C$, and $V_R$ is the value function of the reward-maximizing policy $\pi_R$. In this equation, $\lambda \in [0, 1]$ is a hyperparameter chosen by the user to decide how much to trade off environment rewards for constraint satisfaction. For example, when $\lambda$ is set to $0$, the orchestrator would always play the reward policy $\pi_R$, while for $\lambda = 1$ it would always play the constraint policy $\pi_C$. For any value of $\lambda$ in between, the orchestrator is expected to pick, at each point in time, the policy that performs well on both metrics (weighted according to $\lambda$). Finally, given the desired reward $r$ and the context $s_t$, the parameters of the bandit are updated according to the CTS algorithm (Line 9).
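Putting Lines 4–9 together, one training step of the orchestrator can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class and method names, the hyperparameter values, and the use of raw state features as the bandit context are assumptions, and the environment, the two pretrained policies, and their value functions are supplied from outside.

```python
import numpy as np


class Orchestrator:
    """Contextual Thompson Sampling over two arms: 0 = constraint policy pi_C,
    1 = reward policy pi_R. (Sketch; names and defaults are illustrative.)"""

    def __init__(self, d, lam, gamma=0.99, v=0.1):
        self.d, self.lam, self.gamma, self.v = d, lam, gamma, v
        # Per-arm CTS parameters (Line 1): B_i = I_d, f_i = 0_d, mu_hat_i = 0_d.
        self.B = [np.eye(d) for _ in range(2)]
        self.f = [np.zeros(d) for _ in range(2)]
        self.mu_hat = [np.zeros(d) for _ in range(2)]

    def pick_arm(self, s):
        # Lines 4-5: sample mu_tilde_i ~ N(mu_hat_i, v^2 B_i^{-1}), pick the best arm.
        samples = [
            np.random.multivariate_normal(
                self.mu_hat[i], self.v ** 2 * np.linalg.inv(self.B[i])
            )
            for i in range(2)
        ]
        return int(np.argmax([s @ m for m in samples]))

    def update(self, s, arm, r_C, r_R, V_C_next, V_R_next):
        # Line 8: r = lam*(r_C + gamma*V_C(s')) + (1 - lam)*(r_R + gamma*V_R(s')).
        r = self.lam * (r_C + self.gamma * V_C_next) \
            + (1 - self.lam) * (r_R + self.gamma * V_R_next)
        # Line 9: CTS posterior update for the played arm only.
        self.B[arm] += np.outer(s, s)
        self.f[arm] += r * s
        self.mu_hat[arm] = np.linalg.solve(self.B[arm], self.f[arm])
        return r
```

In a full training loop, `pick_arm` would decide which of $\pi_C$ or $\pi_R$ acts in the environment at each step, and `update` would fold the resulting combined reward back into the posterior of the played arm.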

6 Evaluation and Test

We test our approach on the Pac-Man domain given in Figure 1, and measure its performance on two metrics: (i) the total score achieved in the game (the environment reward) and (ii) the number of ghosts eaten (the constraint violation). We also vary $\lambda$ and observe how these metrics are traded off against each other. For each value of $\lambda$, the orchestrator is trained for a fixed number of games. The results are shown in Figure 3. Each point in the graph is averaged over a set of held-out test games.

Figure 3: Both performance metrics as $\lambda$ is varied. The red curve depicts the average game score achieved, and the blue curve depicts the average number of ghosts eaten.

The graph shows a very interesting pattern. When $\lambda$ is below a certain threshold, the agent eats a lot of ghosts, but above it, the agent eats almost no ghosts. In other words, there is a value of $\lambda$ that behaves as a tipping point, across which there is a drastic change in behavior. Beyond the threshold, the agent learns that eating ghosts is not worth the score it is getting, and so it avoids eating them as much as possible. On the other hand, when $\lambda$ is smaller than this threshold, it learns the reverse and eats as many ghosts as possible.
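The tipping-point behavior is intuitive given Line 8: for each arm, the combined target is linear in $\lambda$, so the preferred arm flips once $\lambda$ crosses the point where the two lines intersect. A toy illustration with made-up component values (not numbers from the experiments):

```python
# Hypothetical expected components for each arm, NOT taken from the paper:
# playing the reward policy yields (r_R + gamma*V_R) = 10 but (r_C + gamma*V_C) = -4
# (it eats ghosts); playing the constraint policy yields 6 and 0 respectively.
def combined_value(lam, c_component, r_component):
    # The bandit's target from Line 8: lam * constraint part + (1 - lam) * reward part.
    return lam * c_component + (1 - lam) * r_component


def preferred_arm(lam):
    reward_arm = combined_value(lam, c_component=-4.0, r_component=10.0)
    constraint_arm = combined_value(lam, c_component=0.0, r_component=6.0)
    return "constraint" if constraint_arm > reward_arm else "reward"


# The two lines cross where -4*lam + 10*(1-lam) = 6*(1-lam), i.e. lam = 0.5:
print(preferred_arm(0.3))  # reward
print(preferred_arm(0.7))  # constraint
```

Below the crossing point the reward arm dominates the combined objective, above it the constraint arm does, which is exactly the drastic behavior change seen in Figure 3.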

Policy-switching. As mentioned before, one of the most important properties of our approach is interpretability: we know exactly which policy is being played at each point in time. For moderate values of $\lambda$, the orchestrator learns a very interesting policy-switching technique: whenever at least one of the ghosts in the domain is scared, it plays $\pi_C$, but if no ghosts are scared, it plays $\pi_R$. In other words, it starts off the game by playing $\pi_R$ until a capsule is eaten. As soon as the first capsule is eaten, it switches to $\pi_C$ and plays it until the scared timer runs out. Then it switches back to $\pi_R$ until another capsule is eaten, and so on. (A video of our agent demonstrating this behavior is uploaded in the Supplementary Material; the agent playing the game in this video was trained with an intermediate value of $\lambda$.) The agent has learned a very intuitive behavior: when there is no scared ghost in the domain, there is no possibility of violating constraints, and hence the agent is as greedy as possible (i.e., plays $\pi_R$); but when there are scared ghosts, it is better to be safe (i.e., play $\pi_C$).
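The switching behavior described above amounts to a simple decision rule. The sketch below is a hypothetical re-statement of the learned behavior, not the paper's implementation; the scared-timer encoding of the game state is an assumption:

```python
def choose_policy(scared_timers):
    """Policy-switching rule the orchestrator converges to for moderate lambda:
    play the constraint policy pi_C whenever any ghost is scared (constraint
    violations are possible), otherwise play the greedy reward policy pi_R.

    scared_timers: remaining scared time per ghost, e.g. [0, 12]
    (a hypothetical encoding of the game state).
    """
    return "pi_C" if any(t > 0 for t in scared_timers) else "pi_R"
```

For example, `choose_policy([0, 0])` returns `"pi_R"` (no capsule eaten yet, be greedy), while `choose_policy([0, 12])` returns `"pi_C"` (a ghost is scared, be safe).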

7 Discussion

In this paper, we have considered the problem of autonomous agents learning policies that are constrained by implicitly-specified norms and values while still optimizing their policies with respect to environmental rewards. We have taken an approach that combines IRL to determine constraint-satisfying policies from demonstrations, RL to determine reward-maximizing policies, and a contextual bandit to orchestrate between these policies in a transparent way. The proposed architecture and approach are novel. They also require a novel technical contribution to the contextual bandit algorithm, because the arms are policies rather than atomic actions, which in turn requires the rewards to account for sequential decision making. We have demonstrated the algorithm on the Pac-Man video game and found that it exhibits interesting switching behavior among the policies.

We feel that the contribution herein is only the starting point for research in this direction. We have identified several avenues for future research, especially with regard to IRL. We can pursue deep IRL to learn constraints without hand-crafted features, develop IRL that is robust to noise in the demonstrations, and research IRL algorithms that learn from just one or two demonstrations (perhaps in concert with knowledge and reasoning). In real-world settings, demonstrations will likely be given by different users with different versions of abiding behavior; we would like to exploit the partition of the set of traces by user to improve the policy or policies learned via IRL. Additionally, the current orchestrator selects a single policy at each time step, but more sophisticated policy aggregation techniques for combining or mixing policies are possible. Lastly, it would be interesting to investigate whether the policy aggregation rule ($\lambda$ in the current proposal) can be learned from demonstrations.


Acknowledgments

We would like to thank Gerald Tesauro and Aleksandra Mojsilovic for their helpful feedback and comments on this project.


References

  • [Abbeel and Ng2004] Abbeel, P., and Ng, A. Y. 2004. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the 21st International Conference on Machine Learning (ICML).
  • [Agrawal and Goyal2013] Agrawal, S., and Goyal, N. 2013. Thompson sampling for contextual bandits with linear payoffs. In ICML (3), 127–135.
  • [Allen, Smit, and Wallach2005] Allen, C.; Smit, I.; and Wallach, W. 2005. Artificial morality: Top-down, bottom-up, and hybrid approaches. Ethics and Information Technology 7(3):149–155.
  • [Amodei et al.2016] Amodei, D.; Olah, C.; Steinhardt, J.; Christiano, P.; Schulman, J.; and Mané, D. 2016. Concrete problems in AI safety. arXiv preprint arXiv:1606.06565.
  • [Anderson and Anderson2011] Anderson, M., and Anderson, S. L. 2011. Machine Ethics. Cambridge University Press.
  • [Arnold et al.2017] Arnold, T.; Kasenberg, D.; and Scheutz, M. 2017. Value alignment or misalignment - what will keep systems accountable? In AI, Ethics, and Society, Papers from the 2017 AAAI Workshop.
  • [Balakrishnan et al.2018a] Balakrishnan, A.; Bouneffouf, D.; Mattei, N.; and Rossi, F. 2018a. Using contextual bandits with behavioral constraints for constrained online movie recommendation. In Proc. IJCAI.
  • [Balakrishnan et al.2018b] Balakrishnan, A.; Bouneffouf, D.; Mattei, N.; and Rossi, F. 2018b. Incorporating behavioral constraints in online AI systems. arXiv preprint arXiv:1809.05720.
  • [Bertsekas and Tsitsiklis1996] Bertsekas, D., and Tsitsiklis, J. 1996. Neuro-dynamic programming. Athena Scientific.
  • [Bonnefon, Shariff, and Rahwan2016] Bonnefon, J.-F.; Shariff, A.; and Rahwan, I. 2016. The social dilemma of autonomous vehicles. Science 352(6293):1573–1576.
  • [Langford and Zhang2008] Langford, J., and Zhang, T. 2008. The epoch-greedy algorithm for contextual multi-armed bandits. In Proc. 21st NIPS.
  • [Laroche and Feraud2017] Laroche, R., and Feraud, R. 2017. Reinforcement learning algorithm selection. In Proceedings of the 6th International Conference on Learning Representations (ICLR).
  • [Leike et al.2017] Leike, J.; Martic, M.; Krakovna, V.; Ortega, P.; Everitt, T.; Lefrancq, A.; Orseau, L.; and Legg, S. 2017. AI safety gridworlds. arXiv preprint arXiv:1711.09883.
  • [Liu et al.2018] Liu, G.; Schulte, O.; Zhu, W.; and Li, Q. 2018. Toward interpretable deep reinforcement learning with linear model u-trees. CoRR abs/1807.05887.
  • [Loreggia et al.2018a] Loreggia, A.; Mattei, N.; Rossi, F.; and Venable, K. B. 2018a. Preferences and ethical principles in decision making. In Proceedings of the 1st AAAI/ACM Conference on AI, Ethics, and Society (AIES).
  • [Loreggia et al.2018b] Loreggia, A.; Mattei, N.; Rossi, F.; and Venable, K. B. 2018b. Value alignment via tractable preference distance. In Yampolskiy, R. V., ed., Artificial Intelligence Safety and Security. CRC Press. chapter 16.
  • [Luss and Petrik2016] Luss, R., and Petrik, M. 2016. Interpretable policies for dynamic product recommendations. In Proc. Conf. Uncertainty Artif. Intell.,  74.
  • [Ng and Russell2000] Ng, A. Y., and Russell, S. J. 2000. Algorithms for inverse reinforcement learning. In Proceedings of the Seventeenth International Conference on Machine Learning, ICML ’00, 663–670. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.
  • [Noothigattu et al.2017] Noothigattu, R.; Gaikwad, S.; Awad, E.; Dsouza, S.; Rahwan, I.; Ravikumar, P.; and Procaccia, A. D. 2017. A voting-based system for ethical decision making. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI).
  • [Russell, Dewey, and Tegmark2015] Russell, S.; Dewey, D.; and Tegmark, M. 2015. Research priorities for robust and beneficial artificial intelligence. AI Magazine 36(4):105–114.
  • [Sen1974] Sen, A. 1974. Choice, ordering and morality. In Körner, S., ed., Practical Reason. Oxford: Blackwell.
  • [Simonite2018] Simonite, T. 2018. When bots teach themselves to cheat. Wired.
  • [Sutton and Barto1998] Sutton, R. S., and Barto, A. G. 1998. Introduction to Reinforcement Learning. Cambridge, MA, USA: MIT Press, 1st edition.
  • [Theodorou, Wortham, and Bryson2016] Theodorou, A.; Wortham, R. H.; and Bryson, J. J. 2016. Why is my robot behaving like that? designing transparency for real time inspection of autonomous robots. In AISB Workshop on Principles of Robotics. University of Bath.
  • [Ventura and Gates2018] Ventura, D., and Gates, D. 2018. Ethics as aesthetic: A computational creativity approach to ethical behavior. In Proc. Int. Conf. Comput. Creativity, 185–191.
  • [Verma et al.2018] Verma, A.; Murali, V.; Singh, R.; Kohli, P.; and Chaudhuri, S. 2018. Programmatically interpretable reinforcement learning. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, 5052–5061.
  • [Wallach and Allen2008] Wallach, W., and Allen, C. 2008. Moral machines: Teaching robots right from wrong. Oxford University Press.
  • [Wu and Lin2017] Wu, Y.-H., and Lin, S.-D. 2017. A low-cost ethics shaping approach for designing reinforcement learning agents. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI).
  • [Yu et al.2018] Yu, H.; Shen, Z.; Miao, C.; Leung, C.; Lesser, V. R.; and Yang, Q. 2018. Building ethics into artificial intelligence. In Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI), 5527–5533.