Reward Advancement: Transforming Policy under Maximum Causal Entropy Principle

07/11/2019 · by Guojun Wu, et al.

Many real-world human behaviors can be characterized as sequential decision making processes, such as urban travelers' choices of transport modes and routes (Wu et al. 2017). Unlike choices controlled by machines, which in general follow perfect rationality and adopt the policy with the highest reward, studies have revealed that human agents make sub-optimal decisions under bounded rationality (Tao, Rohde, and Corcoran 2014). Such behaviors can be modeled using the maximum causal entropy (MCE) principle (Ziebart 2010). In this paper, we define and investigate a general reward transformation problem (namely, reward advancement): recovering the range of additional reward functions that transform the agent's policy from its original policy to a predefined target policy under the MCE principle. We show that, given an MDP and a target policy, there are infinitely many additional reward functions that can achieve the desired policy transformation. Moreover, we propose an algorithm to further extract the additional rewards with minimum "cost" of implementing the policy transformation.


1 Introduction

In sequential decision making problems (Ziebart et al. (2010)), human agents complete tasks by evaluating the rewards received over the states traversed and actions employed. Each human agent may have her own unique reward function, which governs how much reward she receives over states and actions (Wong et al. (2015); Zhang (2006)). For example, urban travelers may weigh travel cost against travel time differently when deciding which transport mode, route, and transfer stations to take (Wu et al. (2017)). Uber drivers may prefer different urban regions to look for passengers, depending on their familiarity with the regions, distance to their home locations, etc. (Wu et al. (2018)). To quantify and measure the unique reward function each human agent possesses, maximum causal entropy inverse reinforcement learning (IRL) (Ziebart et al. (2008)) has been proposed to find the reward function, and the corresponding policy, that best represents demonstrated behaviors from the human agent with the highest causal entropy, subject to the constraint of matching feature expectations to the distribution of demonstrated behaviors.

Going beyond the human agent reward learning problem, in this paper we move one step further and investigate how we can influence and change an agent's policy (i.e., decisions) from its original policy to a target policy with minimum cost, by purposely updating and advancing the rewards received by the human agent.

Figure 1 illustrates this problem with a concrete example in the public transit setting. We assume passengers want to travel from an origin station to a destination station, and there are two different routes: the first takes a single bus line directly to the destination, while the other takes a bus to a transfer station and then the subway to the destination. The two routes carry different rewards, and under the MCE principle each route is chosen with a probability given by a softmax over these rewards, so the higher-reward route attracts most of the passengers. If we want to balance the passenger flow between the two routes, we need to provide an additional reward on the less attractive route, which leads to a balanced passenger flow. However, there are multiple options for providing such additional rewards. For example, in Figure 1(c) the additional reward is provided entirely on one leg of the second route, but we could instead split it between the bus leg and the subway leg. The question, then, is how to compute the optimal pattern that minimizes the total additional reward we provide.
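To make the two-route example concrete, the following minimal sketch uses hypothetical reward values (Figure 1's actual numbers are not reproduced here) to show how MCE route probabilities follow a softmax over route rewards and how an additional reward on the less attractive route balances the flow.

```python
import math

# Hypothetical rewards for the two routes (illustrative only; not the
# values used in Figure 1 of the paper).
r_bus_only = 2.0          # route 1: direct bus
r_bus_then_subway = 1.0   # route 2: bus, then transfer to subway

def mce_route_probs(r1, r2):
    """Route-choice probabilities under the MCE (softmax) principle."""
    z = math.exp(r1) + math.exp(r2)
    return math.exp(r1) / z, math.exp(r2) / z

p1, p2 = mce_route_probs(r_bus_only, r_bus_then_subway)
print(f"original split: {p1:.2f} vs {p2:.2f}")   # route 1 dominates

# To balance the flow (50/50), the additional reward on route 2 must
# close the reward gap; here that is simply r1 - r2 = 1.0.
extra = r_bus_only - r_bus_then_subway
p1, p2 = mce_route_probs(r_bus_only, r_bus_then_subway + extra)
print(f"balanced split: {p1:.2f} vs {p2:.2f}")   # 0.50 vs 0.50

# The same extra reward can be split across the two legs of route 2
# (e.g., part on the bus leg, part on the subway leg); any split whose
# per-trajectory sum equals `extra` yields the same balanced policy,
# which is why a min-cost criterion is needed to pick among the options.
```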

This problem of finding additional rewards that transform a human agent's policy with minimum cost is of crucial practical importance. For example, passengers of urban transit systems like buses and subways always have their own policies while traveling, e.g., which bus or subway line to take. However, due to the lack of knowledge about other passengers' decisions, these individual policies usually cause an unbalanced distribution of passengers both spatially and temporally. To mitigate this problem, the government can design a globally optimal policy for each passenger based on global information. However, simply asking passengers to follow that policy is hardly possible; for example, if we tell someone to start her trip to work one hour before her normal schedule, there is little chance she will follow. To transform agents' policies to our specifically designed policy, we need to provide additional rewards to those agents, such as a discounted fare if she starts earlier (Zheng et al. (2014); Lachapelle et al. (2011)). How to minimize the cost of transforming agents' policies is therefore critical.

In the literature, reward transformations (Wiewiora et al. (2003); Ng et al. (1999); Konidaris et al. (2012); Devlin and Kudenko (2011)) have been studied extensively, primarily focusing on transforming the reward while preserving the same policy (formally termed "reward shaping"). Differing from reward shaping, our design goal is more general: transforming rewards so that the agent behaves according to a target policy, which may or may not be the agent's original policy. We refer to this problem as a "reward advancement" problem.

Figure 1: An Example of Reward Advancement

In this paper, we make the first attempt to tackle the reward advancement problem. Given a Markov Decision Process and a target policy, we investigate the range of additional rewards that can transform the agent's policy to the predefined target policy under the MCE principle. Our main contributions are summarized as follows.

  • We are the first to define and study the reward advancement problem, namely, finding the updating rewards that transform a human agent's behaving policy to a predefined target policy. We provide a closed-form solution to this problem. The solution indicates that there exist infinitely many such additional rewards that can achieve the desired policy transformation.

  • Moreover, we define and investigate the min-cost reward advancement problem, which aims to find the additional rewards that transform the agent's policy to the target policy while minimizing the cost of the policy transformation.

  • We also demonstrate the correctness and accuracy of our reward advancement algorithm using both synthetic data and a large-scale (6 months) passenger-level public transit dataset from Shenzhen, China.

2 Preliminaries

In this section, we review the basics of finite Markov Decision Processes and the policy under the Maximum Causal Entropy (MCE) principle.

2.1 Markov Decision Process (MDP)

An MDP is represented as a tuple $M = (S, A, P, \gamma, D, R)$, where $S$ is a finite set of states and $A$ is a set of actions. $P$ is the probabilistic transition function, with $P(s'|s,a)$ as the probability of arriving at state $s'$ by executing action $a$ at state $s$; $\gamma$ is the discounting factor (without loss of generality, we assume $\gamma = 1$ in this paper; our results can be extended to the case with $\gamma < 1$); $D$ is the initial distribution; and $R: S \times A \to \mathbb{R}$ is the reward function. A randomized, memory-less policy $\pi$ is a function that specifies a probability distribution over the action to be executed in each state, defined as $\pi(a|s) = \Pr(a_t = a \mid s_t = s)$. The planning problem in an MDP aims to find a policy $\pi^*$, such that the expected total reward is maximized, namely,

$$\pi^* = \arg\max_{\pi \in \Pi} \; \mathbb{E}_\pi\Big[\sum_{t \in T} R(s_t, a_t)\Big], \qquad (1)$$

where $s_t$ and $a_t$ are random variables for the state and action at time step $t$, and $T$ is the set of time horizons. The initial state follows the initial distribution $D$. Here, $\Pi$ is the memory-less policy space.
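As a reference for the notation above, the following minimal sketch (our own NumPy encoding, not the paper's code) represents a small tabular MDP and evaluates the expected total reward of a memory-less policy over a finite horizon, i.e., the objective in eq. (1).

```python
import numpy as np

def expected_total_reward(P, R, D, pi, T):
    """Expected total reward E[sum_t R(s_t, a_t)] of policy pi over horizon T.

    P: (S, A, S) transition probabilities, R: (S, A) rewards,
    D: (S,) initial state distribution, pi: (S, A) action probabilities.
    """
    d = D.copy()                           # state distribution at time t
    total = 0.0
    for _ in range(T):
        sa = d[:, None] * pi               # joint state-action distribution
        total += np.sum(sa * R)            # expected reward collected at t
        d = np.einsum("sa,sax->x", sa, P)  # propagate to the next state
    return total

# Tiny illustrative MDP: 2 states, 2 actions.
P = np.zeros((2, 2, 2)); P[:, 0, 0] = 1.0; P[:, 1, 1] = 1.0
R = np.array([[0.0, 1.0], [1.0, 0.0]])
D = np.array([1.0, 0.0])
pi = np.full((2, 2), 0.5)                  # uniform random policy
print(expected_total_reward(P, R, D, pi, T=5))
```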

2.2 Policy under Maximum Causal Entropy Principle

The optimal policy outlined in eq. (1) achieves the highest expected reward for the agent. It is widely used for machine (i.e., robot) agent design (Ng et al. (2000); Levine and Abbeel (2014); Li and Todorov (2004)), where perfect rationality can be safely assumed (Li and Todorov (2004)). However, many studies have revealed that decisions made by human agents (even experts) are probabilistic and sub-optimal (Kuefler et al. (2017); Wu et al. (2017); Tao et al. (2014)). These phenomena indicate that human agents make decisions with bounded rationality (Wu et al. (2017); Tao et al. (2014)), where actions are chosen with probabilities corresponding to the expected future rewards they lead to. As a result, various inverse reinforcement learning algorithms have been proposed to recover the reward function, such that the distribution of action and state sequences under a near-optimal policy matches the demonstrated human behaviors.

One well-known solution to the inverse reinforcement learning problem is Maximum Causal Entropy Inverse Reinforcement Learning (Ziebart et al. (2010)). It proposes to find the policy that best represents demonstrated behaviors with highest causal entropy, which is summarized as follows.

Conditional Entropy measures the uncertainty of one distribution $Y$ given side information $X$, i.e., $H(Y|X) = \mathbb{E}_{Y,X}[-\log P(Y|X)]$.

Causal Entropy measures the uncertainty present in the causally conditioned distribution of a sequence of variables $A_{1:T}$, given the preceding partial sequences $A_{1:t-1}$ and $S_{1:t}$, with $H(A_{1:T} \,\|\, S_{1:T}) = \sum_{t} H(A_t \mid S_{1:t}, A_{1:t-1})$. It can be interpreted as the expected number of bits needed to encode the action sequence given the previous variables and the sequentially revealed side information $S_{1:t}$, which has been revealed at each point in time, excluding the unrevealed future side information $S_{t+1:T}$. When the sequence is Markovian, the causal entropy can be written as $\sum_t H(A_t \mid S_t)$. As a result, the causal entropy of an MDP is characterized as $\sum_{s,a} \rho_\pi(s,a)\,(-\log \pi(a|s))$, with $A_{1:T}$ and $S_{1:T}$ as the action sequence and the state sequence (side information), respectively, and with $\rho_\pi(s,a)$ representing the expected visitation frequency of the state-action pair $(s,a)$ when a trajectory is generated under policy $\pi$.
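For a Markovian policy, the causal entropy above is simply a visitation-weighted sum of negative log action probabilities; a minimal tabular sketch (our own notation, not the paper's code) is:

```python
import numpy as np

def causal_entropy(rho, pi, eps=1e-12):
    """Causal entropy of a Markovian policy.

    rho: (S, A) expected visitation frequencies of state-action pairs
         under the policy; pi: (S, A) action probabilities pi(a|s).
    Returns sum_{s,a} rho(s,a) * (-log pi(a|s)).
    """
    return float(np.sum(rho * -np.log(pi + eps)))

# Illustrative values (hypothetical): 2 states, 2 actions.
pi = np.array([[0.5, 0.5], [0.9, 0.1]])
rho = np.array([[1.0, 1.0], [0.5, 0.5]])
print(causal_entropy(rho, pi))
```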

The policy under the maximum causal entropy principle (i.e., the MCE policy) best represents the demonstrated behaviors with the highest causal entropy, subject to the constraint of matching reward expectations to the distribution of demonstrated behaviors. Denoting $Q(s,a)$ as the Q-function on state-action pair $(s,a)$, indicating the expected reward to be received starting from $(s,a)$, the MCE policy can be formulated as the following maximum causal entropy problem:

Problem 1: Maximum Causal Entropy Policy:

$$\max_{\pi} \;\; \sum_{s,a} \rho_\pi(s,a)\,\big(-\log \pi(a|s)\big) \qquad (2)$$
$$\text{s.t.} \quad \mathbb{E}_\pi\Big[\sum_{t \in T} R(s_t, a_t)\Big] = \tilde{R}, \qquad (3)$$
$$\sum_{a} \pi(a|s) = 1, \;\; \pi(a|s) \ge 0, \quad \forall s \in S,\, a \in A, \qquad (4)$$

where $\tilde{R} = \frac{1}{|D|}\sum_{\zeta \in D} R(\zeta)$ is the expected empirical reward extracted from the behavior data, $D$ is a set of demonstrated trajectories from a human agent, $|D|$ denotes the size of the trajectory set, and $R(\zeta)$ is the reward received on trajectory $\zeta$.

Theorem 1.

The MCE policy characterized in Problem 1, eqs. (2)–(4), follows the softmax format, $\pi(a|s) = \frac{\exp(Q(s,a))}{\sum_{a'} \exp(Q(s,a'))}$.

Proof.

This can be proven by introducing Lagrangian multipliers for the constraints and setting the derivative of the Lagrangian function to zero. See more details in the supplementary material. ∎
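As an illustration of Theorem 1, the sketch below computes the softmax MCE policy by soft (log-sum-exp) value iteration on a tabular MDP. It assumes an undiscounted fixed-horizon backup with zero terminal value and returns the first-step policy as a stationary approximation; the function name and these simplifications are ours, not the paper's.

```python
import numpy as np
from scipy.special import logsumexp

def mce_policy(P, R, horizon):
    """Soft value iteration: returns Q(s, a) and the softmax policy pi(a|s).

    P: (S, A, S) transition probabilities, R: (S, A) rewards; the backup is
    run for a fixed horizon starting from zero terminal value.
    """
    V = np.zeros(R.shape[0])                  # soft value at the horizon
    for _ in range(horizon):
        Q = R + np.einsum("sax,x->sa", P, V)  # soft Bellman backup
        V = logsumexp(Q, axis=1)              # V(s) = log sum_a exp Q(s, a)
    pi = np.exp(Q - V[:, None])               # Theorem 1: softmax of Q
    return Q, pi
```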

3 Reward Advancement

The inverse reinforcement learning problem (Ng et al. (1999); Ziebart et al. (2010, 2008); Finn et al. (2016)) aims to inversely learn an agent's reward function from their demonstrated trajectories, namely, inferring how the agent makes decisions. In this work, we move one step further and investigate how we can influence and change the agent's policy (i.e., decision making) from the original policy observed in the demonstrated trajectories to a target policy, by purposely updating and advancing rewards within the MDP. Reward transformations (Ng et al. (1999); Wiewiora (2003)) have been studied in the literature, primarily focusing on transforming the rewards while preserving the same policy (formally termed "reward shaping"). Differing from reward shaping, our design goal is more general: transforming rewards so that the agent behaves according to a predefined target policy, which may or may not be the agent's current policy. This problem is referred to as a "reward advancement" problem, and we formally define it as follows.

Reward Advancement Problem. Given an MDP $M = (S, A, P, \gamma, D, R)$ with the agent's MCE policy $\pi$, we aim to find additional rewards $r^+(s,a)$ to be added to the original reward $R(s,a)$, such that the agent's MCE policy under the updated MDP $M^+ = (S, A, P, \gamma, D, R + r^+)$ follows a predefined target policy $\pi^+$.

For MDP $M$, the Q-function under the MCE principle can be expressed recursively as $Q(s,a) = R(s,a) + \mathbb{E}_{s' \sim P(\cdot|s,a)}[V(s')]$, with the soft value $V(s) = \log \sum_{a} \exp(Q(s,a))$. Then, the Q-function with the additional reward is $Q'(s,a) = Q(s,a) + Q^+(s,a)$, where $Q'$ is the Q-function of the updated MDP $M^+$, $Q^+$ is the additional Q-function induced by the additional reward $r^+$, and the updated reward is $R(s,a) + r^+(s,a)$.

As a result, transforming from the original MCE policy $\pi$, the new MCE policy $\pi'$ is a function of the additional reward $r^+$, or equivalently of the additional Q-function $Q^+$, i.e., $\pi' = f(Q^+)$. Given a predefined target policy $\pi^+$, finding the right $Q^+$ such that $\pi'(a|s) = \pi^+(a|s)$ for any $s$ and $a$ solves the reward advancement problem. The following Theorem 2 introduces the complete solution set to this problem.

Theorem 2.

Given an MDP $M = (S, A, P, \gamma, D, R)$, the sufficient and necessary condition to transform its MCE policy to a predefined policy $\pi^+$ is to provide an additional Q-function $Q^+$ such that

$$Q^+(s,a) = \log \pi^+(a|s) - Q(s,a) + C(s), \qquad (5)$$

where $C(s)$ is any real-valued function defined on states. Such an additional Q-function is called an "advancement function".

Proof.

(sketch) If we set $C(s)$ to be the soft (log-sum-exp) value of the updated Q-function over all actions at state $s$, then the updated MCE policy can be calculated directly from the softmax form in Theorem 1 and equals $\pi^+$. See more details in the supplementary material. ∎

Theorem 2 indicates that there are infinitely many advancement strategies, i.e., choices of $Q^+$, that can transform an original MCE policy to a given target policy $\pi^+$. However, different advancement strategies may lead to different costs when the additional rewards are implemented in reality. For example, in a ride-hailing service, additional rewards provided to Uber drivers could take the form of monetary bonuses; in urban public transportation systems, additional rewards to passengers could take the form of ride discounts. More additional reward means more implementation cost. Besides, without any lower bound, the advancement function can be arbitrarily low, and in turn the additional rewards inferred via the Bellman equation can be arbitrarily small as well; for public transit this would be equivalent to raising the fare to an extremely large value, which is not feasible in real-world scenarios. So, we introduce and solve the reward advancement problem with minimum cost as the objective in the following section.
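To illustrate Theorem 2, here is a minimal sketch under our reconstruction of eq. (5) (i.e., $Q^+(s,a) = \log\pi^+(a|s) - Q(s,a) + C(s)$ with $C$ an arbitrary state function). It verifies numerically that adding such a $Q^+$ to the original $Q$ yields the target softmax policy regardless of the choice of $C$, which is exactly why the solution set is infinite.

```python
import numpy as np
from scipy.special import logsumexp

def advancement_function(Q, pi_target, C):
    """Additional Q-function that transforms the MCE policy to pi_target.

    Q: (S, A) original soft Q-values, pi_target: (S, A) target policy,
    C: (S,) arbitrary real-valued function on states (free parameter).
    """
    return np.log(pi_target) - Q + C[:, None]

# Sanity check: the softmax of Q + Q_plus equals pi_target for any C.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 2))
pi_target = rng.dirichlet(np.ones(2), size=3)
for C in (np.zeros(3), rng.normal(size=3)):
    Q_new = Q + advancement_function(Q, pi_target, C)
    pi_new = np.exp(Q_new - logsumexp(Q_new, axis=1, keepdims=True))
    assert np.allclose(pi_new, pi_target)
```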

4 Min-Cost Reward Advancement

Now, we investigate how to identify additional rewards that transform the agent to the target MCE policy $\pi^+$ while guaranteeing minimum "implementation cost", namely, the min-cost reward advancement problem. In many real-world cases, we can only manipulate rewards by providing additional features, e.g., changing a passenger's inherent reward by providing monetary incentives. So we take the approach of advancing the agent's reward by providing additional features. For simplicity, we consider a reward function that is linear in features, i.e., $R(s,a) = \theta^\top f(s,a)$, where $f(s,a)$ is the feature vector. Then, the additional reward can be defined as $r^+(s,a) = \theta^\top \Delta f(s,a)$, where $\Delta f(s,a)$ is the additional feature vector we provide to advance the agent's reward. We can then define the "implementation cost" as the cost of providing the additional features, given by $c(\Delta f) = \sum_i c_i \Delta f_i$, where $c$ is the cost vector.

Before diving into details, we make the assumption that no feature simultaneously decreases our cost and increases the agent's reward, where $\theta_i$ and $c_i$ are the $i$-th entries of $\theta$ and $c$: in real-world applications, if there were a feature that makes both sides (e.g., passengers and operators) happy, i.e., reduces our cost and increases the passengers' reward, we ought to provide it as much as possible already. Based on this assumption, we can look at how to assign additional reward to different features efficiently. The constraints on each feature can be denoted as

$$f_{lb,i} \le \Delta f_i \le f_{ub,i}, \qquad (6)$$

where $f_{lb,i}$ and $f_{ub,i}$ can be any real values. Of course, if $f_{lb,i} > f_{ub,i}$, no valid solution exists.

Based on these constraints, we can define whether a given additional reward is achievable:

Definition 1.

For a given additional reward $r^+$, if $r_{lb} \le r^+ \le r_{ub}$, we call the additional reward achievable.

It is obvious that $r_{lb}$ and $r_{ub}$ are the lower and upper bounds of the additional reward that can be provided by altering features without violating any constraints; i.e., a target policy that requires additional rewards exceeding those bounds is not achievable. For example, we cannot convert all private car owners into bus riders: this is theoretically possible if we awarded everyone, say, $10,000 per bus trip, but that far exceeds the government's budget.

Theorem 3.

Given an achievable additional reward $r^+$, the minimum cost of providing $r^+$ is obtained by greedily assigning additional features in descending order of their cost-efficiency $\theta_i / c_i$: starting from the lower bound $r_{lb}$ of achievable additional reward, each feature $\sigma(j)$ is increased up to its upper bound $f_{ub,\sigma(j)}$ before the next feature is used, until $r^+$ is reached. Here $r_{lb}$ is the lower bound of achievable additional reward, $f_{ub,i}$ is the upper bound of the additional amount of feature $i$, and $\sigma$ is the index of the feature list sorted in descending order of $\theta_i / c_i$.

Proof.

(sketch) We first sort the features by their cost-efficiency, i.e., $\theta_i / c_i$. Then, starting from the minimum achievable value of the additional reward, which is $r_{lb}$, we pick features to provide additional reward in order of their cost-efficiency; this greedy method yields the most cost-efficient assignment. ∎
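A minimal sketch of the greedy assignment described in the proof of Theorem 3, under the assumption that every feature i contributes theta[i] > 0 reward per unit at cost c[i] > 0 per unit, capped at f_max[i]; the variable names and the exact formulation are ours, not the paper's.

```python
def min_cost_assignment(r_plus, theta, c, f_max):
    """Greedily assign additional features to realize additional reward r_plus.

    theta[i]: reward gained per unit of feature i (assumed > 0),
    c[i]:     cost paid per unit of feature i (assumed > 0),
    f_max[i]: maximum amount of feature i that can be provided.
    Returns (assignment, total_cost), or None if r_plus is not achievable.
    """
    # Sort features by cost-efficiency theta/c, most efficient first.
    order = sorted(range(len(theta)), key=lambda i: theta[i] / c[i], reverse=True)
    remaining = r_plus
    delta_f = [0.0] * len(theta)
    cost = 0.0
    for i in order:
        if remaining <= 0:
            break
        amount = min(f_max[i], remaining / theta[i])  # cap by the feature bound
        delta_f[i] = amount
        remaining -= theta[i] * amount
        cost += c[i] * amount
    if remaining > 1e-9:
        return None          # r_plus exceeds the achievable upper bound
    return delta_f, cost

# Example: reach +1.0 additional reward with two features.
print(min_cost_assignment(1.0, theta=[0.5, 1.0], c=[1.0, 4.0], f_max=[1.5, 2.0]))
```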

So far, we have found a way to assign a given additional reward $r^+$ to its corresponding features $\Delta f$. We can now answer the question: what is the additional reward with minimum cost?

Proposition 1.

If $r^+ \le r'^+$, then $c^*(r^+) \le c^*(r'^+)$, where $c^*(\cdot)$ denotes the minimum cost of providing an additional reward.

We use Proposition 1 to show that min-cost reward advancement is in fact a min-reward advancement problem with upper and lower bound constraints. The constraints on the additional reward are $r_{lb}(s,a) \le r^+(s,a) \le r_{ub}(s,a)$, where $r_{ub}$ and $r_{lb}$ are the upper and lower bounds of the additional reward induced by the constraints on $\Delta f$. So, the min-cost reward advancement problem can be formulated as a two-stage problem: first, learn the minimum additional rewards $r^+(s,a)$ that we should provide to transform the agent's policy to $\pi^+$ (the Min-Reward Stage); then, assign them to features based on eq. (13) (the Assignment Stage).

Min-Reward Stage. The min-reward reward advancement problem can be formulated as follows.

Problem 2: Min-Reward Reward Advancement

$$\min_{r^+} \;\; \mathbb{E}_{\pi^+}\Big[\sum_{t \in T} r^+(s_t, a_t)\Big] \qquad (7)$$
$$\text{s.t.} \quad Q(s,a) + Q^+(s,a) = R(s,a) + r^+(s,a) + \mathbb{E}_{s' \sim P(\cdot|s,a)}\big[C(s')\big], \quad \forall s, a, \qquad (8)$$
$$Q^+(s,a) = \log \pi^+(a|s) - Q(s,a) + C(s), \quad \forall s, a, \qquad (9)$$
$$r_{lb}(s,a) \le r^+(s,a) \le r_{ub}(s,a), \quad \forall s, a, \qquad (10)$$

where $Q^+$ is the additional Q-function, $r^+$ is the additional reward, and $C$ is the state function from Theorem 2. The objective function, eq. (7), minimizes the additional reward we should provide, which is equivalent to minimizing the cost of providing those rewards according to Proposition 1. Constraint eq. (8) reflects the relationship between the additional Q-function $Q^+$ and the additional reward $r^+$ (the soft Bellman equation of the updated MDP, whose soft value equals $C$), and eq. (9) guarantees that the target policy $\pi^+$ is achieved after providing the additional reward $r^+$. Since we use the MCE policy assumption throughout this paper, we adopt the result of Theorem 2 here; one may use other randomized policy assumptions and the problem would still have a similar solution. Finally, the additional reward constraints in eq. (10) represent the upper and lower bounds on the additional reward we can provide to the agent.

Theorem 4.

If we define $D(s,a) = \log \pi^+(a|s) - R(s,a)$, then the solution to Problem 2, the Min-Reward Reward Advancement Problem, can be written as

$$r^+(s,a) = D(s,a) + C_{lb}(s) - \mathbb{E}_{s' \sim P(\cdot|s,a)}\big[C_{lb}(s')\big], \qquad (11)$$

where $C_{lb}$ and $C_{ub}$ are the lower and upper bounds of the state function $C$ in Problem 2, given by the fixed points

$$C_{lb}(s) = \max_{a}\Big\{ r_{lb}(s,a) - D(s,a) + \mathbb{E}_{s'}\big[C_{lb}(s')\big] \Big\}, \quad C_{ub}(s) = \min_{a}\Big\{ r_{ub}(s,a) - D(s,a) + \mathbb{E}_{s'}\big[C_{ub}(s')\big] \Big\}. \qquad (12)$$
Proof.

(sketch) The function $C(s)$ can be viewed as the value function of a state under an arbitrary reward function. It is easy to see that when we set constraints on the additional reward of each state-action pair, we are actually setting constraints on the value function of each state, so we can use value iteration to compute the bounds. The full proof can be found in the supplementary material. ∎

Assignment Stage. Once we extract $r^+(s,a)$ for each state-action pair $(s,a)$, we still need to assign the additional rewards to different features to ensure the minimum cost of transforming the policy. Theorem 3 indicates that for each possible $r^+(s,a)$, there exists an assignment of additional features that achieves the minimum transfer cost, namely the greedy assignment

$$\Delta f_{\sigma(j)}(s,a) = \min\Big\{ f_{ub,\sigma(j)},\; \frac{1}{\theta_{\sigma(j)}}\Big(r^+(s,a) - \sum_{k<j} \theta_{\sigma(k)} \Delta f_{\sigma(k)}(s,a)\Big) \Big\}, \qquad (13)$$

which fills features in descending order of their cost-efficiency $\theta_i / c_i$.
1: INPUT: States $S$, Actions $A$, Original Rewards $R$, Original Trajectory Set $D$, and cost constraints $f_{lb}$ and $f_{ub}$;
2: OUTPUT: Additional reward $r^+(s,a)$ on each state-action pair (one from many solutions);
3: Calculate the lower bound $r_{lb}(s,a)$ of the additional reward from the feature constraints;
4: Calculate the upper bound $r_{ub}(s,a)$ of the additional reward from the feature constraints;
5: For each state-action pair $(s,a)$, calculate the original Q-function $Q(s,a)$;
6: For each state-action pair $(s,a)$, calculate $r_{lb}(s,a) - D(s,a)$ and $r_{ub}(s,a) - D(s,a)$, with $D(s,a) = \log\pi^+(a|s) - R(s,a)$;
7: Use them as rewards to perform value iteration to calculate the lower bound $C_{lb}(s)$ and upper bound $C_{ub}(s)$ (eq. (12));
8: for each state-action pair $(s,a)$ do
9:     if $C_{lb}(s) > C_{ub}(s)$ then
10:         Return NO VALID SOLUTION
11:     Calculate $r^+(s,a)$ via eq. (11)
12: For each state-action pair $(s,a)$, calculate the feature assignment $\Delta f(s,a)$ via eq. (13)
Algorithm 1 Min-Cost Reward Advancement via Value Iteration

Algorithm 1 demonstrates how to obtain the optimal solution to the min-cost reward advancement problem. First, Lines 3 and 4 calculate the bounds of the additional rewards. Lines 5–11 calculate the minimum additional reward we can provide to transform the agent's policy. For cases where the transition probability $P(s'|s,a)$ is missing, we can use samples from the trajectories $D$ to estimate the expectations in Lines 5, 7, and 11. Finally, Line 12 produces the optimal additional feature assignment for each state-action pair.
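A minimal end-to-end sketch of Algorithm 1 under our reconstruction of Theorem 4, using a discounted stationary setting (gamma < 1) so the value iterations converge, whereas the paper assumes gamma = 1 over a finite horizon; all function and variable names are ours. It computes bounded additional rewards and then verifies that soft value iteration on the advanced rewards reproduces the target policy.

```python
import numpy as np
from scipy.special import logsumexp

def mce_policy(P, R, gamma=0.9, iters=500):
    """Soft value iteration giving the MCE (softmax) policy of Theorem 1."""
    V = np.zeros(R.shape[0])
    for _ in range(iters):
        Q = R + gamma * np.einsum("sax,x->sa", P, V)   # soft Bellman backup
        V = logsumexp(Q, axis=1)
    return np.exp(Q - V[:, None])

def min_reward_advancement(P, R, pi_target, r_lb, r_ub, gamma=0.9, iters=500):
    """Bounded additional rewards that turn the MCE policy into pi_target.

    A value-iteration fixed point on the lower-bound rewards gives the
    smallest new soft value function C compatible with r_plus >= r_lb at
    every (s, a); feasibility w.r.t. r_ub is then checked explicitly.
    """
    log_pi = np.log(pi_target)
    C = np.zeros(R.shape[0])
    for _ in range(iters):
        C = np.max(r_lb + R - log_pi + gamma * np.einsum("sax,x->sa", P, C), axis=1)
    # Additional reward induced by this choice of C (cf. Theorem 2 / eq. (11)).
    r_plus = log_pi + C[:, None] - R - gamma * np.einsum("sax,x->sa", P, C)
    if np.any(r_plus > r_ub + 1e-6):                   # bound check (Lines 8-10)
        return None
    return r_plus

# Sanity check on a random toy MDP: the advanced rewards reproduce pi_target.
rng = np.random.default_rng(1)
S, A = 4, 3
P = rng.dirichlet(np.ones(S), size=(S, A))             # (S, A, S) transitions
R = rng.normal(size=(S, A))                             # original rewards
logits = rng.normal(size=(S, A))
pi_target = np.exp(logits - logsumexp(logits, axis=1, keepdims=True))
r_plus = min_reward_advancement(P, R, pi_target,
                                r_lb=np.full((S, A), -5.0),
                                r_ub=np.full((S, A), np.inf))
assert r_plus is not None
assert np.allclose(mce_policy(P, R + r_plus), pi_target, atol=1e-4)
print("additional reward per state-action pair:\n", np.round(r_plus, 3))
```

The per-pair rewards returned here could then be split across features with the greedy assignment sketched after Theorem 3.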

5 Evaluation

In this section, we evaluate the correctness and accuracy of our min-cost reward advancement algorithm with a synthetic object world scenario. Then, by modeling passengers' travel decisions in a public transit system as a Markov Decision Process, we conduct empirical case studies using a large-scale (6 months) passenger-level public transit dataset collected in Shenzhen, China, from 07/01/2016 to 12/30/2016.

Evaluation on object world. First, we use an object world (Levine et al. (2011)) scenario to evaluate our reward advancement algorithm. An Object World is a Grid World with randomly placed colored objects; running into a grid cell containing an object of a given color yields a color-specific reward, which we call "collecting the object". The agent also receives a large reward upon arriving at the destination. The ideal policy is therefore to move toward the destination while collecting as many high-reward objects as possible. Figure 2 shows an example object world, in which green and red objects are randomly placed over the grid. At each grid cell, the agent can take one of several actions: stay, or move toward one of the four directions. With a given transition probability, the agent moves to a random neighboring cell along the direction she has chosen. We fix the discount factor for all experiments. We then use Figures 3 and 4 to show the efficiency and effectiveness of the min-cost reward advancement algorithm. Although the object world has a large number of state-action pairs, only a modest number of trajectories is needed to learn additional rewards that accurately transform the agent's policy to a predefined policy (Figure 3). Figure 4 shows that the total cost of reward advancement increases linearly as the lower bound of the additional reward at each state-action pair increases, which demonstrates that the cost of transforming policies via min-cost reward advancement is practical.

Figure 2: An Object World with 2 different colors.
Figure 3: Number of trajectories used vs accuracy.
Figure 4: Total cost over reward lower bound.

Case studies. In this section, we use a public transit case as an example to illustrate that human agents' behaviors follow the MCE policy and our reward advancement strategy. We collected 6 months of passenger-level public transit data from Shenzhen, China, which allows us to evaluate the potential of redistributing passengers by transforming their decision policies over trip starting time, station, and transport mode selection.

Passengers make a sequence of decisions when completing a trip, such as which bus routes and subway lines to take and which stop/station to transfer at. Such sequential decision making processes can be naturally modeled as Markov decision processes (MDPs). Since nearby stops/stations are usually interchangeable for passengers, we split the whole city into grid cells and aggregate the stops/stations within the same cell. The states are regional grid cells during different time intervals. Actions are the available bus routes and subway lines the passenger can take. Our model and formulation follow the work of Wu et al. (2018) (see (Wu et al. (2018)) for more details). Also, the evaluation in (Wu et al. (2018)) indicates that human agents follow an MCE policy after a change of reward, which means providing additional rewards can shape human agents' behaviors toward the target policy. We inversely learn the reward functions of passengers using Maximum Causal Entropy Inverse Reinforcement Learning (Ziebart et al. (2010)). We consider that the reward passengers evaluate includes the monetary cost of the trip; thus, we model the additional rewards as monetary incentives for taking bus and subway lines.

Figure 5: Spatial Decision Policy Transformation

There always exists spatial decision imbalance in a public transit system. For example, Figure 5(a) shows two subway stations, Mei Cun and Shang Mei Lin, which are geographically close to each other. However, as shown in Figure 5(b), many more passengers travel via Shang Mei Lin station than via Mei Cun station. One target policy used in the experiment (shown in Figure 5(c)) caps the number of passengers going through Shang Mei Lin station in each fixed-length time span. The result of reward advancement is shown in Figure 5(d). Clearly, the additional rewards needed to transform the passengers' policy vary over time, which suggests a dynamic pricing mechanism to advance the passengers' spatial decision policy.

6 Conclusion

In this work, we define and study a novel reward advancement problem, namely, finding the updating rewards that transform a human agent's behavior to a predefined target policy. We provide a closed-form solution to this problem, which indicates that there exist infinitely many such additional rewards that can achieve the desired policy transformation. Moreover, we define and investigate the min-cost reward advancement problem, which aims to find the additional rewards that transform the agent's policy to the target policy while minimizing the cost of the policy transformation. We solve this problem by developing an efficient algorithm. We demonstrate the correctness and accuracy of our reward advancement solution using both synthetic data and a large-scale (6 months) passenger-level public transit dataset from Shenzhen, China.

References

  • Wu et al. [2017] Guojun Wu, Yichen Ding, Yanhua Li, Jun Luo, Fan Zhang, and Jie Fu. Data-driven inverse learning of passenger preferences in urban public transits. In Decision and Control (CDC), 2017 IEEE 56th Annual Conference on, pages 5068–5073. IEEE, 2017.
  • Tao et al. [2014] Sui Tao, David Rohde, and Jonathan Corcoran. Examining the spatial–temporal dynamics of bus passenger travel behaviour using smart card data and the flow-comap. Journal of Transport Geography, 41:21–36, 2014.
  • Ziebart et al. [2010] Brian D Ziebart, J Andrew Bagnell, and Anind K Dey. Modeling interaction via the principle of maximum causal entropy. In Proceedings of the 27th International Conference on International Conference on Machine Learning, 2010.
  • Wong et al. [2015] RCP Wong, WY Szeto, and SC Wong. A two-stage approach to modeling vacant taxi movements. Transportation Research Part C: Emerging Technologies, 59:147–163, 2015.
  • Zhang [2006] Lei Zhang. Agent-based behavioral model of spatial learning and route choice. In Transportation Research Board 85th Annual Meeting, 2006.
  • Wu et al. [2018] Guojun Wu, Yanhua Li, Jie Bao, Yu Zheng, Jieping Ye, and Jun Luo. Human-centric urban transit evaluation and planning. In IEEE International Conference on Data Mining, 2018, 2018.
  • Ziebart et al. [2008] Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, and Anind K Dey. Maximum entropy inverse reinforcement learning. In AAAI, volume 8, pages 1433–1438. Chicago, IL, USA, 2008.
  • Zheng et al. [2014] Yu Zheng, Licia Capra, Ouri Wolfson, and Hai Yang. Urban computing: concepts, methodologies, and applications. ACM Transactions on Intelligent Systems and Technology (TIST), 2014.
  • Lachapelle et al. [2011] Ugo Lachapelle, Larry Frank, Brian E Saelens, James F Sallis, and Terry L Conway. Commuting by public transit and physical activity: where you live, where you work, and how you get there. Journal of Physical Activity and Health, 2011.
  • Wiewiora et al. [2003] Eric Wiewiora, Garrison W Cottrell, and Charles Elkan. Principled methods for advising reinforcement learning agents. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pages 792–799, 2003.
  • Ng et al. [1999] Andrew Y Ng, Daishi Harada, and Stuart Russell. Policy invariance under reward transformations: Theory and application to reward shaping. In ICML, volume 99, pages 278–287, 1999.
  • Konidaris et al. [2012] George Konidaris, Ilya Scheidwasser, and Andrew Barto. Transfer in reinforcement learning via shared features. Journal of Machine Learning Research, 13(May):1333–1371, 2012.
  • Devlin and Kudenko [2011] Sam Devlin and Daniel Kudenko. Theoretical considerations of potential-based reward shaping for multi-agent systems. In The 10th International Conference on Autonomous Agents and Multiagent Systems-Volume 1, pages 225–232. International Foundation for Autonomous Agents and Multiagent Systems, 2011.
  • Ng et al. [2000] Andrew Y Ng, Stuart J Russell, et al. Algorithms for inverse reinforcement learning. In Icml, pages 663–670, 2000.
  • Levine and Abbeel [2014] Sergey Levine and Pieter Abbeel. Learning neural network policies with guided policy search under unknown dynamics. In Advances in Neural Information Processing Systems, pages 1071–1079, 2014.
  • Li and Todorov [2004] Weiwei Li and Emanuel Todorov. Iterative linear quadratic regulator design for nonlinear biological movement systems. In ICINCO (1), pages 222–229, 2004.
  • Kuefler et al. [2017] Alex Kuefler, Jeremy Morton, Tim Wheeler, and Mykel Kochenderfer. Imitating driver behavior with generative adversarial networks. In Intelligent Vehicles Symposium (IV), 2017 IEEE, pages 204–211. IEEE, 2017.
  • Finn et al. [2016] Chelsea Finn, Sergey Levine, and Pieter Abbeel. Guided cost learning: Deep inverse optimal control via policy optimization. In International Conference on Machine Learning, pages 49–58, 2016.
  • Wiewiora [2003] Eric Wiewiora. Potential-based shaping and q-value initialization are equivalent. Journal of Artificial Intelligence Research, 19:205–208, 2003.
  • Levine et al. [2011] Sergey Levine, Zoran Popovic, and Vladlen Koltun. Nonlinear inverse reinforcement learning with gaussian processes. In Advances in Neural Information Processing Systems, pages 19–27, 2011.