Inverse reinforcement learning (IRL) 
algorithms estimate a reward function that explains the motions demonstrated by an operator or other agents on a task described by a Markov Decision Process (MDP). The recovered reward function can be used by a robot to replicate the demonstrated task, or by an algorithm to analyze the demonstrator's preferences. IRL algorithms can therefore simplify multi-task robot control by removing the need to explicitly set a cost function for each task, and make robots friendlier by personalizing services based on the recovered condition and preferences of the operator.
The accuracy of the recovered function depends heavily on the ratio of visited states in the demonstrations to the whole state space, because the demonstrator's motion policy can be estimated more accurately if every state is repeatedly visited. However, this ratio is low for many useful applications, since they usually have huge or high-dimensional state spaces, while demonstrations are relatively rare for each task. For example, in a grid-based path planning task, the demonstrator chooses paths based on the destination, but may not move to the same destination hundreds of times in practice. For robot manipulation tasks based on ordinary RGB images, each task specifies a final result, but it is expensive to repeat each task millions of times. For human motion analysis, it is physically improbable to follow an instruction thousands of times in the huge state space of human poses. Therefore, it is difficult to estimate an accurate reward function for a single task with limited data.
In practice, multiple tasks can usually be observed from the same demonstrator, and the problem of rare demonstrations can be handled by combining data from all tasks, which leads to a meta-learning problem. Existing solutions mainly target classification problems: using the data from all tasks to learn an optimizer for each task, using the data from all tasks to learn a metric space where a single task can be learned more accurately with limited data, using the data from all tasks to learn a good initialization or a good initial parameter for each task, etc. Some of these methods are applicable to inverse reinforcement learning problems, but they mainly consider transfer of the motion policy.
In many IRL applications, we observe that a demonstrator usually has an inherent reward for each state, materialized as the innate state preferences of a human, the hardware-dependent cost function of a robot, the default structure of an environment, etc. For a given task, the demonstrators are usually reluctant to drastically change the inherent reward function to complete the task; instead, they alter the innate reward function minimally to generate a task-specific reward function and plan the motion. For example, in path planning, the C-space of a mobile robot at home rarely changes, and the robot’s motion depends on the goal state; in human motion analysis, the costs of different poses are mostly invariant, while the actual motion depends on the desired directions.
Based on this observation, we propose a meta inverse reinforcement learning algorithm that maximizes the rewards shared among all tasks. We model the reward function for each task as a probability distribution conditioned on an inherent baseline function, and estimate the most likely reward function in the distribution that explains the observed task-specific demonstrations.
II Related Works
The idea of inverse optimal control was first proposed by Kalman, while the inverse reinforcement learning problem was first formulated by Ng and Russell, where the agent observes the states resulting from an assumedly optimal policy and tries to learn a reward function that makes the policy better than all alternatives. Since this goal can be achieved by multiple reward functions, that work seeks one that maximizes the difference between the observed policy and the second-best policy. The idea is extended by Ratliff et al. under the name of max-margin learning for inverse optimal control. Another extension is proposed by Abbeel and Ng, where the purpose is not to recover the real reward function, but to find a reward function that leads to a policy equivalent to the observed one, measured by the amount of reward collected by following that policy.
Since a motion policy may be difficult to estimate from observations, a behavior-based method is proposed by Ziebart et al., which models the distribution of behaviors as a maximum-entropy model on the amount of reward collected from each behavior. This model has many applications and extensions. For example, Nguyen et al. consider a sequence of changing reward functions instead of a single reward function. Levine et al. and Finn et al. consider complex reward functions instead of linear ones, using a Gaussian process and neural networks, respectively, to model the reward function. Choi and Kim consider complex environments instead of a well-observed Markov Decision Process, and combine partially observable Markov Decision Processes with reward learning. Levine and Koltun model behaviors based on the local optimality of a behavior instead of the summation of rewards. Wulfmeier et al. use a multi-layer neural network to represent nonlinear reward functions.
Another method is proposed by Ramachandran and Amir, which models the probability of a behavior as the product of each state-action pair's probability, and learns the reward function via maximum a posteriori estimation. However, due to the complex relation between the reward function and the behavior distribution, the authors use computationally expensive Monte-Carlo methods to sample the distribution. This work is extended by Neu and Szepesvári, who use sub-gradient methods to simplify the problem. Another extension tries to find a reward function that matches the observed behavior. For motions involving multiple tasks and varying reward functions, methods that learn multiple reward functions are developed by Dimitrakakis and Rothkopf and by Choi and Kim.
Most of these methods need to solve a reinforcement learning problem in each step of reward learning, so practical large-scale application is computationally infeasible. Several methods are applicable to large-scale problems. One uses a linear approximation of the value function, but it requires a set of manually defined basis functions. The methods in [10, 19] update the reward function parameters by minimizing the relative entropy between the observed trajectories and a set of trajectories sampled from the current reward function, but they require a set of manually segmented trajectories of human motion, where the choice of trajectory length affects the result. Besides, these methods solve large-scale problems by approximating the Bellman Optimality Equation, so the learned reward function and Q function are only approximately optimal. In our previous work, we proposed an approximation method that guarantees the optimality of the learned functions as well as scalability to large state spaces.
To learn a model from limited data, meta-learning algorithms have been developed. A survey is given by Vilalta and Drissi, who view the meta-learner as a way to improve the biases of base-learners. Santoro et al. use a memory-augmented neural network for meta-learning. Hariharan and Girshick learn feature representations that generalize from few examples. Andrychowicz et al. learn an optimizer itself by gradient descent. Ha et al. use a hypernetwork to generate the weights of another network.
Meta-learning algorithms have also been applied to reinforcement learning problems. Schweighofer and Doya tune meta-parameters of reinforcement learning, such as the learning rate for TD learning, the action-selection trade-off, and the discount factor. Parisotto et al. use one network to play multiple games. Duan et al. train a fast reinforcement learner with a slower one. Finn et al. learn a good initial parameter that reaches task-specific optimal parameters with a few gradient-descent steps. Meta-learning in inverse reinforcement learning has focused on imitation learning, like one-shot imitation learning.
III Meta Inverse Reinforcement Learning
III-A Meta Inverse Reinforcement Learning
We assume that an agent needs to handle multiple tasks in an environment, denoted by $\{T_i\}_{i=1}^{N}$, where $T_i$ denotes the $i$-th task and $N$ denotes the number of tasks.
We describe a task as a Markov Decision Process, consisting of the following variables:
$S$, a set of states;
$A$, a set of actions;
$P(s' \mid s, a)$, a state transition function that defines the probability that state $s$ becomes $s'$ after action $a$;
$R(s)$, a reward function that defines the immediate reward of state $s$;
$\gamma$, a discount factor that ensures the convergence of the MDP over an infinite horizon.
For a task $T_i$, the agent performs a set of demonstrations $\zeta_i = \{\zeta_{i,1}, \dots, \zeta_{i,M}\}$, represented by sequences of state-action pairs:
\[ \zeta_{i,j} = \{(s_1, a_1), \dots, (s_{L_j}, a_{L_j})\}, \]
where $L_j$ denotes the length of the sequence $\zeta_{i,j}$. Given the observed sequences for the $N$ tasks, inverse reinforcement learning algorithms try to recover a reward function for each task.
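As a concrete sketch, the demonstration data above can be stored as plain state-action sequences grouped by task; the container and method names below are illustrative assumptions, not from the paper:

```python
from typing import List, Set, Tuple

# One trajectory is a sequence of (state index, action index) pairs.
Trajectory = List[Tuple[int, int]]

class TaskDemos:
    """Demonstrations for a single task T_i (illustrative container)."""
    def __init__(self, task_id: int, trajectories: List[Trajectory]):
        self.task_id = task_id
        self.trajectories = trajectories

    def visited_states(self) -> Set[int]:
        # States covered by the demonstrations; for the large state spaces
        # discussed above, this is typically a small subset of S.
        return {s for traj in self.trajectories for (s, _) in traj}

demos = TaskDemos(0, [[(0, 1), (1, 1), (2, 0)], [(0, 0), (3, 1)]])
```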
Our key observation in multi-task IRL is that the demonstrator has an inherent reward function $\bar r$, generating a baseline reward for each state in all tasks. To complete the $i$-th task, the agent generates a reward function $r_i$ from a distribution $P(r_i \mid \bar r)$ conditioned on $\bar r$ to plan the motion. Therefore, the motion is generated as:
\[ \bar r \;\rightarrow\; r_i \sim P(r_i \mid \bar r) \;\rightarrow\; \zeta_i \sim P(\zeta_i \mid r_i). \]
For the $i$-th task, we want to find the most likely $r_i$ sampled from $P(r_i \mid \bar r)$ that explains the demonstration $\zeta_i$. Assuming all the tasks are independent from each other, the following joint distribution is formulated:
\[ P(\zeta_1, \dots, \zeta_N, r_1, \dots, r_N \mid \bar r) = \prod_{i=1}^{N} P(\zeta_i \mid r_i)\, P(r_i \mid \bar r). \]
The reward functions can be found via maximum-likelihood estimation:
\[ \{r_i\}_{i=1}^{N}, \bar r = \operatorname*{arg\,min}_{r_1, \dots, r_N, \bar r \in \mathcal{R}} \sum_{i=1}^{N} \left( L_{IRL}(\zeta_i \mid r_i) + L_{share}(r_i \mid \bar r) \right), \]
where $\mathcal{R}$ denotes a function space, $L_{IRL}(\zeta_i \mid r_i) = -\log P(\zeta_i \mid r_i)$ is the negative log-likelihood of $\zeta_i$ given $r_i$, and $L_{share}(r_i \mid \bar r) = -\log P(r_i \mid \bar r)$ is the negative log-likelihood of $r_i$ given $\bar r$.
III-B Loss for Inverse Reinforcement Learning
While many solutions exist for the inverse reinforcement learning problem, we adopt the solution based on function approximation developed in our earlier work to handle practical high-dimensional state spaces.
The core idea of the method is to approximate the Bellman Optimality Equation with a function approximation framework. The Bellman Optimality Equation is given as:
\[ Q^*(s, a) = \sum_{s'} P(s' \mid s, a) \left[ R(s') + \gamma \max_{a'} Q^*(s', a') \right], \]
which is computationally prohibitive to solve in high-dimensional state spaces.
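For reference, solving this equation exactly by tabular value iteration looks like the following sketch (with the reward collected on the successor state, matching the equation above); each sweep costs $O(|A||S|^2)$, which is what becomes prohibitive for large state spaces:

```python
import numpy as np

def value_iteration(P, R, gamma, tol=1e-8):
    # P[a, s, t]: probability of moving from state s to state t under action a.
    # R[t]: reward collected on entering state t.
    V = np.zeros(len(R))
    while True:
        # Q[a, s] = sum_t P[a, s, t] * (R[t] + gamma * V[t])
        Q = P @ (R + gamma * V)
        V_new = Q.max(axis=0)
        if np.abs(V_new - V).max() < tol:
            return V_new, Q
        V = V_new
```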
But with a parameterized VR function, we describe the summation of the reward function and the discounted optimal value function as:
\[ VR(s; \theta) = R(s) + \gamma V^*(s), \]
where $\theta$ denotes the parameter of the VR function. The function value of a state is called its VR value.
Substituting Equation (4) into the Bellman Optimality Equation, the optimal Q function is given as:
\[ Q^*(s, a) = \sum_{s'} P(s' \mid s, a)\, VR(s'; \theta), \]
the optimal value function is given as:
\[ V^*(s) = \max_{a} Q^*(s, a), \]
and the reward function can be computed as:
\[ R(s) = VR(s; \theta) - \gamma V^*(s). \]
This framework avoids solving the Bellman Optimality Equation. Besides, this formulation can be generalized to other extensions of the Bellman Optimality Equation by replacing the $\max$ operator with other types of Bellman backup operators. For example, the soft maximum $\log \sum_a \exp(\cdot)$ is used in the maximum-entropy method, and a differentiable approximate maximum is used in Bellman Gradient Iteration.
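Under the convention used in the equations above (reward accumulated on the successor state), recovering Q, V, and R from VR values needs only direct evaluation, no fixed-point solve; a minimal sketch:

```python
import numpy as np

def q_v_r_from_vr(P, vr, gamma):
    # P[a, s, t]: transition probabilities; vr[s] = R(s) + gamma * V*(s).
    Q = P @ vr              # Q*(s, a) = sum_t P[a, s, t] * VR(t)
    V = Q.max(axis=0)       # V*(s)    = max_a Q*(s, a)
    R = vr - gamma * V      # R(s)     = VR(s) - gamma * V*(s)
    return Q, V, R
```

No Bellman equation is solved: all three quantities follow from the VR values by substitution, which is what makes the framework scalable.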
To apply this framework to IRL problems, this work chooses a motion model based on the optimal Q function:
\[ P(a \mid s) = \frac{\exp(b\, Q^*(s, a))}{\sum_{a'} \exp(b\, Q^*(s, a'))}, \]
where $b$ is a parameter controlling the degree of confidence in the agent's ability to choose actions based on Q values. Other motion models can also be used.
Assuming the approximation function is a neural network with parameter $\theta_i$ (its weights and biases), the negative log-likelihood of the demonstration set $\zeta_i$ is given by:
\[ L_{IRL}(\zeta_i \mid \theta_i) = -\sum_{\zeta \in \zeta_i} \sum_{(s, a) \in \zeta} \log P(a \mid s). \]
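This likelihood can be sketched directly from a table of Q values (the names below are illustrative; the log-sum-exp is shifted for numerical stability):

```python
import numpy as np

def demo_nll(Q, trajectory, b=1.0):
    # Q[a, s]: optimal Q values; trajectory: list of (state, action) pairs.
    # Returns -sum log P(a|s), with P(a|s) = exp(b Q[a,s]) / sum_a' exp(b Q[a',s]).
    nll = 0.0
    for s, a in trajectory:
        logits = b * Q[:, s]
        log_z = np.log(np.sum(np.exp(logits - logits.max()))) + logits.max()
        nll += log_z - logits[a]
    return nll
```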
III-C Loss for Reward Sharing
Since the demonstrator makes minimal changes to adapt the inherent reward function $\bar r$ into the task-specific one $r_i$, we model the distribution as:
\[ P(r_i \mid \bar r) = \frac{1}{Z} \exp\left( -g(r_i, \bar r) \right), \]
where $g(r_i, \bar r)$ measures the difference between $r_i$ and $\bar r$. Thus the loss function for reward sharing is given as:
\[ L_{share}(r_i \mid \bar r) = g(r_i, \bar r) + \log Z, \]
where $Z$ is the partition function and remains the same for all $r_i$.
We test several functions as $g$. The first choice is the L2 loss:
\[ g(r_i, \bar r) = \sum_{s \in D} \left( r_i(s) - \bar r(s) \right)^2, \]
where $D$ denotes the set of states on which the differences are evaluated, either the full state space or only the visited states. The second choice is the Huber loss with parameter $\delta$, a differentiable approximation of the L1 loss popular in sparse models:
\[ g(r_i, \bar r) = \sum_{s \in D} h_{\delta}(r_i(s) - \bar r(s)), \quad h_{\delta}(d) = \begin{cases} \frac{1}{2} d^2 & |d| \le \delta, \\ \delta \left( |d| - \frac{1}{2}\delta \right) & \text{otherwise.} \end{cases} \]
The third choice is the standard deviation of the differences:
\[ g(r_i, \bar r) = \sqrt{ \frac{1}{|D|} \sum_{s \in D} \left( d(s) - \bar d \right)^2 }, \quad d(s) = r_i(s) - \bar r(s), \]
where $\bar d$ is the mean difference. The fourth choice is the information entropy, after converting the differences into a probability distribution with the softmax function:
\[ p(s) = \frac{\exp(d(s))}{\sum_{s' \in D} \exp(d(s'))}, \quad g(r_i, \bar r) = -\sum_{s \in D} p(s) \log p(s). \]
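The four choices of $g$ can be rendered directly in code; this is a hedged sketch with $\delta$ left at an illustrative default:

```python
import numpy as np

def g_l2(r, r_bar):
    return np.sum((r - r_bar) ** 2)

def g_huber(r, r_bar, delta=1.0):
    d = np.abs(r - r_bar)
    quad = np.minimum(d, delta)          # quadratic inside |d| <= delta
    return np.sum(0.5 * quad ** 2 + delta * (d - quad))

def g_std(r, r_bar):
    return np.std(r - r_bar)

def g_entropy(r, r_bar):
    # Softmax-normalize the differences, then take the information entropy.
    d = r - r_bar
    p = np.exp(d - d.max())
    p /= p.sum()
    return -np.sum(p * np.log(p))
```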
With the loss functions for IRL and reward sharing, the reward functions can be learned via gradient-based methods. The procedure is shown in Algorithm 1.
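Algorithm 1 is not reproduced here, but the alternating gradient scheme it implies can be sketched as follows; the gradient callables and parameter layout are assumptions for illustration:

```python
import numpy as np

def meta_irl(grad_irl, grad_share_r, grad_share_bar, rs, r_bar,
             lr=0.01, steps=500):
    # Each task reward r_i descends L_IRL + L_share; the inherent reward
    # r_bar descends the summed sharing loss over all tasks.
    for _ in range(steps):
        for i in range(len(rs)):
            rs[i] = rs[i] - lr * (grad_irl(i, rs[i])
                                  + grad_share_r(rs[i], r_bar))
        r_bar = r_bar - lr * sum(grad_share_bar(r, r_bar) for r in rs)
    return rs, r_bar
```

With an L2 sharing loss and the IRL term switched off, the task rewards and the baseline contract toward a common value, which is exactly the pull the sharing loss is meant to exert.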
IV-A Path Planning
We consider a path planning problem on an uneven terrain, where an agent can observe the whole terrain to find the optimal paths from random starting points to arbitrary goal points, but a mobile robot can only observe the agent's demonstrations to learn how to plan paths. Given a starting point and a goal point, an optimal path depends solely on the costs to move across the terrain. To learn the costs, we formulate a Markov Decision Process for each goal point, where a state denotes a small region of the terrain and an action denotes a possible movement. The reward of a state equals the negative of the cost to move across the corresponding region, while the goal state has an additional reward to attract movement.
In this work, we create a discretized terrain with several hills, where each hill is defined as a peak of a cost distribution and the costs around each hill decay exponentially; the true cost of a region is the summation of the costs from all hills. Ten worlds are randomly generated, and in each world, ten tasks are generated, each with a different goal state. For each task, the agent demonstrates ten trajectories, where the length of a trajectory depends on the number of steps needed to reach the goal state.
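The terrain construction described above can be sketched as follows; the peak positions, heights, and decay rate are illustrative values, not the paper's:

```python
import numpy as np

def make_terrain(size, hills, decay=0.5):
    # Cost of each cell = sum over hills of (peak height) * exp(-decay * dist
    # to the hill center), matching the exponentially decaying hills above.
    ys, xs = np.mgrid[0:size, 0:size]
    cost = np.zeros((size, size))
    for cy, cx, height in hills:
        dist = np.hypot(ys - cy, xs - cx)
        cost += height * np.exp(-decay * dist)
    return cost

terrain = make_terrain(16, [(3, 3, 1.0), (12, 10, 2.0)])
```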
We evaluate the proposed method with different reward-sharing loss functions under different numbers of tasks and trajectories. The evaluated loss functions include no reward sharing, and reward sharing with the standard deviation, information entropy, L2 loss, and Huber loss. The number of tasks ranges from 1 to 16, and for each task, the number of trajectories ranges from 1 to 10. The learning rate is 0.01, with the Adam optimizer. The accuracy of a recovered reward is computed as the correlation coefficient between the learned reward function and the ground-truth one. The results are shown in Figure 2.
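The accuracy metric above is the Pearson correlation coefficient over states; one convenient property is its invariance to positive affine rescaling of the learned reward:

```python
import numpy as np

def reward_accuracy(learned, truth):
    # Pearson correlation between learned and ground-truth reward values,
    # flattened over the state space.
    return np.corrcoef(np.ravel(learned), np.ravel(truth))[0, 1]
```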
The results show that the meta-learning step can significantly improve the accuracy of reward learning, and that the Huber loss function leads to the best performance on average. The L2 loss and the standard deviation have similar performance, which is unsurprising given their similar forms. The information entropy, however, performs poorly.
IV-B Motion Analysis
During rehabilitation, a patient with spinal cord injuries sits on a box, with a flat force-plate sensor mounted on the box to capture the center of pressure (COP) of the patient during movement. Each experiment is composed of two sessions, one without transcutaneous stimulation and one with stimulation. The electrode configuration and stimulation signal pattern are manually selected by the clinician.
In each session, the physician gives eight (or four) directional instructions for the patient to follow (left, forward left, forward, forward right, right, backward right, backward, and backward left), and the patient moves continuously to follow the instruction. The physician observes the patient's behavior and decides when to change the instruction.
Six experiments are performed, each with two sessions. The COP trajectories in Figure 3 correspond to a session with four directional instructions; Figures 4, 5, 6, 7, and 8 correspond to sessions with eight directional instructions.
The COP sensory data from each session is discretized on a grid fine enough to capture the patient's small movements. The problem is formulated as an MDP, where each state captures the patient's discretized location and velocity, and the set of actions changes the velocity into eight possible directions. The velocity is represented with a two-dimensional vector showing eight possible velocity directions. Thus the problem has 80000 states and 8 actions, and each action is assumed to lead to a deterministic state.
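A flat state index consistent with the 80000-state count above (8 velocity directions times 10000 discretized positions; the 100 x 100 position grid below is an assumption inferred from that count, not stated in the paper) could be computed as:

```python
def state_index(row, col, vel_dir, grid=100, n_dirs=8):
    # Flatten (position row, position column, velocity direction) into one
    # state id in [0, grid * grid * n_dirs).
    return (row * grid + col) * n_dirs + vel_dir

N_STATES = 100 * 100 * 8   # = 80000, matching the MDP size stated above
```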
[Table I reports one value per instructed direction: forward, backward, left, right, top left, top right, bottom left, bottom right, and origin.]
To learn the reward function from the observed trajectories based on the formulated MDP, we use the coordinates and velocity direction of each grid cell as the feature, and learn the reward function parameters from each set of data after segmentation, which is based on peak detection on the distances from the origin. The function approximator is a neural network with three hidden layers. The Huber loss function is used for reward sharing, and the result is shown in Table I.
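The peak-detection segmentation can be sketched with a plain local-maximum test on the distance-from-origin signal; the paper's exact detector is unspecified, so this is an assumed stand-in:

```python
import numpy as np

def segment_points(dist_from_origin, min_height=0.0):
    # Indices of local maxima of the COP distance from the origin; each
    # outward excursion (origin -> peak -> origin) yields one boundary.
    d = np.asarray(dist_from_origin, dtype=float)
    return [i for i in range(1, len(d) - 1)
            if d[i] > d[i - 1] and d[i] >= d[i + 1] and d[i] > min_height]
```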
The results show that the patient's ability to follow instructions varies among directions, and these values can assist physicians in designing the stimulating signals.
This work proposes a solution for learning an accurate reward function for each task from limited demonstrations collected from the same demonstrator, by maximizing the rewards shared among different tasks. We propose several loss functions to maximize the shared reward and compare their accuracies in a simulated environment. The results show that the Huber loss has the best performance.
In future work, we will apply the proposed method to imitation learning.
-  A. Y. Ng and S. Russell, “Algorithms for inverse reinforcement learning,” in Proc. 17th International Conf. on Machine Learning, 2000.
-  R. S. Sutton and A. G. Barto, Reinforcement learning: An introduction. MIT press Cambridge, 1998, vol. 1, no. 1.
-  P. Abbeel and A. Y. Ng, “Apprenticeship learning via inverse reinforcement learning,” in Proceedings of the twenty-first international conference on Machine learning. ACM, 2004, p. 1.
-  B. Najafi, K. Aminian, A. Paraschiv-Ionescu, F. Loew, C. J. Bula, and P. Robert, “Ambulatory system for human motion analysis using a kinematic sensor: monitoring of daily physical activity in the elderly,” IEEE Transactions on biomedical Engineering, vol. 50, no. 6, pp. 711–723, 2003.
-  R. E. Kalman, When is a Linear Control System Optimal?, ser. RIAS Technical Report. Martin Marietta Corporation, Research Institute for Advanced Studies, Center for Control Theory, 1963.
-  N. D. Ratliff, J. A. Bagnell, and M. A. Zinkevich, “Maximum margin planning,” in Proceedings of the 23rd international conference on Machine learning. ACM, 2006, pp. 729–736.
-  B. D. Ziebart, A. Maas, J. A. Bagnell, and A. K. Dey, “Maximum entropy inverse reinforcement learning,” in Proc. AAAI, 2008, pp. 1433–1438.
-  Q. P. Nguyen, B. K. H. Low, and P. Jaillet, “Inverse reinforcement learning with locally consistent reward functions,” in Advances in Neural Information Processing Systems, 2015, pp. 1747–1755.
-  S. Levine, Z. Popovic, and V. Koltun, “Nonlinear inverse reinforcement learning with gaussian processes,” in Advances in Neural Information Processing Systems 24, J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. Pereira, and K. Q. Weinberger, Eds. Curran Associates, Inc., 2011, pp. 19–27.
-  C. Finn, S. Levine, and P. Abbeel, “Guided cost learning: Deep inverse optimal control via policy optimization,” arXiv preprint arXiv:1603.00448, 2016.
-  J. Choi and K.-E. Kim, “Inverse reinforcement learning in partially observable environments,” Journal of Machine Learning Research, vol. 12, no. Mar, pp. 691–730, 2011.
-  S. Levine and V. Koltun, “Continuous inverse optimal control with locally optimal examples,” arXiv preprint arXiv:1206.4617, 2012.
-  M. Wulfmeier, P. Ondruska, and I. Posner, “Deep inverse reinforcement learning,” arXiv preprint arXiv:1507.04888, 2015.
-  D. Ramachandran and E. Amir, “Bayesian inverse reinforcement learning,” in Proceedings of the 20th International Joint Conference on Artifical Intelligence, ser. IJCAI’07. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc., 2007, pp. 2586–2591.
-  G. Neu and C. Szepesvári, “Apprenticeship learning using inverse reinforcement learning and gradient methods,” arXiv preprint arXiv:1206.5264, 2012.
-  K. Mombaur, A. Truong, and J.-P. Laumond, “From human to humanoid locomotion—an inverse optimal control approach,” Autonomous robots, vol. 28, no. 3, pp. 369–383, 2010.
-  C. Dimitrakakis and C. A. Rothkopf, “Bayesian multitask inverse reinforcement learning,” in European Workshop on Reinforcement Learning. Springer, 2011, pp. 273–284.
-  J. Choi and K.-E. Kim, “Nonparametric bayesian inverse reinforcement learning for multiple reward functions,” in Advances in Neural Information Processing Systems, 2012, pp. 305–313.
-  A. Boularias, J. Kober, and J. R. Peters, “Relative entropy inverse reinforcement learning,” in International Conference on Artificial Intelligence and Statistics, 2011, pp. 182–189.
-  K. Li and J. W. Burdick, “Large-scale inverse reinforcement learning via function approximation for clinical motion analysis,” arXiv preprint arXiv:1707.09394, 2017.
-  R. Vilalta and Y. Drissi, “A perspective view and survey of meta-learning,” Artificial Intelligence Review, vol. 18, no. 2, pp. 77–95, 2002.
-  A. Santoro, S. Bartunov, M. Botvinick, D. Wierstra, and T. Lillicrap, “Meta-learning with memory-augmented neural networks,” in International conference on machine learning, 2016, pp. 1842–1850.
-  B. Hariharan and R. Girshick, “Low-shot visual recognition by shrinking and hallucinating features,” arXiv preprint arXiv:1606.02819, 2016.
-  M. Andrychowicz, M. Denil, S. Gomez, M. W. Hoffman, D. Pfau, T. Schaul, and N. de Freitas, “Learning to learn by gradient descent by gradient descent,” in Advances in Neural Information Processing Systems, 2016, pp. 3981–3989.
-  D. Ha, A. Dai, and Q. V. Le, “Hypernetworks,” arXiv preprint arXiv:1609.09106, 2016.
-  N. Schweighofer and K. Doya, “Meta-learning in reinforcement learning,” Neural Networks, vol. 16, no. 1, pp. 5–9, 2003.
-  E. Parisotto, J. L. Ba, and R. Salakhutdinov, “Actor-mimic: Deep multitask and transfer reinforcement learning,” arXiv preprint arXiv:1511.06342, 2015.
-  Y. Duan, J. Schulman, X. Chen, P. L. Bartlett, I. Sutskever, and P. Abbeel, “Rl2: Fast reinforcement learning via slow reinforcement learning,” arXiv preprint arXiv:1611.02779, 2016.
-  C. Finn, P. Abbeel, and S. Levine, “Model-agnostic meta-learning for fast adaptation of deep networks,” arXiv preprint arXiv:1703.03400, 2017.
-  Y. Duan, M. Andrychowicz, B. Stadie, J. Ho, J. Schneider, I. Sutskever, P. Abbeel, and W. Zaremba, “One-shot imitation learning,” arXiv preprint arXiv:1703.07326, 2017.
-  K. Li and J. W. Burdick, “Bellman Gradient Iteration for Inverse Reinforcement Learning,” ArXiv e-prints, Jul. 2017.
-  S. Harkema, Y. Gerasimenko, J. Hodes, J. Burdick, C. Angeli, Y. Chen, C. Ferreira, A. Willhite, E. Rejc, R. G. Grossman et al., “Effect of epidural stimulation of the lumbosacral spinal cord on voluntary movement, standing, and assisted stepping after motor complete paraplegia: a case study,” The Lancet, vol. 377, no. 9781, pp. 1938–1947, 2011.