Maximum Entropy Multi-Task Inverse RL

Multi-task IRL allows for the possibility that the expert could be switching between multiple ways of solving the same problem, or interleaving demonstrations of multiple tasks. The learner aims to learn the multiple reward functions that guide these ways of solving the problem. We present a new method for multi-task IRL that generalizes the well-known maximum entropy approach by combining it with Dirichlet process based clustering of the observed input. This yields a single nonlinear optimization problem, called MaxEnt Multi-task IRL, which can be solved using Lagrangian relaxation and gradient descent. We evaluate MaxEnt Multi-task IRL in simulation on the robotic task of sorting onions on a processing line, where the expert utilizes multiple ways of detecting and removing blemished onions. The method learns the underlying reward functions to a high level of accuracy and improves on previous approaches to multi-task IRL.




1 Introduction

Inverse reinforcement learning (IRL) [Ng and Russell2000, Russell1998] refers to the problem of ascertaining an agent's preferences from observations of its behavior while executing a task. For instance, observing a human perform a task on the factory line provides information and facilitates learning the task. This passive mode of transferring skills to a collaborative robot (cobot) is strongly appealing because it significantly mitigates costly human effort, not only in manually programming the cobot but also in actively teaching the cobot through interventions. The learned preferences can be utilized by the cobot to imitate the observed task [Osa et al.2018] or to assist the human with it [Trivedi and Doshi2018].

Motivated by the long-term goal of bringing robotic automation to existing produce processing lines, we focus on the well-defined but challenging task of sorting onions. Our observations of persons engaged in this job in a real-world produce processing shed attached to a farm revealed multiple sorting techniques in common use. For example, in addition to the overt technique of picking up and inspecting a few of the onions as they pass by, we noticed that the humans would simply roll the onions (without picking them up) to expose more of their surface. The latter technique allows more onions to be assessed quickly, though less accurately. Consequently, the problem of learning how to sort onions from observations requires multi-task IRL [Arora and Doshi2018]. This variant of IRL allows for the possibility that the demonstrator could be switching between multiple reward functions, thereby exhibiting multiple ways of solving the given problem or performing multiple tasks. In a previous approach to multi-task Bayesian IRL, DPM-BIRL [Choi and Kim2012], a Dirichlet process model is used to perform non-parametric clustering of the trajectories, where each cluster corresponds to an underlying reward function. Different from Bayesian IRL, Babes-Vroman et al. [Babes-Vroman et al.2011] apply iterative EM-based clustering by replacing the mixture of Gaussians with a mixture of reward functions, and a maximum likelihood reward function is learned for each cluster.

Unlike previous approaches, we present a new method for multi-task IRL that generalizes the well-known maximum entropy approach to IRL (MaxEntIRL) [Ziebart et al.2008]. Arora and Doshi [Arora and Doshi2018] list MaxEntIRL as a key foundational technique for IRL in their survey. Indeed, applications of this technique in various contexts have yielded promising results [Bogert and Doshi2014, Wulfmeier and Posner2015]. A straightforward extension of MaxEntIRL to multiple tasks would be to replace the maximum likelihood computation in iterative EM with the nonlinear program of MaxEntIRL. But this would require repeatedly solving the nonlinear program for each of the multiple clusters. In contrast, we formulate the problem as a single entropy-based nonlinear program that combines the MaxEntIRL objective with the objective of finding a cluster assignment distribution having the least entropy. Ziebart et al. [Ziebart et al.2008] demonstrated the advantage that MaxEntIRL brings to single-task IRL in comparison to the Bayesian technique. We expect to leverage this benefit toward multi-task IRL.

Modeling multi-task IRL as a single optimization problem enables the direct application of well-studied optimization algorithms (e.g., fast gradient descent) to this problem. In particular, we derive the gradients of the Lagrangian relaxation of the nonlinear program, which then facilitates the use of fast gradient-descent based algorithms. We evaluate the performance of this method in comparison with two previous multi-task IRL techniques on a simulation in ROS Gazebo of the onion sorting problem. We show that the MaxEnt multi-task method improves on both and learns the reward functions to a high level of accuracy, which allows Sawyer to observe and reproduce both ways of sorting the onions while making few mistakes. However, we also observed room for improvement in one of the learned behaviors.

2 Background

In IRL, the task of a learner is to find a reward function under which the observed behavior of an expert, with dynamics modeled as an incomplete MDP (one missing its reward function), is optimal [Russell1998, Ng and Russell2000]. Abbeel and Ng [Abbeel and Ng2004] first suggested modeling the reward function as a linear combination of $K$ binary features, $\phi_k : S \times A \rightarrow \{0,1\}$, $k \in \{1,2,\ldots,K\}$, each of which maps a state $s \in S$ from the set of states and an action $a \in A$ from the set of the expert's actions to a value in $\{0,1\}$. The reward function is then defined as $R(s,a) = \boldsymbol{\theta}^\top \boldsymbol{\phi}(s,a)$, where $\theta_1, \ldots, \theta_K$ are the feature weights in vector $\boldsymbol{\theta}$. The learner's task is to find a vector $\boldsymbol{\theta}$ that completes the reward function, and subsequently the MDP, such that the observed behavior is optimal.

Many of the early methods for IRL biased their search for the solution to combat the ill-posed nature of IRL and the very large search space [Abbeel and Ng2004, Ziebart et al.2008]. Ziebart et al. [Ziebart et al.2008], taking a contrasting perspective, sought to find a distribution over all trajectories (sequences of state-action pairs) that exhibits the maximum entropy while being constrained to match the observed feature counts. The problem reduces to finding $\boldsymbol{\theta}$, which parameterizes the exponential distribution over trajectories that exhibits the highest likelihood. The corresponding nonlinear program is shown below.

$$\max_{\Delta} \; -\sum_{\tau \in \mathbb{T}} P(\tau) \log P(\tau) \quad \text{subject to} \quad \sum_{\tau \in \mathbb{T}} P(\tau) = 1, \qquad \sum_{\tau \in \mathbb{T}} P(\tau)\, \phi_k(\tau) = \hat{\phi}_k \;\; \forall k$$

Here, $\Delta$ is the space of all distributions over the set $\mathbb{T}$ of all trajectories, and $\phi_k(\tau) = \sum_{(s,a) \in \tau} \phi_k(s,a)$. Let $\mathcal{X}$ denote the set of observed trajectories. Then, the right-hand side of the second constraint above becomes $\hat{\phi}_k = \frac{1}{|\mathcal{X}|} \sum_{X \in \mathcal{X}} \sum_{(s,a) \in X} \phi_k(s,a)$.
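To make the feature-matching condition concrete, the following sketch computes the MaxEnt gradient – empirical minus expected feature counts – over a small, enumerable trajectory space, and fits $\boldsymbol{\theta}$ by plain gradient ascent. The toy trajectory set and learning rate are our own illustrative choices, not from the paper.

```python
import numpy as np

def maxent_gradient(theta, traj_features, observed_idx):
    """MaxEnt IRL gradient for an enumerable trajectory space:
    empirical feature counts minus feature counts expected under
    P(tau; theta) proportional to exp(theta . phi(tau))."""
    logits = traj_features @ theta
    logits = logits - logits.max()           # numerical stability
    p = np.exp(logits)
    p /= p.sum()
    phi_hat = traj_features[observed_idx].mean(axis=0)  # empirical counts
    expected = p @ traj_features                        # model expectation
    return phi_hat - expected

# Toy space of 3 trajectories described by 2 binary feature counts
feats = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
theta = np.zeros(2)
for _ in range(500):                         # plain gradient ascent
    theta += 0.5 * maxent_gradient(theta, feats, observed_idx=[0])
```

At convergence the gradient vanishes, i.e., the learned distribution matches the observed feature counts.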

2.1 Multi-Task IRL

The same job on a processing line may be performed in one of many ways, each guided by a distinct set of preferences. An expert may switch between these varied behaviors as it performs the job, or multiple experts may interleave to perform the job. If the reward functions producing these behaviors are distinct, then using traditional IRL methods would yield a single reward function that cannot explain the observed trajectories accurately. Modeling it as a multi-task problem, however, allows the possibility of learning multiple reward functions. If the number of involved reward functions is pre-determined, we may view each unknown reward function as a generative model producing one cluster of trajectories among the observed set. As both the reward weights and the set of observed trajectories generated by each reward function are unknown, we may utilize iterative EM to learn both [Babes-Vroman et al.2011].

On the other hand, if the number of reward functions is not known a priori, multi-task IRL can be viewed as non-parametric mixture model clustering, which is typically anchored by a Dirichlet process [Gelman et al.2013]. We briefly review the Dirichlet process and outline its previous use in DPM-BIRL [Choi and Kim2012].

A Dirichlet process (DP) is a stochastic process whose sample paths are probability distributions drawn according to the Dirichlet distribution. More formally, $G \sim DP(\alpha, H)$ if $(G(B_1), \ldots, G(B_n)) \sim \text{Dirichlet}(\alpha H(B_1), \ldots, \alpha H(B_n))$, where $\{B_1, \ldots, B_n\}$ is some partition of the space over which the base distribution $H$ is defined. Here, $\alpha > 0$ is the concentration parameter of the Dirichlet distribution. Observations distributed according to $G$ allow us to update the DP. Let $\theta_1, \ldots, \theta_m$ be a sequence of observations and $n_j$ be the number of these observations that lie in $B_j$. Then, the posterior distribution is $(G(B_1), \ldots, G(B_n)) \mid \theta_1, \ldots, \theta_m \sim \text{Dirichlet}(\alpha H(B_1) + n_1, \ldots, \alpha H(B_n) + n_n)$.
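Numerically, this posterior update on a finite partition reduces to adding the per-cell observation counts to the scaled base measure. The sketch below illustrates this with a toy 3-cell partition of our own choosing; NumPy's `dirichlet` sampler stands in for draws of $(G(B_1), G(B_2), G(B_3))$.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 1.0
H = np.array([0.5, 0.3, 0.2])     # base measure over a 3-cell partition
counts = np.array([4, 0, 1])      # observations falling in each cell B_j

# Posterior over (G(B_1), G(B_2), G(B_3)) is Dirichlet(alpha*H + counts)
posterior_draws = rng.dirichlet(alpha * H + counts, size=10_000)
# The Monte Carlo mean approaches (alpha*H + counts) / (alpha + 5)
posterior_mean = posterior_draws.mean(axis=0)
```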

DPs find application in Bayesian mixture model clustering [Gelman et al.2013] due to an interesting property exhibited by $G$. Irrespective of whether the base distribution $H$ is smooth, $G$ is a discrete distribution. Therefore, i.i.d. draws $\theta_i \sim G$ will repeat as the number of draws grows, and these repeats can be seen as cluster assignments. A generic DP-based Bayesian mixture model can be defined as:

$$G \sim DP(\alpha, H), \qquad \theta_i \sim G, \qquad x_i \sim F(\theta_i)$$

where data point $x_i$ has distribution $F(\theta_i)$. Notice the lack of any bound on the number of mixture components. To utilize this mixture model for clustering observed data $x_1, \ldots, x_n$, we must additionally assign each data point to its originating cluster; these assignments are drawn from convex mixture weights $(\pi_1, \pi_2, \ldots)$, which are themselves distributed randomly. The number of components may grow as large as needed. The popular stick-breaking construction imposes the distribution $\pi_k = \beta_k \prod_{l=1}^{k-1} (1 - \beta_l)$ on mixture weight $\pi_k$, where parameter $\beta_k \sim \text{Beta}(1, \alpha)$ is sampled from the Beta distribution. We may then obtain a cluster assignment $z_i$ for data point $x_i$ by sampling the categorical distribution parameterized by event probabilities $(\pi_1, \pi_2, \ldots)$:

$$z_i \sim \text{Categorical}(\pi_1, \pi_2, \ldots)$$

Then, for each data point,

$$x_i \sim F(\theta^*_{z_i})$$

where $\theta^*_k$ denotes a unique component parameter value.
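The stick-breaking construction above can be sampled directly; the truncated sketch below draws mixture weights and cluster assignments (the truncation level and $\alpha$ are arbitrary toy choices on our part):

```python
import numpy as np

def stick_breaking_weights(alpha, num_components, rng):
    """Truncated stick-breaking: pi_k = beta_k * prod_{l<k} (1 - beta_l),
    with beta_k ~ Beta(1, alpha). Larger alpha spreads mass over more sticks."""
    betas = rng.beta(1.0, alpha, size=num_components)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    return betas * remaining

rng = np.random.default_rng(0)
pi = stick_breaking_weights(alpha=2.0, num_components=50, rng=rng)
# Cluster assignments z_i ~ Categorical(pi); repeated values act as clusters
z = rng.choice(len(pi), size=100, p=pi / pi.sum())
```

Truncation leaves a tiny amount of unassigned stick mass, which is why the weights are renormalized before sampling assignments.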

Choi and Kim [Choi and Kim2012] utilize this Bayesian mixture model application of a DP toward multi-task IRL. The data points $X_i$ are the observed trajectories, the $\boldsymbol{\theta}^*_k$ parameterize the distinct reward functions, and $F(X_i; \boldsymbol{\theta}^*_k)$ corresponds to the trajectory likelihood $\frac{1}{Z(\boldsymbol{\theta}^*_k)} \prod_{t=1}^{T} e^{\,Q^*(s_t, a_t;\, \boldsymbol{\theta}^*_k)}$, where $T$ is the fixed length of the trajectory, $Z(\boldsymbol{\theta}^*_k)$ is the partition function, and the base distribution $H$ is taken as a Gaussian. Neal [Neal2000] discusses several MCMC algorithms for posterior inference on DP-based mixture models, and Choi and Kim selected Metropolis-Hastings.

3 MaxEnt Multi-Task IRL

Ziebart et al. [Ziebart et al.2008] note a key benefit of the MaxEnt distribution over trajectories relative to the distribution mentioned in Section 2.1 (which is the prior over trajectories utilized in the Bayesian formulation of IRL [Ramachandran and Amir2007]), despite their initial similarities. Specifically, the latter formulation, which decomposes the trajectory into its constituent state-action pairs and obtains the probability of each state-action pair as proportional to the exponentiated Q-function, is vulnerable to label bias: due to the locality of the action probability computation, the distribution over trajectories is impacted by the number of action choice points (branching) encountered by a trajectory. The MaxEnt distribution does not suffer from this bias.

This important observation motivates a new method that combines the non-parametric clustering of trajectories and learning multiple reward functions by finding trajectory distributions of maximum entropy. This method would have the benefit of avoiding the label bias, which afflicts the previous technique of DPM-BIRL.

3.1 Unified Optimization

A straightforward approach to the combination would be to replace the parametric distribution in MaxEntIRL with $F(X_i; \boldsymbol{\theta}^*_{z_i})$ from the DP-based mixture model, the distribution over trajectories with cluster assignment value $z_i$. Solving the nonlinear program would then yield the parameter $\boldsymbol{\theta}^*_{z_i}$ that maximizes the entropy of this distribution. Though simple, this approach is inefficient because it requires solving the MaxEnt program repeatedly – each time the DP-based mixture model is updated. As an analytical solution of MaxEnt is not available, the optimization is performed numerically using either gradient descent [Ziebart et al.2008] or L-BFGS [Bogert and Doshi2014].

Instead, we pursue an approach that adds key elements of the DP-based mixture modeling to the nonlinear program of MaxEnt optimization. MaxEnt can learn the component parameters $\boldsymbol{\theta}^*_k$ (these are the Lagrangian multipliers), which maximize the entropy of the distribution over those trajectories whose cluster assignment is $z_X = k$. Subsequently, each component distribution assumes the form of an exponential-family distribution parameterized by $\boldsymbol{\theta}^*_k$, which is known to exhibit the maximum entropy. For our DPM model, the distribution is the mixture $P(X) = \sum_k \pi_k P(X \mid z = k)$. To the multi-task max-entropy objective, we add a second objective of finding component weights $\pi$ that exhibit minimal entropy. The effect of this second objective is to learn the minimal number of distinct clusters. More formally, the objective function is

$$\max \; \left( -\sum_{X} P(X) \log P(X) \right) + \sum_k \pi_k \log \pi_k$$

where $P(X)$ can be written as $\sum_k \delta_{z_X = k}\, \pi_k\, P(X \mid z = k)$, $\delta_{z_X = k}$ is the Kronecker delta taking a value of 1 when $z_X = k$ and 0 otherwise, and $P(\cdot \mid z = k)$ is the distribution over all the trajectories for cluster $k$. The unified nonlinear optimization problem is shown below.

subject to

$$\sum_{X \in \mathbb{T}} P(X) = 1, \qquad \sum_{X \in \mathbb{T}} P(X \mid z = k)\, \phi_j(X) = \hat{\phi}_{j,k} \;\; \forall j, k, \qquad \sum_k \pi_k = 1, \; \pi_k \geq 0 \qquad (3)$$

The first constraint above simply ensures that the joint probability distribution sums to 1. The second constraint makes the analogous constraint in MaxEntIRL more specific to matching the expectations of the feature functions that belong to the reward function of cluster $k$. Here, $\hat{\phi}_{j,k} = \frac{1}{|\mathcal{X}_k|} \sum_{X \in \mathcal{X}_k} \phi_j(X)$ is the empirical count of feature $j$ over the set $\mathcal{X}_k$ of observed trajectories currently assigned to cluster $k$.
Constraint 3 of the program in (3) ensures that the mixture weights are convex. Notice that the DP-based mixture model obtains the cluster assignments $z$ from the mixture weights $\pi$. We may approximate this sampling simply as $\pi_k = \frac{1}{|\mathcal{X}|} \sum_{X \in \mathcal{X}} \delta_{z_X = k}$, which is the proportion of observed trajectories currently assigned to cluster $k$. For notational convenience, let us denote $\delta_{z_X = k}$ as the indicator $z_X^k$. We may then rewrite the first constraint as

$$\sum_{X \in \mathbb{T}} \sum_k z_X^k\, \pi_k\, P(X \mid z = k) = 1 \qquad (4)$$

and the second constraint is rewritten as

$$\sum_{X \in \mathbb{T}} z_X^k\, P(X \mid z = k)\, \phi_j(X) = \hat{\phi}_{j,k} \quad \forall j, k \qquad (5)$$

Furthermore, we may simplify the third constraint of the nonlinear program as follows:

$$\sum_k \pi_k = \sum_k \frac{1}{|\mathcal{X}|} \sum_{X \in \mathcal{X}} z_X^k = \frac{1}{|\mathcal{X}|} \sum_{X \in \mathcal{X}} \sum_k z_X^k = 1$$

The last equivalence follows from the fact that every observed trajectory must belong to exactly one cluster, so $\sum_k z_X^k = 1$. The final form of the NLP of (3) is as follows.

$$\max \; \left( -\sum_{X} P(X) \log P(X) \right) + \sum_k \pi_k \log \pi_k \qquad (6)$$

subject to the rewritten constraints, which are as defined in Eqs. 4 and 5.
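The approximation of the mixture weights as cluster proportions, together with the minimum-entropy term on those weights, can be illustrated as follows (a toy sketch with our own function names, not the paper's implementation):

```python
import numpy as np

def mixture_weights(assignments, num_clusters):
    """pi_k approximated as the fraction of observed trajectories currently
    assigned to cluster k, as in the simplification above."""
    counts = np.bincount(assignments, minlength=num_clusters)
    return counts / len(assignments)

def weight_entropy(pi):
    """Entropy of the mixture weights. The unified objective *adds*
    sum_k pi_k log pi_k, i.e., it rewards low-entropy (fewer-cluster) weights."""
    nz = pi[pi > 0]
    return -np.sum(nz * np.log(nz))

pi = mixture_weights(np.array([0, 0, 0, 1]), num_clusters=2)  # [0.75, 0.25]
```

A degenerate weight vector such as $(1, 0)$ has zero entropy, so solutions that explain the data with fewer clusters score better on this term.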

3.2 Gradient Descent

The Lagrangian relaxation $L$ of the nonlinear program in (6) introduces a multiplier vector $\boldsymbol{\theta}_k$ for each cluster's feature-matching constraints (Eq. 5) and a multiplier $\eta$ for the normalization constraint (Eq. 4), where the multipliers $\eta$ can be substituted out by using relations derived by equating the derivatives of $L$ w.r.t. the variables of optimization to 0. The target is to learn the multipliers $\boldsymbol{\theta}_k$ (the weights of the linear reward function for each learned cluster $k$) and the variables $z_X^k$ (for each trajectory $X$) that achieve the optimum. We achieve this target via gradient ascent for $\boldsymbol{\theta}_k$ and descent for $z_X^k$ using the following partial derivatives:

$$\frac{\partial L}{\partial \theta_{j,k}} = \hat{\phi}_{j,k} - \sum_{X} z_X^k\, P(X \mid z = k)\, \phi_j(X)$$

and an analogous derivative $\frac{\partial L}{\partial z_X^k}$ involving the cluster proportion $\pi_k$ and the likelihood $P(X \mid z = k)$. The former derivative is the same as that used for single-task MaxEntIRL. The latter derivative indicates that the chance of a change in assignment is lower if a cluster has many trajectories assigned to it (the gradient is inversely proportional to the cluster size) and a higher likelihood of generating the trajectory. Due to lack of space, we do not show the derivations of these gradients in this paper.
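Since the derivations are omitted, the following is only a schematic of the alternating optimization: gradient ascent on each cluster's reward weights using the single-task MaxEntIRL feature-matching gradient, followed by hard reassignment of trajectories using a log cluster-size term that mimics the size dependence of the assignment gradient. It assumes an enumerable trajectory space and zero-initialized weights; it is a simplified sketch, not the authors' algorithm.

```python
import numpy as np

def cluster_log_probs(theta_k, traj_features):
    """log P(tau | z=k) with P proportional to exp(theta_k . phi(tau)),
    over an enumerable trajectory space."""
    logits = traj_features @ theta_k
    logits = logits - logits.max()
    return logits - np.log(np.exp(logits).sum())

def me_mtirl_sketch(traj_features, observed, K, iters=200, lr=0.2):
    thetas = np.zeros((K, traj_features.shape[1]))
    z = np.arange(len(observed)) % K          # simple initial assignment
    for _ in range(iters):
        # (a) per-cluster MaxEnt gradient ascent on reward weights
        for k in range(K):
            members = [observed[i] for i in range(len(observed)) if z[i] == k]
            if not members:
                continue
            p = np.exp(cluster_log_probs(thetas[k], traj_features))
            phi_hat = traj_features[members].mean(axis=0)
            thetas[k] += lr * (phi_hat - p @ traj_features)
        # (b) reassign: bigger clusters and better-fitting rewards attract
        sizes = np.bincount(z, minlength=K) + 1e-9
        for i, t in enumerate(observed):
            scores = [np.log(sizes[k]) + cluster_log_probs(thetas[k], traj_features)[t]
                      for k in range(K)]
            z[i] = int(np.argmax(scores))
    return thetas, z
```

On a toy space with two trajectory types demonstrated alternately, the sketch separates the demonstrations into two stable clusters with distinct weight vectors.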

4 Domain: Robotic Sorting of Onions

Our broader vision is to make it easy to deploy robotic manipulators on complex processing lines involving pick-inspect-place tasks using IRL. With this vision, we seek to deploy the robotic arm Sawyer for sorting vegetables in processing sheds. Our setup involves a learner robot observing an expert sort onions in a post-harvest processing facility. The expert aims to identify and remove onions with blemishes from the collection of onions present on a conveyor belt. Blemished onions are dropped in a bin while others are allowed to pass. We simulate a human expert with another robot.

Figure 1: A demonstration of the two onion sorting behaviors by the expert in ROS Gazebo. In the first behavior, the Sawyer robotic arm rolls its gripper over the onions, thereby exposing more of their surface area; possibly blemished onions are then picked and placed in the bin. In the second, Sawyer picks an onion, inspects it closely to check whether it is blemished, and places it in the bin on finding it to be blemished.

In a visit to a real-world onion processing line attached to a farm, we observed that two distinct sorting techniques were in common use and would be interleaved by the human sorters. Subsequently, we model the expert as acting according to the output of two MDPs both of which share the state and action sets, the transition function and the reward feature functions. They differ in the weights assigned to the features, which yields different behaviors. The specific task is to learn the reward functions underlying the two MDPs.

The state of a sorter is perfectly observed and composed of four factors: onion location, gripper location, quality prediction, and availability of multiple predictions. Here, the onion's location can be on the sorting table, picked up, under inspection (which involves taking it closer to the head), inside the blemished-onion bin, or returned to the table post inspection. Gripper locations are similar but do not include the return to the table. The quality prediction for the onion can be blemished, unblemished, or unknown. Finally, a simultaneous quality prediction for multiple onions is either available or not.

The expert's actions involve focusing attention on a new onion on the table at random, picking it up, bringing the grasped onion closer and inspecting it, placing it in the bin, placing it back on the table, rolling its gripper over the onions, and focusing attention on the next onion among those whose quality has been predicted. The features utilized as part of the reward functions are:

  • BlemishedOnTable(s, a) is 1 if the considered onion is predicted to be blemished and it is on the table, 0 otherwise;

  • GoodOnTable(s, a) is 1 if the considered onion is predicted to be unblemished and it is on the table, 0 otherwise;

  • BlemishedInBin(s, a) is 1 if the considered onion is predicted to be blemished and it is in the bin, 0 otherwise;

  • GoodInBin(s, a) is 1 if the considered onion is predicted to be unblemished and it is in the bin, 0 otherwise;

  • MakeMultiplePredictions(s, a) is 1 if the action makes predictions for multiple onions simultaneously, 0 otherwise;

  • InspectNewOnion(s, a) is 1 if the considered onion is inspected for the first time and a quality prediction is made for it, 0 otherwise;

  • AvoidNoOp(s, a) is 1 if the action changes the state of the expert, 0 otherwise;

  • PickAlreadyPlaced(s, a) is 1 if the action picks an onion that has already been placed after inspection, 0 otherwise. This feature helps avoid pick-place-pick cycles.

Two distinct vectors of real-valued weights on these feature functions yield two distinct reward functions. The MDP with one of these solves to a policy that makes the expert randomly pick an onion from the table, inspect it closely, and place it in the bin if it appears blemished, otherwise place it back on the table. The second reward function yields a policy that has the expert robot roll its gripper over the onions, quickly assess all of them, and place the few that seem blemished in the bin. Both sorting techniques are illustrated in simulation in Fig. 1.

5 Experiments

We simulated the domain described in Section 4 in the 3D simulator Gazebo 7, available as part of ROS Kinetic, with the robotic arm Sawyer functioning as a stand-in for the expert grader. Sawyer is a single robot arm with 7 degrees of freedom and a range of about 1.25m. We partially simulate a moving conveyor belt in Gazebo by repeatedly making a collection of onions, some of them blemished, appear on the table for a fixed amount of time, after which the onions disappear. Sawyer is tasked with sorting as many onions as possible from each collection before it disappears. The locations of the onions are internally tracked and made available to the expert to facilitate the task. We utilize the MoveIt motion planning framework to plan Sawyer's various sorting actions, which are then reflected in Gazebo.

Metrics   A known metric for evaluating IRL's performance is the inverse learning error (ILE) [Choi and Kim2011], which gives the loss in value if the learner uses the policy obtained by solving the expert's MDP with the learned reward function (parameterized by $\hat{\boldsymbol{\theta}}$) instead of the expert's true policy obtained by solving its MDP with its actual reward function:

$$ILE = \frac{1}{|\mathcal{X}|} \sum_{X \in \mathcal{X}} \left\| V^{\pi_{\theta_{z_X}}} - V^{\pi_{\hat{\theta}_{z_X}}} \right\|_1$$

Here, $\boldsymbol{\theta}_{z_X}$ and $\hat{\boldsymbol{\theta}}_{z_X}$ are the true and learned reward function weights (component parameters) for the cluster assigned to the observed trajectory $X$, and $\pi_{\theta}$ denotes the corresponding optimal policy. Note that ILE averages the value loss over the observed trajectories.

Another pair of metrics measures the performance of Sawyer on onion sorting using the learned reward functions. Precision is the proportion of actually blemished onions among all onions placed in the bin. Recall is the proportion of actually blemished onions placed in the bin out of all actually blemished onions on the table. As careful inspection tends to be more accurate than simply rolling over the onions, we expect the pick-inspect-place behavior to exhibit a higher precision than the alternative. On the other hand, it is slower than rolling and placing, hence its recall is expected to be lower.
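These two metrics follow directly from the confusion counts reported in Table 1; a small helper (our own naming) reproduces, for example, the expert's pick-inspect-place row:

```python
def precision_recall(tp, fp, fn, tn):
    """Precision: blemished onions among all onions placed in the bin.
    Recall: binned blemished onions out of all blemished onions present."""
    precision = 100.0 * tp / (tp + fp) if (tp + fp) else 0.0
    recall = 100.0 * tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Expert, pick-inspect-place row of Table 1: (TP, FP, FN, TN) = (4, 0, 8, 12)
p, r = precision_recall(4, 0, 8, 12)   # precision 100%, recall ~33%
```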

Figure 2: Average ILEs of ME-MTIRL, DPM-BIRL and EM-MLIRL as the number of trajectories increases. Vertical bars are the standard deviations; note that these bars may have unequal heights around the mean due to the log scale of the y-axis.

Performance evaluation   We used the ILE metric as defined previously to measure the performance of MaxEnt Multi-task IRL (ME-MTIRL). Figure 2 shows the trend of average ILE as the number of input trajectories is increased. It also shows the performance of the previous multi-task IRL techniques: DPM-BIRL [Choi and Kim2012] and EM-MLIRL [Babes-Vroman et al.2011]. Each data point is the average of 5 runs. In each run, the gradient ascents and descents are allowed to stabilize, thereby yielding stable feature weights and cluster assignments. Note the expected monotonic decrease in ILE exhibited by ME-MTIRL as the number of demonstration trajectories increases. Unlike the two DP-based methods, EM-MLIRL performed significantly worse throughout, converging to incorrect reward functions that were likely local optima. ME-MTIRL exhibits an ILE that is consistently lower than that of the Bayesian method (DPM-BIRL), which is indicative of better learning performance. Notice that 64 trajectories appear to be sufficient to obtain a very low ILE. Finally, the method learned that there were two clusters in most runs, though a few runs yielded just one cluster.

Method               Technique            (TP, FP, FN, TN)   P%, R%
Expert               Pick-inspect-place   (4, 0, 8, 12)      100, 33
                     Roll-pick-place      (8, 4, 4, 8)       66, 66
Learned (ME-MTIRL)   Pick-inspect-place   (3, 0, 9, 12)      100, 25
                     Roll-pick-place      (6, 4, 6, 8)       60, 50
Learned (DPM-BIRL)   Pick-inspect-place   (2, 0, 10, 12)     100, 16.7
                     Roll-pick-place      (5, 5, 7, 7)       50, 41.7
Learned (EM-MLIRL)   Pick-inspect-place   (3, 1, 9, 11)      75, 25
                     Roll-pick-place      (5, 4, 7, 8)       55.6, 41.7

Table 1: Column label TP denotes true positives (# blemished onions in bin), FP denotes false positives (# good onions in bin), TN denotes true negatives (# good onions remaining on conveyor), and FN denotes false negatives (# blemished onions remaining on conveyor). P and R denote precision and recall in %, respectively.

However, does the improvement in learning translate to improved performance on the sorting task? In Table 1, we show the average precision and recall of the expert engaged in using the two sorting techniques, and the analogous metrics for the behaviors learned by all three IRL approaches. We used the average feature weights learned across 5 trials. The performance of the behaviors learned by ME-MTIRL is closer to the expert's than those learned by the two baseline methods. Notice that the learned pick-inspect-place behavior shows high precision but leaves many onions on the table, leading to worse recall. It sometimes gets trapped in a cycle where Sawyer repeatedly picks and places the same onion on the table. This is because a very low weight is learned for the PickAlreadyPlaced(s, a) feature function; a higher weight for this feature would have avoided the cycle. On the other hand, the roll-pick-place behavior is learned satisfactorily and exhibits precision and recall close to those of the true behavior.

6 Related Work

An early work on multi-task preference learning [Birlutiu et al.2009] represents the relationships among various preferences using a hierarchical Bayesian model. Dimitrakakis and Rothkopf [Dimitrakakis and Rothkopf2012] generalized this problem of learning from multiple teachers to the dynamic setting of IRL, giving the first theoretically sound formalization of multi-task IRL. They modeled the reward and policy functions as being drawn from a common prior, and placed a hyperprior over the common prior. Both the prior and the hyperprior are updated using the observed trajectories as evidence, with the posterior samples capturing the distribution over reward functions that explain the observed trajectories. The inference problem is intractable, however, even for toy-sized MDPs.

The above setting of multi-task IRL was also restricted in the sense that it was the “labeled” variant: although the reward functions generating the trajectories were unknown, the membership of the observed trajectories in the set generated by a common reward function was assumed known. Babes-Vroman et al. [Babes-Vroman et al.2011] first addressed the “unlabeled” variant of multi-task IRL, where the pairing between the unknown rewards and the trajectories is also unknown. They used expectation maximization to cluster trajectories, and learned a maximum likelihood reward function for each cluster. By contrast, Choi and Kim [Choi and Kim2012] took the Bayesian IRL approach using a Dirichlet process mixture model, performing non-parametric clustering of trajectories and allowing a variable number of clusters for the “unlabeled” variant. Both approaches treated multi-task IRL as multiple single-task IRL problems, which is also the viewpoint taken in our work. Gleave and Habryka [Gleave and Habryka2018] take a markedly different viewpoint: they assume that the reward functions for most tasks lie close to the mean across all tasks. They propose a regularized version of MaxEnt IRL that exploits this assumed similarity among the unknown reward functions, thus transferring information across tasks. Although theirs is also an application of MaxEnt IRL to the multi-task setting, our approach is a principled integration of the Dirichlet process mixture model into MaxEnt IRL.

The problem of learning from multi-expert demonstrations has also been studied from a non-IRL perspective, specifically in imitation learning, such as generative adversarial imitation learning (GAIL) [Ho and Ermon2016]. Recent extensions of GAIL [Hausman et al.2017, Li et al.2017] augment trajectories with a latent intention variable specifying the task, and then maximize the mutual information between the observed trajectories and the intention variable to disentangle the trajectories generated from different tasks. On the other hand, GAIL's performance has been difficult to replicate, and MaxEntIRL has been shown to perform better than GAIL on single tasks [Arora et al.2019].

7 Concluding Remarks

Humans may exhibit multiple distinct ways of solving a problem, each of which optimizes a different reward function while still sharing the features. For IRL to remain useful in this context, it must be generalized to learn not only how many reward functions are being utilized in the demonstration, but also the parameters of these reward functions. We presented a new multi-task IRL method that combines maximum entropy IRL – a key IRL technique – with elements of the Dirichlet process Bayesian mixture model. While keeping the number of learned behaviors to the minimum necessary to explain the observations (minimum-entropy clustering), it leverages the advantages of maximum entropy IRL and facilitates solving this generalization directly as a unified optimization problem. On a real-world inspired domain, we showed that it improves on the previous multi-task IRL methods. The behaviors induced by the learned reward functions imitated the observed ones for the most part.

Having established the value of combining MaxEnt with multi-task IRL in this paper, our next step is to explore how well this method scales to problems with more actions and tasks. An avenue of ongoing work aims to derive sample complexity bounds by relating the optimization to a maximum likelihood problem. Such bounds could inform the number of trajectories needed for a given level of learning performance.


  • [Abbeel and Ng2004] Pieter Abbeel and Andrew Y. Ng. Apprenticeship learning via inverse reinforcement learning. In Twenty-first International Conference on Machine Learning (ICML), pages 1–8, 2004.
  • [Arora and Doshi2018] Saurabh Arora and Prashant Doshi. A survey of inverse reinforcement learning: Challenges, methods and progress. CoRR, abs/1806.06877, 2018.
  • [Arora et al.2019] Saurabh Arora, Prashant Doshi, and Bikramjit Banerjee. Online inverse reinforcement learning under occlusion. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, 2019.
  • [Babes-Vroman et al.2011] Monica Babes-Vroman, Vukosi Marivate, Kaushik Subramanian, and Michael Littman. Apprenticeship learning about multiple intentions. In 28th International Conference on Machine Learning (ICML), pages 897–904, 2011.
  • [Birlutiu et al.2009] A. Birlutiu, P. Groot, and T. Heskes. Multi-task preference learning with Gaussian processes. In Proc. ESANN, pages 123–128, 2009.
  • [Bogert and Doshi2014] Kenneth Bogert and Prashant Doshi. Multi-robot inverse reinforcement learning under occlusion with interactions. In Proceedings of the 2014 International Conference on Autonomous Agents and Multi-agent Systems, AAMAS ’14, pages 173–180, 2014.
  • [Choi and Kim2011] Jaedeug Choi and Kee-Eung Kim. Inverse reinforcement learning in partially observable environments. J. Mach. Learn. Res., 12:691–730, 2011.
  • [Choi and Kim2012] Jaedeug Choi and Kee-Eung Kim. Nonparametric bayesian inverse reinforcement learning for multiple reward functions. In 25th International Conference on Neural Information Processing Systems (NIPS), pages 305–313, 2012.
  • [Choi and Kim2013] Jaedeug Choi and Kee-Eung Kim. Bayesian nonparametric feature construction for inverse reinforcement learning. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence, IJCAI ’13, pages 1287–1293. AAAI Press, 2013.
  • [Dimitrakakis and Rothkopf2012] Christos Dimitrakakis and Constantin A. Rothkopf. Bayesian multitask inverse reinforcement learning. In 9th European Conference on Recent Advances in Reinforcement Learning, pages 273–284, 2012.
  • [Gelman et al.2013] Andrew Gelman, John Carlin, Hal Stern, David Dunson, Aki Vehtari, and Donald Rubin. Bayesian Data Analysis. CRC Press, 3rd edition, 2013.
  • [Gleave and Habryka2018] A. Gleave and O. Habryka. Multi-task maximum entropy inverse reinforcement learning. arXiv preprint, (arXiv:1805.08882), 2018.
  • [Hausman et al.2017] Karol Hausman, Yevgen Chebotar, Stefan Schaal, Gaurav Sukhatme, and Joseph J. Lim. Multi-modal imitation learning from unstructured demonstrations using generative adversarial nets. In Proc. NIPS, 2017.
  • [Ho and Ermon2016] Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. In Advances in Neural Information Processing Systems (NIPS) 29, pages 4565–4573, 2016.
  • [Li et al.2017] Yunzhu Li, Jiaming Song, and Stefano Ermon. InfoGAIL: Interpretable imitation learning from visual demonstrations. In Proc. NIPS, 2017.
  • [Neal2000] Radford Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9(2), 2000.
  • [Ng and Russell2000] Andrew Ng and Stuart Russell. Algorithms for inverse reinforcement learning. In Seventeenth International Conference on Machine Learning, pages 663–670, 2000.
  • [Osa et al.2018] Takayuki Osa, Joni Pajarinen, Gerhard Neumann, J. Andrew Bagnell, Pieter Abbeel, and Jan Peters. An algorithmic perspective on imitation learning. Foundations and Trends® in Robotics, 7(1-2):1–179, 2018.
  • [Ramachandran and Amir2007] Deepak Ramachandran and Eyal Amir. Bayesian inverse reinforcement learning. In 20th International Joint Conference on Artifical Intelligence (IJCAI), pages 2586–2591, 2007.
  • [Russell1998] Stuart Russell. Learning agents for uncertain environments (extended abstract). In Eleventh Annual Conference on Computational Learning Theory, pages 101–103, 1998.
  • [Trivedi and Doshi2018] M. Trivedi and P. Doshi. Inverse learning of robot behavior for collaborative planning. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 1–9, 2018.
  • [Wulfmeier and Posner2015] Markus Wulfmeier and Ingmar Posner. Maximum Entropy Deep Inverse Reinforcement Learning. arXiv preprint, 2015.
  • [Ziebart et al.2008] Brian D. Ziebart, Andrew Maas, J. Andrew Bagnell, and Anind K. Dey. Maximum entropy inverse reinforcement learning. In 23rd National Conference on Artificial Intelligence - Volume 3, pages 1433–1438, 2008.