Planning trajectories for a robot that interacts with other agents is challenging, as it requires prediction of the reactive behaviors of the other agents, in addition to planning for the robot itself. Classical approaches in the literature decouple the prediction and planning tasks. Usually, predicted trajectories of the other agents are computed first and provided as input for the robot planning module, which considers them as immutable obstacles. This formulation ignores the influence of the robot’s decisions on the other agents’ behaviors. Moreover, it can lead to the “frozen robot” problem, where no safe path to the goal can be found by the planner [Trautman2010] because of the false assumption that other agents will not yield or deviate from their predicted trajectory in response to the robot. Preserving the coupling between prediction and planning is thus key to producing richer interactive behavior for a robot acting among other agents.
Several recent works have used the theory of dynamic games to capture the coupled interaction among a robot and other agents, particularly in the context of autonomous driving [LeCleach2020a, Fridovich-Keil2020]. However, these works rely on the strong assumption that the robot has full knowledge of the other agents’ objective functions. In many applications, the robot only has access to a coarse estimate of these objective functions. For instance, in a crowd navigation problem, the robot might know the preferred walking speed of humans. In a ramp-merging scenario, the autonomous car might be aware of the desired distance drivers usually keep between themselves. These coarse estimates of the other agents’ objectives can be obtained using real data like the NGSIM driving dataset [Colyar2007] or the ETH dataset [Pellegrini2009]
for interacting pedestrians. Inverse reinforcement learning (IRL) approaches typically learn a general objective function to suit a large batch of demonstrations from multiple agents [Ng2000, Ziebart2008, Waugh2013]. In contrast, our goal is to accurately estimate the individualized objective functions of specific agents in the vicinity of the robot we control. This online estimation process occurs while interacting with the other agents, so that the robot can adapt to each agent individually. For instance, our approach allows for estimating the level of aggressiveness of a specific driver in the surroundings of the autonomous car.
The online estimation approach we propose is complementary to classical offline IRL methods: with IRL, we can learn a relevant set of objective-function features from real data, as well as a prior distribution over the objective-function parameters. Given these features and a prior on the parameters, we can use LUCIDGames to refine the parameter estimate for a specific agent based on online observation of this agent. Our approach assumes that agents solve a dynamic game and follow Nash equilibrium strategies. This setting models non-cooperating agents that act optimally to satisfy their individual objectives [LeCleach2020a, Fridovich-Keil2020]. We further assume that we have access to a class of objective functions parameterized by a small number of parameters. In the autonomous driving context, these parameters could be the desired speed or the driver's aggressiveness.
To estimate these parameters, we adopt the unscented Kalman filtering (UKF) approach. In our case, the key part of this algorithm is the measurement model that maps the objective function parameters to the observation of the surrounding agents’ next state. To obtain this mapping, we use ALGAMES, a trajectory optimization solver for dynamic games that handles general nonlinear state and input constraints [LeCleach2020a]. The choice of a derivative-free estimation method (UKF) is justified by the complexity of the measurement model, which includes multiple non-convex constrained optimization problems. Additionally, we design a planner for the robot that is robust to poor estimates of the other agents’ objectives. By sampling from the belief over the objective functions of the other agents and computing trajectories corresponding to those samples, we can translate the uncertainty in objective functions into uncertainty in predicted trajectories. Then, ellipsoidal bounds are fitted to the sampled trajectories to form “safety constraints”: collision constraints that account for objective uncertainty. Importantly, the calculation of these safety constraints reuses samples required by the UKF estimation algorithm and is therefore executed at a negligible additional cost. In a receding-horizon loop, LUCIDGames controls one agent called the “robot” and estimates the other agents’ objectives at 40 Hz for a 3-player game with a strong level of interaction among the agents. Our primary contributions are as follows:
We propose a UKF-based method for a robot to estimate the objective function parameters of non-cooperating agents online, and show convergence of the estimate to the ground-truth parameters (Fig. 4).
We combine the online parameter estimator with a game-theoretic planner. The combined estimator and planner, called LUCIDGames, runs online at 40 Hz in a receding-horizon fashion.
We include safety constraints within LUCIDGames to impose ellipsoidal collision-avoidance constraints for the robot that reflect the uncertainty in the other agents’ future trajectories due to the Bayesian estimate of their parameters.
We compare LUCIDGames against game-theoretic and non-game-theoretic baselines. We show that LUCIDGames’ trajectory-prediction error rapidly decreases to match the accuracy of the oracle predictor that has access to the ground-truth objectives (Fig. 5). Furthermore, we show that LUCIDGames with safety constraints allows for realistic, cautious interactions between a robot and another agent making an unexpected maneuver (Fig. 1).
II Related Work
II-A Game-Theoretic Trajectory Optimization
Dynamic games have been used as a modeling framework in a wide variety of applications, for example, autonomous driving [Fridovich-Keil2020, LeCleach2020a], power system control [Chen2015], and drone and car racing [Spica2018]. The solutions of a dynamic game depend on the type of equilibrium selected [Basar1998]. A Nash equilibrium models games without hierarchy between players: each player’s strategy is the best response to the other players’ strategies. Nash equilibrium solutions have been studied extensively [Fridovich-Keil2020, Spica2018, Chen2015] and seem to capture the game-theoretic interactions observed in some multi-agent non-cooperative problems, e.g., ramp merging. We follow this approach by solving for open-loop Nash equilibrium strategies. However, we relax a key assumption made in previous works by estimating the other agents’ objective functions instead of assuming that they are known a priori by the robot we control.
II-B Objective Function Estimation
Estimating objective functions from historical data is a well-investigated problem known as inverse optimal control (IOC) or inverse reinforcement learning (IRL). Typically, with the IRL approach, the objective function is linear in a given set of state features [Ziebart2008]. The goal is to identify a parameter vector weighting these features so that the behavior resulting from the estimated objective matches the observed behavior. While these classical approaches are usually framed in the discrete state and action space setting, they can also be applied to the continuous state and action spaces arising in robotics problems [Ratliff2009, Mombaur2010]. However, these works are limited to single-agent problems.
In the multi-agent setting, some IRL approaches formulate the problem by assuming cooperative agents [Hadfield-Menell2016] or competing agents [Waugh2013, Ziebart2010]. These approaches have been demonstrated on discretized state and action spaces. More recent works consider the multi-agent competitive setting with continuous state and action spaces [Inga2019, Molloy2020]. Their methods have typically been demonstrated on linear-quadratic games with low-dimensional states and control inputs. IOC and IRL-based techniques estimate the objective function’s parameters “offline”: given a set of complete trajectories, they identify a single parameter vector that best fits the data. Our goal is slightly different: as an agent in the game, we would like to perform the estimation “online”, with only knowledge of previous time steps, and use our estimate to inform our actions at future time steps. This means that we have access to fewer demonstrations and that our computation time is limited to ensure real-time execution. On the other hand, we assume a low-dimensional parameter space with a coarse prior. In the multi-agent setting, several approaches have been proposed to solve this type of online parameter estimation problem.
II-C Online Parameter Estimation
We choose the multi-agent autonomous driving problem of highway driving as our running example throughout this paper, and we review approaches proposed for online parameter estimation in this context. The Intelligent Driver Model (IDM) [Treiber2000] is a rule-based lane-following model balancing two objectives: reaching a desired speed and avoiding collision with the preceding car. This model has few tunable parameters and returns the scalar acceleration of the vehicle along a predefined path. Several works used online filtering techniques to estimate these IDM parameters [Hoermann2017, Bhattacharyya2020]. Schulz et al. used particle filtering techniques to estimate the driver’s objective, including discrete decision variables like turning left or right [Schulz2018]. These works demonstrated that estimating the surrounding drivers’ objectives helps better predict their future trajectories. However, this gained information was not used to improve the decision making of the cars. A recent work estimated the social value orientation (SVO) of the agents the robot is interacting with [Schwarting2019a]. However, this formulation still requires knowledge of the desired lane or desired speed of each agent, which is precisely the assumption we want to avoid in the present work.
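As context for the filtering works cited above, the IDM control law can be sketched in a few lines (the parameter values below are common illustrative defaults, not those used in any cited work):

```python
import math

def idm_acceleration(v, gap, v_lead,
                     v0=30.0, T=1.5, a_max=1.4, b=2.0, s0=2.0, delta=4.0):
    """Intelligent Driver Model: longitudinal acceleration along a fixed path.

    v: ego speed [m/s], gap: distance to the preceding car [m],
    v_lead: speed of the preceding car [m/s].
    """
    dv = v - v_lead  # approach rate (positive when closing the gap)
    # Desired dynamic gap: minimum spacing + time headway + braking term.
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a_max * b)))
    return a_max * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)
```

The online filtering approaches mentioned above estimate parameters such as the desired speed `v0` or time headway `T` from observed motion.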
II-D Data-Driven Trajectory Prediction
There is a rich literature on applying data-driven approaches to pedestrian or vehicle trajectory prediction. Such approaches usually require a large corpus of data and are trained offline as a general model to suit multiple agents. In contrast, we require a small amount of data and estimate parameters online for a specific agent. Data-driven methods can predict a distribution over future trajectories, e.g., offline inverse optimal control with online goal inference [Kitani2012], Conditional Variational Autoencoders (CVAE) [Schmerling2018a], or Generative Adversarial Networks (GAN) [Bhattacharyya2018a]. Our approach maintains a unimodal belief over objective function parameters (it could easily be extended to a multimodal belief representation using a Gaussian mixture model), which translates into a distribution over trajectory predictions. A shortcoming of the CVAE-based or GAN-based methods is that they ignore kinodynamic constraints on the predicted trajectories, allowing cars to move sideways, for instance. Incorporating information like drivable-area maps, which are common in autonomous driving applications [Aurora2019], could prevent infeasible trajectory predictions [Bansal2018]. Our approach generates dynamically feasible and collision-free predictions. One notable work in this field is Trajectron [salzmann2020]. It handles kinodynamic constraints and incorporates drivable-area maps, as well as the robot’s planned trajectory, to inform the prediction. However, contrary to our algorithm, these data-driven methods ignore collision-avoidance constraints between agents and predict trajectories involving collisions, as observed by Bhattacharyya et al. [Bhattacharyya2020] with a learning-based method [Bhattacharyya2018a].
III Problem Statement
In a multi-player dynamic game, the robot takes its control decisions using LUCIDGames and carries out all the computation required by the algorithm. We assume the other agents are “ideal” players in the game: they have access to the ground-truth objective functions of all the players in the game, take their control decisions by individually solving for a Nash equilibrium strategy based on these true objective functions, and execute them in a receding-horizon loop. This assumption is necessary to generate a human driver model that is reactive to the robot’s actions and that maintains coupling between planning and trajectory prediction for the robot. Other approaches replayed prerecorded driving data to emulate human driving behavior [Hoermann2017, Bhattacharyya2020, Schwarting2019a], but this method ignores the reactive nature of human drivers to the robot’s decisions. Lane-following models, such as IDM [Bouton2019b], fail to capture complex driving strategies, like nudging or changing lanes, that our model can generate. Moreover, this assumption is required to avoid the complexity of the robot having to “estimate the estimates” of the other agents. Nevertheless, our algorithm shows strong practical performance even when this assumption is violated: all the experiments in this paper are run with the “ideal” agents having noisy estimates of the objectives of the surrounding agents in the scene. We further assume that both the robot and the ideal agents plan by computing open-loop Nash equilibrium trajectories and execute these planned trajectories in a receding-horizon loop.
III-A Dynamic Game Nash Equilibrium
We focus on the discretized dynamic game setting with N time steps and M players (1 robot and M−1 agents). We denote by x_k the joint state of the system and by u_k^ν the control input of player ν at time step k. Player ν’s strategy is a control input sequence U^ν = [u_1^ν, …, u_{N−1}^ν], where ν ∈ {1, …, M}. The robot’s strategy is denoted U^r, and U^{−r} designates the strategies of the other agents in the game. The state trajectory is defined as X = [x_1, …, x_N]. It stems from executing the control strategies of all the players in the game on a joint dynamical system,

x_{k+1} = f(x_k, u_k^1, …, u_k^M),
with k denoting the time step index. We define the objective function of player ν as J^ν(X, U^ν). It is a function of its strategy, U^ν, and of the state trajectory of the joint system, X. The goal of player ν is to select a strategy, U^ν, that minimizes its cost, J^ν, while respecting kinodynamic and collision-avoidance constraints, which we compactly express as a set of inequalities:

min_{X, U^ν} J^ν(X, U^ν),  subject to  C(X, U) ≤ 0.

Finding a Nash-equilibrium solution to this set of M coupled problems is called a generalized Nash equilibrium problem (GNEP) [LeCleach2020a, Facchinei2007]. It consists of finding an open-loop Nash equilibrium control trajectory, i.e. a vector U = [U^1, …, U^M], such that, for all ν, U^ν is a solution to the problem above with the other players’ strategies set to U^{−ν}. This implies that at a Nash equilibrium point, U, no player can decrease their objective function by unilaterally modifying their strategy, U^ν, to any other feasible point. Solving this GNEP can be done efficiently with a dynamic game solver such as ALGAMES [LeCleach2020a]. We consider it as an algorithmic module that takes as inputs the initial state of the system and the objective functions, J^1, …, J^M, and returns an open-loop trajectory of the joint system comprising the robot and the ideal agents.
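To make the Nash condition concrete, here is a minimal, unconstrained two-player example (the quadratic costs are toy stand-ins, not those used in the paper): each player’s first-order optimality condition is stacked into one linear system whose solution is the open-loop Nash point, and we verify numerically that neither player can improve by a unilateral deviation.

```python
import numpy as np

# Toy quadratic costs for two players with scalar decisions u1, u2.
J1 = lambda u1, u2: (u1 - 1.0) ** 2 + 0.5 * u1 * u2
J2 = lambda u1, u2: (u2 + 1.0) ** 2 + 0.5 * u1 * u2

# Stacked first-order conditions dJ1/du1 = 0 and dJ2/du2 = 0:
#   2*u1 + 0.5*u2 =  2
#   0.5*u1 + 2*u2 = -2
A = np.array([[2.0, 0.5], [0.5, 2.0]])
b = np.array([2.0, -2.0])
u1, u2 = np.linalg.solve(A, b)  # open-loop Nash equilibrium

# No unilateral deviation improves either player's cost.
deviations = np.linspace(-1.0, 1.0, 101)
assert all(J1(u1 + d, u2) >= J1(u1, u2) - 1e-12 for d in deviations)
assert all(J2(u1, u2 + d) >= J2(u1, u2) - 1e-12 for d in deviations)
```

ALGAMES plays the role of this linear solve for the full constrained, multi-stage game.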
III-B Objective Function Parameterization
As is typically the case in the IRL and IOC literature [Ziebart2008, Mombaur2010, Inga2019], we assume that the objective function of player ν can be expressed as a linear combination of features, φ^ν, extracted from the state and control trajectories of this player, with weights θ^ν:

J^ν(X, U^ν) = (θ^ν)^T φ^ν(X, U^ν).
While restrictive, this parameterization encompasses many common objective functions, like linear and quadratic costs. The UKF estimates the weight vectors of all the agents in the game. We denote by θ the concatenation of the vectors θ^ν that the robot has to estimate.
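A minimal sketch of this linear-in-features form (the feature choice here is illustrative, not the paper’s exact feature set):

```python
import numpy as np

def features(X, U, v_des=25.0, y_des=0.0):
    # Illustrative per-player trajectory features: squared speed error,
    # squared lateral offset from the desired lane, and control effort.
    # Each row of X is [px, py, heading, v]; each row of U is [yaw_rate, accel].
    return np.array([
        np.sum((X[:, 3] - v_des) ** 2),
        np.sum((X[:, 1] - y_des) ** 2),
        np.sum(U ** 2),
    ])

theta = np.array([1.0, 2.0, 0.1])  # feature weights to be estimated
X = np.array([[0.0, 0.5, 0.0, 24.0],
              [2.4, 0.4, 0.0, 24.5]])
U = np.array([[0.0, 0.5]])
J = theta @ features(X, U)  # the cost is linear in theta
```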
IV Unscented Kalman Filtering Formulation
We propose an algorithm that allows the robot to estimate the objective functions’ parameter vector θ and to exploit this estimate to predict the other agents’ behaviors and make decisions for itself. We represent the belief over θ as a Gaussian distribution and we sample sigma-points from it. Each sigma-point is a guess of the parameter vector. Given the current state of the system, x_k, we can form a GNEP for each sigma-point. By solving these GNEPs, we obtain a set of predicted trajectories for the system. When we receive a new measurement of the state, x_{k+1}, we compare it to the trajectories we predicted earlier. The Gaussian belief over θ is updated with the typical unscented Kalman filter (UKF) [Wan2000] update rules, so that the sigma-points that had better prediction performance are now more likely.
The UKF framework requires two pieces: the process model and the measurement model. In a typical filtering context, these are obvious; in our problem, they are more subtle. Specifically, the quantity we estimate with the filter is θ. We assume this quantity evolves according to a random walk, so the process model for the UKF is the identity map plus Gaussian white process noise. The crucial part of our algorithm is the measurement model. In our case, the measured quantity available to the robot (with noise) is the system state, x, at the current time. Hence, the measurement model is the map relating the parameter vector θ to the system state x. This function is itself the solution of the dynamic game. Therefore, our UKF requires the solution of the dynamic game for each sigma-point of θ at each time step. The dynamics and measurement models, as well as the steps in our UKF estimator, are described in detail below.
IV-A Process and Measurement Model
Our estimator is executed in a receding-horizon loop. At each time step k, the robot updates its Gaussian belief over the vector θ, which is parameterized by its mean, μ_k, and covariance matrix, Σ_k. We assume that the ground-truth parameter θ follows a random walk with relatively small process noise covariance, which means that the agents’ objectives are nearly constant but may change slightly over the course of the estimation. This is a reasonable assumption as, for many robotics applications, an agent’s objective corresponds to its long-term goal and thus varies over time scales far larger than the estimator’s update period. The process model therefore corresponds to additive white Gaussian noise,

θ_{k+1} = θ_k + w_k,  w_k ∼ N(0, Q).
We construct a measurement model, g, that maps the parameter θ and the observed previous state x_k to the current state x_{k+1} that we observe (a direction for future work could be to consider the case where the robot has a nonlinear or partial observation of the state):

x_{k+1} = g(θ_k, x_k).
This nonlinear function, g, encapsulates the decision-making process of the agents and the propagation of the system’s dynamics, as detailed in Algorithm 2.
IV-B UKF Algorithm
The estimator propagates the belief over the vector θ in time. The procedure that updates this belief is described in Algorithm 1. Lines 2 and 3 correspond to the prediction step, which exploits the process model. Line 4 samples sigma-points from a Gaussian distribution over the vector θ (Eq. 7-13).
We follow a classical sampling scheme that relies on parameters α, β, and λ controlling the spread of the sigma-points and encoding prior knowledge about the distribution; Wan et al. provide a detailed interpretation of these parameters [Wan2000]. Line 5 applies the measurement model to the sampled sigma-points, which are guesses of the vector θ. Specifically, for each sigma-point, we solve the dynamic game that the ideal agents and the robot encountered at the previous time step. Then, we propagate the system’s dynamics for one time step to obtain a set of predictions of x_{k+1}, the current state of the system. Lines 6 to 9 compute the Kalman gain, K, and the measurement prediction. Finally, the update step is executed in lines 10 and 11.
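The belief update can be sketched as follows. This is a minimal implementation under simplifying assumptions: the callable `g` standing in for the game-solve-plus-rollout measurement model is a placeholder, and the sigma-point parameters use illustrative defaults rather than the paper’s settings.

```python
import numpy as np

def sigma_points(mu, Sigma, alpha=1.0, beta=2.0, kappa=0.0):
    """Merwe-style sigma points and weights for a Gaussian N(mu, Sigma)."""
    n = mu.size
    lam = alpha ** 2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * Sigma)  # columns are sigma directions
    pts = [mu] + [mu + L[:, i] for i in range(n)] + [mu - L[:, i] for i in range(n)]
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha ** 2 + beta)
    return np.array(pts), wm, wc

def ukf_update(mu, Sigma, x_prev, x_obs, g, Q, R):
    """One UKF step for the objective-parameter belief N(mu, Sigma)."""
    # Prediction step: random-walk process model (identity + process noise).
    Sigma = Sigma + Q
    pts, wm, wc = sigma_points(mu, Sigma)
    # Measurement model: one game solve + dynamics rollout per sigma-point.
    Y = np.array([g(theta, x_prev) for theta in pts])
    y_hat = wm @ Y  # predicted measurement
    S = R + sum(wc[i] * np.outer(Y[i] - y_hat, Y[i] - y_hat) for i in range(len(pts)))
    C = sum(wc[i] * np.outer(pts[i] - mu, Y[i] - y_hat) for i in range(len(pts)))
    K = C @ np.linalg.inv(S)  # Kalman gain
    mu_new = mu + K @ (x_obs - y_hat)
    Sigma_new = Sigma - K @ S @ K.T
    return mu_new, Sigma_new
```

With a linear stand-in for `g`, the belief mean converges to the generating parameter after a few updates, mirroring the convergence behavior reported in Figure 4.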
IV-C LUCIDGames: Combining Parameter Estimation and Planning
LUCIDGames exploits the information gained via the estimator to inform the decision making of the robot. It jointly plans for itself and predicts the other agents’ trajectories. At time step k, the robot solves the GNEP using the current state of the system and its mean estimate μ_k of θ. We obtain the next state by propagating forward the open-loop plans of both the robot and the ideal agents, as detailed in Algorithm 2. The joint estimation and control procedure is detailed in Algorithm 3.
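The receding-horizon structure of the combined procedure might look like the following sketch (the `solve_game` and `ukf_update` callables are stand-ins for the ALGAMES solver and the UKF step, not actual library APIs):

```python
import numpy as np

def lucidgames_step(x, mu, Sigma, solve_game, ukf_update):
    """One receding-horizon iteration of the joint estimator/planner sketch."""
    # 1. Plan: solve the dynamic game with the mean objective estimate.
    plan = solve_game(x, mu)  # open-loop joint state trajectory
    # 2. Act: execute the first step of the plan, yielding the next state.
    x_next = plan[1]
    # 3. Estimate: refine the belief over theta from the observed transition.
    mu, Sigma = ukf_update(mu, Sigma, x, x_next)
    return x_next, mu, Sigma
```

In the actual algorithm, step 3 reuses the sigma-point game solves, which is also where the safety-constraint samples described later come from.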
V Simulations: Design and Setup
We apply our algorithm to highway autonomous driving problems involving a high level of interactions between agents. Specifically, we test LUCIDGames in three driving scenarios exhibiting maneuvers such as overtaking, ramp merging and obstacle avoidance (Figure 2). We assume the robot follows the LUCIDGames algorithm for its decision making and estimation. The other vehicles are modeled as ideal agents solving the dynamic game with knowledge of the true parameters.
V-1 Problem Constraints
We consider a unicycle model for the dynamics of each vehicle. The state, x, contains a 2D position, a heading angle, and a scalar velocity for each vehicle. The control input, u, consists of an angular velocity and a scalar acceleration. Additionally, we model the collision avoidance zone of each vehicle as a disk, preventing collisions between vehicles and with the boundaries of the road.
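A discretized version of this unicycle model could be sketched as follows (a simple explicit-Euler discretization; the paper does not specify its integration scheme):

```python
import numpy as np

def unicycle_step(state, ctrl, dt):
    # state = [px, py, heading, v]; ctrl = [angular velocity, acceleration]
    px, py, heading, v = state
    omega, accel = ctrl
    return np.array([px + dt * v * np.cos(heading),
                     py + dt * v * np.sin(heading),
                     heading + dt * omega,
                     v + dt * accel])
```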
V-2 Objective Function
We select a quadratic objective function incentivizing the agents to reach a desired state, x_des, while limiting control inputs. On top of this objective function, we add a quadratic penalty on being close to other vehicles, of the form

β max(0, d − ‖p_k^ν − p_k^μ‖)²,

where, for agent ν, p_k^ν and r^ν designate its 2D position at time step k and its collision-avoidance radius. β and d are scalar collision-avoidance cost parameters encoding the magnitude of the cost and the distance at which this cost is “activated”.
In this work, we estimate a reduced number of objective function parameters. We choose 3 parameters with intuitive interpretations. Two of them are elements of the desired state, x_des: the desired speed and the desired lateral position on the roadway (i.e., the desired lane) of the vehicle. The last one is β, which encodes the “aggressiveness” of the driver: a large value of β penalizes a vehicle driving too close to other vehicles, leading to less aggressive behavior. We remark that this parameterization is consistent with an objective function expressed as a linear combination of features, as in Equation 2. Therefore, it would be possible to use an IRL algorithm trained on real driving data to provide a prior on these parameters.
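Under these choices, the per-time-step cost might be sketched as follows (the weights, activation distance, and the name `proximity_penalty` are illustrative assumptions, not the paper’s exact values):

```python
import numpy as np

def proximity_penalty(p_ego, p_other, beta, d_active):
    # Hypothetical collision-avoidance penalty: zero beyond d_active,
    # quadratic inside it, scaled by the "aggressiveness" weight beta.
    gap = d_active - np.linalg.norm(p_ego - p_other)
    return beta * max(0.0, gap) ** 2

def stage_cost(state, ctrl, others, theta, w_speed=1.0, w_lane=1.0, w_ctrl=0.1):
    # theta = [desired speed, desired lateral position (lane), beta]
    v_des, y_des, beta = theta
    px, py, heading, v = state
    J = w_speed * (v - v_des) ** 2 + w_lane * (py - y_des) ** 2 \
        + w_ctrl * float(ctrl @ ctrl)
    J += sum(proximity_penalty(np.array([px, py]), p, beta, d_active=5.0)
             for p in others)
    return J
```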
VI Simulation Results
To assess the merits of LUCIDGames, we test it on the highway driving scenarios shown in Figure 2. We first assess the tractability and scalability of the approach for an increasing number of agents. Then, we perform an ablation study by removing the two main components of LUCIDGames: the online estimation and the game-theoretic reasoning. The goal is to investigate how each of these components affects the performance of LUCIDGames. This is also a way to compare LUCIDGames to related approaches proposed in the literature. Indeed, several works applied dynamic game solvers in a receding-horizon loop to autonomous driving problems without resorting to online estimation [LeCleach2020a, Fridovich-Keil2020]. Bhattacharyya et al. [Bhattacharyya2020] used highway driving datasets to compare the trajectory prediction performance of rule-based and black-box driver models. The set of evaluated methods included, among others, constant velocity prediction, Generative Adversarial Imitation Learning (GAIL) [Bhattacharyya2018a], and a particle filter estimating the parameters of the intelligent driver model (IDM) online. On the trajectory prediction task, the constant velocity baseline was the best performing method. Thus, we choose to compare our method to this non-game-theoretic baseline.
We run LUCIDGames in a receding-horizon loop using a coarse prior on the vector θ. In practice, the initial belief is a Gaussian parameterized by its mean, μ_0, and covariance, Σ_0 = σ² I, where σ² is a large initial variance on each parameter and I is the identity matrix. We run LUCIDGames on the ramp-merging scenario involving 2 to 4 agents and compile the timing results in Table 3. We demonstrate the tractability of the algorithm for complex autonomous driving scenarios and show real-time performance of the estimator for three agents (40 Hz); timing results for up to four agents are reported in Table 3. In practice, we trivially parallelize the implementation of LUCIDGames: for each sigma-point, the algorithm requires the solution of a dynamic game (Algorithm 1, line 5). We solve these dynamic games simultaneously, in parallel, by distributing them across a multi-core processor. The number of dynamic games to solve in parallel scales with the number of sigma-points, which is linear in the number of agents M; the cost of each individual dynamic game also grows with the number of agents. In this work, all experiments were executed on a 16-core processor (AMD Ryzen 2950X).
Table 3. Columns: number of players; MPC update frequency (Hz); MPC plan update time (ms); belief update time (ms).
Running the MPC implementation of LUCIDGames 50 times on a ramp-merging scenario with 2, 3, and 4 players, we obtain the mean update frequency of the MPC, as well as the mean and standard deviation of the time required to update the MPC plan and the belief over the other players’ objective functions.
VI-B Parameter Estimation
We assess the ability of LUCIDGames to correctly estimate the ground-truth objectives of the other agents with only a few seconds of driving interaction. We test LUCIDGames on three scenarios: highway overtaking, ramp merging, and obstacle avoidance. We compute the relative error between the ground-truth parameter θ and the mean μ of the Gaussian belief along the 12-second MPC simulation. We perform a Monte-Carlo analysis of the algorithm by sampling the initial state of the system as well as the objective parameter θ. The aggregated results from 50 samples are presented in Figure 4. We observe a significant decrease in the relative error between the ground-truth parameter and the mean of the belief.
VI-C Trajectory Prediction
Additionally, we show that LUCIDGames allows the robot to better predict the trajectories of the other agents. We use the same Monte-Carlo analysis as described above on three driving scenarios (Fig. 5). First, we observe that LUCIDGames initialized with a coarse prior starts with a large prediction error, around 4 m in all scenarios. However, it converges in about 1 second to a prediction error comparable to the one obtained with the oracle. This illustrates the ability of the robot using LUCIDGames to quickly improve its predictions about the surrounding agents by gathering information about them. The error obtained by keeping the coarse prior remains high and fairly constant during the simulation; by the end of the simulation, the error obtained using LUCIDGames is an order of magnitude lower.
Second, we compare LUCIDGames to a non-game-theoretic baseline on trajectory prediction error. This baseline predicts the trajectories of the agents surrounding the robot by propagating straight-line, constant-speed trajectories for each agent. These predicted trajectories have the same duration (3 seconds) as the open-loop predictions made by LUCIDGames. We choose this baseline to assess the impact of including game-theoretic reasoning on trajectory prediction performance. This line-prediction baseline may seem very coarse; however, in the context of highway driving, straight-line trajectories are pertinent for short-horizon (3-second) predictions. In practice, we use a straight highway environment for our simulations (Figure 2). As the roadway is not curved, the only causes of trajectory curvature are lane changes, nudging, and merging maneuvers. LUCIDGames is able to outperform the baseline by capturing these natural driving behaviors that go beyond lane following.
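The baseline amounts to a straight-line, constant-speed rollout from each agent’s current state (a sketch, assuming the unicycle-style state layout used in our simulations):

```python
import numpy as np

def constant_velocity_prediction(state, dt, horizon_steps):
    # Straight-line, constant-speed rollout of 2D positions.
    px, py, heading, v = state
    return np.array([[px + dt * k * v * np.cos(heading),
                      py + dt * k * v * np.sin(heading)]
                     for k in range(1, horizon_steps + 1)])
```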
For the overtaking scenario (Figure 5), LUCIDGames starts off with a large prediction error but quickly converges to a prediction error lower than that of the line-prediction baseline. However, the performance gap is small, confirming that the line-prediction baseline is a suitable model for short-horizon prediction in typical highway driving.
On the other hand, for more complex driving scenarios like ramp merging, the gap between LUCIDGames and the line-prediction technique is significant. We observe that this gap is largest after 1 second, when LUCIDGames has successfully converged. The gap consistently decreases afterwards, as the system converges to a steady state where all the vehicles drive in straight lines following their desired lanes.
Similarly, for the collision-avoidance scenario, the prediction error obtained using LUCIDGames is around half that obtained using the line-prediction baseline after the parameter estimation has converged (about 1 second). We observe that the line-prediction baseline almost matches LUCIDGames while the vehicles are constrained to drive on a narrower roadway. Finally, after the obstacle is passed, the performance gap increases in favor of LUCIDGames: it is able to predict that vehicles are going to return to their desired lanes after avoiding the obstacle.
VI-D Safe Trajectory Planning
We implement a robust trajectory planning scheme for the robot that accounts for uncertainty in the objective of the other agents by enforcing “safety constraints.” With LUCIDGames, we maintain a Gaussian belief over the other agents’ objectives. We thus quantify the uncertainty of our current objective function estimates. Taking into account such uncertainty can be instrumental in preventing the robot from making unsafe decisions. For instance, an autonomous vehicle should act cautiously when overtaking an agent for which it has an uncertain estimate of its desired speed and desired lane.
For many multi-robot systems, safety is ensured by avoiding collisions with other agents. Thus, we encode safe decision making for the robot by ensuring its decisions are robust to misestimation of the objective functions. First, the robot computes the “safety constraints”: inflated collision avoidance constraints around other agents, obtained by fitting ellipses (e.g., 95%-confidence ellipses) around the trajectories sampled by the UKF. These safety constraints can be seen as approximate chance constraints that can be computed efficiently. Then, the robot solves the dynamic game corresponding to the mean of the belief over θ, subject to the safety constraints. In practice, when the uncertainty about θ is large, the sampled sigma-points and their corresponding trajectories are scattered and generate a large collision avoidance zone. The top roadway visualization in Figure 1 illustrates this situation. These safety constraints can be seen as a lifting of the uncertainty in the low-dimensional space of objective parameters onto the high-dimensional space of predicted trajectories. The “keep-out” zone is cone-shaped in free space, as expected, but shrinks when the roadway narrows or when the sampled trajectories converge towards the same position.
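Fitting the per-time-step confidence ellipse can be sketched as follows. This is a simplified stand-in for the paper’s procedure: here the ellipse is derived from the sample mean and covariance of the sigma-point positions, with the 95% chi-square quantile for 2 degrees of freedom hardcoded.

```python
import numpy as np

def safety_ellipse(points, conf_scale=5.991):
    """95%-confidence ellipse for the sampled 2D positions of one agent
    at one future time step (5.991 ≈ chi-square 95% quantile, 2 DOF)."""
    mu = points.mean(axis=0)
    cov = np.cov(points.T) + 1e-9 * np.eye(2)  # regularize degenerate samples
    # Keep-out set: all p with (p - mu)^T Q (p - mu) <= 1.
    Q = np.linalg.inv(cov) / conf_scale
    return mu, Q

def in_keepout_zone(p, mu, Q):
    d = p - mu
    return float(d @ Q @ d) <= 1.0
```

Scattered samples yield a large covariance and hence a large keep-out ellipse; as the belief tightens, the ellipse shrinks, which is the behavior illustrated in Figure 1.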
We showcase the driving strategy emerging from this robust planning scheme in Figure 1. The human driver and the robot are confronted with an obstacle. Using LUCIDGames, the robot infers the human’s intent to change lanes (to avoid the obstacle) and negotiates, through the game-theoretic planner, whether to yield to the human or to let the human yield. In phase 1, the robot has a large initial uncertainty about the objective of the human-driven vehicle (blue). Indeed, the set of sampled trajectories contains both predictions where the human cuts in front of the robot and ones in which the human yields to let the robot go first. Thus, the robot slows down to comply with the safety constraints, which cover both hypotheses. In phase 2, the robot has correctly estimated the human’s intent to yield to the robot, i.e., to change lanes after the robot passes. Since the robot’s estimate of the human’s objective is more certain, the collision avoidance zone generated by the safety constraints shrinks. This allows the robot to plan an overtaking maneuver and regain speed. In phase 3, the robot safely proceeds in its own lane, cruising at its desired velocity, while the human changes lanes behind the robot to avoid the obstacle. In contrast, LUCIDGames without these safety constraints does not slow down to account for the initial uncertainty; the same is true for the oracle and the straight-line prediction baseline. We observe the same behaviors in the scenario where the robot yields to the human.
VI-E Results Discussion
We demonstrate the tractability of the algorithm for up to four agents while maintaining real-time update rates. Note, however, that the robot’s controller and its estimator can be run at different rates. This would allow for a fast update of the robot’s plan and a slower update of its estimates of the other agents’ objective functions.
VI-E2 Parameter Estimation
The estimation performance of LUCIDGames was assessed through a Monte Carlo analysis. However, we have observed challenging scenarios that demonstrate the complexity of the objective estimation task. For instance, if all the agents are far from each other, none of the collision avoidance penalties (Eq. 14) are “active.” In such a situation, it is impossible to estimate the aggressiveness parameter that scales the cost of these collision avoidance penalties. Nevertheless, we argue that this observability issue is not crucial in practice. Indeed, as long as the agents remain far from each other, the aggressiveness parameter does not affect the robot’s trajectory predictions. Conversely, as soon as the agents get closer to each other, the aggressiveness parameter matters, but the observability issue disappears.
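This observability issue can be seen in a toy version of the penalty. The hinge-style collision cost below is a simplified, hypothetical stand-in for Eq. 14 (the radius and parameter values are illustrative, not the paper's): whenever the agents are farther apart than the activation radius, the cost is identically zero regardless of the aggressiveness parameter, so observations carry no information about it.

```python
def collision_penalty(aggressiveness, distance, radius=2.0):
    """Hinge-style collision avoidance penalty scaled by an
    aggressiveness parameter (simplified stand-in for Eq. 14)."""
    violation = max(0.0, radius - distance)
    return aggressiveness * violation ** 2

# Agents far apart: the penalty is zero for ANY aggressiveness value,
# so the parameter is unobservable from the resulting trajectories.
far = [collision_penalty(a, distance=10.0) for a in (0.1, 1.0, 10.0)]

# Agents close together: the penalty varies with aggressiveness,
# and the parameter becomes observable.
near = [collision_penalty(a, distance=1.0) for a in (0.1, 1.0, 10.0)]
```

This is exactly why the issue is benign in practice: wherever the parameter is unidentifiable, it also has no effect on the predicted trajectories.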
VI-E3 Trajectory Prediction
The human-driven vehicles in these simulations are modeled as agents solving the ground-truth dynamic game for a Nash equilibrium strategy in a receding-horizon loop. We notice in Figure 5 that, even when the robot has access to the ground-truth objective functions, its trajectory prediction error is nonzero. This is due to the noise added to the dynamics, as well as the discrepancy between the open-loop plan predictions and the actual trajectory stemming from executing open-loop strategies in a receding-horizon loop. We observe in Figure 5 that LUCIDGames consistently converges to error levels closely matching those of the agent with access to the ground-truth objective functions.
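The gap between open-loop plans and closed-loop execution can be illustrated with a toy single-integrator example (the dynamics, gain, and noise level here are hypothetical, not the paper's simulation setup): even a planner with a perfect model accrues a nonzero one-step prediction error once process noise enters the loop.

```python
import numpy as np

def mean_prediction_error(horizon=20, noise_std=0.05, seed=0):
    """Execute an open-loop plan in a receding-horizon loop under process
    noise and return the mean one-step prediction error. Toy 1D
    single-integrator dynamics with illustrative parameter values."""
    rng = np.random.default_rng(seed)
    x, goal = 0.0, 1.0
    errors = []
    for _ in range(horizon):
        # Open-loop plan: move a fixed fraction of the way to the goal.
        planned_next = x + 0.2 * (goal - x)
        # Executed state deviates from the plan due to dynamics noise,
        # so the next replanning step starts from an unplanned state.
        x = planned_next + rng.normal(0.0, noise_std)
        errors.append(abs(x - planned_next))
    return float(np.mean(errors))
```

With zero noise the error vanishes, mirroring the observation that the residual error in Figure 5 comes from the noise and the open-loop/closed-loop discrepancy rather than from misestimated objectives.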
We have presented LUCIDGames, a game-theoretic planning framework that solves an inverse optimal control problem online to estimate the objective function parameters of other agents. We demonstrated that this algorithm is fast enough to run online in a receding-horizon loop and is effective in planning for an autonomous vehicle negotiating complex driving scenarios while interacting with other vehicles. We showed that this method outperforms two benchmark planning algorithms: one assuming straight-line predictions for other agents, and one incorporating game-theoretic planning but without online estimation of the other agents’ objective function parameters.
We envision several promising directions for future work. In this work, the set of objective function parameters was designed with “expert” knowledge of the problem at hand, so that it encompasses a large diversity of driving behaviors while remaining low dimensional. However, one could envision identifying these parameters and their associated features via a data-driven approach. The overall approach of estimating a reduced set of parameters online to better predict the behavior of the system is appealing. Indeed, in this framework, the dynamic game solver lifts the low-dimensional space of objective function parameters into the high-dimensional space of predicted trajectories. This lifting, or “generative model,” natively embeds safety requirements by generating dynamically feasible trajectories that respect collision avoidance constraints. It also accounts for the fact that agents tend to act optimally with respect to some objective function. Finally, through its game-theoretic nature, it captures the reactive nature of the agents surrounding the robot in autonomous driving scenarios, where negotiation between players is a crucial feature.