Improving Model Predictive Path Integral using Covariance Steering

09/24/2021 ∙ by Ji Yin, et al. ∙ Georgia Institute of Technology

This paper presents a novel control approach for autonomous systems operating under uncertainty. We combine Model Predictive Path Integral (MPPI) control with Covariance Steering (CS) theory to obtain a robust controller for general nonlinear systems. The proposed Covariance-Controlled Model Predictive Path Integral (CC-MPPI) controller addresses the performance degradation observed in some MPPI implementations owing to unexpected disturbances and uncertainties. Namely, in cases where the environment changes too fast or the simulated dynamics during the MPPI rollouts do not capture the noise and uncertainty in the actual dynamics, the baseline MPPI implementation may lead to divergence. The proposed CC-MPPI controller avoids divergence by controlling the dispersion of the rollout trajectories at the end of the prediction horizon. Furthermore, the CC-MPPI has adjustable trajectory sampling distributions that can be changed according to the environment to achieve efficient sampling. Numerical examples using a ground vehicle navigating in challenging environments demonstrate the proposed approach.


I Introduction

As autonomous vehicles and other robots become increasingly common in our daily lives, one of the major concerns is whether humans can trust the robots' ability to complete their assigned tasks safely. For autonomous vehicles, for example, neglecting or misinterpreting disturbances during driving can lead to serious consequences within milliseconds. The demand for safety has led many researchers to develop robust control and planning algorithms for autonomous robotic systems. For example, sampling-based planning algorithms that consider uncertainty and collision probability in the vertex selection or evaluation processes have been proposed in

[22, 16, 10, 12], and optimization-based planning algorithms that consider systematic disturbances and chance constraints explicitly by solving optimization problems have been developed in [1, 9, 20, 19].

In this paper, we propose an MPC-based robust trajectory planning approach that deals with environmental and plant uncertainty, while providing guarantees on the dispersion of the closed-loop system's future trajectories. Model Predictive Control (MPC) is an algorithmic, optimization-based control design [25] that has gained popularity for autonomous vehicle control in recent years [24, 13]. Deterministic MPC approaches are model-based and generate trajectories assuming there are no uncertainties in the dynamics. As a result, MPC controllers are typically not robust to model parameter variations. To improve the performance of MPC controllers by taking system uncertainty into account, Robust MPC (RMPC) controllers have been proposed to handle deterministic uncertainties residing in a given compact set. RMPC generates control commands by considering worst-case scenarios, thus the resulting trajectories can be conservative. Reference [11] provides an extensive review of RMPC controllers. To achieve more aggressive planning, Stochastic MPC (SMPC) utilizes the probabilistic nature of the system uncertainty to account for the most likely disturbances, instead of considering only the worst-case disturbance, as with RMPC [17, 8]. There are two classes of SMPC approaches in the literature. The first is based on the analytical solutions of some optimization problem, such as [4, 5, 21], while the second relies on randomization to solve optimization problems, such as [2, 3, 29]. The proposed CC-MPPI controller lies in between these two classes, as it analytically computes controlled dynamics by considering the model uncertainty and then generates the optimal control using randomized roll-outs of the controlled dynamics. This is discussed in greater detail in Section IV.

Most current MPC implementations assume linear system dynamics and formulate the resulting MPC task as a quadratic optimization problem, which helps MPC meet the strict real-time requirements needed for safe control. However, these approaches depend on simplified linear models that may not accurately capture the dynamics of the real system. Model Predictive Path Integral (MPPI) control [28] is a type of MPC algorithm that repeatedly solves finite-horizon optimal control problems while utilizing nonlinear dynamics and general cost functions. Specifically, MPPI is a simulation-based algorithm that samples thousands of trajectories around some mean control sequence in real-time, by taking advantage of the parallel computing capabilities of modern Graphics Processing Units (GPUs). It then produces an optimal control sequence, and the corresponding trajectory, by computing a weighted average of the sampled control sequences, where the weights are determined by the cost of each trajectory rollout. One advantage of the MPPI approach over more traditional MPC controllers is that it does not restrict the form of the cost function of the optimization problem [26], which can be non-quadratic and even discontinuous.

Despite its appealing characteristics, the MPPI algorithm may encounter problems when implemented in practice. In particular, when the mean control sequence lies inside an infeasible region, all the resulting MPPI sampled trajectories are concentrated within the same region, as illustrated in Fig. 1, and this may lead to a situation where the trajectories violate the constraints. Two cases where this may happen are: first, when the MPPI algorithm diverges because the environment changes too fast; and, second, when the algorithm fails because the predicted dynamics do not capture the noise and uncertainty of the actual dynamics. The MPPI algorithm may perform poorly in these two cases because it fails to take into account the disturbances (either from the dynamics or from the environment), so that all sampled trajectories end up violating the constraints. Figure 1 shows the influence of the noise on the MPPI sampled trajectories. In this figure, the gray curves are the MPPI sampled trajectories, the red curves show the boundaries of the trajectory sampling distribution, and the green curve represents the simulated trajectory of the robot following the optimal control sequence given the current distribution. In Fig. 1(a) the autonomous vehicle initially has a sampling distribution that lies mostly inside the track. In Fig. 1(b), the vehicle ends up in an unexpected pose due to unforeseen disturbances after it executes the control command. This leads to the situation depicted in Fig. 1(c), where the algorithm diverges because all of the sampled trajectories violate the constraints.

Fig. 1: MPPI Divergence; from [27].

To mitigate the previous shortcomings of the MPPI algorithm, prior works apply a controller to track the output of the MPPI controller in order to keep the actual trajectory as close as possible to the predicted nominal trajectory. These approaches separate the planning and control tasks so that MPPI acts similarly to a path planner. For example, in [27] an iterative Linear Quadratic Gaussian (iLQG) controller was used to track the planned trajectory provided by MPPI. In [23] the authors propose a method that utilizes a tracking controller with augmentation to compensate for the mismatch between the nominal dynamics and the true dynamics. However, these methods do not improve the performance of the MPPI algorithm if there are significant changes in the environment within a short interval of time. The proposed CC-MPPI algorithm tries to address some of these shortcomings by improving the performance of the MPPI algorithm under the scenarios mentioned above. This is achieved by introducing adjustable trajectory sampling distributions, and by directly controlling the evolution of these trajectory distributions to avoid an uncontrolled dispersion at the end of the control horizon.

II Problem Formulation

The goal of the proposed Covariance-Controlled MPPI (CC-MPPI) controller is to make the distributions of the sampled trajectories more flexible than those generated by MPPI, so that the CC-MPPI algorithm samples more efficiently and with a smaller probability of being trapped in local minima when the optimal trajectory from the previous time step lies inside some high-cost region, as illustrated in Fig. 1(c). To this end, we introduce a desired terminal state covariance for the states of the dynamics (1b) at the final time step as a hyperparameter of the CC-MPPI controller. The key idea is that the distribution of the sampled trajectories can be adjusted by a suitable choice of this terminal covariance together with the covariance of the injected control disturbance. The CC-MPPI controller solves the following optimization problem,

(1a)
subject to,
(1b)
(1c)
(1d)

at each iteration, where the state terminal cost and the state portion of the running cost can be arbitrary functions. The objective function (1a) minimizes the expectation of the state and control costs, with the state being a random vector subject to the dynamics (1b).

III MPPI Algorithm Review

The MPPI controller, as described in [26], minimizes (1a) subject to (1b). As in Problem (1), the terminal cost and the state portion of the running cost of the MPPI can be arbitrary functions.

As in an MPC setting, the MPPI algorithm samples a batch of trajectories during each optimization iteration. Each sampled control sequence consists of the current mean control sequence perturbed by a zero-mean Gaussian control disturbance sequence. The cost of each sampled trajectory is given by [29]

(2)

where the running cost of the sampled trajectory at each step is given in [29],

(3)

where the weighting parameter in (3) is the ratio between the covariance of the injected disturbance and the covariance of the disturbance of the original dynamics [29]. One term in (3) is the cost of the disturbance-free portion of the control input, while the two remaining control terms penalize large control disturbances and smooth out the resulting control signal. The weights of the sampled trajectories are chosen as [30]

(4)

where,

(5)

and where the inverse temperature parameter determines how selective the weighted average of the sampled trajectories is. Note that the constant offset in (5) does not influence the solution; it is introduced to prevent numerical instability of the algorithm. The MPPI algorithm generates the optimal control sequence and the mean sequence for the next iteration using the following equations,

(6)
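As a concrete illustration of the update in (4)-(6), a minimal NumPy sketch of the cost-weighted averaging step is given below. The function name, the array shapes, and the choice of subtracting the minimum cost for numerical stability are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def mppi_update(costs, noise, u_mean, lam=1.0):
    """Cost-weighted MPPI control update in the spirit of (4)-(6).

    costs : (M,) total cost of each sampled rollout, eq. (2)
    noise : (M, T, m) sampled control disturbance sequences
    u_mean: (T, m) current mean control sequence
    lam   : inverse-temperature parameter
    """
    rho = costs.min()                    # offset, eq. (5); prevents overflow
    w = np.exp(-(costs - rho) / lam)     # unnormalized trajectory weights, eq. (4)
    w /= w.sum()                         # normalize
    # new control = mean + weighted average of the disturbances, eq. (6)
    return u_mean + np.einsum('m,mtj->tj', w, noise)
```

With equal costs the update reduces to the plain average of the disturbances; as the inverse temperature grows relative to the cost spread, the average concentrates on the lowest-cost rollout.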

In Section IV we discuss how the CC-MPPI controller satisfies the terminal constraint (1d), and then present the complete CC-MPPI algorithm.

IV Covariance-Controlled MPPI

In this section, we introduce the proposed CC-MPPI controller. Section IV-A discusses the linearization of the dynamics (1b), and Section IV-B uses this linearization to achieve the terminal constraint (1d). Section V presents the proposed CC-MPPI algorithm that solves the optimization Problem (1).

IV-A Linearized Model

We start by linearizing the system (1b) along some reference trajectory using the approach outlined in [7]. The reference trajectory of the first optimization iteration is a random trajectory; starting with the second iteration, the reference trajectory of the current iteration is the trajectory generated by the optimal control sequence from the previous iteration. The reference control sequence of the current iteration and the corresponding reference state sequence then satisfy

(7)

The dynamical system in (7) can then be approximated in the vicinity of the reference trajectory by a discrete-time linear time-varying (LTV) system as follows

(8)

where the state and the control input of the LTV system at each step are the deviations of the actual state and control from their reference values, and

(9)
(10)

where the system matrices are the Jacobians of the dynamics evaluated along the reference trajectory, and the residual term captures the error of the linearization.
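A minimal numerical sketch of this linearization step, using central finite differences in place of analytic Jacobians (an assumption for illustration; the paper follows [7] and does not specify how the Jacobians are computed):

```python
import numpy as np

def linearize_along_reference(f, x_ref, u_ref, eps=1e-5):
    """Finite-difference linearization of discrete dynamics x+ = f(x, u)
    along a reference trajectory, giving the LTV model (8)-(10):
        dx_{k+1} ~= A_k dx_k + B_k du_k + d_k,
    where d_k = f(x_ref_k, u_ref_k) - x_ref_{k+1} is the residual term.
    x_ref must contain one more point than u_ref.
    """
    A, B, d = [], [], []
    for k in range(len(u_ref)):
        x, u = x_ref[k], u_ref[k]
        n, m = len(x), len(u)
        Ak, Bk = np.zeros((n, n)), np.zeros((n, m))
        for i in range(n):                       # Jacobian w.r.t. the state
            dx = np.zeros(n); dx[i] = eps
            Ak[:, i] = (f(x + dx, u) - f(x - dx, u)) / (2 * eps)
        for j in range(m):                       # Jacobian w.r.t. the control
            du = np.zeros(m); du[j] = eps
            Bk[:, j] = (f(x, u + du) - f(x, u - du)) / (2 * eps)
        A.append(Ak); B.append(Bk)
        d.append(f(x, u) - x_ref[k + 1])         # linearization residual
    return A, B, d
```

When the reference states are generated by rolling out the reference controls through the true model, the residual terms vanish, consistent with (10).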

IV-B Covariance-Controlled Trajectory Sampling

As with the baseline MPPI algorithm, the CC-MPPI algorithm simulates a batch of trajectories during each iteration; in what follows we drop the trajectory superscript for simplicity. The reference control sequence of the current iteration, that is, the optimal control sequence from the previous iteration, is injected with artificial noise, and a state-feedback term is added, such that

(11)

where the state deviation follows the dynamics,

(12)

with the initial deviation equal to zero, since we assume perfect observation of the initial state [21]. Substituting (11) into (8) yields,

(13)

where the left-hand side is the state of the CC-MPPI sampled trajectory at each step, and the residual term of the linearization is defined as in (10). Taking the state at the beginning of the current iteration as the initial condition, we can rewrite the system in (13) in the compact form,

(14)

where the stacked state, control, disturbance, and residual vectors, along with the corresponding augmented system matrices, are defined similarly as in [20]. In order to compute the feedback gain that satisfies the terminal covariance constraint (1d), the CC-MPPI solves Problem (15) at each optimization iteration.

(15a)
subject to,
(15b)
(15c)

where the augmented cost matrices are block-diagonal stackings of the corresponding stage cost weights. Since the initial deviation is zero and the injected noise has zero mean, it follows from (12) and (14) that

(16)

and,

(17)

The cost function in (15) can then be converted to the following equivalent form [19]

(18)

The reference control sequence is fixed and is given by the optimal control sequence from the previous CC-MPPI iteration, which implies that the mean of the augmented state is fixed and is given by (16). For the optimization problem in (15), we can then drop the constant terms in (18) and obtain the cost

(19)

Substituting (17) into (19), and using standard properties of the trace and expectation, yields,

(20)

Substituting (14) into the terminal covariance constraint (15c), we obtain,

(21)

Finally, Problem (15) can be converted into the following convex optimization problem,

(22a)
subject to,
(22b)

Problem (22) can be solved efficiently by an off-the-shelf convex optimization solver such as Mosek [18] to obtain the feedback gain. It follows from (11) that the control sequence of each sampled trajectory is the noise-injected reference control plus the corresponding feedback term. We can then roll out the sampled trajectories using these control sequences and the dynamical model (1b). The complete CC-MPPI algorithm is detailed in Section V.
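Given a candidate feedback gain, the effect of the closed-loop dynamics on the trajectory dispersion can be checked by propagating the state covariance of (13) forward and testing the terminal constraint (1d). The sketch below assumes the injected noise enters through the input matrix and uses hypothetical names; it is a verification aid, not the paper's solver.

```python
import numpy as np

def terminal_covariance(A, B, K, Sigma_eps, Sigma0=None):
    """Propagate the covariance of the closed-loop deviation dynamics
        y_{k+1} = (A_k + B_k K_k) y_k + B_k eps_k,  eps_k ~ N(0, Sigma_eps),
    starting from zero covariance (perfect initial-state observation),
    and return the covariance at the end of the horizon.
    """
    n = A[0].shape[0]
    S = np.zeros((n, n)) if Sigma0 is None else Sigma0.copy()
    for Ak, Bk, Kk in zip(A, B, K):
        Acl = Ak + Bk @ Kk                       # closed-loop transition matrix
        S = Acl @ S @ Acl.T + Bk @ Sigma_eps @ Bk.T
    return S

def satisfies_terminal_constraint(S_N, Sigma_f, tol=1e-9):
    # constraint in the spirit of (1d): Sigma_f - S_N positive semidefinite
    return bool(np.all(np.linalg.eigvalsh(Sigma_f - S_N) >= -tol))
```

Such a check is useful for validating a gain returned by the convex solver, or for tuning the injected noise covariance against a desired terminal covariance.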

V The CC-MPPI Algorithm

The CC-MPPI algorithm is given in Algorithm 1. At the beginning of each optimization iteration, the algorithm obtains the current estimate of the state. It then rolls out the reference trajectory using the discrete-time nonlinear dynamical model, linearizes the model along this reference trajectory and its corresponding control sequence as described in (7), (8), (9), (10), and calculates the augmented dynamical model matrices along with the linearization residual term. Next, it computes the feedback gain for the closed-loop system in (14) by solving the convex optimization problem in (22). The algorithm then samples the control sequences, performs the rollouts, and evaluates the sampled trajectories with the running cost (3). Specifically, it draws a portion of the sample trajectories from the closed-loop dynamics and the rest from zero-mean input, so that it can balance between smoothness of the trajectories and low control cost [26]. Finally, it computes the optimal control sequence following (4), (5) and (6), sends the first control command of the optimal control sequence to the actuators, removes the executed command, and duplicates the last command at the end of the horizon.

Given : nonlinear dynamics model, cost functions, injected noise covariance, desired terminal state covariance;
Input : initial mean control sequence
1 while task not complete do
2       estimate the current state;
3       roll out the reference trajectory using the mean control sequence and the nonlinear model;
4       linearize the model along the reference and form the augmented matrices (8)-(10), (14);
5       compute the feedback gain by solving the convex problem (22);
6       for each sampled trajectory do
7             sample a control disturbance sequence (zero-mean input for a fraction of the samples);
8             roll out the closed-loop dynamics with the injected noise and feedback term (11);
9             accumulate the running cost (3);
10      end for
11      compute the trajectory weights (4)-(5) and the optimal control sequence (6);
12      execute the first control command;
13      shift the mean control sequence and duplicate the last command;
14 end while
Algorithm 1 CC-MPPI Algorithm

VI Results

In this section, we show via a series of numerical examples that the CC-MPPI algorithm outperforms the baseline MPPI algorithm in the critical situations described in Section II. The terminal covariance in (22b) for the CC-MPPI should be determined based on the environment, and one could train a policy to compute it. The design of such a policy is outside the scope of this paper.

VI-A Vehicle Model

We assume that the artificial noise injected by the CC-MPPI and MPPI algorithms is significantly greater than the noise of the vehicle model, so that the model noise is negligible. We model the vehicle using a single-track bicycle model

(23a)
(23b)
(23c)
(23d)

where the parameters are the distances from the center of mass (COM) to the rear and front wheels, respectively, the position coordinates are expressed in a fixed world coordinate frame, the vehicle yaw angle and the velocity of the COM are taken with respect to the world frame, and the throttle and steering commands are the inputs to the model. We discretize the system (23) using the Euler method with a fixed time step.
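A minimal Euler-discretized kinematic single-track model in the spirit of (23); the parameter values, the identity throttle-to-acceleration map, and the slip-angle formulation are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def bicycle_step(state, control, lr=0.15, lf=0.15, dt=0.02):
    """One Euler step of a kinematic single-track (bicycle) model.
    State: [x, y, psi, v] (position, yaw, speed at the COM);
    control: [throttle, steering]. lr/lf are COM-to-axle distances.
    All parameter values here are placeholders.
    """
    x, y, psi, v = state
    a, delta = control
    beta = np.arctan(lr / (lr + lf) * np.tan(delta))  # slip angle at the COM
    x   += dt * v * np.cos(psi + beta)
    y   += dt * v * np.sin(psi + beta)
    psi += dt * v / lr * np.sin(beta)
    v   += dt * a                                     # throttle treated as acceleration
    return np.array([x, y, psi, v])
```

Rolling this step forward over the control horizon gives the discrete trajectories used by the sampling-based rollouts.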

VI-B Controller Setup

Since the model noise is assumed to be significantly smaller than the injected noise, the corresponding term in (3) is negligible [29]. It follows from (3) that the MPPI and CC-MPPI running cost at each step of a sampled trajectory takes the form,

(24)

where we take the state-dependent cost as,

(25)

The boundary-cost term in (25) prevents the vehicle from leaving the track, and it is given by

(26)

The obstacle-cost term in (25) penalizes collisions with obstacles and is scaled by a weighting coefficient. We choose two different forms of the obstacle cost in our simulations. The first is discontinuous at the obstacles' edges,

(27)

and the second is continuous at the obstacles' edges,

(28)

where the argument of (27) and (28) is the distance from the vehicle's COM to the center of the circular obstacle, relative to the obstacle's radius. We use the same radius for all of the circular obstacles in this section. In our simulations, the terminal cost for the MPPI and CC-MPPI controllers has the form,

(29)

The first term in (29) is the progress cost, which weighs the distance between the current vehicle state and the terminal state of the sampled trajectory along the track centerline. The second term in (29) penalizes the vehicle's lateral deviation from the track centerline. For both the MPPI and the CC-MPPI controllers, we use the same control horizon, inverse temperature [26], number of sampled trajectories per iteration, fraction of uncontrolled sample trajectories, and control cost matrix. These parameter values are shared by all the controller setups in the simulations of this section.
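The two obstacle-cost shapes in (27) and (28) above can be sketched as follows. Since the exact expressions are not reproduced in the text, both functions below, including the exponential decay and the `margin` parameter, are illustrative assumptions rather than the paper's formulas.

```python
import numpy as np

def obstacle_cost_discontinuous(d, r, w_obs):
    """Penalty in the spirit of (27): constant cost inside the obstacle of
    radius r (d is the COM-to-obstacle-center distance), zero outside."""
    return np.where(d <= r, w_obs, 0.0)

def obstacle_cost_continuous(d, r, w_obs, margin=0.05):
    """Penalty in the spirit of (28): maximal on and inside the obstacle,
    decaying smoothly with the distance beyond its edge. The exponential
    shape and the margin value are illustrative choices."""
    return w_obs * np.exp(-np.maximum(d - r, 0.0) / margin)
```

The continuous form gives the sampled rollouts a gradient to follow near an obstacle's edge, which is one plausible reason the race-track experiment below uses the continuous cost.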

VI-C Planning in Fast-changing Environment: Unpredictable Obstacles

This experiment tests the CC-MPPI controller's ability to respond to emergencies caused by the unpredictable appearance of obstacles. We test the CC-MPPI and benchmark its performance against a baseline MPPI controller in an environment where an obstacle suddenly appears in the traveling direction of an autonomous vehicle. In this simulation, the CC-MPPI and MPPI controllers use injected noise with the same covariance and the same weighting coefficients in their trajectory costs. Figure 2 demonstrates that the MPPI controller fails to find a feasible solution, resulting in a collision with the obstacle. Figure 2 further shows that the CC-MPPI has a more effective trajectory sampling distribution, which leads the vehicle to take a feasible trajectory that avoids the collision.

Fig. 2: Responses of the MPPI and CC-MPPI to an unpredictable obstacle. The gray curves are the sampled trajectories, the green curves represent the predicted optimal trajectories generated by the controllers, and the black points show the actual trajectories taken by the vehicle.

VI-D Aggressive Driving in Cluttered Environment

To further examine the performance of the CC-MPPI controller in more complicated environments, we run simulations using a CC-MPPI controller and an MPPI controller on a race track densely scattered with obstacles. The track has a constant width of 0.6 m, the centerline has a length of 10.9 m, and each turn of the centerline has a radius of 0.3 m. Each obstacle has a radius of 0.1 m and uses the continuous obstacle cost (28). The simulations in this section were set up so that both controllers achieve minimum lap time while avoiding collisions with the cluttered obstacles. Both the CC-MPPI and the MPPI controllers use the same covariance for their injected noise. We then perform a grid search over the cost weight in (25) that penalizes collisions with obstacles and the cost weight in (29) that rewards the vehicle's progress along the track centerline. Table I shows the grid search parameters. Figure 4 presents the results of the grid search in a scatter plot showing the distribution of lap times and numbers of collisions. Table II summarizes the grid search.

Cost Parameter Min Max Interval
Obstacle-cost weight in (25) 75 450 37.5
Progress-cost weight in (29) 1.65 2.97 0.33
TABLE I: CC-MPPI and MPPI Grid Search Parameter Values

Fig. 3: MPPI and CC-MPPI trajectories on a race track. The trajectories in red are generated by the CC-MPPI, and the trajectories in green are generated by MPPI using the same injected noise covariance as the CC-MPPI.

We define a collision as a situation in which the vehicle overlaps with an obstacle. We further define a failure as the situation in which the vehicle comes to a complete stop, or strays too far from the track centerline. If the vehicle finishes all laps without a failure, the simulation is considered a success. Figure 4 shows that the data points corresponding to the CC-MPPI occupy the bottom part of the scatter plot, which indicates that the CC-MPPI generates trajectories that are significantly faster than those of the MPPI controller. Table II shows that the CC-MPPI achieves a smaller average lap time, fewer collisions, and a higher success rate than the MPPI in the simulations. Moreover, the two data points inside the red circles in Figure 4 are produced by the MPPI and the CC-MPPI with the same pair of cost-weight values, and Figure 3 visualizes the trajectories corresponding to these two data points. We see that the CC-MPPI generates a driving maneuver that is more aggressive than the MPPI's, which helps explain why the CC-MPPI achieves a significantly smaller average lap time.

The performance of the CC-MPPI, however, comes with an increased computational overhead. Using our implementation, the CC-MPPI controller runs at 13 Hz, while the MPPI controller runs at 97 Hz. All simulations were run on a desktop computer equipped with an i9 3.5 GHz CPU and an RTX 3090 GPU. The main computational bottleneck of the CC-MPPI is the computation of the feedback gain at each iteration. Possible remedies include updating the feedback gain less frequently, computing the feedback gains off-line and storing them in a lookup table, or using a faster, dedicated convex optimization solver that is more suitable for real-time implementation [14, 15, 6].

Controller Avg. lap time (s) No. collisions/lap Success rate
CC-MPPI 4.20 160.52 98.18%
MPPI 6.44 171.38 30.78%
TABLE II: CC-MPPI vs. MPPI across different settings

Fig. 4: Lap time and number of collisions distribution. Each point in this figure shows the average lap time and the average number of collisions over 20 laps in a simulation using one pair of cost-weight values from Table I. The orange points are produced by the MPPI controller and the cyan points by the CC-MPPI controller.

VII Conclusions and Future Work

We have proposed the Covariance-Controlled Model Predictive Path Integral (CC-MPPI) algorithm, which incorporates covariance steering within the MPPI algorithm. The CC-MPPI algorithm has adjustable trajectory sampling distributions that can be tuned by changing the terminal covariance constraint in (1d) and the covariance of the injected noise in (1c), which makes it more flexible and robust than the MPPI algorithm. In the simulations, we showed that the CC-MPPI explores the environment and samples trajectories more efficiently than the MPPI for the same level of exploration noise. As a result, the vehicle responds faster to unpredictable obstacles and avoids collisions in a cluttered environment better than with the MPPI. The CC-MPPI performance can be further improved if the terminal covariance and the injected noise covariance are tuned jointly based on information about the robot's surrounding environment.

In the future, we plan to design a policy that chooses the terminal covariance constraint and the injected noise covariance judiciously and on-the-fly. Such a policy should evaluate the environment and assign these covariances to the CC-MPPI controller, so that the trajectory sampling distribution can be tailored to carry out informed and efficient sampling in any environment.

References

  • [1] I. M. Balci and E. Bakolas (2021) Covariance steering of discrete-time stochastic linear systems based on Wasserstein distance terminal cost. IEEE Control Systems Letters 5 (6), pp. 2000–2005. External Links: Document Cited by: §I.
  • [2] D. Bernardini and A. Bemporad (2009) Scenario-based model predictive control of stochastic constrained linear systems. In Proceedings of the 48th IEEE Conference on Decision and Control, Shanghai, China, pp. 6333–6338. Note: held jointly with the 28th Chinese Control Conference External Links: Document Cited by: §I.
  • [3] G. C. Calafiore and L. Fagiano (2013) Robust model predictive control via scenario optimization. IEEE Transactions on Automatic Control 58 (1), pp. 219–224. External Links: Document Cited by: §I.
  • [4] M. Cannon, B. Kouvaritakis, and D. Ng (2009) Probabilistic tubes in linear stochastic model predictive control. Systems & Control Letters 58, pp. 747–753. External Links: Document Cited by: §I.
  • [5] M. Cannon, B. Kouvaritakis, and X. Wu (2009) Probabilistic constrained MPC for multiplicative and additive stochastic uncertainty. IEEE Transactions on Automatic Control 54 (7), pp. 1626–1632. External Links: Document Cited by: §I.
  • [6] D. Dueri, J. Zhang, and B. Açikmeşe (2014) Automated custom code generation for embedded, real-time second order cone programming. IFAC Proceedings Volumes 47 (3), pp. 1605–1612. Cited by: §VI-D.
  • [7] P. Falcone, M. Tufo, F. Borrelli, J. Asgari, and H. E. Tseng (2007) A linear time varying model predictive control approach to the integrated vehicle dynamics control problem in autonomous systems. IEEE Conference on Decision and Control (), pp. 2980–2985. External Links: Document Cited by: §IV-A.
  • [8] M. Farina, L. Giulioni, and R. Scattolini (2016) Stochastic linear model predictive control with chance constraints – a review. Journal of Process Control 44, pp. 53–67. External Links: Document Cited by: §I.
  • [9] M. Goldshtein and P. Tsiotras (2017) Finite-horizon covariance control of linear time-varying systems. In 56th IEEE Conference on Decision and Control (CDC), Vol. , pp. 3606–3611. External Links: Document Cited by: §I.
  • [10] Y. Huang and K. Gupta (2009) Collision-probability constrained prm for a manipulator with base pose uncertainty. In IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol. , pp. 1426–1432. External Links: Document Cited by: §I.
  • [11] A. A. Jalali and V. Nadimi (2006) A survey on robust model predictive control from 1999-2006. In International Conference on Computational Intelligence for Modelling Control and Automation and International Conference on Intelligent Agents Web Technologies and International Commerce (CIMCA’06), Vol. , pp. 207–207. External Links: Document Cited by: §I.
  • [12] G. Kewlani, G. Ishigami, and K. Iagnemma (2009) Stochastic mobility-based path planning in uncertain environments. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1183–1189. External Links: Document Cited by: §I.
  • [13] A. Liniger, X. Zhang, P. Aeschbach, A. Georghiou, and J. Lygeros (2017) Racing miniature cars: enhancing performance using stochastic MPC and disturbance feedback. In American Control Conference, Seattle, WA, pp. 5642–5647. External Links: Document Cited by: §I.
  • [14] Y. Mao, M. Szmuk, and B. Açikmeşe (2018) A tutorial on real-time convex optimization based guidance and control for aerospace applications. In Annual American Control Conference (ACC), Vol. , pp. 2410–2416. External Links: Document Cited by: §VI-D.
  • [15] J. Mattingley and S. Boyd (2012) CVXGEN: a code generator for embedded convex optimization. Optimization and Engineering 13 (1), pp. 1–27. Cited by: §VI-D.
  • [16] N. A. Melchior and R. Simmons (2007) Particle RRT for path planning with uncertainty. In Proceedings 2007 IEEE International Conference on Robotics and Automation, Vol. , pp. 1617–1624. External Links: Document Cited by: §I.
  • [17] A. Mesbah (2016) Stochastic model predictive control: an overview and perspectives for future research. IEEE Control Systems Magazine 36 (6), pp. 30–44. External Links: Document Cited by: §I.
  • [18] MOSEK ApS (2017) The MOSEK optimization toolbox for MATLAB manual, version 8.1. External Links: Link Cited by: §IV-B.
  • [19] K. Okamoto, M. Goldshtein, and P. Tsiotras (2018) Optimal covariance control for stochastic systems under chance constraints. IEEE Control Systems Letters 2 (2), pp. 266–271. External Links: Document Cited by: §I, §IV-B.
  • [20] K. Okamoto and P. Tsiotras (2019) Optimal stochastic vehicle path planning using covariance steering. IEEE Robotics and Automation Letters 4 (3), pp. 2276–2281. External Links: Document Cited by: §I, §IV-B.
  • [21] K. Okamoto and P. Tsiotras (2019) Stochastic model predictive control for constrained linear systems using optimal covariance steering. Note: arXiv:1905.13296 External Links: 1905.13296 Cited by: §I, §IV-B.
  • [22] R. Pepy and A. Lambert (2006) Safe path planning in an uncertain-configuration space using RRT. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5376–5381. External Links: Document Cited by: §I.
  • [23] J. Pravitra, K. A. Ackerman, C. Cao, N. Hovakimyan, and E. A. Theodorou (2020) L1-adaptive MPPI architecture for robust and agile control of multirotors. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 7661–7666. External Links: Document Cited by: §I.
  • [24] U. Rosolia and F. Borrelli (2020) Learning how to autonomously race a car: a predictive control approach. IEEE Transactions on Control Systems Technology 28 (6), pp. 2713–2719. External Links: Document Cited by: §I.
  • [25] P. Tsiotras and M. Mesbahi (2017) Toward an algorithmic control theory. Journal of Guidance, Control, and Dynamics 40, pp. 1–3. External Links: Document Cited by: §I.
  • [26] G. Williams, P. Drews, B. Goldfain, J. M. Rehg, and E. A. Theodorou (2018) Information-theoretic model predictive control: theory and applications to autonomous driving. IEEE Transactions on Robotics 34 (6), pp. 1603–1622. External Links: Document Cited by: §I, §III, §V, §VI-B.
  • [27] G. Williams, B. Goldfain, P. Drews, K. Saigol, J. M. Rehg, and E. A. Theodorou (2018) Robust sampling based model predictive control with sparse objective information. In Robotics: Science and Systems, Pittsburgh, PA, pp. 42–51. External Links: Document Cited by: Fig. 1, §I.
  • [28] G. Williams, N. Wagener, B. Goldfain, P. Drews, J. M. Rehg, B. Boots, and E. A. Theodorou (2017) Information theoretic MPC for model-based reinforcement learning. In IEEE International Conference on Robotics and Automation (ICRA), Singapore, pp. 1714–1721. External Links: Document Cited by: §I.
  • [29] G. Williams, A. Aldrich, and E. A. Theodorou (2017) Model predictive path integral control: from theory to parallel computation. Journal of Guidance, Control, and Dynamics 40 (2), pp. 344–357. External Links: Document, https://doi.org/10.2514/1.G001921 Cited by: §I, §III, §VI-B.
  • [30] G. Williams, B. Goldfain, P. Drews, J. M. Rehg, and E. A. Theodorou (2017) Autonomous racing with autorally vehicles and differential games. Note: ArXiv:1707.04540 Cited by: §III.