A decentralized algorithm for control of autonomous agents coupled by feasibility constraints

02/25/2017
by   Ugo Rosolia, et al.

In this paper a decentralized control algorithm for systems composed of N dynamically decoupled agents, coupled by feasibility constraints, is presented. The control problem is divided into N optimal control sub-problems and a communication scheme is proposed to decouple computations. The derivative of the solution of each sub-problem is used to approximate the evolution of the system allowing the algorithm to decentralize and parallelize computations. The effectiveness of the proposed algorithm is shown through simulations in a cooperative driving scenario.


I Introduction

In the last decade researchers have focused on automation in several application fields, and in the near future autonomous mechatronic systems will be part of our everyday lives. Under this scenario, the need for cooperative control algorithms able to manage the interactions among autonomous agents is increasing. Despite the advances in computational power that allow complex tasks to be solved in real time for a single autonomous agent, controlling the interaction among autonomous agents is far more challenging [1]. Indeed, when two or more agents have to interact, there could be communication limitations, or the dimension of the problem, and consequently the computational burden, could grow exponentially.

In this paper we focus on dynamically decoupled systems subject to coupling constraints. This is the case for UAV formation flight, air traffic control, power management and several other applications [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]. Early works in the field did not explicitly take the coupling constraints into account [11, 12]. For example, in UAV flight control the collision avoidance constraints are usually enforced using barrier functions, which do not guarantee safety. In [8] a decentralized control strategy able to take hard constraints into account was proposed. However, the problem is solved sequentially and each decentralized optimization has to wait until the previous one is completed. Thus, for large scale systems, this approach may be infeasible in real time. In order to overcome this issue, the authors in [9] proposed a strategy to parallelize computations. The problem is divided into sub-problems, which are solved in parallel when the agents are not coupled. In [13] a robust distributed MPC which allows the computations to be decoupled is presented. A robust tube is constructed for each i-th agent and a local feedback controller is used to keep the agent inside the tube. Therefore, communication between agents is only required to update the tube.

This work proposes a decentralized and parallelized algorithm to compute a nearly optimal solution of a specific class of non-linear non-convex problems, under the assumption of no delay or loss of communication. The optimization is divided into sub-problems, similarly to [9]. The main contribution of this paper is a communication scheme which allows the solution of each sub-problem to be computed independently. This scheme is inspired by the GMRES/Continuation method [14, 15, 16], where the time evolution of a nonlinear algebraic system is traced by its derivative. The proposed scheme uses the derivative of the optimal solution to decouple the sub-problems: each autonomous agent approximates the behavior of the system based on its derivative. Continuation methods do not converge to the solution when the evolution of the system is discontinuous [14]. Unfortunately, the coupling inequality constraints introduce a discontinuity, as shown in [17]. Thus, we use a relaxed approach to deal with inequality constraints which allows us to use a continuation method. These relaxed conditions could also be used for explicit fixed time step algorithms.

This paper is organized as follows: in Section II the centralized problem is expressed as the summation of decentralized optimal control sub-problems. In Section III the control algorithm is described and the proposed conditions to deal with inequality constraints are derived. Section IV provides additional details on the algorithm and its range of applicability. Finally, in Section V the proposed control logic is tested on simulations of a cooperative driving scenario. Section VI provides final remarks.

II Problem Formulation

In this section the centralized control problem is introduced. Afterwards, the relaxation method used to guarantee continuity of the optimal solution to the control problem is described. Finally, we present the decoupling strategy.

II-A System description

The proposed algorithm aims to compute the trajectories of a system composed of N dynamically decoupled agents. The dynamics of each agent have the following non-linear state space representation

(1)

with

being the state vector and

the control action related to the i-th agent. Thus, the dynamics of the overall system can be written as

(2)

where is the state vector and the input vector.
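Since the displayed equations did not survive extraction, the block structure described above can only be sketched. The code below is a hypothetical illustration, not the paper's model: `agent_dynamics` stands in for a generic per-agent model, and the stacking mirrors the overall system of Equation (2).

```python
import numpy as np

def agent_dynamics(x_i, u_i):
    """Single-agent dynamics x_i' = f_i(x_i, u_i).
    Hypothetical example: a point mass whose inputs are a
    speed and a heading angle."""
    v, theta = u_i
    return np.array([v * np.cos(theta), v * np.sin(theta)])

def system_dynamics(x, u, n_agents, n_x=2, n_u=2):
    """Overall dynamics of Eq. (2): the decoupled agent models
    applied block-wise to the stacked state and input vectors."""
    dx = np.zeros_like(x)
    for i in range(n_agents):
        xi = x[i * n_x:(i + 1) * n_x]
        ui = u[i * n_u:(i + 1) * n_u]
        dx[i * n_x:(i + 1) * n_x] = agent_dynamics(xi, ui)
    return dx
```

Because the agents are dynamically decoupled, each block of the stacked derivative depends only on that agent's own state and input.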

The optimal control problem consists of the minimization of decoupled cost functions over a moving time interval with fixed duration:

(3a)
s.t.
(3b)
(3c)

where the feasibility constraints in (3c) may couple the agents.

II-B The optimal control problem

The feasibility constraints in Equation (3c) can be enforced through the cost function [17, 18], given a vector of time-varying Lagrange multipliers defined as

(4)

where

(5)

The centralized optimal control problem consists of the minimization of the augmented cost function

(6)

and is defined as

(7a)
s.t (7b)
(7c)

II-C Relaxation method

When the inequality constraint in Equation (5) becomes active after a period in which it was inactive, the optimal solution has a discontinuity [17]. Unfortunately, continuation methods cannot be used to compute the solution at discontinuity points [19]. Thus, continuation methods are not suitable to compute the optimal solution when optimality is described by the KKT conditions (Eq. (5)). In order to overcome this issue, we introduce a set of slack variables to convert the inequality constraints into equality constraints,

(8)

It is clear that when the equality constraints (Eq. (8)) hold, the inequality feasibility constraints (Eq. (3c)) are also satisfied. Moreover, this problem formulation removes the KKT conditions (Eq. (5)), which introduced the discontinuity.

The relaxed optimal control problem is defined as the minimization of the cost function

(9)

subject to the dynamic constraint (2). We underline that the effect of the slack variable is to add a safety margin, determined by the tuning parameter, and that the optimal solution of the relaxed problem does not saturate the feasibility constraints. Therefore, the solution of the relaxed problem is suboptimal for the original problem described in Section II-B.
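The displayed equations were lost in extraction; the classical slack-variable construction this section refers to (see Bryson [17]) can be written, in generic notation that is only a sketch of the paper's actual Eq. (8)-(9):

```latex
% inequality constraint converted into an equality through a slack s_j
C_j(x) \le 0
\quad\Longrightarrow\quad
C_j(x) + s_j^2 = 0,
% adjoined to the running cost via a time-varying multiplier \mu_j(t)
\bar{L}(x,u,s) \;=\; L(x,u) \;+\; \sum_j \mu_j \left( C_j(x) + s_j^2 \right)
```

Since the squared slack can represent any non-positive value of C_j, the equality form is satisfiable exactly when the original inequality holds, and the sharp complementarity switch of the KKT conditions is avoided.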

II-D Decoupling strategy

The centralized control problem can be written as the summation of optimal control sub-problems. Each problem is related to the i-th agent and is defined as

(10a)
s.t (10b)
(10c)

with

(11)

where is the set of subscripts of the inequalities involving the i-th agent. It is clear that if at a given time the globally optimal solutions of the other agents are known, the i-th problem can be solved independently and its solution is globally optimal for the centralized relaxed problem.

III Algorithm

In this section a variation of the GMRES/Continuation method, which allows computations to be parallelized and decentralized, is presented. Moreover, we suggest a numerical strategy to handle the feasibility constraints based on their effect on optimality.

III-A Decentralized algorithm

The GMRES/Continuation method uses the derivative of the optimal solution to trace its behavior in time. For details on the numerical implementation and accuracy of continuation methods we refer to [15, 16, 20, 21, 22].

The proposed algorithm uses the derivative of the optimal solution to approximate the optimal trajectories of the agents, enabling the independent solution of each sub-problem. To initialize the algorithm, the derivative is computed with a centralized optimization method. After the initialization, the derivative is computed on board each i-th agent and communicated to the others. Table I illustrates the algorithm’s steps. It is interesting to notice that the proposed algorithm does not introduce further numerical approximation with respect to the centralized algorithm based on continuation methods, as shown in the result section.

Initialization
Step 1) Compute the optimal solution and the optimal derivative with a centralized algorithm
Step 2) Communicate the optimal solution and its derivative to all the agents
Iteration k
Step 3) Each i-th agent numerically integrates the trajectories of the other agents
Step 4) Each i-th agent solves its problem to compute the optimal solution and its derivative
Step 5) Each i-th agent communicates the optimal solution and its derivative to all the other agents
Step 6) go to step 3)
TABLE I: Algorithm scheme
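The iteration of Table I can be sketched in code. This is a toy illustration with hypothetical names: each agent predicts the others from the last communicated solution and derivative (a first-order continuation step), and `solve` is a placeholder for the actual sub-problem optimization.

```python
import numpy as np

def extrapolate(z_prev, dz_prev, dt):
    """Step 3: approximate another agent's optimal solution from
    the last communicated value and its time derivative."""
    return z_prev + dt * dz_prev

def decentralized_step(agents, dt):
    """One iteration of the scheme in Table I (toy version).
    `agents` maps an id to a dict with the last communicated
    solution 'z', its derivative 'dz', and a local solver."""
    # Step 3: every agent builds the same predictions locally.
    predictions = {i: extrapolate(a["z"], a["dz"], dt)
                   for i, a in agents.items()}
    # Steps 4-5: solve each sub-problem given the predicted
    # trajectories of the others, then broadcast (z, dz).
    for i, a in agents.items():
        others = {j: p for j, p in predictions.items() if j != i}
        a["z"], a["dz"] = a["solve"](predictions[i], others, dt)
    return agents
```

Because the predictions use only previously communicated data, the `solve` calls in the loop are independent and can run in parallel, one per agent.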

III-B Handling coupling constraints

In this section the effect of the coupling feasibility constraints on optimality is analyzed, and the relaxed approach to deal with inequality constraints is introduced. As the solution to the relaxed problem is close to the original one, when a feasibility inequality constraint is satisfied with a margin the corresponding relaxed equality constraint does not influence optimality. In order to verify this statement, it is possible to compute the relationship between the slack variable and the Lagrange multiplier related to the j-th constraint. If the solution is optimal, the following relationship for the derivative of the equivalent running cost (Eq. (11)) holds

(12)

Combining Equation (8) and Equation (12), the explicit relation between and the state can be written as

(13)

Equation (13) shows that when the inequality constraint is satisfied with a margin, the Lagrange multiplier related to the relaxed constraint is small in magnitude. Therefore, the effect of the related relaxed feasibility constraint on optimality is negligible. When this condition occurs, we would like to drop the unnecessary feasibility constraint, reducing the dimension of the i-th optimization problem in Step 4) of Table I. Namely, we set a threshold value for which the j-th Lagrange multiplier of Equation (9) is set to zero

(14)

Substituting into Equation (14) the relationship between the Lagrange multiplier and the system state (Eq. (13)), a threshold value beyond which the relaxed feasibility constraint has to be enforced in the problem is obtained:

(15)

It is interesting to notice that these conditions (Eq. (15)) are similar to the KKT conditions in Equation (5), but they are suitable for continuation methods and fixed time step algorithms.
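In code, the relaxed activation rule reduces to a comparison against the threshold. The sketch below uses hypothetical names: a `margins` vector stands for how safely each constraint is satisfied.

```python
def active_constraints(margins, threshold):
    """Relaxed counterpart of the KKT complementarity test
    (Eq. (14)-(15)): keep constraint j in the i-th sub-problem
    only when its margin is below the threshold; otherwise its
    multiplier is set to zero and the constraint is dropped."""
    return [j for j, margin in enumerate(margins) if margin < threshold]
```

Unlike the exact complementarity conditions, this test is a smooth-friendly on/off rule, so it can be evaluated at every step of an explicit fixed time step algorithm without introducing a discontinuity in the traced solution.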

IV Algorithm Analysis

IV-A Minimum principle properties

The GMRES/Continuation method is based on the optimality conditions stated by the minimum principle [16, 23, 24]. The minimum principle provides necessary conditions for global optimality and it is not always sufficient to compute the optimal solution [18]. Therefore, it is important to analyze the algorithm to understand which class of problems can be solved with the proposed control logic.

First, we define the difference between weak and strong minima. Given a general optimal control problem,

(16a)
s.t (16b)
(16c)

A trajectory is a weak minimum if it minimizes the functional (Eq. (16a)) over all the trajectories close to it in the sense of the 1-norm, meaning that

(17)

Conversely, a trajectory is a strong minimum if it minimizes the functional (Eq. (16a)) over all the trajectories close to it in the sense of the 0-norm,

(18)
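In the standard calculus-of-variations notation (see Liberzon [18]; the symbols here are generic, since the paper's own were lost in extraction), the two neighborhoods read:

```latex
% weak minimum y^*: optimal among trajectories close in the 1-norm
\|y - y^*\|_1 \;=\; \max_{t}\,\|y(t)-y^*(t)\| \;+\; \max_{t}\,\|\dot{y}(t)-\dot{y}^*(t)\| \;<\; \epsilon
% strong minimum y^*: optimal among trajectories close in the 0-norm
\|y - y^*\|_0 \;=\; \max_{t}\,\|y(t)-y^*(t)\| \;<\; \epsilon
```

The 0-norm neighborhood is larger, since it places no restriction on the derivatives; hence every strong minimum is also a weak minimum, but not vice versa.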

The optimality conditions stated by the minimum principle are satisfied for strong minima and not for weak minima [18]. Therefore, the proposed algorithm is suitable to solve problems that are non-convex with respect to the 1-norm, if those are convex with respect to the 0-norm. For example, say that our control problem is to find the trajectory closest to zero, outside an unfeasible region, as shown in Figure 1.

From Figure 1 it is clear that the trajectory on the left and the one on the right are far apart in the sense of the 1-norm (the two trajectories have non-infinitesimal derivatives of opposite sign), and for this reason they could represent two weak minima for this problem. However, the two trajectories are close in the sense of the 0-norm; thus the problem has just one strong minimum, which can be correctly computed with the optimality conditions of the minimum principle.

Fig. 1: Domain of the feasible trajectories of the optimal control problem. The objective is to compute the trajectory closest to zero which does not cross the unfeasible region.

IV-B Continuation method properties

The proposed algorithm is based on a continuation method, meaning that at each time instant the optimal solution is given and the algorithm computes its derivative. This derivative is used at the next time instant to approximate the optimal solution.

Therefore, if there are several trajectories satisfying the minimum principle, the algorithm computes the evolution in time of the given trajectory. However, there could be issues at bifurcation points, where the optimal solution has two possible derivatives. This particular situation is discussed in the next section.
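The continuation step itself can be sketched as follows: writing the optimality conditions as a root-finding problem F(z, t) = 0 and differentiating in time gives dz/dt = -(dF/dz)^(-1) dF/dt, which is integrated to trace the solution. This scalar toy (central differences, explicit Euler) only illustrates the principle, not the GMRES-based implementation of [15, 16].

```python
def continuation_trace(F, z0, t0, t1, n_steps=1000, eps=1e-6):
    """Trace the root of F(z, t) = 0 in time: from F(z, t) = 0,
    dz/dt = -(dF/dz)^(-1) * dF/dt, integrated with explicit
    Euler; derivatives are approximated by central differences."""
    z, t = float(z0), float(t0)
    dt = (t1 - t0) / n_steps
    for _ in range(n_steps):
        dFdz = (F(z + eps, t) - F(z - eps, t)) / (2 * eps)
        dFdt = (F(z, t + eps) - F(z, t - eps)) / (2 * eps)
        z += dt * (-dFdt / dFdz)
        t += dt
    return z
```

Note that the scheme only follows the branch it starts on; it has no mechanism to jump to another root, which is exactly why discontinuities and bifurcation points require the special treatment discussed in the text.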

IV-C Non-convex problems

Combining the properties of the minimum principle and of continuation methods, we are able to solve a particular type of non-linear non-convex problem. Indeed, the algorithm is able to take non-convex decisions if the candidate trajectories are close in the sense of the 0-norm. This property is of key importance in control problems where the optimization is performed on a moving time interval. In Figure 2 the domain of an optimal control problem similar to the one in Section IV-A is shown. Here the objective is to compute, on a moving time interval, the feasible trajectory closest to zero.

In Figure 2 the unfeasible region is outside the optimization window, thus the problem is convex. When the optimization window moves in time, as soon as it encounters the unfeasible region the problem becomes non-convex. Indeed, there are two weak minima, as shown in Figure 3.

Fig. 2: Optimal control problem on a time moving window. The objective is to compute the trajectory closest to zero, outside the unfeasible region.
Fig. 3: Representation of a bifurcation point. As soon as the unfeasible region enters the optimization window, there are two weak local minima.

Figure 3 depicts the bifurcation point mentioned in Section IV-B. As soon as the unfeasible region enters the optimization window, the globally optimal solution bifurcates into two locally optimal solutions with respect to the 1-norm. However, as shown in Figure 3, the two trajectories are close with respect to the 0-norm, and the optimality conditions of the minimum principle allow the computation of the optimal derivative for the unique strong minimum. Therefore, the algorithm correctly chooses the globally optimal trajectory.

Afterwards, when the unfeasible region is almost completely inside the optimization window, the continuation algorithm follows the trajectory which was globally optimal at the bifurcation point (Fig. 4).

Fig. 4: Evolution of the optimal solution after the unfeasible region has entered the optimization window.

V Results

The algorithm is tested on a cooperative driving scenario, where autonomous vehicles drive on the same roadway at different target speeds. In particular, the algorithm is used to compute the collision-free trajectories of each autonomous vehicle. The vehicles are modeled with a simplified system; this choice for the trajectory planning phase is well-established in the literature [25, 26, 27]. It is important to note that this problem is suitable to test our algorithm, as each vehicle has to take a non-convex decision during overtaking maneuvers. Moreover, we assume that no safety maneuvers are needed, to guarantee the existence of the derivative required in Section III-A.

Simulations were performed on a Windows computer featuring an Intel Core i5 processor, using Matlab 2013b. In order to measure the computational time, a stand-alone executable mex-function has been compiled for each agent. This function could be used on Linux PCs, and experimental results are envisaged for the future.

V-A Comparison between decentralized and centralized approach

The agents in Section III-A represent autonomous vehicles and are modeled using a Single Point Mass Model in a curvilinear abscissa reference frame; for more details see [28, 29]. The cost function of each vehicle is designed for lane keeping at a cruise velocity:

(19)

where represents the distance traveled along the roadway mid-line, and the lateral distance between the vehicle’s center of gravity and the roadway mid-line. The inputs are the velocity and the heading angle. The remaining coefficients are the weighting parameters. More details on this curvilinear reference frame are given in [28], [29] and [30].
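A hypothetical quadratic running cost with the structure described above can be sketched as follows; the actual symbols and weights of Eq. (19) are not recoverable from this version of the text, so every name and value below is a placeholder.

```python
def running_cost(e_y, v, theta, v_target, q_y=1.0, q_v=1.0, q_th=0.1):
    """Sketch of a lane-keeping running cost: penalize the
    lateral offset e_y from the roadway mid-line, the deviation
    of the velocity input v from the cruise target, and the
    heading input theta. Weights q_* are placeholders."""
    return q_y * e_y ** 2 + q_v * (v - v_target) ** 2 + q_th * theta ** 2
```

A cost of this form is minimized by driving along the mid-line at the target speed, which matches the lane-keeping behavior the section describes.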

Finally, the feasibility constraints in Section II.C are expressed as ellipses

(20)

where the axes are chosen according to the vehicle length and width: 4 and 2 meters, respectively. In this example, for each problem the distance between the i-th agent and the j-th agent is considered. From Equation (13) and from our choice of the tuning parameter, when the relative distance is large the corresponding Lagrange multiplier is negligible. Therefore, we picked the threshold so that, when the relative distance between two agents is greater than the threshold, the multiplier is set to zero and the agents are decoupled. Note that this threshold corresponds to the distance used in commercial blind spot detection systems.
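The elliptical constraint and the distance-based decoupling rule can be sketched as follows. Only the 4 m / 2 m axes come from the text; the normalization, the C(x) ≤ 0 sign convention, and the 10 m threshold are assumptions for illustration.

```python
def ellipse_constraint(p_i, p_j, a=4.0, b=2.0):
    """Coupling constraint between agents i and j in C(x) <= 0
    form: negative when agent j lies outside the ellipse of
    semi-axes a (vehicle length) and b (vehicle width) centered
    on agent i; positive when the ellipse is violated."""
    ds = (p_i[0] - p_j[0]) / a
    dy = (p_i[1] - p_j[1]) / b
    return 1.0 - (ds ** 2 + dy ** 2)

def coupled(p_i, p_j, threshold=10.0):
    """Distance-based decoupling: the constraint (and its
    multiplier) is dropped when the agents are farther apart
    than a threshold (hypothetical 10 m value)."""
    dist = ((p_i[0] - p_j[0]) ** 2 + (p_i[1] - p_j[1]) ** 2) ** 0.5
    return dist <= threshold
```

When `coupled` returns False, the corresponding multiplier is set to zero and the two sub-problems proceed independently.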

V-B Simulation Results

V-B1 Comparison with a centralized algorithm

In this section two simulations with the same boundary conditions are carried out: the first one uses the proposed decentralized algorithm and the second one a centralized GMRES/Continuation algorithm. The solutions are compared to assess optimality, as the solution computed with the centralized method is optimal for the relaxed problem.

As shown in Figure 5, two agents are traveling on the same straight path at different target velocities; therefore the faster agent overtakes the slower one. During the overtaking maneuver the agents move sideways from the centerline, so that the overall derivatives of the steering angle and of the lateral velocity are minimized. The coefficients and boundary conditions used in the simulation for the two agents can be found in Tables II and III.

Fig. 5: Trajectories of two agents traveling on the same target path at different target speeds: one agent shown in red, the other in green.
TABLE II: Simulation Coefficients
Agent [units]
[m]
[m/s]
TABLE III: Agents’ target velocity and lateral offset

Figure 6 shows the lateral difference between the trajectories of the two agents computed with the proposed decentralized control algorithm and with the centralized one. The maximum difference between the trajectories computed with the two algorithms is a small fraction of the maximum lateral displacement. Thus, the proposed algorithm does not introduce further approximation with respect to a centralized continuation algorithm, and it is able to compute a nearly optimal solution for the relaxed problem.

Finally, it is important to analyze the computational cost. The centralized control strategy takes considerably longer, on average, to compute the solution than the decentralized one, as shown in Figure 7.

Fig. 6: Lateral difference between the trajectories of the agents computed with the proposed decentralized algorithm and with a centralized one.
Fig. 7: Comparison between computational cost of the centralized and decentralized algorithms.

V-C Communication method

When the number of agents increases, a decentralized algorithm is necessary to limit the computational burden. In this section a simulation involving five agents is carried out and the computational time is analyzed. In this scenario, the proposed relaxed method to deal with inequality constraints (Eq. (15)) plays a crucial role. The coefficients and boundary conditions used in the simulation for the five agents can be found in Tables II, III and IV.

Agent [units]
[m]
TABLE IV: Agents’ initial conditions

In Figure 8 the trajectories of the five agents are shown. Agent1 travels at the highest cruise velocity and its starting position is the closest to the Y axis. Therefore, during the simulation it overtakes the slower agents that it encounters on the path.

Fig. 8: Trajectories of the five agents. Agent1 has the highest cruise velocity and it overtakes the other four agents. Agent2 and Agent3 have an intermediate cruise velocity and they overtake Agent4 and Agent5. In total, there are 8 overtaking maneuvers during the simulation.

It is clear that the feasibility constraints which couple Agent1 with the others should be enforced in the problem just during the overtaking maneuvers. In Figure 9, a Boolean variable is used to indicate whether or not the i-th constraint is enforced.

Fig. 9: Coupling between Agent1 and the i-th agent as a function of the traveled distance on the X axis. When the Boolean variable is set to one, the Lagrange multiplier is different from zero.

Figure 10 shows the trajectory of Agent1, and those of the other agents when the related feasibility constraints are enforced in the problem. Here it is possible to see that the relaxed constraints are correctly enforced just during the overtaking maneuvers.

Fig. 10: Trajectory of Agent1 in perspective view. The trajectories of the other agents are reported when the related feasibility constraints are enforced in the problem, namely when the Boolean variable of Figure 9 is set to 1.

V-D Complete Simulation

Fig. 11: Computational cost as a function of the number of enforced relaxed constraints.

Finally, the algorithm is tested in the worst case scenario, where all the feasibility constraints have to be enforced in the problem (a video of the simulation can be found at http://youtu.be/wTfb5M1YH44 ). Figure 11 shows the behavior of the computational cost as a function of the enforced relaxed constraints. In particular, Figure 11 is divided into five zones, numbered from 0 to 4, to indicate the number of enforced constraints. The minimum computational cost is achieved when no constraints are enforced in the problem, and the maximum is reached when all four constraints are enforced. The resulting increment in computational cost between the unconstrained problem and the fully constrained one is small when compared with the cost of the centralized approach, which solves a problem involving only two agents.

VI Conclusions

In this paper a decentralized control algorithm for dynamically decoupled systems, coupled by feasibility constraints, is presented. The algorithm, similarly to continuation methods, uses the derivative of the optimal solution to approximate the behavior of the system. This strategy allows computations to be decoupled and parallelized.

Moreover, a relaxed approach to deal with inequality constraints is introduced. This approach eliminates the discontinuity introduced by the KKT conditions, and it is able to recognize when an inequality constraint does not influence optimality and thus need not be enforced in the problem.

The algorithm has been successfully tested in simulation on a cooperative driving scenario. The control logic is able to compute a solution near the global optimum with a decentralized strategy. The size of the problem is reduced when the coupling between agents is not relevant, and thus the computational burden is reduced. Finally, the computational cost of a simulation involving five coupled agents is compared with that of a centralized control problem involving two agents. This comparison underlines the advantage of the decentralized control strategy, which took, on average, less time to solve the optimal control problem, even though the dimension of the problem is four times larger.

VII Acknowledgments

The authors thank Dr. Jacopo Guanetti from UC Berkeley for his feedback on the manuscript.

References

  • [1] A. Bemporad and D. Barcelli, “Decentralized model predictive control,” in Networked control systems.   Springer, 2010, pp. 149–178.
  • [2] A. Vahidi and W. Greenwell, “A decentralized model predictive control approach to power management of a fuel cell-ultracapacitor hybrid,” in American Control Conference, 2007. ACC’07.   IEEE, 2007, pp. 5431–5437.
  • [3] A. Mohammadi and M. B. Menhaj, “Formation control and obstacle avoidance for nonholonomic robots using decentralized mpc,” in Networking, Sensing and Control (ICNSC), 2013 10th IEEE International Conference on.   IEEE, 2013, pp. 112–117.
  • [4] Z. Ma, I. Hiskens, and D. Callaway, “A decentralized mpc strategy for charging large populations of plug-in electric vehicles,” IFAC Proceedings Volumes, vol. 44, no. 1, pp. 10 493–10 498, 2011.
  • [5] S. Devasia, D. Iamratanakul, G. Chatterji, and G. Meyer, “Decoupled conflict-resolution procedures for decentralized air traffic control,” IEEE Transactions on Intelligent Transportation Systems, vol. 12, no. 2, pp. 422–437, 2011.
  • [6] V. R. Desaraju and J. P. How, “Decentralized path planning for multi-agent teams in complex environments using rapidly-exploring random trees,” in Robotics and Automation (ICRA), 2011 IEEE International Conference on.   IEEE, 2011, pp. 4956–4961.
  • [7] D. Raimondo, L. Magni, and R. Scattolini, “Decentralized mpc of nonlinear systems: An input-to-state stability approach,” International Journal of Robust and Nonlinear Control, vol. 17, no. 17, pp. 1651–1667, 2007.
  • [8] A. Richards and J. How, “Decentralized model predictive control of cooperating uavs,” in Decision and Control, 2004. CDC. 43rd IEEE Conference on, vol. 4.   IEEE, 2004, pp. 4286–4291.
  • [9] T. Keviczky, F. Borrelli, K. Fregene, D. Godbole, and G. J. Balas, “Decentralized receding horizon control and coordination of autonomous vehicle formations,” IEEE Transactions on Control Systems Technology, vol. 16, no. 1, pp. 19–33, 2008.
  • [10] P. Trodden and A. Richards, “Cooperative distributed mpc of linear systems with coupled constraints,” Automatica, vol. 49, no. 2, pp. 479–487, 2013.
  • [11] P. Ogren, M. Egerstedt, and X. Hu, “A control lyapunov function approach to multi-agent coordination,” in Decision and Control, 2001. Proceedings of the 40th IEEE Conference on, vol. 2.   IEEE, 2001, pp. 1150–1155.
  • [12] A. Bicchi and L. Pallottino, “On optimal cooperative conflict resolution for air traffic management systems,” IEEE Transactions on Intelligent Transportation Systems, vol. 1, no. 4, pp. 221–231, 2000.
  • [13] P. Trodden and A. Richards, “Distributed model predictive control of linear systems with persistent disturbances,” International Journal of Control, vol. 83, no. 8, pp. 1653–1663, 2010.
  • [14] S. L. Richter and R. A. Decarlo, “Continuation methods: Theory and applications,” IEEE Transactions on Systems, Man, and Cybernetics, no. 4, pp. 459–464, 1983.
  • [15] T. Ohtsuka, “Continuation/gmres method for fast algorithm of nonlinear receding horizon control,” in Decision and Control, 2000. Proceedings of the 39th IEEE Conference on, vol. 1.   IEEE, 2000, pp. 766–771.
  • [16] ——, “A continuation/gmres method for fast computation of nonlinear receding horizon control,” Automatica, vol. 40, no. 4, pp. 563–574, 2004.
  • [17] A. E. Bryson, Applied optimal control: optimization, estimation and control.   CRC Press, 1975.
  • [18] D. Liberzon, Calculus of variations and optimal control theory: a concise introduction.   Princeton University Press, 2012.
  • [19] C. T. Kelley, Iterative Methods for Linear and Nonlinear Equations.   SIAM, Philadelphia, 1995.
  • [20] Y. Saad and M. H. Schultz, “Gmres: A generalized minimal residual algorithm for solving nonsymmetric linear systems,” SIAM Journal on scientific and statistical computing, vol. 7, no. 3, pp. 856–869, 1986.
  • [21] T. Hashimoto, Y. Yoshioka, and T. Ohtsuka, “Receding horizon control with numerical solution for spatiotemporal dynamic systems,” in Decision and Control (CDC), 2012 IEEE 51st Annual Conference on.   IEEE, 2012, pp. 2920–2925.
  • [22] D. A. Knoll and D. E. Keyes, “Jacobian-free newton–krylov methods: a survey of approaches and applications,” Journal of Computational Physics, vol. 193, no. 2, pp. 357–397, 2004.
  • [23] T. Hashimoto, Y. Yoshioka, and T. Ohtsuka, “Receding horizon control for nonlinear parabolic partial differential equations with boundary control inputs,” in Decision and Control (CDC), 2010 49th IEEE Conference on.   IEEE, 2010, pp. 6920–6925.
  • [24] T. Ohtsuka and H. A. Fujii, “Real-time optimization algorithm for nonlinear receding-horizon control,” Automatica, vol. 33, no. 6, pp. 1147–1154, 1997.
  • [25] A. Gray, Y. Gao, T. Lin, J. K. Hedrick, H. E. Tseng, and F. Borrelli, “Predictive control for agile semi-autonomous ground vehicles using motion primitives,” in American Control Conference (ACC), 2012.   IEEE, 2012, pp. 4239–4244.
  • [26] Y. Gao, T. Lin, F. Borrelli, E. Tseng, and D. Hrovat, “Predictive control of autonomous ground vehicles with obstacle avoidance on slippery roads,” in ASME 2010 dynamic systems and control conference.   American Society of Mechanical Engineers, 2010, pp. 265–272.
  • [27] P. Falcone, F. Borrelli, J. Asgari, H. Tseng, and D. Hrovat, “Low complexity mpc schemes for integrated vehicle dynamics control problems,” in 9th international symposium on advanced vehicle control, 2008.
  • [28] U. Rosolia, F. Braghin, A. Alleyne, and E. Sabbioni, “Nlmpc for real time path following and collision avoidance,” SAE International Journal of Passenger Cars-Electronic and Electrical Systems, vol. 8, no. 2015-01-0313, pp. 401–405, 2015.
  • [29] U. Rosolia, S. De Bruyne, and A. G. Alleyne, “Autonomous vehicle control: A nonconvex approach for obstacle avoidance,” IEEE Transactions on Control Systems Technology, 2016.
  • [30] A. Micaelli and C. Samson, “Trajectory tracking for unicycle-type and two-steering-wheels mobile robots,” Ph.D. dissertation, INRIA, 1993.