Robust Tracking with Model Mismatch for Fast and Safe Planning: an SOS Optimization Approach

by Sumeet Singh, et al.
University of California, Berkeley
Stanford University

In the pursuit of real-time motion planning, a commonly adopted practice is to compute a trajectory by running a planning algorithm on a simplified, low-dimensional dynamical model, and then employ a feedback tracking controller that tracks such a trajectory by accounting for the full, high-dimensional system dynamics. While this strategy of planning with model mismatch generally yields fast computation times, there are no guarantees of dynamic feasibility, which hampers application to safety-critical systems. Building upon recent work that addressed this problem through the lens of Hamilton-Jacobi (HJ) reachability, we devise an algorithmic framework whereby one computes, offline, for a pair of "planner" (i.e., low-dimensional) and "tracking" (i.e., high-dimensional) models, a feedback tracking controller and associated tracking bound. This bound is then used as a safety margin when generating motion plans via the low-dimensional model. Specifically, we harness the computational tool of sum-of-squares (SOS) programming to design a bilinear optimization algorithm for the computation of the feedback tracking controller and associated tracking bound. The algorithm is demonstrated via numerical experiments, with an emphasis on investigating the trade-off between the increased computational scalability afforded by SOS and its intrinsic conservativeness. Collectively, our results enable scaling the appealing strategy of planning with model mismatch to systems that are beyond the reach of HJ analysis, while maintaining safety guarantees.








1 Introduction

As robotic systems become more pervasive, real-time motion planning becomes increasingly important. In general, motion planning algorithms must find a trade-off among three key challenges: (1) achieving fast computational speed to enable online re-planning, (2) accounting for complex system dynamics, and (3) ensuring formal safety and robustness guarantees. In particular, to enable high-frequency re-planning, a commonly adopted practice is to compute a trajectory by running a planning algorithm on a simplified, low-dimensional dynamical model, oftentimes just a single-integrator model. A feedback tracking controller that accounts for the full, high-dimensional dynamics of the system is then used to track such a trajectory (we refer to this approach as planning with model mismatch, see Figure 1). This approach usually guarantees fast planning speeds, but guaranteeing safety becomes difficult, due to the mismatch between the model used for planning and the actual dynamics of the robotic system. This approximation can introduce tracking error between the planned path and the actual trajectory of the system, potentially resulting in safety violations. Conversely, one could compute a collision-free motion plan by directly accounting for the full set of differential constraints of a robotic system (e.g., by using a general-purpose kinodynamic motion planning algorithm [16]). However, despite significant recent progress [17, 14, 13, 18], kinodynamic motion planning is still computationally intensive [16]. For robots characterized by “slow” or differentially flat dynamics, gradient-based trajectory optimization techniques (e.g., [27, 31]) may provide quick convergence to a feasible solution, but extending such methods to systems with complex dynamics is challenging.

In general, providing formal guarantees on safety and robustness typically requires reasoning on the full dynamical model of a robotic system. One of the most powerful tools in this regard is the concept of “funnels” [5]. The key idea is to ensure that the trajectory of a system remains within precomputed funnels [35, 20, 19]; this is often achieved via Lyapunov analysis techniques. However, while able to provide formal guarantees around nominal trajectories, these methods are less suitable for a priori unknown environments in which safe trajectories must be found online. The work in [21] tackles this challenge by considering a library of funnels, from which maneuvers are composed in sequence to form guaranteed safe trajectories. To obtain trajectory-independent tracking error bounds, contraction theory and nonlinear optimization techniques can be used to construct tubes around dynamically-feasible trajectories obtained from any planning algorithm [32]. However, as mentioned above, rapidly computing dynamically-feasible nominal trajectories poses a significant computational challenge.

Figure 1: Left: Planning with model mismatch. The tracking system (car, red trajectory) follows a motion plan computed via a low-fidelity dynamical model (blue trajectory). Tracking with model mismatch while maintaining safety requires keeping the maximum tracking error bound (TEB) below a certifiable value. Right: TEB certification. The TEB is characterized by the property that on its boundary, all possible error dynamics resulting from the nominal trajectory point inwards under some tracking controller, so that the TEB is never exceeded.

To tackle real-time motion planning problems, recent works such as [6, 11, 15] combine low- and high-fidelity models to strike a balance among the three aforementioned challenges of computational speed, model accuracy, and robustness. In [15], a forward reachable set for the high-fidelity model is computed offline and then used to prune trajectories generated online using the low-fidelity model. The approach relies on an assumed model mismatch bound, expressed as the maximum distance between the low- and high-fidelity models under a certain metric. FaSTrack [6, 11] considers a slightly different definition of model mismatch, and casts a motion planning problem as a differential game wherein a low-fidelity “planning” system is being pursued (i.e., tracked) by a high-fidelity “tracking” system. Hamilton-Jacobi (HJ) reachability analysis is used to precompute a worst-case tracking error bound (TEB) as well as the optimal feedback tracking controller. Online, FaSTrack plans in real time using the low-fidelity planning system, while simultaneously tracking the plan and remaining within guaranteed tracking error bounds – despite the model mismatch. FaSTrack’s ability to ensure real-time planning of dynamic systems with safety guarantees makes it a desirable framework, but it requires a significant precomputation step in computing the TEB and controller. Executing this precomputation using HJ reachability is reliable and straightforward for nonlinear, highly-coupled dynamical systems with up to five states, or higher-dimensional systems of a specific form [8]. However, the computations required for HJ reachability analysis scale exponentially with the number of states, thus making FaSTrack difficult to apply to systems with a large number of states and/or highly-coupled dynamics.

Building upon FaSTrack, we present an approach to designing a feedback tracking controller along with guaranteed tracking error bounds via sum-of-squares (SOS) programming. SOS programs can be cast as semidefinite programs (SDPs), which may be solved using standard convex optimization toolboxes. SOS programming has been used extensively to provide convex relaxations to fundamentally non-convex or even NP-hard problems arising in robust control [35, 20, 19, 21, 24]. In the context of this work, we leverage SOS to derive sufficient conditions for bounding tracking control error. The price paid for such a convex relaxation is in the tightness of the error bound guarantee (which corresponds to the size/computational tractability of the SOS SDP), but critically not in the existence of the guarantee itself, provided the SOS program is feasible. SOS programming thus represents an attractive alternative to exact HJ reachability for those robotic systems where HJ is not directly applicable due to computational intractability. In addition, this paper, as well as FaSTrack [6, 11], has the potential to complement works such as [15] that assume a model-mismatch error, by providing the TEB as well as a feedback tracking controller to achieve such a TEB.

Statement of Contributions: The contribution of this paper is threefold. First, we provide an SOS-based formulation for tracking a nominal motion plan computed on a simplified low-dimensional model. Second, leveraging the SOS formulation, we devise a bilinear optimization algorithm to jointly compute, offline, a trajectory-independent feedback tracking controller and the corresponding TEB. Finally, we demonstrate our approach on a 5D example for comparison with FaSTrack [11], and on an 8D example which is beyond the reach of HJ analysis. Collectively, our three contributions enable scaling the appealing strategy of planning with model mismatch to systems that are beyond the reach of HJ analysis, while maintaining safety guarantees.

Organization: The remainder of the paper is organized as follows. In Section 2 we provide the problem statement and give necessary background on SOS programming. Section 3 formalizes the problem as a bilinear SOS program, while Section 4 develops an algorithmic framework for its solution. In Section 5 we present numerical experiments. Finally, in Section 6, we draw our conclusions and suggest directions for future work.

2 Preliminaries

In this section, we present the problem formulation, and provide the necessary background material related to sum-of-squares programming.

2.1 Problem Formulation

Consider a relatively high-fidelity model of a robotic system, referred to as the “tracking model,” and a lower-fidelity model, referred to as the “planning model,” described, respectively, by the ordinary differential equations (ODEs)


where is the tracking model’s state (or “tracking state”), and is its control input (or “tracking control”), constrained to lie within the compact set . Similarly, is the planning state and is the planning control input, constrained to lie within the compact set . The planning model can be thought of as a coarser or lower-fidelity model for which motion planning can be done effectively in real time. For both ODEs in equation (1), we assume that the dynamics are Lipschitz continuous in the state for fixed controls, so that given any measurable control function, a unique trajectory exists for each system [10]. Furthermore, we assume that the planning state is a strict subset of the tracking state , in particular (this assumption allows one to clearly relate the tracking and planning state, as discussed in Section 3.1), and that the dynamics for the tracking model are control affine (a rather mild condition that significantly simplifies the SOS formulation, as discussed in Section 3.3).

Crucially, trajectories generated by using the planning model may not be dynamically feasible with respect to the tracking model. Thus, a tracking controller that accounts for the high-fidelity dynamics of the robotic system, as represented by the tracking model, may not be able to track these trajectories exactly and some (possibly large) tracking error may be incurred. Figure 1 illustrates this behavior, for the case of a 4D car model used to track a 2D single integrator model. Specifically, a planning algorithm is used to “quickly” find a nominal motion plan for the planning model, depicted by the dashed blue line. The robot, represented more accurately by the tracking model, strives to track the nominal motion plan as closely as possible, giving rise to the red trajectory. The presence of a minimum turning radius constraint in the tracking model prevents the robot from tracking the sharp corners exactly, thereby yielding a tracking error.

Such a decoupling approach is quite common in the field of motion planning [16], yet very few results exist on how to account for the resulting tracking error. In particular, we advocate that the computation of the nominal motion plan via the planning model should account for the tracking error in order to ensure safe execution (i.e., tracking) by the real system, described by the tracking model. Accordingly, the objective of this paper is to devise an efficient and scalable algorithm to compute (1) a trajectory-independent TEB (represented by the light blue circle in Fig. 1, left), which the planner can use as a “safety buffer” to ensure that the actual trajectory is collision free, and (2) a feedback tracking controller to track such a nominal motion plan while respecting the TEB. Intuitively, the tracking controller should have the property that on the boundary of the TEB, the error dynamics are driven “inwards,” regardless of the nominal motion plan (depicted in Fig. 1, right).

2.2 Sum-of-Squares (SOS) Programming Background

Our approach is rooted in SOS programming, for which we provide a brief review here (for a more detailed review of SOS programming and its applications, please refer to [23, 2, 21]). We begin by discussing semidefinite programs (SDPs), a class of convex optimization problems formulated over the space of symmetric positive semidefinite (psd) matrices. Denote the set of n × n symmetric matrices as S^n. A symmetric matrix X ∈ S^n is positive semidefinite (psd) if v^T X v ≥ 0 for all v ∈ R^n, and is denoted as X ⪰ 0. An SDP in standard form is written as:

minimize_X   Tr(C X)
subject to   Tr(A_i X) = b_i,   i = 1, …, m,
             X ⪰ 0,

where C and A_1, …, A_m are elements of S^n, and “Tr” denotes the trace operator. SOS programs provide a means of certifying nonnegativity of polynomials either globally or over basic semialgebraic sets. A basic semialgebraic set is a subset of the Euclidean space characterized by a finite set of polynomial inequalities and equalities, that is,

S = { x ∈ R^n : g_i(x) ≥ 0, i = 1, …, m;  h_j(x) = 0, j = 1, …, ℓ },

where the g_i and h_j are multivariate polynomials in x. The simplest task within this class of problems is verifying the nonnegativity of a given polynomial q over S, which is already NP-hard [23]. SOS programming provides a convex relaxation approach to accomplish this task. Specifically, a polynomial q is SOS if it can be written in the form q(x) = Σ_i q_i(x)^2 for some other polynomials q_i. While such a decomposition is not necessary, it is sufficient to guarantee (global) nonnegativity of q. If one can find a set of SOS polynomials σ_i and ordinary polynomials λ_j such that

q(x) − Σ_i σ_i(x) g_i(x) − Σ_j λ_j(x) h_j(x)   is SOS,

then one obtains a certificate of nonnegativity of q over S. Indeed, in the above equation, whenever g_i(x) ≥ 0 and h_j(x) = 0, one has that q(x) ≥ 0, as required. Such a certificate is the extension of the generalized S-procedure [12] to the setting of real-valued polynomials [23], and additionally constitutes a necessary condition for a subclass of semialgebraic sets [25].

The computational advantage of SOS programming stems from its intrinsic link to SDPs. Specifically, a polynomial q of degree 2d is SOS if and only if q(x) = z(x)^T Q z(x) for some psd matrix Q, where z(x) is a vector of monomials in x up to degree d. Thus, certifying that a polynomial is SOS reduces to the task of finding a psd matrix Q subject to a finite set of linear equalities, thus taking the form in (2). Certificates of the form in (4) will form the building block for our approach.
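As a concrete illustration of this Gram-matrix reduction, the sketch below certifies a simple hand-picked polynomial as SOS; the polynomial, monomial basis, and factorization are chosen purely for illustration and are not taken from the paper.

```python
import math

# Monomial vector z(x) = [1, x, x^2] for a univariate degree-4 polynomial.
def z(x):
    return [1.0, x, x * x]

# Gram matrix Q with p(x) = z(x)^T Q z(x) for p(x) = x^4 + 2x^2 + 1.
# Q is psd: it factors as Q = L^T L with L = [[1, 0, 1]], so
# z^T Q z = (1 + x^2)^2 >= 0, which certifies (global) nonnegativity.
Q = [[1.0, 0.0, 1.0],
     [0.0, 0.0, 0.0],
     [1.0, 0.0, 1.0]]

def quad_form(Q, v):
    n = len(v)
    return sum(Q[i][j] * v[i] * v[j] for i in range(n) for j in range(n))

def p(x):
    return x**4 + 2 * x**2 + 1

# The Gram representation reproduces p exactly (checked here at sample
# points), and the rank-one factorization exhibits the SOS decomposition.
for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    assert math.isclose(quad_form(Q, z(x)), p(x))
    assert math.isclose(p(x), (1 + x * x) ** 2)
print("Gram-matrix SOS certificate verified at sample points")
```

In an actual SOS program, Q would be a decision variable of an SDP constrained by the linear equalities matching coefficients of p; here it is written out by hand to keep the sketch self-contained.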

3 SOS-Based Robust Tracking

3.1 Relative System

Given the tracking  and planning models as in equation (1), we define the relative system state as , where , with , is a matrix-valued map with bounded derivatives, and is a projection matrix from to ( and denote the identity and zero matrices, respectively). In the definition of , we leveraged the assumption that the planning state is a strict subset of the tracking state . We make the following assumption on the dynamics of the relative system state, which is central for our approach:

Assumption 1 (Relative System Dynamics)

The relative system dynamics can be written as a Lipschitz continuous function of , that is, .

Lipschitz continuity is necessary for the existence and uniqueness of solutions to the ODEs. The structural property in Assumption 1 is satisfied for a number of dynamical models of mobile robotic systems. For example, for ubiquitous cases where the planning model has integrator dynamics, Assumption 1 is satisfied by simply selecting as the identity map. However, in general, it is difficult to characterize the conditions on the planning/tracking models such that Assumption 1 is satisfied. Therefore, in Table 1 in Appendix A, we provide a “catalog” of tracking and planning models fulfilling Assumption 1 for a representative set of robotic systems, along with the map that guarantees fulfillment of Assumption 1 and the resulting relative system dynamics. Specifically, the relative system states in the 6th column are obtained by combining the tracking state in the 2nd column, planning state in the 4th column, and the transformation in the 5th column via . Henceforth, we will consider planning/tracking models that fulfill Assumption 1, and refer to the system as the relative system.

Finally, we define the error state as the relative system state excluding the absolute states of the tracking model, and the auxiliary state as the relative system state excluding the error state. Hence, . For instance, for the case of a 5D car model used to track a 3D Dubins car model (second example in Table 1), and . Roughly speaking, considering relative system dynamics allows one to reformulate the tracking problem as a stabilization problem to the origin.
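As a toy illustration of Assumption 1 (with models chosen for simplicity here, not one of the paper's catalog entries in Table 1), consider a double-integrator tracking model and a single-integrator planning model; with the identity transformation, the relative dynamics depend only on the relative state and the two control inputs, not on absolute position.

```python
# Hypothetical model pair: tracking state s = (x, v) with acceleration
# input u_s; planning state p = (x_p) with velocity command u_p.

def tracking_dynamics(x, v, u_s):
    return (v, u_s)            # x_dot = v, v_dot = u_s

def planning_dynamics(x_p, u_p):
    return (u_p,)              # x_p_dot = u_p

# Relative state r = (e, v) with position error e = x - x_p (identity map).
def relative_dynamics(e, v, u_s, u_p):
    return (v - u_p, u_s)      # e_dot = x_dot - x_p_dot, v_dot = u_s

# The relative dynamics are a function of (r, u_s, u_p) alone: shifting
# both models by the same absolute offset leaves r_dot unchanged.
for shift in [0.0, 5.0, -3.2]:
    x, x_p, v, u_s, u_p = 1.0 + shift, 0.4 + shift, 0.7, -0.2, 0.3
    e = x - x_p
    xd, vd = tracking_dynamics(x, v, u_s)
    (xpd,) = planning_dynamics(x_p, u_p)
    assert relative_dynamics(e, v, u_s, u_p) == (xd - xpd, vd)
print("relative dynamics are shift-invariant")
```

For this pair the error state is e and the auxiliary state is v, mirroring the decomposition described above.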

3.2 Optimization Problem

In order to define a notion of tracking error bound (TEB), let be a smooth mapping representing a vector of quantities we wish to remain bounded, with ; henceforth, we will refer to as the bounded state. The simplest example for is the identity map. However, for more complex planning/tracking models, the mapping can be more sophisticated to better capture the structure of the planner/tracker dynamics pair – some examples are provided in Table 1. The goal then is to find a closed and bounded set in the bounded state such that for all (as an illustration, see the light blue ball in Figure 1). Such an invariance condition is captured by the implication


To parameterize this set, let be a positive constant and be a smooth function with a bounded sublevel set. Also, define the function as simply the composition . Suppose that the following implication is true:

That is, there exists some tracking control such that the time-derivative of on the boundary is negative. It follows that the set is invariant. Then, the set is a valid error bound in the sense of equation (5). An illustration of this idea is provided in Fig. 1 (right).
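The invariance argument can be sketched numerically on a hypothetical scalar example; the dynamics, disturbance bound, and level value below are all illustrative assumptions, not the paper's models.

```python
import math

# Toy error dynamics e_dot = -e + d with |d| <= d_max, where d plays the
# role of the worst-case planner influence; V(e) = e^2, candidate level
# rho = 1 (both assumed values for illustration).

d_max = 0.5
rho = 1.0

def V(e):
    return e * e

def V_dot(e, d):
    return 2 * e * (-e + d)    # dV/de * e_dot

# On the boundary V(e) = rho, V_dot < 0 for every admissible disturbance,
# so trajectories starting inside {V <= rho} can never leave it: the
# sublevel set is a valid (toy) tracking error bound.
for e in (math.sqrt(rho), -math.sqrt(rho)):
    worst = max(V_dot(e, d) for d in (-d_max, d_max))
    assert worst < 0.0
print("sublevel set {V <= 1} is invariant for the toy system")
```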

Note that the set is equivalent, up to a translation and scaling factor, to the set , where and . To eliminate this redundancy, we impose, without loss of generality, the conditions and . The choice essentially corresponds to a translational adjustment that “centers” the set on a zero bounded state.

In the context of this paper, set is used as a “buffer” that a planner should consider along each candidate nominal motion plan, to ensure that the realized tracking trajectory is safe, that is, collision free. Clearly, the search for set is intertwined with the search for a tracking controller that can keep the realized tracking trajectory within it. We parameterize the tracking controller as a function of the relative system state and planning control signal, that is . Denote the closed-loop dynamics as and define

The search for a suitable function and controller is then formalized by the optimization problem:

subject to (6b)

We will next encode the constraints and objective of problem (6) as SOS certificates.

3.3 Reformulating the Constraints as SOS Certificates

We consider the constraints first. Under the assumption that the dynamics for the tracking model are control affine (common for most robotic systems), the function is control affine in . That is, takes the form . Thus,

Hence, the invariance condition requires the existence of a tracking controller such that


We can equivalently state inequality (7) as:


The equivalence between inequalities (7) and (8) follows from the observation that inequality (7) holds for some tracking control if and only if it holds for a control minimizing the left-hand side in inequality (8). Importantly, the minimization in inequality (8) is independent of the planning control, as the constraint set and objective coefficient vector do not depend on it. Thus, we can simplify the tracking controller by making it a function of the relative state only (with a slight abuse of notation, we still refer to such a simplified tracking controller with the same symbol).
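Because the inner minimization in inequality (8) is linear in the control over a polytope, its optimum is attained at a vertex; for box constraints this reduces to a componentwise sign rule. A minimal sketch, where the coefficient vector is a hypothetical stand-in for the gradient term multiplying the control:

```python
from itertools import product

def minimizing_control(c, u_max):
    """Componentwise minimizer of c^T u over the box |u_i| <= u_max_i."""
    return [-um if ci > 0 else um for ci, um in zip(c, u_max)]

def brute_force(c, u_max):
    """Enumerate the 2^m vertices of the box (for verification only)."""
    best = None
    for u in product(*[(-um, um) for um in u_max]):
        val = sum(ci * ui for ci, ui in zip(c, u))
        if best is None or val < best[0]:
            best = (val, list(u))
    return best[1]

c = [1.5, -0.2, 0.0]          # hypothetical coefficient vector
u_max = [1.0, 2.0, 0.5]       # box control limits
u_star = minimizing_control(c, u_max)

# The closed-form sign rule attains the same optimal value as exhaustive
# vertex enumeration, while scaling linearly in the input dimension.
assert sum(ci * ui for ci, ui in zip(c, u_star)) == \
       sum(ci * ui for ci, ui in zip(c, brute_force(c, u_max)))
print("closed-form minimizer matches vertex enumeration")
```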

We can now define SOS certificates for the constraints in problem (6). Suppose set can be written as the semialgebraic set and let the controller be a polynomial function in . Expanding , one can encode constraint (6b) using the multiplier polynomials such that


(In the following we drop the summation notation and write the collective sum as a “dot-product:” ). To capture the control constraint in (6c), one may leverage two different techniques. The first one is to compose the tracking controller with a saturation function and write in case form corresponding to the unsaturated and saturated regimes, respectively. However, this approach scales exponentially in the dimension of the control input, as one needs to capture all combinations of saturation regimes [21]. Instead, we assume that the tracking control set is polytopic, i.e., , where is a set of linear inequalities in of the form , . Note that this is not an overly restrictive assumption as the control constraints are often taken to be box constraints. Then, we encode constraint (6c) using the multipliers :


Crucially, this approach scales linearly with respect to the number of inputs.

3.4 Reformulating the Objective as SOS Certificates

Minimizing the volume of , which itself is a non-linear mapping of the sublevel set of , is a difficult task if one reasons in terms of arbitrary functions . One approach is to approximate the volume by the integral of over a Euclidean ball of radius such that the ball is contained within the sublevel set of – an approach taken, for example, in [24]. Alternatively, one can minimize the volume of an encapsulating ellipsoid, easily captured using an additional constraint and a convex pseudo-objective.

Toward this end, we define the ellipsoid for some positive definite matrix , where is a small tolerance parameter. Then, the inclusion constraint is captured using the SOS multiplier :


As the volume of is inversely proportional to the square root of its determinant, the minimum volume objective reduces to maximizing the determinant of , and can be written in the form of (2) [4, Chapter 4]. For non-identity mappings , the first SOS constraint in equation (11) can quickly become computationally challenging due to the quadratic terms in and the complexity of the relation . In this case, one may consider the following simplification. Since


the simplification is to outer bound the set on the right by using an ellipsoid. Specifically, let the ellipsoid be given by for some positive definite matrix . The inclusion condition , is captured using the SOS multiplier :


Crucially, the constraints above are deliberately written with respect to the variable , which is treated as an independent indeterminate from . This is beneficial when using a mapping other than the trivial identity map, as it allows one to approximate the otherwise complex set with potentially simpler functions in .
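The determinant-based pseudo-objective can be sanity-checked on a small example; the 2D case below is illustrative only, with the ellipsoid written as {x : x^T E x <= 1} for a positive definite E.

```python
import math

# The ellipsoid {x : x^T E x <= 1} has volume proportional to
# det(E)^(-1/2), so maximizing det(E) minimizes the volume.
# In 2D the proportionality constant is pi.

def det2(E):
    return E[0][0] * E[1][1] - E[0][1] * E[1][0]

def ellipse_area(E):
    return math.pi / math.sqrt(det2(E))

# Axis-aligned sanity check: semi-axes a and b give E = diag(1/a^2, 1/b^2)
# and the familiar area pi * a * b.
a, b = 2.0, 0.5
E = [[1 / a**2, 0.0], [0.0, 1 / b**2]]
assert math.isclose(ellipse_area(E), math.pi * a * b)

# Scaling E by 4 (tightening the set) shrinks the area by sqrt(det ratio) = 4... 
# concretely, det grows by 16, so the area shrinks by a factor of 4... no:
# area scales as det^(-1/2), i.e. by 1/4 here? Each semi-axis halves, so the
# area shrinks by a factor of 4.
E4 = [[4 * Eij for Eij in row] for row in E]
assert math.isclose(ellipse_area(E4), ellipse_area(E) / 4)
print("area =", round(ellipse_area(E), 4))
```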

3.5 SOS Formulation of Optimization Problem

Collecting all the results so far, our approach entails conservatively (as we rely on sufficient SOS certificates) solving the optimization problem (6) as an SOS program:

subject to eqs. (9), (10), (13) (14b)

where . Reinforcing the ideas in Section 3.4, we reiterate that is considered as a function in the independent indeterminate , while constraints (9), (10) are encoded using the indeterminate via the definition .

Constraints in (14b) are bilinear in the decision variables. Consequently, similar to the bilinear solution algorithms in [21] and [24], one must alternate between the decision variable sets and , each time holding the other variable set fixed. Specifically, we refer to the sub-problem where is part of the decision variable set as the sub-problem, and the sub-problem where is part of the decision variable set as the sub-problem. Direct implementation of alternations is hindered by two challenges. First, one requires a feasible initial guess for to begin the alternations. In [21] and [24], the authors leverage locally linearized models and the solution to the Riccati equation to generate such a guess. In this paper we address a fundamentally more challenging problem as we consider the controllability of a relative dynamical system between two different dynamical models. Consequently, the relative system is often not linearly controllable at (even if the individual models are) and thus one requires an additional procedure to generate a feasible function for problem (14) from an initially infeasible guess. Second, alternating optimization is prone to numerical instabilities as the solutions of the individual convex problems frequently lie on boundaries of the respective feasible sets. We address these challenges next.
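The alternation scheme can be sketched on a toy bilinear problem that is convex in each variable separately; the objective below is purely illustrative and stands in for the two SOS sub-problems.

```python
# Toy bilinear objective f(x, y) = (x*y - 2)^2 + 0.1*(x^2 + y^2):
# non-convex jointly, but quadratic (hence convex) in x for fixed y and
# in y for fixed x, so each "sub-problem" has a closed-form solution.

def f(x, y):
    return (x * y - 2) ** 2 + 0.1 * (x * x + y * y)

def solve_x(y):           # analogue of one sub-problem (y held fixed)
    return 2 * y / (y * y + 0.1)

def solve_y(x):           # analogue of the other sub-problem (x held fixed)
    return 2 * x / (x * x + 0.1)

x, y = 0.3, 1.0           # feasible initial guess
values = [f(x, y)]
for _ in range(20):
    x = solve_x(y)
    values.append(f(x, y))
    y = solve_y(x)
    values.append(f(x, y))

# Exact alternating minimization yields a monotonically non-increasing
# objective, but only a local solution in general -- mirroring the need
# for a good initial guess in the bilinear SOS program.
assert all(v2 <= v1 + 1e-12 for v1, v2 in zip(values, values[1:]))
print("objective decreased from", round(values[0], 3), "to", round(values[-1], 3))
```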

4 Solving the Bilinear Optimization Problem

The general idea to tackle both of the aforementioned challenges entails using iterative slack minimization. We first discuss the solutions to the and sub-problems (Sections 4.1 and 4.2, respectively) and then present the general solution algorithm in Section 4.3.

4.1 The Sub-Problem

A naïve implementation of the sub-problem would entail solving problem (14) with respect to for a given . However, constraint (9) as written may generate controllers that are numerically unstable when held fixed in the sub-problem. Given the polytopic constraint set and a fixed , one can exactly characterize the “most stabilizing controller” as the function that minimizes the left hand side in inequality (8) for all such that . Thus, we propose the following two-step method for solving the sub-problem, for a given function :

  1. Find the “most stabilizing controller” as the solution to:

    subject to (15b)

    where is a slack variable for the invariance constraint. Notice that when and , then . Denote the optimal slack as .

  2. Compute the tightest bounding ellipsoid :

    subject to (16b)

The two steps above comprise the sub-problem. The benefit of decomposing the sub-problem in this fashion is that we independently search for the most stabilizing controller while simultaneously relaxing the invariance constraint by using the slack variable ; in particular, accounts for the suboptimality of the computed controller with respect to the left hand side of inequality (9).

4.2 The Sub-Problem

Given a controller , and multiplier polynomials , , and from the sub-problem, the sub-problem is defined as

subject to (17b)

where is a slack vector for the control constraints and is a Pareto trade-off parameter. Notice that the control slack is only necessary in the sub-problem since the controller is held fixed within this problem. In contrast, in problem (15), we directly optimize over the set of strictly feasible controllers. Given these slack-based relaxed sub-problems, we now provide a solution algorithm for problem (14).

4.3 Solution Algorithm

Before providing the full solution algorithm, we describe an initialization procedure that generates a numerically stable initial guess for . The procedure is detailed in Algorithm 1. In particular, the algorithm may be initialized by using any polynomial guess for such that (e.g., ).

1:Input: Initial guess satisfying ; slack tolerance ; max # of iterations .
3:while  do
4:      Solve  (15)–(16).
5:     if  then return ()
6:      Solve  (17) with and subject to additional constraint .
7:     if  then return ()
8:     .
9:end while
10:return Failure.
Algorithm 1 Generating Feasible Guess

Notice that in line 6, when problem (17) is solved, the Pareto parameter is set to 0 since the objective for the initialization algorithm is to simply generate a feasible initial guess for problem (14). Furthermore, we impose the additional constraint to ensure monotonic improvement in solution quality between sub-problems. As the original problem is still non-convex, the convergence of the slack variables below the tolerance threshold is not guaranteed for every initial guess of . However, typical guesses such as for a positive diagonal matrix appear to work quite well, as the experiment section will illustrate.

Given an initial guess generated by Algorithm 1, we are now ready to solve problem (14). To do so, we will make use of a slightly modified version of the sub-problem, given below:

subject to eqs. (17b) – (17f) (18b)

where is the previous numerically stable solution (i.e., with optimal slack values ), is a fixed parameter, and is a backtracking search parameter that is adjusted in an iterative fashion. Note that the Pareto parameter now multiplies the slack terms in the objective. Specifically, we iteratively solve problem (18), each time checking the slack tolerances to ensure numerical stability, while using constraint (18c) to enforce a trust region around the current numerically stable solution. The full solution algorithm is summarized in Algorithm 2.
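The trust-region backtracking logic used here can be sketched with a scalar toy problem; the feasibility test below is a hypothetical stand-in for the slack-tolerance check, and all numbers are illustrative.

```python
# Toy stand-in for Algorithms 2-3: try to grow a scalar "rho" inside a
# trust region of radius delta around the last numerically stable value;
# if the feasibility check (slack-tolerance analogue) fails, shrink the
# trust region geometrically and retry.

def feasible(rho):
    # hypothetical stand-in for "all slacks below tolerance"
    return rho <= 1.37

def backtrack_step(rho_stable, delta, shrink=0.5, max_tries=20):
    for _ in range(max_tries):
        candidate = rho_stable + delta     # optimistic step in the region
        if feasible(candidate):
            return candidate               # accept the improved solution
        delta *= shrink                    # backtrack: shrink trust region
    return rho_stable                      # no progress: keep stable value

rho = 1.0
for _ in range(10):                        # outer loop analogue
    rho = backtrack_step(rho, delta=0.5)

# Progress is monotone and every accepted iterate stays feasible, mimicking
# the "numerically stable solution" invariant maintained by the algorithm.
assert feasible(rho) and rho > 1.3
print("final rho:", round(rho, 4))
```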

1:Input: Output tuple from Algorithm 1; slack tolerance ; termination tolerance .
2:, converged false.
3:Initialize: .
4:while  converged do
5:      Solve (15)– (16) subject to additional constraint: .
6:     if Line 5 successfully solved then
7:          .
8:           Backtrack.
9:          if  then converged trueelse .
10:     else
11:          converged true.
12:     end if
13:end while
Algorithm 2 Solving Problem (14)
1:Input: Solution and from line 5 in Algorithm 2 and current best solution ; slack tolerance ; backtrack parameters .
2:Initialize: , bt false, .
3:while bt do
4:      Solve  (18).
5:     if  then else bt true.
6:end while
7:while bt  do
8:     .
9:      Solve  (18).
10:     if  then
11:          .
12:          bt false.
13:     end if
14:end while
Algorithm 3 Backtrack

The backtrack search procedure in line 8 is summarized in Algorithm 3. Within this procedure, we first iteratively maximize within the trust region (18c) centered on the previous stable numerical solution, using the slack values as a check on solution quality (lines 3-6). Once solution quality degrades, we backtrack (shrink) the trust region in lines 7-14 until we again fall below the slack tolerances. Note that Algorithm 3 will either return an updated that is numerically stable (with respect to slack tolerances), or it will simply return the numerically stable solution from line 5 in Algorithm 2 if unable to make progress. Thus, Algorithm 2 terminates if either (1) improvement in stalls, or (2) the function returned by the backtrack procedure is not strictly feasible (i.e., ) with respect to line 5 in Algorithm 2, but is still acceptable according to the slack tolerances.

The key difference between the alternating method described here and the similar procedure in [24] is that the authors of [24] minimize the slack variable in both sub-problems and use binary search to iteratively refine an upper bound on the cost function within each sub-problem. Our algorithm, by contrast, maintains numerical stability by using the slack variables solely as a check on the allowable change in the solution, while minimizing the true objective (i.e., ) within both sub-problems. This is especially useful in situations where the second phase of the algorithm struggles to make notable improvements on the objective (e.g., due to a smaller set of decision variables), thereby allowing the other sub-problem to take over.

5 Numerical examples

In this section we numerically validate our proposed approach. Specifically, in Section 5.1, we compare our approach with the HJ-based approach in [11], while in Section 5.2, we study a high-dimensional system which is beyond the reach of HJ analysis. Additional numerical examples are provided in Appendix B. The code is available at

5.1 Comparison with the HJ Method

In this section, we compare our method with the HJ-based approach in [11] for the case of a 5D car model used to track a 3D Dubins car model (that is, the second example in Table 1). This example was chosen because five dimensions is the current limit for standard grid-based HJ reachability methods when techniques such as decomposition cannot be applied. The system dynamics and bounded state definition are provided in the second row of Table 1. The model parameters were chosen as m/s, rad/s, m/s, rad/s. For the SOS method, we parameterized and as 2nd order polynomials in and , respectively. The trigonometric terms and were approximated with Chebyshev polynomials up to degree 2, over the range rad. To ensure the validity of these approximations, an additional constraint was appended to problems (16) and (17), namely:


where and is a SOS polynomial in . The initial guess for was simply , where is a diagonal matrix with randomly sampled positive entries. In order to ensure a fair comparison, the cost function in the min-max game for the HJ method was , which is the closest analogue to the SOS objective of minimizing the entire volume of and not simply its projection onto the position coordinates. Figure 2 plots the projections of the boundary of set onto the components (i.e., the dimensions most relevant for collision checking) for the HJ and SOS solutions, respectively. The right-most panel in this figure provides a top-down view onto the plane. As expected, the HJ solution provides a tighter error bound – approximately 42% smaller than the SOS error bound, or about m in absolute terms.
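To illustrate the degree-2 Chebyshev approximation step, the sketch below fits sin and cos on an assumed interval of ±π/3 rad (the paper's exact range is not reproduced here) and measures the worst-case error, which is the quantity the additional SOS constraint must dominate.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Degree-2 Chebyshev approximations of sin and cos on an assumed
# interval [-pi/3, pi/3] rad (illustrative; not the paper's exact range).
# chebinterpolate works on [-1, 1], so the angle is rescaled by a.
a = np.pi / 3
sin_cheb = C.chebinterpolate(lambda t: np.sin(a * t), deg=2)
cos_cheb = C.chebinterpolate(lambda t: np.cos(a * t), deg=2)

def sin_approx(x):
    return C.chebval(np.asarray(x) / a, sin_cheb)

def cos_approx(x):
    return C.chebval(np.asarray(x) / a, cos_cheb)

# Worst-case approximation errors over the interval.
xs = np.linspace(-a, a, 1001)
err_sin = float(np.max(np.abs(np.sin(xs) - sin_approx(xs))))
err_cos = float(np.max(np.abs(np.cos(xs) - cos_approx(xs))))
```

Note that the even function cos is approximated noticeably better by a degree-2 polynomial than the odd function sin, for which the quadratic term contributes nothing.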

Figure 2: Projection of the boundary of onto . Left: SOS. Middle: HJ. Right: Top-down view (i.e., onto for both solutions). The HJ positional error bound is smaller (42% of SOS bound), but the SOS solution requires only 0.3% of the computation time required by HJ analysis.

The main reason is that a grid-based solution of the min-max HJ game computes the non-smooth “optimal controller” in (8) exactly, whereas the SOS approach must settle for its best polynomial approximation. On the other hand, the difference in computation time is substantial: approximately 25 hours for the HJ solution versus 5 minutes for the SOS solution. Specifically, the HJ solution was obtained by solving the corresponding HJ PDE [11] using a C++ implementation of the Lax–Friedrichs numerical scheme [34]. The SOS programs were solved using the Spotless polynomial optimization toolbox [36] and the MOSEK SDP solver [3]. Computations were performed on a desktop computer with an Intel Core i7-2600K CPU and 16 GB of RAM. At least for this example, the SOS approach appears to provide a high-quality approximation to the optimal (HJ-based) solution in a fraction of the time required by HJ analysis.

In Appendix B, we provide an additional comparison between the HJ and SOS methods for a 4D dynamically extended Dubins car tracking a 2D single integrator planning model (row 1 in Table 1). This example highlights a case in which the HJ method is efficient enough to warrant its use instead of the SOS method, i.e., when the dynamics have fewer than five dimensions and/or can be decomposed into subsystems that each have fewer than five dimensions. Here, HJ computed an optimal bound of 0.1 m between the two models, while SOS yielded a bound of 0.3 m; computation times were on the order of 2 minutes for SOS versus 5 minutes for HJ. The projection of the optimal onto the translational plane from the HJ method clearly illustrates the limitation of SOS in approximating non-smooth quantities. Herein lies the primary weakness of the SOS method: tight approximations require high-degree polynomials, which can rapidly increase the size of the SOS programs. Despite this limitation, the approach allows us to establish a conservative over-approximation of the tracking error bound (when one exists) for a given planner/tracker pair, and to gauge the sensitivity of the expected tracking error to changes in model parameters such as inertial properties and control authorities. Furthermore, as the next section illustrates, the SOS method allows us to perform this analysis for higher-dimensional systems that are severely challenging (albeit not impossible) for HJ. As such, the HJ and SOS methods should be viewed as complementary to one another.

5.2 High-Dimensional Example

In this section, we present numerical results for a high-dimensional system for which the HJ approach is intractable. Specifically, we consider an 8D plane model used to track a 4D decoupled Dubins plane model (that is, the fourth example in Table 1). To confirm that this example cannot be solved using standard grid-based HJ reachability, we first verify that decomposition techniques do not apply: the 8D relative system dynamics are highly coupled, so approximate decomposition would yield overly conservative results [8], and the dynamics do not form self-contained subsystems, making exact decomposition impossible [7]. This example illustrates the usefulness of our approach well. The planning model (i.e., the decoupled Dubins plane) is a benchmark planning example with well-characterized optimal trajectories [9], whereas planning trajectories online with the full 8D plane dynamics is considerably more difficult. The system dynamics and bounded state definition are provided in the fourth row of Table 1. The planner model parameters are as follows. The constant speed is set to the nominal speed of the plane in straight, level flight (lift equals gravitational force) at a nominal angle of attack , which corresponds to m/s (see [30] for the constants used to compute this quantity). The maximum magnitude of the planner's turning rate, , is set to be of the plane's maximum turning rate in a horizontal coordinated turn at nominal speed , and was computed to be rad/s (equivalently, a minimum turning radius of 30 m for the horizontal Dubins model). Finally, the planner's vertical velocity is limited to the range .
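As an illustration of how such planner parameters follow from trim conditions, the sketch below computes a trim speed and coordinated-turn rate from standard flight-mechanics relations. All aerodynamic constants, the bank-angle limit, and the turn-rate fraction are placeholder values, not the quantities from [30].

```python
import math

# Illustrative aerodynamic constants (NOT the values from [30]).
m, g = 1.0, 9.81           # mass [kg], gravitational acceleration [m/s^2]
rho_air, S = 1.225, 0.55   # air density [kg/m^3], wing area [m^2]
C_L = 0.6                  # lift coefficient at the nominal angle of attack

# Trim speed for straight, level flight: lift balances weight,
#   0.5 * rho * S * C_L * v^2 = m * g.
v_trim = math.sqrt(2.0 * m * g / (rho_air * S * C_L))

# Maximum turn rate in a horizontal coordinated turn at bank angle phi:
#   omega = g * tan(phi) / v.
phi_max = math.radians(30.0)   # assumed bank-angle limit
omega_max = g * math.tan(phi_max) / v_trim

# Planner turn rate as a fraction of the tracker's capability, and the
# corresponding minimum turning radius R = v / omega.
frac = 0.5                     # assumed fraction
omega_plan = frac * omega_max
R_min = v_trim / omega_plan
```

Reserving only a fraction of the tracker's turn rate for the planner is what leaves the tracking controller enough control authority to close the loop around the nominal plan.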

Taking advantage of the structure of the dynamics, the normalized acceleration control was chosen to exactly cancel drag, plus an additional 2nd degree polynomial in as determined by the SOS program. The remaining controller components and were likewise parameterized as 2nd order polynomials in and , respectively. The plane's (tracking) control limits were chosen to be m/s ( times the acceleration needed to cancel drag at level trim conditions), s, and s. To ensure the validity of the Chebyshev approximations for the trigonometric and terms in the dynamics, we enforce the additional constraints rad, rad, rad, and m/s. The overall (initialization plus main optimization) computation time was under 2 hours. The projection of the set onto (the position errors in the frame of the Dubins plane) had span m. The bound is naturally looser in the dimensions, since the planning model's horizontal velocity equals the plane's trim speed in level flight, limiting the plane's ability to use its remaining velocity range for both tracking and ascending/descending. Thus, for this choice of planning and tracking models, the bound appears reasonable. To obtain tighter bounds, one could instead use the coupled kinematic plane model proposed in [22] as the planning model.

For an application of this bound to online motion planning, we created a cluttered obstacle environment with trees and buildings (see Figure 2(a)). The goal region is the blue shaded box in the corner, and the plane starts at position . The nominal motion plan was computed using kinodynamic FMT* [29], with the locally optimal trajectories in [9] as steering connections. Having computed all steering connections offline (< 2 min of computation for 5000 samples), the online computation time for the trajectory was on the order of 100 ms. Collision checking was performed using obstacles inflated by the size of the plane (roughly a m box envelope) and by the projection of the ellipsoidal bound onto (rotated and translated by the orientation and position of the Dubins plane). In Figure 2(a), the nominal motion plan (i.e., the plan for the decoupled Dubins model) is shown in red, while the actual trajectory followed by the plane is shown in black. We overlay the projection of set onto position space, and its sweep along the nominal Dubins airplane motion plan. The snapshot confirms that this “tube” remains obstacle free, and highlights the necessity of the -based buffer as the plane negotiates difficult turns (see Figure 2(b)). A video of this simulation can be found at . Additional plots and simulations are provided in Appendix B.
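The obstacle-inflation step can be sketched as follows. The shape matrix, obstacle box, and waypoint below are hypothetical, and the sketch inflates by the conservative axis-aligned bounding box of the projected ellipsoid rather than the rotated projection used in the experiment.

```python
import numpy as np

def project_ellipsoid(M, idx):
    """Project the ellipsoid {e : e^T M e <= 1} onto the coordinates in
    idx; the shadow is itself an ellipsoid with shape matrix
    (S M^{-1} S^T)^{-1}, where S selects those coordinates."""
    Minv = np.linalg.inv(M)
    return np.linalg.inv(Minv[np.ix_(idx, idx)])

def bounding_half_widths(Mp):
    """Per-axis half-widths of the ellipsoid's axis-aligned bounding
    box: the square roots of the diagonal of the inverse shape matrix."""
    return np.sqrt(np.diag(np.linalg.inv(Mp)))

def inflate_box(lo, hi, pad):
    """Inflate an axis-aligned obstacle box by the tube half-widths."""
    return lo - pad, hi + pad

def collides(p, lo, hi):
    return bool(np.all(p >= lo) and np.all(p <= hi))

# Toy 4D tracking-error ellipsoid (hypothetical shape matrix), projected
# onto its first three (positional) coordinates.
M = np.diag([1.0, 4.0, 4.0, 0.25])
half = bounding_half_widths(project_ellipsoid(M, [0, 1, 2]))

lo, hi = np.array([5.0, 5.0, 0.0]), np.array([6.0, 6.0, 10.0])
lo_i, hi_i = inflate_box(lo, hi, half)

# A nominal waypoint just outside the raw obstacle is still rejected
# once the tracking-error tube is accounted for.
p = np.array([4.8, 5.5, 2.0])
safe_raw = not collides(p, lo, hi)
safe_tube = not collides(p, lo_i, hi_i)
```

Inflating obstacles once, offline, keeps the online collision check as cheap as a point-in-box test against the nominal plan.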

Figure 3: Left: snapshot of the nominal Dubins motion plan (red), along with the bounding ellipsoidal tube. The actual trajectory (black) stays within the ellipsoidal tube at all times. Right: Close-up of a tight turn at the end towards the goal.

6 Conclusions

In this work we have presented a principled and scalable approach for planning with model mismatch. Specifically, by harnessing the tool of SOS programming, we designed an algorithmic framework to compute, offline, for a given pair of planning and tracking models, a feedback tracking controller and an associated tracking bound; this bound can then be used by the planning algorithm as a safety margin when generating motion plans. The efficacy of the approach was verified via illustrative examples, including a comparison with a state-of-the-art method based on reachability analysis.

There are several key avenues for future work. First, we would like to investigate structural conditions that guarantee fulfillment of Assumption 1, and to augment the catalog of planning/tracking models in Table 1. Second, we plan to investigate how the conservativeness of our approach may be reduced by using more complex certificates, such as those based on Stengle's Positivstellensatz. Third, to fully explore the scalability advantages of our approach, we will consider more tractable relaxations of SOS programming, such as SDSOS and DSOS optimization [1]. Fourth, it is of interest to study how to minimize the projected volume of the TEB onto the error states of interest. Finally, we plan to implement our approach on agile robotic systems such as unmanned aerial vehicles.


  • [1] Ahmadi, A.A., Majumdar, A.: DSOS and SDSOS optimization: LP and SOCP-based alternatives to sum of squares optimization. In: IEEE Annual Conf. on Information Sciences and Systems (2014)
  • [2] Ahmadi, A.A., Majumdar, A.: Some applications of polynomial optimization in operations research and real-time decision making. Optimization Letters 20(4), 709–729 (2016)
  • [3] ApS, M.: MOSEK optimization software (2017), Available at
  • [4] Ben-Tal, A., Nemirovski, A.: Lectures on modern convex optimization: analysis, algorithms, and engineering applications. SIAM (2001)
  • [5] Burridge, R.R., Rizzi, A.A., Koditschek, D.E.: Sequential composition of dynamically dexterous robot behaviors. Int. Journal of Robotics Research 18(6), 534–555 (1999)
  • [6] Chen, M., Bansal, S., Fisac, J.F., Tomlin, C.J.: Robust sequential path planning under disturbances and adversarial intruder. IEEE Transactions on Control Systems Technology (2018), in press
  • [7] Chen, M., Herbert, S., Tomlin, C.J.: Fast reachable set approximations via state decoupling disturbances. In: Proc. IEEE Conf. on Decision and Control (2016)
  • [8] Chen, M., Herbert, S.L., Vashishtha, M.S., Bansal, S., Tomlin, C.J.: Decomposition of reachable sets and tubes for a class of nonlinear systems. IEEE Transactions on Automatic Control (2018), in press
  • [9] Chitsaz, H., LaValle, S.M.: Time-optimal paths for a Dubins airplane. In: Proc. IEEE Conf. on Decision and Control (2007)
  • [10] Coddington, E.A., Levinson, N.: Theory of Ordinary Differential Equations. McGraw-Hill (1955)
  • [11] Herbert, S.L., Chen, M., Han, S., Bansal, S., Fisac, J.F., Tomlin, C.J.: FaSTrack: a modular framework for fast and guaranteed safe motion planning. In: Proc. IEEE Conf. on Decision and Control (2017)
  • [12] Iwasaki, T., Hara, S.: Generalized KYP lemma: unified frequency domain inequalities with design applications. IEEE Transactions on Automatic Control 50(1), 41–59 (2005)
  • [13] Janson, L., Schmerling, E., Clark, A., Pavone, M.: Fast Marching Tree: a fast marching sampling-based method for optimal motion planning in many dimensions. Int. Journal of Robotics Research 34(7), 883–921 (2015)
  • [14] Karaman, S., Frazzoli, E.: Sampling-based algorithms for optimal motion planning. Int. Journal of Robotics Research 30(7), 846–894 (2011)
  • [15] Kousik, S., Vaskov, S., Johnson-Roberson, M., Vasudevan, R.: Safe trajectory synthesis for autonomous driving in unforeseen environments. In: Proc. ASME Dynamic Systems and Control Conf. (2017)
  • [16] LaValle, S.M.: Motion planning: Wild frontiers. IEEE Robotics and Automation Magazine 18(2), 108–118 (2011)
  • [17] LaValle, S.M., Kuffner, J.J.: Randomized kinodynamic planning. Int. Journal of Robotics Research 20(5), 378–400 (2001)
  • [18] Li, Y., Littlefield, Z., Bekris, K.E.: Asymptotically optimal sampling-based kinodynamic planning. Int. Journal of Robotics Research 35(5), 528–564 (2016)
  • [19] Majumdar, A., Ahmadi, A.A., Tedrake, R.: Control design along trajectories with sums of squares programming. In: Proc. IEEE Conf. on Robotics and Automation (2013)
  • [20] Majumdar, A., Tedrake, R.: Robust online motion planning with regions of finite time invariance. In: Algorithmic Foundations of Robotics X. Springer (2013)
  • [21] Majumdar, A., Tedrake, R.: Funnel libraries for real-time robust feedback motion planning. Int. Journal of Robotics Research 36(8), 947–982 (2017)
  • [22] Owen, M., Beard, R.W., McLain, T.W.: Implementing dubins airplane paths on fixed-wing UAVs. In: Handbook of Unmanned Aerial Vehicles. Springer Dordrecht (2015)
  • [23] Parrilo, P.A.: Structured Semidefinite Programs and Semialgebraic Geometry Methods in Robustness and Optimization. Ph.D. thesis, Massachusetts Inst. of Technology (2000)
  • [24] Posa, M., Koolen, T., Tedrake, R.: Balancing and step recovery capturability via sums-of-squares optimization. In: Robotics: Science and Systems (2017)
  • [25] Putinar, M.: Positive polynomials on compact semi-algebraic sets. Indiana Univ. Mathematics Journal 42(3), 969–984 (1993)
  • [26] Rajamani, R.: Vehicle Dynamics and Control. Springer, Boston, MA, 2 edn. (2012)
  • [27] Ratliff, N., Zucker, M., Bagnell, J.A., Srinivasa, S.: CHOMP: gradient optimization techniques for efficient motion planning. In: Proc. IEEE Conf. on Robotics and Automation (2009)
  • [28] Schmerling, E., Janson, L., Pavone, M.: Optimal sampling-based motion planning under differential constraints: the drift case with linear affine dynamics. In: Proc. IEEE Conf. on Decision and Control (2015)
  • [29] Schmerling, E., Janson, L., Pavone, M.: Optimal sampling-based motion planning under differential constraints: the driftless case. In: Proc. IEEE Conf. on Robotics and Automation (2015), Extended version available at
  • [30] Schmerling, E., Pavone, M.: Evaluating trajectory collision probability through adaptive importance sampling for safe motion planning. In: Robotics: Science and Systems (2017)
  • [31] Schulman, J., Ho, J., Lee, A., Awwal, I., Bradlow, H., Abbeel, P.: Finding locally optimal, collision-free trajectories with sequential convex optimization. In: Robotics: Science and Systems (2013)
  • [32] Singh, S., Majumdar, A., Slotine, J.J.E., Pavone, M.: Robust online motion planning via contraction theory and convex optimization. In: Proc. IEEE Conf. on Robotics and Automation (2017), Extended Version, Available at
  • [33] Steinhardt, J., Tedrake, R.: Finite-time regional verification of stochastic non-linear systems. Int. Journal of Robotics Research 31(7), 901–923 (2012)
  • [34] Tanabe, K., Chen, M.: beacls Library, Available at
  • [35] Tedrake, R., Manchester, I.R., Tobenkin, M., Roberts, J.W.: LQR-trees: Feedback motion planning via sums-of-squares verification. Int. Journal of Robotics Research 29(8), 1038–1052 (2010)
  • [36] Tobenkin, M.M., Permenter, F., Megretski, A.: SPOTLESS polynomial and conic optimization (2013), software available from


Appendix A Catalog of Planning/Tracking Models

Table 1 pairs each tracking system with its planning system; for each model we list the states, then the controls:

1. Tracking: 4D car (position, heading, speed; turn rate and acceleration controls). Planning: 2D single integrator (position; speed control).
2. Tracking: 5D car (position, heading, speed, turn rate; acceleration and angular acceleration controls). Planning: 3D Dubins car (position, heading, constant speed; turn rate control).
3. Tracking: 6D planar quadrotor (position, velocity, pitch, pitch rate; thrust and angular acceleration controls). Planning: 4D double integrator (position, velocity; acceleration control).
4. Tracking: 8D plane, see [30] (position, heading, speed, flight path angle, roll, angle of attack; control). Planning: 4D decoupled Dubins plane (see Section 5.2).