Computing Funnels Using Numerical Optimization Based Falsifiers

09/23/2021
by Jiří Fejlek, et al.

In this paper, we present an algorithm that computes funnels along trajectories of systems of ordinary differential equations. A funnel is a time-varying set of states containing the given trajectory, for which the evolution from within the set at any given time stays in the funnel. Hence it generalizes the behavior of single trajectories to sets around them, which is an important task, for example, in robot motion planning. In contrast to approaches based on sum-of-squares programming, which poorly scale to high dimensions, our approach is based on falsification and tackles the funnel computation task directly, through numerical optimization. This approach computes accurate funnel estimates far more efficiently and leaves formal verification to the end, outside all funnel size optimization loops.


1 Introduction

An important task in robot motion planning is to follow a given trajectory into some target set [18, 1, 23]. In particular, numerous path planning algorithms [27, 22, 11] rely on this task. It involves first designing a controller that follows this trajectory [15], and then determining a neighbourhood of the trajectory (a funnel [24]) in which the controller fulfils its goal of reaching a given target set [12, 21]. In this paper, we present an efficient method for this second task.

In the literature [22, 24, 9], funnel construction is usually based on sum-of-squares programming (SOS) [20], a relaxation technique for polynomial systems. However, such formulations are sensitive to numerical errors and scale poorly to high dimensions—both in theory [10] and in practice [16]. Moreover, SOS methods tend to underestimate the actual funnel size [3, 17].

To alleviate these drawbacks of SOS methods, in this paper, we propose the use of falsifiers based on numerical optimization to compute funnel candidates directly, leaving potential formal verification to the end. This allows for an efficient funnel optimization loop, since the dimension of the resulting non-linear programming (NLP) problems equals the dimension of the original problem and does not increase further, unlike in SOS methods, where the problem size grows at least quadratically in the problem dimension [10]. Our computational experiments show that, even without the verification part, the falsifiers provide quite accurate estimates of control funnels. As an additional advantage, we note that the method is also applicable to non-polynomial systems, which SOS-based methods cannot handle directly (they require the non-polynomial dynamics to be approximated by polynomial ones).

The structure of the paper is as follows. In Section 2, we state the precise problem. In Section 3, we review the problem of funnel construction and describe existing approaches based on SOS programming. In Section 4, we introduce our algorithm and explain its implementation. In Section 5, we provide computational experiments. Section 6 concludes the paper.

2 Problem Statement

Consider a system

$\dot{x} = f(t, x)$ (1)

where $f$ is a smooth function. We further assume that system (1) has a unique solution for any initial point $x_0$ and initial time $t_0$. We denote this solution by $\phi(\cdot\,; t_0, x_0)$, which is a function in $t$. We will also simply write $x(t)$ for $\phi(t; t_0, x_0)$.

Let $T > 0$ be a final time and let $G$ be a set of goal states in the state space. In this paper, we consider the problem of computing funnels [24]. A funnel is a time-varying set of states $F(t)$ for $t \in [0, T]$ such that for all $t \in [0, T]$ and all $x \in F(t)$, the solution from $(t, x)$ stays in the funnel, i.e. $\phi(\tau; t, x) \in F(\tau)$ for all $\tau \in [t, T]$. In addition, we require that the final part of the funnel is a subset of the chosen set of goal states, $F(T) \subseteq G$. Finally, we also want a funnel that is as large in volume as possible.

To ease the construction of funnels, we restrict our attention to funnels that are constructed around some chosen system trajectory $x^*(t)$ for $t \in [0, T]$ that ends in $G$ [22, 24]. These funnels can be described using a differentiable positive definite function $V(t, x)$ and a differentiable (in fact, continuous and right differentiable suffices) real function $\rho(t)$ as sublevel sets $F(t) = \{x : V(t, x) \le \rho(t)\}$. Hence, funnel construction reduces to the construction of functions $V$ and $\rho$ such that $F(t)$ forms a funnel that is as large in volume as possible.

Simplifying the problem even further, we will assume that the shape $V$ is fixed beforehand. Consequently, all that remains is to determine an optimal $\rho$ with respect to the fixed $V$.
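For concreteness, the following minimal Python sketch shows one way to represent such a candidate funnel: a quadratic shape around the reference trajectory and a piecewise-linear $\rho$ given by its samples, together with a membership test. The trajectory, shape matrix, and all numerical values are hypothetical placeholders, not taken from the paper.

```python
import numpy as np

# Hypothetical reference trajectory x*(t) and fixed quadratic shape V(t, x)
def x_ref(t):
    return np.array([np.sin(t), np.cos(t)])        # toy trajectory

S = np.array([[2.0, 0.0],
              [0.0, 1.0]])                         # fixed shape matrix (positive definite)

def V(t, x):
    e = x - x_ref(t)
    return e @ S @ e                               # V(t, x) = (x - x*(t))^T S (x - x*(t))

# Piecewise-linear rho(t) given by samples rho_i at times t_i
t_samples   = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
rho_samples = np.array([0.3, 0.5, 0.8, 1.0, 1.2])

def rho(t):
    return np.interp(t, t_samples, rho_samples)

def in_funnel(t, x):
    """Membership test for the candidate funnel F(t) = {x : V(t, x) <= rho(t)}."""
    return V(t, x) <= rho(t)

print(in_funnel(0.7, np.array([0.8, 0.9])))
```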

3 Funnel construction

In this section, we briefly review general funnel construction and an SOS-based variant [22, 24]. We assume a setting as described in the previous section.

First, we explore conditions on $V$ and $\rho$ that make $F(t)$ a funnel. Let us define for each $t$ the sublevel sets $F(t) = \{x : V(t, x) \le \rho(t)\}$ and the level sets $\partial F(t) = \{x : V(t, x) = \rho(t)\}$. Hence, in our notation, the funnel candidate is given by the family of sublevel sets $F(t)$, $t \in [0, T]$.

Assume that $V$ and $\rho$ are chosen in such a way that the final sublevel set is a subset of the target set, i.e. $F(T) \subseteq G$. Moreover, assume that for all $t \in [0, T]$ and all $x \in \partial F(t)$, the value of $V$ decreases faster or increases slower than $\rho$ along the system dynamics, that is

$\dot{V}(t, x) \le \dot{\rho}(t)$, (2)

where $\dot{V}(t, x) = \frac{\partial V}{\partial t}(t, x) + \frac{\partial V}{\partial x}(t, x)\, f(t, x)$. Due to this requirement, for all $t \in [0, T]$, all states in $F(t)$ stay in $F(\tau)$ for all $\tau \in [t, T]$. Consequently, the sets $F(t)$ form a funnel.
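In the notation above, a short calculation indicates why condition (2) yields invariance of the sublevel sets: along any solution $x(t)$,

$$\frac{d}{dt}\Bigl[V\bigl(t, x(t)\bigr) - \rho(t)\Bigr] = \dot{V}\bigl(t, x(t)\bigr) - \dot{\rho}(t) \le 0 \quad \text{whenever } V\bigl(t, x(t)\bigr) = \rho(t),$$

so the function $t \mapsto V(t, x(t)) - \rho(t)$ cannot cross zero from below; a solution that starts with $V \le \rho$ therefore keeps $V \le \rho$ and stays in the funnel.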

As we mentioned in the previous section, we restrict our attention to the case in which $V$ is fixed beforehand. An often suitable candidate for $V$ is obtained from a solution of the Lyapunov equation [5] or the Riccati equation [6]. Such a candidate then has the quadratic form $V(t, x) = (x - x^*(t))^\top S(t)\,(x - x^*(t))$, where $S(t)$ is a solution of the respective equation. Still, even if we fix $V$, we need to determine $\rho$. Additionally, we would like $\rho$ to be chosen in such a way that the sublevel sets $F(t)$ are as large as possible.
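For a linearization of the (closed-loop) dynamics, such a quadratic candidate can be computed in a few lines; the sketch below uses SciPy and entirely hypothetical matrices, and merely illustrates the two standard choices mentioned above.

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

# Hypothetical linearized error dynamics e' = A e + B u and LQR weights (placeholders)
A = np.array([[0.0, 1.0],
              [1.0, -0.1]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.eye(1)

# Riccati-based candidate: V(e) = e^T S e, with S the LQR cost-to-go matrix
S = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ S)                  # LQR gain, u = -K e

# Lyapunov-based alternative for the closed loop A - B K:
# solves (A - B K)^T S_lyap + S_lyap (A - B K) = -Q
S_lyap = solve_continuous_lyapunov((A - B @ K).T, -Q)

V = lambda e: e @ S @ e
print(V(np.array([0.1, -0.2])))
```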

In previous work [22, 24], $\rho$ is parametrized piecewise-linearly and its parameters are optimized using a line-search approach. In each iteration, the candidate $\rho$ is verified using SOS programming. Moreover, the computation of $\rho$ can be approximated by performing it at finitely many time samples [22, 24], which partially alleviates the computational burden. However, certain care must be taken when choosing the time samples to obtain a reasonable approximation of a funnel, as we will see later in Example 1 of our computational experiments.

To be more specific, let us assume that both $f$ and $V$ are polynomial in $x$. Choose time instants $t_1 < \dots < t_N$, denote $\rho(t_i)$ as $\rho_i$, and set $\dot{\rho}(t) = \frac{\rho_{i+1} - \rho_i}{t_{i+1} - t_i}$ on the interval $[t_i, t_{i+1}]$. To optimize the volume of a discrete funnel, a linear cost $\sum_i \rho_i$ is considered for optimization [24]. Hence, values $\rho_1, \dots, \rho_N$ for which $F$ meets (2) in all time samples are found by solving an SOS program

$\max_{\rho, L}\; \sum_i \rho_i$
subject to $\;\dot{\rho}(t_i) - \dot{V}(t_i, x) + L_i(x)\,\bigl(V(t_i, x) - \rho_i\bigr)$ is SOS for all $i$, (3)

where the multipliers $L_i$ are real polynomials. Note that the constraint in (3) is bilinear in $L_i$ and $\rho_i$, and thus an algorithm for solving (3) iteratively alternates between solving (3) for the multipliers $L_i$ with fixed $\rho$, and solving (3) for $\rho$ with fixed multipliers $L_i$ [22, 24]. The computations also require a feasible start (i.e. a valid initial funnel), as described in [24]. Lastly, we should also mention that the $L$-iterations can be solved separately and independently for each time sample. In the case of the $\rho$-iterations, one has the choice of either solving them as a whole, in one single SOS program, or splitting them into multiple SOS programs that must then be solved sequentially, backwards in time. In our experience, such splitting is necessary in higher dimensions.

Sum-of-squares programming, while a convex optimization problem, is computationally demanding, can encounter numerical problems, and scales poorly to high dimensions [10, 16]. In particular, an SOS polynomial constraint in (3) can be reformulated as [13]

$p(x) = m(x)^\top Q\, m(x)$, (4)

where $m(x)$ is a vector of all monomials up to degree $d$, and $Q$ is an unknown positive semidefinite matrix with one row and column per monomial, provided that the polynomial $p$ on the left-hand side is of degree $2d$ [10]. Since polynomials are equal only if their coefficients are equal, constraint (4) can be replaced with equalities (coefficient matching conditions [10]) and one semidefinite matrix constraint $Q \succeq 0$. Hence, the states $x$ are removed from the optimization, but a new semidefinite matrix variable $Q$ is introduced, which causes the aforementioned scalability issues in SOS programming [10].
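As a tiny concrete instance of representation (4), chosen here purely for illustration, take $p(x) = x^4 + 2x^2 + 1$ and $m(x) = (1, x, x^2)^\top$. Then

$$x^4 + 2x^2 + 1 = m(x)^\top \begin{pmatrix} 1 & 0 & 1\\ 0 & 0 & 0\\ 1 & 0 & 1 \end{pmatrix} m(x), \qquad \begin{pmatrix} 1 & 0 & 1\\ 0 & 0 & 0\\ 1 & 0 & 1 \end{pmatrix} \succeq 0,$$

where matching the coefficients of $1$, $x^2$, and $x^4$ yields the linear equality constraints on the matrix entries, and the positive semidefiniteness of the displayed matrix certifies $p(x) = (x^2 + 1)^2 \ge 0$.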

Moreover, the approach requires repeatedly solving (3) to perform the optimization over $\rho$. Also note that the resulting value of $\rho$ may not be optimal, since the transformation to SOS is a relaxation technique. A final slight drawback of SOS relaxation is that the system dynamics (1) and $V$ must be polynomial.

4 Constructing $\rho$ using numerical optimization

In:

A system $\dot{x} = f(t, x)$, a goal region $G$, a reference trajectory $x^*(t)$ for $t \in [0, T]$ with $x^*(T) \in G$, a positive definite function $V$, time samples $t_1 < \dots < t_N = T$, and a sampling of each interval $[t_i, t_{i+1}]$.

Out:

Funnel $F(t) = \{x : V(t, x) \le \rho(t)\}$

  1. Let $\rho_N$ be s.t. $\{x : V(t_N, x) \le \rho_N\} \subseteq G$.

  2. For $i = N - 1, \dots, 1$:

    1. Put $\rho_i := c\,\rho_{i+1}$.

    2. Repeat until subsequent iterations do not change $\rho_i$:

      • Solve (6) from a random initial point.

      • If the solution evolves outside of $F(t_{i+1})$, find a solution $x^\dagger$ to (5) and put $\rho_i := \gamma\, V(t_i, x^\dagger)$.

    3. Repeat until subsequent iterations do not change $\rho_i$:

      • Solve (8) for the respective sampling from random initial points.

      • If the solution does not meet (7), put $\rho_i := \lambda\,\rho_i$.

  3. Return the funnel $F(t) = \{x : V(t, x) \le \rho(t)\}$, where

    $\rho$ is a piece-wise linear interpolation between the samples $\rho_1, \dots, \rho_N$.

Algorithm 1 Funnel Synthesis

In this section, we describe a funnel computation algorithm that avoids the use of costly SOS programming. We propose the use of falsifiers based on numerical optimization to solve the optimization of $\rho$, and to leave potential formal verification to the end.

Our algorithm samples the constructed funnel in time and proceeds backwards. Let us choose time instants $t_1 < \dots < t_N = T$ and find, for the given goal set $G$, a value $\rho_N$ such that $\{x : V(t_N, x) \le \rho_N\} \subseteq G$ is as large as possible. This NLP problem can be efficiently solved for ellipsoidal $G$ and quadratic $V$ using semidefinite programming. Next, we compute the remaining samples $\rho(t_i)$, which we denote as $\rho_i$. Finally, we assume an interpolation between the samples and check condition (2) for the interpolated funnel.
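For instance, if the goal set happens to be an ellipsoid, $G = \{x : x^\top P_G\, x \le 1\}$ (in coordinates shifted to the trajectory endpoint; $P_G$ is a symbol introduced only for this illustration), and $V(t_N, x) = x^\top S(T)\, x$ is quadratic, the largest admissible final value even has a closed form,

$$\rho_N = \frac{1}{\lambda_{\max}\bigl(S(T)^{-1} P_G\bigr)},$$

since $\{x : x^\top S(T)\, x \le \rho\} \subseteq G$ holds exactly when $\rho\,\lambda_{\max}\bigl(S(T)^{-1} P_G\bigr) \le 1$.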

Three NLPs are to be solved for each time sample. The first two, NLPs (6) and (5), are used to provide the time-sampled optimal funnel (in terms of volume). The final one, NLP (8), checks condition (2), which would be used for formal verification of the interpolated funnel. The algorithm shrinks the funnel if any counterexample to condition (2) is found, and accepts the sampled value if none is found.

Let us describe the algorithm more closely. Assume that we have already determined the optimal value of $\rho_{i+1}$. To determine the optimal value of $\rho_i$, consider the NLP that seeks a point $x$ with the smallest possible value $V(t_i, x)$ for which the system leaves $F(t_{i+1})$ after evolving from $t_i$ to $t_{i+1}$:

$\min_{x}\; V(t_i, x)$
subject to $\;V\bigl(t_{i+1}, \phi(t_{i+1}; t_i, x)\bigr) \ge \rho_{i+1}$. (5)

Unfortunately, NLP (5) is non-convex, and thus a local NLP solver can solve it only approximately. Therefore, for reliably accepting a certain value $\rho_i$, more needs to be done. The first step to do so is another NLP,

$\max_{x}\; V\bigl(t_{i+1}, \phi(t_{i+1}; t_i, x)\bigr)$
subject to $\;V(t_i, x) \le \rho_i$, (6)

that checks whether the current estimate $\rho_i$ results in a counterexample, a state that evolves outside of $F(t_{i+1})$. If the found optimum is bigger than $\rho_{i+1}$, we have found a counterexample, and hence we solve NLP (5), using the solution of NLP (6) as an initial feasible estimate. This solution gives us a new, smaller estimate for $\rho_i$. If the found optimum is not bigger than $\rho_{i+1}$, we cannot make a definite conclusion, since NLP (6) is again non-convex. Hence, we increase the trust in the current estimate by repeatedly solving NLP (6) from random initial points until no further counterexample is found within a certain number of subsequent iterations.
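The sketch below illustrates this counterexample loop for a single time step on a toy two-dimensional system; the dynamics, the shape matrix, the shrinking factor, and the stopping rule are hypothetical placeholders, and SciPy's SLSQP solver merely stands in for the NLP solver used in the paper. For brevity, the sketch skips NLP (5) and shrinks directly towards the counterexample found by the analogue of NLP (6).

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# Hypothetical stable toy dynamics and a fixed quadratic shape V(x) = x^T S x
def f(t, x):
    return np.array([x[1], -x[0] - 0.5 * x[1] + 0.1 * x[0]**3])

S = np.array([[2.0, 0.5],
              [0.5, 1.0]])
V = lambda x: x @ S @ x

def propagate(x0, t0, t1):
    """Approximate phi(t1; t0, x0) by numerical integration."""
    return solve_ivp(f, (t0, t1), x0, rtol=1e-8, atol=1e-10).y[:, -1]

def falsify_step(t0, t1, rho_i, rho_next, gamma=0.9, max_quiet=20, seed=0):
    """Shrink rho_i until no found counterexample evolves outside {V <= rho_next} at t1."""
    rng = np.random.default_rng(seed)
    quiet = 0                                    # restarts without a counterexample
    while quiet < max_quiet:
        x0 = rng.normal(size=2)
        # Analogue of NLP (6): maximize V(phi(t1; t0, x)) over {x : V(x) <= rho_i}
        res = minimize(lambda x: -V(propagate(x, t0, t1)), x0, method="SLSQP",
                       constraints=[{"type": "ineq", "fun": lambda x, r=rho_i: r - V(x)}])
        if res.success and -res.fun > rho_next:
            # Counterexample found. The paper would now solve NLP (5) for the escaping
            # state of smallest V; this sketch simply shrinks towards the violator found.
            rho_i = gamma * V(res.x)
            quiet = 0
        else:
            quiet += 1
    return rho_i

print(falsify_step(0.0, 0.1, rho_i=1.0, rho_next=0.5))
```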

The use of NLP (6) has two major advantages over only iterating NLP (5) from random initial points. First, NLP (6) directly checks for the existence of a counter-example, making it more efficient for this purpose, in our experience. And second, the result of NLP (6) provides a much more useful starting point for NLP (5) than random starting points.

To enforce the termination of the loop between NLPs (6) and (5), we update $\rho_i := \gamma\, V(t_i, x^\dagger)$, where $x^\dagger$ is the found numerical solution of (5) and $\gamma \in (0, 1)$ is a fixed multiplier. To see that the loop must terminate after finitely many iterations, consider that there must be a value of $\rho_i$ small enough that no counterexample exists, due to the continuity of solutions of ordinary differential equations with respect to their initial conditions [5] and the fact that $V$ is positive definite. It should also be noted that, in general, we do not have the solution $\phi$ available in explicit form, and hence we must approximate it using numerical integration.

After the end of the iteration between NLPs (6) and (5), we try to extend the funnel from the samples $t_i$, $t_{i+1}$ to the whole time interval $[t_i, t_{i+1}]$. For this, we use linear interpolation between $\rho_i$ and $\rho_{i+1}$. Based on this, we would have to check condition (2) for all $t \in [t_i, t_{i+1}]$ and all $x \in \partial F(t)$. However, as mentioned in [22, 24], this is not convenient to check due to the dependency on time $t$. It is computationally far more efficient (for both SOS relaxation and our presented approach) to simply sample the time interval and to check the condition discretely. Moreover, continuity arguments show [24] that, provided the sampling is fine enough, no counterexamples to condition (2) can exist.

Assume a sampling $t_i = t_{i,1} < t_{i,2} < \dots < t_{i,m} = t_{i+1}$ of the interval $[t_i, t_{i+1}]$, and denote by $\rho$ the linear interpolation between $\rho_i$ and $\rho_{i+1}$ used in [22, 24]. Then $\dot{\rho}(t) = \frac{\rho_{i+1} - \rho_i}{t_{i+1} - t_i}$ on $[t_i, t_{i+1}]$, and for every sample $t_{i,j}$ and all $x \in \partial F(t_{i,j})$, we must ensure

$\dot{V}(t_{i,j}, x) \le \dfrac{\rho_{i+1} - \rho_i}{t_{i+1} - t_i}$. (7)

If $\dot{V}$ is bounded from above, we can guarantee that these conditions can be met by choosing $\rho_i$ and potentially also the time step $t_{i+1} - t_i$ small enough, making the right-hand side of the inequality arbitrarily large. Thus, again, the resulting algorithm will succeed in finitely many iterations regardless of our choice of the multiplier $\lambda$, though the time steps can vary during the run. If we ignore the fact that the time step must, in general, be variable to guarantee the existence of a funnel, we need to employ a line-search strategy on $\rho_i$ to obtain an optimal funnel that meets (7) with respect to a fixed time step.

We can check condition (7) by numerical optimization, namely by solving the NLP

$\max_{x}\; \dot{V}(t_{i,j}, x)$
subject to $\;V(t_{i,j}, x) = \rho(t_{i,j})$, (8)

which is again a non-convex problem. Again, we ensure the reliability of the check by solving the NLP repeatedly from random initial points until no more counterexamples appear. If a counterexample is found, we reduce $\rho_i$ using a multiplier $\lambda \in (0, 1)$.
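A corresponding sketch of the derivative check, again on a hypothetical time-invariant toy system with a single sample per interval: SLSQP maximizes $\dot{V}$ on the level set, and the sample value is reduced by a multiplier whenever the interpolated bound from (7) is exceeded. All names and values are placeholders, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical time-invariant toy dynamics and quadratic shape V(x) = x^T S x
def f(x):
    return np.array([x[1], -x[0] - 0.5 * x[1] + 0.1 * x[0]**3])

S = np.array([[2.0, 0.5],
              [0.5, 1.0]])
V     = lambda x: x @ S @ x
V_dot = lambda x: 2.0 * (x @ S) @ f(x)       # dV/dt along the dynamics (S symmetric)

def derivative_check(rho_i, rho_next, dt, lam=0.9, n_restarts=20, seed=0):
    """Shrink rho_i until no counterexample to the sampled condition (7) is found."""
    rng = np.random.default_rng(seed)
    while True:
        rho_dot = (rho_next - rho_i) / dt    # slope of the piecewise-linear rho
        violated = False
        for _ in range(n_restarts):
            x0 = rng.normal(size=2)
            # Analogue of NLP (8): maximize V_dot(x) on the level set {x : V(x) = rho_i}
            res = minimize(lambda x: -V_dot(x), x0, method="SLSQP",
                           constraints=[{"type": "eq",
                                         "fun": lambda x, r=rho_i: V(x) - r}])
            if res.success and -res.fun > rho_dot:
                violated = True
                break
        if not violated:
            return rho_i
        rho_i *= lam                         # counterexample found: reduce rho_i

print(derivative_check(rho_i=1.0, rho_next=0.8, dt=0.1))
```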

Algorithm 1 summarizes the whole procedure. We estimate the initial value of $\rho_i$ as $c\,\rho_{i+1}$, where $\rho_{i+1}$ is the previously computed value and $c$ is a fixed factor. Note that $c$ should be chosen large enough to ensure that the first estimate always contains a counterexample, and thus the first estimate is always an upper bound on the optimal funnel size.

Figure 1: Reference trajectories for (a) the pendulum and (b) the quadcopter: states (red) and control (blue).

5 Computational Experiments

Figure 2: Funnels for (a) the pendulum and (b) the quadcopter, computed by the falsifier-based method (dashed lines) and the SOS-based method (full lines) for the different time steps (blue, red, green, purple).

In this section, we discuss the results of computational experiments using the method from the previous section. The implementation was done in MATLAB R2017b and run on a PC with an Intel Core i7-10700K at 3.8 GHz and 32 GB of RAM. We compare the method described in the previous section with the SOS method described in [24].

The NLPs for our method and for the generation of reference trajectories were implemented in CasADi [2] with the internal NLP solver Ipopt [26]. The SOS method was implemented in YALMIP [7] with the internal SDP solvers SDPT3 [25] and SeDuMi [19].

5.1 Example 1: Inverted pendulum

Table 1: Pendulum: results for both methods; time required and volume of the funnel (falsifier-based method), and number of iterations, time required in the $L$-steps, time required in the $\rho$-steps, and volume of the funnel (SOS-based method).

We start with a simple two-dimensional problem, an inverted pendulum, and continue with more involved examples later. The dynamics of the inverted pendulum are

(9)

where the model parameters are set to fixed values. Assume the task of steering the inverted pendulum to its unstable equilibrium and stabilizing it there. First, we computed a stabilizing reference trajectory of a given length and time step, see Figure 1(a).

Next, we constructed an LQR tracking controller for the interpolated discrete trajectory (piecewise cubic in the states and piecewise linear in the control) by solving the Riccati differential equation for chosen weight matrices, with a given final value of the cost-to-go matrix, using the RKF45 integrator with a bounded maximum step, and again used cubic interpolation; hence we obtained the cost-to-go matrices $S(t)$ for the whole time interval. We set a suitable sublevel set as our target set $G$.
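The time-varying cost-to-go matrix $S(t)$ described above can be obtained by integrating the Riccati differential equation backwards from its final value; the Python sketch below does this for placeholder matrices $A(t)$, $B(t)$, which are not the pendulum linearization, and interpolates the result.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import CubicSpline

n, m, T = 2, 1, 3.0
Q, R = np.eye(n), np.eye(m)
S_T = np.eye(n)                                    # final cost-to-go S(T)

# Hypothetical time-varying linearization along the reference trajectory
A = lambda t: np.array([[0.0, 1.0], [np.cos(t), -0.1]])
B = lambda t: np.array([[0.0], [1.0]])

def riccati_rhs(t, s_flat):
    # Riccati ODE: -dS/dt = A^T S + S A - S B R^-1 B^T S + Q
    S = s_flat.reshape(n, n)
    dS = -(A(t).T @ S + S @ A(t) - S @ B(t) @ np.linalg.solve(R, B(t).T) @ S + Q)
    return dS.ravel()

# Integrate backwards in time from t = T to t = 0
t_eval = np.linspace(T, 0.0, 61)
sol = solve_ivp(riccati_rhs, (T, 0.0), S_T.ravel(), t_eval=t_eval, max_step=0.01)

# Cubic interpolation of S(t) on [0, T]; LQR gains K(t) = R^-1 B(t)^T S(t)
S_interp = CubicSpline(sol.t[::-1], sol.y[:, ::-1].T)   # CubicSpline needs increasing t
K = lambda t: np.linalg.solve(R, B(t).T @ S_interp(t).reshape(n, n))
print(K(1.0))
```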

Finally, we compute the funnels. For the SOS method, we approximated the non-polynomial dynamics with Taylor polynomials of order 2 and set $V$ to be quadratic. We terminated the SOS algorithm if the total sum $\sum_i \rho_i$ increased by less than a fixed tolerance between two subsequent iterations. In our method, we fixed the multipliers $c$, $\gamma$, and $\lambda$, as well as the iteration bounds. We used just one sample for each interval in evaluating the derivatives for both methods.

The results can be seen in Figure 2(a) and Table 1. We can immediately see that our falsifier-based method is significantly faster. Moreover, notice that the SOS funnels are larger in the last few time intervals for some of the time steps, which could be considered unexpected, since the SOS method should be the more conservative one due to its built-in SOS relaxation. These values, however, do not correspond to an actual funnel and hence are incorrect. The overestimation of the actual funnel happened because the derivatives were checked too sparsely in time for this example. The falsifier-based funnels do not suffer from this overestimation in this example, since the funnel sizes are also estimated from above by numerical integration, not by derivatives alone.

5.2 Example 2: Quadcopter

Let us consider a twelve-dimensional problem. We assume the quadcopter model [11]

(10)

for the chosen model parameters and the reference trajectory from Figure 1(b). We again constructed an LQR tracking controller for the interpolated discrete trajectory (piecewise cubic in the states and piecewise linear in the control) by solving the Riccati differential equation for chosen weight matrices, with a given final value of the cost-to-go matrix, using the RKF45 integrator with a bounded maximum step, and again used cubic interpolation; hence we obtained the cost-to-go matrices $S(t)$ for the whole time interval. We again set a suitable sublevel set as our target set $G$.

For the SOS method, we again approximated the non-polynomial dynamics with Taylor polynomials of order 2 and set $V$ to be quadratic. We terminated the SOS algorithm if the total sum $\sum_i \rho_i$ increased by less than a fixed tolerance between two subsequent iterations. In our method, we again used the same multipliers and iteration bounds. We again considered just one sample for each interval in evaluating the derivatives for both methods.

The results can be seen in Figure 2(b) and Table 2. The SOS-based funnels are overall slightly smaller, as expected; the first values at the initial time, which are arguably the most important, are noticeably smaller. In addition, the SOS method is significantly slower here. This difference is far more pronounced than in the low-dimensional inverted pendulum example, due to the poorer scalability of SOS programming in the problem dimension [10].

Falsifier based SOS based
0.901 3 219 141 0.845
1.075 3 444 282 0.913
1.224 3 931 637 1.040
1.310 3 2651 2232 1.128
Table 2: Quadcopter: results for both methods; time required and volume of the funnel (falsifier-based method), and number of iterations, time required in the $L$-steps, time required in the $\rho$-steps, and volume of the funnel (SOS-based method).

5.3 Example 3: Pendulum revisited

Let us return to a pendulum example, where we explore our method on problems of higher dimension, parametric in the number of links $n$. We assume a model of an $n$-link pendulum, and we set the other parameters (all weights and lengths) to fixed values. The derivation of the equations of motion can be found in [8]. The equations can be written in manipulator form

$M(q)\,\ddot{q} + C(q, \dot{q})\,\dot{q} + G(q) = u$, (11)

where we assume that $u$ is a control input. Next, we construct an LQR controller for a slightly simpler model

(12)

linearized around the pendulum-upwards equilibrium. Consequently, we get a nonlinear stabilizing controller for the $n$-link pendulum, expressed in terms of the manipulator-form terms of (11), an identity matrix, and the gain matrix $K$ of the LQR controller.

We compute funnels for the stabilization of the $n$-link pendulum with the derived controller. We set a target

$G = \{x : (x - x_e)^\top S\,(x - x_e) \le \rho_G\}$,

where $x_e$ is the upright equilibrium, $S$ is a cost-to-go matrix of the LQR controller, and $\rho_G$ is chosen in such a way that the volume of $G$ is the same as the volume of a hypersphere of a given radius in the corresponding state dimension. Notice that the whole problem is time-invariant, since the chosen system trajectory around which we construct the funnel is constant, as is the shape $V$.
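For illustration, writing the target (in coordinates shifted to the equilibrium) as $\{x : x^\top S\,x \le \rho_G\}$ in state dimension $d$, and denoting the prescribed radius by the placeholder symbol $r$, matching volumes gives a closed form for $\rho_G$:

$$\operatorname{vol}\{x : x^\top S\,x \le \rho_G\} = \operatorname{vol}(B_d)\,\frac{\rho_G^{\,d/2}}{\sqrt{\det S}} \stackrel{!}{=} \operatorname{vol}(B_d)\, r^{\,d} \quad\Longrightarrow\quad \rho_G = r^{2}\,(\det S)^{1/d},$$

where $B_d$ denotes the unit ball in dimension $d$.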

For our method, we again used the same multipliers and iteration bounds as before, and we again considered just one sample for each interval in evaluating the derivatives.

We computed funnels of a given length with a fixed time step for $n = 1, \dots, 20$, i.e. for state dimensions up to 40. For comparison, we tested our method both on the original $n$-link pendulum model and on its linearized model. The results can be seen in Table 3. As can be seen from the results, the funnels were successfully computed for all $n$. However, the required computational time increases steadily for the original model, approximately by a factor of one third for each link added. This is mostly caused by the fact that the system dynamics become more and more complex with each link added, which steadily increases the computational time required for the evaluation of the system dynamics and its first- and second-order derivatives.

It should be noted, however, that the computational time remained much more reasonable for the linearized model, where this increase in complexity naturally does not occur. The computational time actually increased only roughly three times from one link to twenty links. This shows that our method can work reasonably well even in high dimensions, provided that the model dynamics are not too complicated. To illustrate a comparison to the SOS method, we computed one iteration of the SOS method on the linearized examples. As can be seen in Table 3, its computational time increases dramatically even for the linearized model and becomes impractical already for a moderate number of links.

Table 3: $n$-link pendulum: time required for the first iteration of the SOS method on the linearized model, and time required for the falsifier-based method on the original model and its linearization, for $n = 1, \dots, 20$.

For the linearized model, we can compare the computed values of $\rho$ with the true optimal values. These can be computed directly for linear systems with an ellipsoidal funnel using the state transition matrix. Assume a linear system $\dot{x} = A x$ and an ellipsoid $\{x : x^\top S\, x \le \rho\}$. Under the affine mapping $x \mapsto \Phi x$ with the state transition matrix $\Phi$, this ellipsoid transforms into the ellipsoid

$\{x : x^\top \Phi^{-\top} S\, \Phi^{-1} x \le \rho\}$

after the corresponding time step. Hence, the optimal value $\rho(t)$ for a funnel with the given shape at time $t$ that ends in the ellipsoid $\{x : x^\top S\, x \le \rho(t + \Delta t)\}$ is the maximal value for which the transformed ellipsoid is still contained in it. We computed the optimal values of $\rho$ with a fixed time step, iteratively backwards in time. The comparison of the resulting values of $\rho$ can be seen in Table 4. The values computed for the linearized model are slightly lower than the optimal ones, and this difference becomes larger for a higher number of links. This underestimation of the funnels is largely caused by the derivative check (8), which assumes a piecewise-linear $\rho$, which the optimal $\rho$ is not. If the derivative check is skipped, the results of our method and the optimal values are nearly identical.
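The following Python sketch computes such reference values for a hypothetical stable LTI closed loop: $\Phi$ is the state transition matrix over one time step, and the optimal $\rho_i$ is the largest value whose propagated ellipsoid is still contained in the next one. The system matrix, shape matrix, time step, and final value are placeholders.

```python
import numpy as np
from scipy.linalg import expm, eigh

# Hypothetical stable LTI closed loop x' = A x and fixed quadratic shape V(x) = x^T S x
A = np.array([[0.0, 1.0],
              [-2.0, -1.0]])
S = np.array([[2.0, 0.5],
              [0.5, 1.0]])
dt = 0.1
Phi = expm(A * dt)                     # state transition matrix over one time step

def optimal_rho_step(rho_next):
    """Largest rho_i such that the image of {V <= rho_i} after time dt lies in {V <= rho_next}.

    The image of {x : x^T S x <= rho_i} under x -> Phi x is {y : y^T (Phi^-T S Phi^-1) y <= rho_i};
    it is contained in {y : y^T S y <= rho_next} iff rho_i * lambda_max(M^-1 S) <= rho_next,
    where M = Phi^-T S Phi^-1.
    """
    Phi_inv = np.linalg.inv(Phi)
    M = Phi_inv.T @ S @ Phi_inv                   # shape of the propagated ellipsoid
    lam_max = eigh(S, M, eigvals_only=True)[-1]   # generalized eigenvalues of (S, M)
    return rho_next / lam_max

# Optimal rho samples, computed backwards in time from a final value rho(T) = 1.0
rho = [1.0]
for _ in range(10):
    rho.append(optimal_rho_step(rho[-1]))
print(rho[::-1])                                  # rho(t_1), ..., rho(t_N) = rho(T)
```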

Table 4: $n$-link pendulum: the value $\rho$ for the original model and its linearization (with and without the derivative check (DC)), and the optimal value of $\rho$ for the linearized model computed via the state transition matrix, for $n = 1, \dots, 20$.

6 Conclusion

In this paper, we presented an algorithm that computes funnels along trajectories of systems of ordinary differential equations. Compared to related work based on SOS programming, in our computational experiments the algorithm computed larger funnels in less time. The algorithm does not provide formal verification by itself, but its result can be formally verified afterwards using a well-known palette of verification techniques that includes, in addition to SOS programming, computer algebra [4] or interval computation [14].

References

  • [1] N. H. Amer, H. Zamzuri, K. Hudha, and Z. A. Kadir (2017) Modelling and control strategies in path tracking control for autonomous ground vehicles: a review of state of the art and challenges. Journal of Intelligent & Robotic Systems 86 (2), pp. 225–254. Cited by: §1.
  • [2] J. A. E. Andersson, J. Gillis, G. Horn, J. B. Rawlings, and M. Diehl (2018) CasADi: a software framework for nonlinear optimization and optimal control. Mathematical Programming Computation 11 (1). Cited by: §5.
  • [3] Y. Chang, N. Roohi, and S. Gao (2019) Neural Lyapunov control. In Advances in Neural Information Processing Systems, Vol. 32, pp. 3245–3254. Cited by: §1.
  • [4] G. E. Collins and H. Hong (1991) Partial cylindrical algebraic decomposition for quantifier elimination. Journal of Symbolic Computation 12, pp. 299–328. Cited by: §6.
  • [5] H. K. Khalil (2002) Nonlinear systems. 3rd edition, Prentice Hall. Cited by: §3, §4.
  • [6] D. Liberzon (2011) Calculus of variations and optimal control theory: a concise introduction. Princeton University Press, New Jersey. Cited by: §3.
  • [7] J. Löfberg (2004) YALMIP: a toolbox for modeling and optimization in MATLAB. In Proceedings of the CACSD Conference, Taipei, Taiwan. Cited by: §5.
  • [8] A. M. Lopes and J. A. T. Machado (2015) Dynamics of the n-link pendulum: a fractional perspective. International Journal of Control 90 (6), pp. 1192–1200. Cited by: §5.3.
  • [9] A. Majumdar, A. A. Ahmadi, and R. Tedrake (2013) Control design along trajectories with sums of squares programming. IEEE International Conference on Robotics and Automation, pp. 4054–4061. Cited by: §1.
  • [10] A. Majumdar, G. Hall, and A. A. Ahmadi (2020) Recent scalability improvements for semidefinite programming with applications in machine learning, control, and robotics. Annual Review of Control, Robotics, and Autonomous Systems 3 (1), pp. 331–360. Cited by: §1, §1, §3, §5.2.
  • [11] A. Majumdar and R. Tedrake (2017) Funnel libraries for real-time robust feedback motion planning. The International Journal of Robotics Research 36 (8), pp. 947–982. Cited by: §1, §5.2.
  • [12] J. Moore and R. Tedrake (2012) Control synthesis and verification for a perching UAV using LQR-trees. In 2012 IEEE 51st IEEE Conference on Decision and Control (CDC), Vol. , pp. 3707–3714. External Links: Document Cited by: §1.
  • [13] P. A. Parrilo (2003) Semidefinite programming relaxations for semialgebraic problems. Mathematical Programming 96, pp. 293–320. Cited by: §3.
  • [14] S. Ratschan (2006) Efficient solving of quantified inequality constraints over the real numbers. ACM Transactions on Computational Logic 7 (4), pp. 723–748. Cited by: §6.
  • [15] H. Ravanbakhsh, S. Aghli, C. Heckman, and S. Sankaranarayanan (2018) Path-following through control funnel functions. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vol. , pp. 401–408. External Links: Document Cited by: §1.
  • [16] H. Ravanbakhsh and S. Sankaranarayanan (2019) Learning control Lyapunov functions from counterexamples and demonstrations. Autonomous Robots 43 (2), pp. 275–307. Cited by: §1, §3.
  • [17] S. M. Richards, F. Berkenkamp, and A. Krause (2018) The Lyapunov neural network: adaptive stability certification for safe learning of dynamical systems. In Proceedings of The 2nd Conference on Robot Learning, Proceedings of Machine Learning Research, Vol. 87, pp. 466–476. Cited by: §1.
  • [18] B. Rubí, R. Pérez, and B. Morcego (2020) A survey of path following control strategies for UAVs focused on quadrotors. Journal of Intelligent & Robotic Systems 98 (2), pp. 241–265. Cited by: §1.
  • [19] J.F. Sturm (1999) Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods and Software 11–12, pp. 625–653. Note: Version 1.05 available from http://fewcal.kub.nl/sturm Cited by: §5.
  • [20] W. Tan and A. Packard (2004) Searching for control Lyapunov functions using sums of squares programming. Allerton conference on communication, control and computing, pp. 210–219. Cited by: §1.
  • [21] J. Z. Tang, A. M. Boudali, and I. R. Manchester (2017) Invariant funnels for underactuated dynamic walking robots: new phase variable and experimental validation. In 2017 IEEE International Conference on Robotics and Automation (ICRA), Vol. , pp. 3497–3504. External Links: Document Cited by: §1.
  • [22] R. Tedrake, I. R. Manchester, M. Tobenkin, and J. W. Roberts (2010) LQR-trees: feedback motion planning via sums-of-squares verification. The International Journal of Robotics Research 29 (8), pp. 1038–1052. Cited by: §1, §1, §2, §3, §3, §3, §4, §4.
  • [23] T. G. Thuruthel, Y. Ansari, E. Falotico, and C. Laschi (2018) Control strategies for soft robotic manipulators: a survey. Soft Robotics 5 (2), pp. 149–163. Cited by: §1.
  • [24] M. M. Tobenkin, I. R. Manchester, and R. Tedrake (2011) Invariant funnels around trajectories using sum-of-squares programming. IFAC Proceedings Volumes 44 (1), pp. 9218–9223. Cited by: §1, §1, §2, §2, §3, §3, §3, §4, §4, §5.
  • [25] K.C. Toh, M.J. Todd, and R.H. Tutuncu (1999) SDPT3—a Matlab software package for semidefinite programming. Optimization Methods and Software 11, pp. 545–581. Cited by: §5.
  • [26] A. Wächter and L. T. Biegler (2006) On the implementation of a primal-dual interior point filter line search algorithm for large-scale nonlinear programming. Mathematical Programming 106 (1), pp. 25–57. Cited by: §5.
  • [27] A. Weiss, C. Danielson, K. Berntorp, I. Kolmanovsky, and S. Di Cairano (2017) Motion planning with invariant set trees. In 2017 IEEE Conference on Control Technology and Applications (CCTA), Vol. , pp. 1625–1630. External Links: Document Cited by: §1.