1 Introduction
An important task in robotics motion planning is to follow a given trajectory into some target set [18, 1, 23]. In particular, numerous path planning algorithms [27, 22, 11] rely on this task. It involves first designing a controller that follows the trajectory [15], and then determining a neighbourhood of the trajectory (a funnel [24]) on which the controller fulfils its goal of reaching the given target set [12, 21]. In this paper, we present an efficient method for this second task.
In the literature [22, 24, 9], funnel construction is usually based on sum-of-squares (SOS) programming [20], a relaxation technique for polynomial systems. However, such formulations are sensitive to numerical errors and scale poorly to high dimensions, both in theory [10] and in practice [16]. Moreover, SOS methods tend to underestimate the actual funnel size [3, 17].
To alleviate these drawbacks of SOS methods, we propose in this paper the use of falsifiers based on numerical optimization to compute funnel candidates directly, leaving potential formal verification to the end. This allows for an efficient funnel optimization loop, since the dimension of the subsequent nonlinear programming (NLP) problems equals the dimension of the original problem, whereas the size of SOS programs grows at least quadratically in the problem dimension [10]. Our computational experiments show that even without the verification step, the falsifiers provide quite accurate estimates of control funnels. As an additional advantage, the method is also applicable to non-polynomial systems, which SOS-based methods cannot handle directly (they require the non-polynomial dynamics to be approximated by polynomial ones).

The structure of the paper is as follows. In Section 2, we state the precise problem. In Section 3, we review the problem of funnel construction and describe existing approaches based on SOS programming. In Section 4, we introduce our algorithm and explain its implementation. In Section 5, we provide computational experiments. Section 6 concludes the paper.
2 Problem Statement
Consider a system
(1) 
where is a smooth function. We further assume that system (1) has a unique solution for any initial point and time . We denote this solution by , which is a function in . We will also simply write for .
Let and . In this paper, we consider the problem of computing funnels [24]. A funnel is a time-varying set of states for such that for all and , the solution starting from stays in the funnel, i.e. for all . In addition, we require that the final part of the funnel is a subset of some chosen set of goal states . Finally, we want the funnel to be as large in volume as possible.
To ease the construction of funnels, we restrict our attention to funnels that are constructed around some chosen system trajectory for that ends in [22, 24]. These funnels can be described, using a differentiable positive definite function and a differentiable (in fact, continuity and right differentiability suffice) real function , as sublevel sets . Hence, funnel construction reduces to the construction of functions and such that forms a funnel as large in volume as possible.
Simplifying the problem even further, we will assume that the shape is fixed beforehand. Consequently, all that remains is to determine an optimal with respect to the fixed .
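For instance, with a quadratic shape the funnel cross-section at each time is an ellipsoid, and membership reduces to a single quadratic-form evaluation. A minimal sketch (the names `S`, `x0`, and `rho` are illustrative stand-ins; the paper's own symbols were not preserved here):

```python
import numpy as np

def in_funnel(x, x0, S, rho):
    """Test whether state x lies in the funnel cross-section
    { x : (x - x0)^T S (x - x0) <= rho }, i.e. a sublevel set of a
    quadratic V with fixed shape matrix S (illustrative names)."""
    dx = np.asarray(x, dtype=float) - np.asarray(x0, dtype=float)
    return float(dx @ S @ dx) <= rho

# Example: unit-ball cross-section around the trajectory point x0 = 0.
S = np.eye(2)
print(in_funnel([0.5, 0.5], [0.0, 0.0], S, 1.0))  # inside, since V(x) = 0.5
```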
3 Funnel construction
In this section, we briefly review general funnel construction and an SOS-based variant [22, 24]. We assume a set as described in the previous section.
First, we explore conditions on and that make a funnel. Let us define for each sublevel sets , and level sets . Hence, we can write in our notation .
Assume that and are chosen in such a way that the final sublevel set is a subset of a target set, i.e. . Moreover, assume that for all and all the value of decreases faster or increases slower than along the system dynamics, that is
(2) 
where . Due to this requirement, for all , all states in stay in for all . Consequently, the sets form a funnel.
As we mentioned in the previous section, we restrict our attention to the case in which is fixed beforehand. An often suitable candidate for is a solution of the Lyapunov equation [5] or the Riccati equation [6]. A candidate then has the quadratic form , where is a solution of the respective equation. However, even if we fix , we still need to determine . Additionally, we would like to be chosen in such a way that the sublevel sets are as large as possible.
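As an illustration of such a candidate, the following sketch obtains a quadratic shape matrix from the continuous-time algebraic Riccati equation with SciPy. The system matrices model a generic unstable second-order plant and are purely illustrative assumptions, not the paper's values:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative linearization of an unstable second-order system
# (e.g. a pendulum near its unstable equilibrium); all numbers are assumptions.
A = np.array([[0.0, 1.0],
              [9.81, -0.1]])
B = np.array([[0.0],
              [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

S = solve_continuous_are(A, B, Q, R)   # cost-to-go matrix of the LQR problem
K = np.linalg.solve(R, B.T @ S)        # LQR gain, u = -K x

# S is symmetric positive definite, so V(x) = x^T S x is a valid quadratic
# funnel-shape candidate, and the closed-loop matrix A - B K is Hurwitz.
```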
In previous work [22, 24], is parametrized piecewise linearly and the parameters are optimized using a line-search approach. In each iteration, is verified using SOS programming. Moreover, the computation of can be approximated by performing it only at finitely many time samples [22, 24]. This partially alleviates the computational burden. However, care must be taken when choosing the time samples to obtain a reasonable approximation of a funnel, as we will see later in Example 1 of our computational experiments.
To be more specific, let us assume that both and are polynomials in . Choose time instants , denote as , and set for an interval . To optimize the volume of a discrete funnel, a linear cost is considered for the optimization [24]. Hence, values for which meets (2) at all time samples are found by solving the SOS program
subject to  (3)  
where and are real polynomials. Note that the constraint in (3) is bilinear in and , and thus an algorithm for solving (3) iteratively alternates between solving (3) for multipliers and with fixed , and solving (3) for with fixed and [22, 24]. The computations also require a feasible start (i.e. a valid initial funnel) as described in [24]. Lastly, we should also mention that iterations can be solved separately and independently for each time sample. In the case of iterations, one has the choice of either solving them as a whole, in one single SOS program, or splitting them into multiple SOS programs that must then be solved sequentially, backwards in time. In our experience, such splitting is necessary in higher dimensions.
Sum-of-squares programming, although convex, is computationally demanding, can encounter numerical problems, and scales poorly to high dimensions [10, 16]. In particular, an SOS polynomial constraint in (3) can be reformulated as [13]
(4) 
where is a vector of monomials up to degree , and is an unknown semidefinite matrix with elements , provided that the polynomial on the left-hand side is of degree [10]. Since two polynomials are equal only if all their coefficients are equal, constraint (4) can be replaced with equalities (coefficient matching conditions [10]) and one semidefinite matrix constraint . Hence, the states are removed from the optimization, but a new semidefinite matrix variable is introduced, which causes the aforementioned scalability issues of SOS programming [10].

4 Constructing using numerical optimization
In this section, we describe a funnel computation algorithm that avoids the use of costly SOS programming. We propose the use of falsifiers based on numerical optimization to solve the optimization of and to leave potential formal verification to the end.
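To make the dimension argument of the previous section concrete: the Gram matrix in an SOS certificate is indexed by all monomials up to half the polynomial degree, so its side length, and with it the number of semidefinite variables, grows quickly with the state dimension, whereas the NLPs below keep the original dimension. A small illustrative computation:

```python
from math import comb

def gram_side(n, d):
    """Side length of the Gram matrix for an SOS certificate of a degree-2d
    polynomial in n variables: the number of monomials of degree <= d."""
    return comb(n + d, d)

# Free entries of the symmetric semidefinite matrix vs. the NLP dimension n.
for n in (2, 4, 8, 12):
    N = gram_side(n, 2)               # quadratic monomial vector, degree-4 SOS
    print(n, N, N * (N + 1) // 2)     # e.g. n = 12 already gives a 91 x 91 matrix
```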
Our algorithm samples the constructed funnels in time and proceeds backwards. Let us choose time instants and, for a given , find a value such that is as large as possible. This NLP can be solved efficiently for and quadratic using semidefinite programming. Next, we compute the samples , which we denote by . Finally, we assume an interpolation between the samples and check condition (2) for the interpolated funnel.
Three NLPs are solved for each time sample. The first two, NLPs (6) and (5), provide the time-sampled optimal funnel (in terms of volume). The final one, (8), checks condition (2), which would be used for formal verification of the interpolated funnel. The algorithm shrinks the funnel if a counterexample to condition (2) is found, and accepts the sampled value otherwise.
Let us describe the algorithm more closely. Assume that we already determined the optimal value of . To determine the optimal value of , consider the NLP that seeks a point with smallest possible value for which the system leaves after evolving from to :
subject to  (5)  
Unfortunately, NLP (5) is nonconvex, and thus a local NLP solver can solve it only approximately. Therefore, to reliably accept a certain value , more needs to be done. The first step is another NLP
subject to  (6)  
that checks whether the current estimate admits a counterexample, a state that evolves outside of . If the found optimum is bigger than , we have found a counterexample, and hence we solve NLP (5), using the solution of NLP (6) as an initial feasible estimate. This solution gives us a new, smaller estimate for . If the found optimum is not bigger than , we cannot draw a definite conclusion, since NLP (6) is again nonconvex. Hence, we increase the trust in the current estimate by repeatedly solving NLP (6) from random initial points until no further counterexample is found within a certain number of subsequent iterations.
The use of NLP (6) has two major advantages over merely iterating NLP (5) from random initial points. First, NLP (6) directly checks for the existence of a counterexample, which in our experience makes it more efficient for this purpose. Second, the result of NLP (6) provides a much more useful starting point for NLP (5) than random starting points.
To enforce termination of the loop between NLPs (6) and (5), we update as , where is the found numerical solution of (5) and . To see that the loop terminates after finitely many iterations, note that, by the continuity of solutions of ordinary differential equations with respect to their initial conditions [5] and the positive definiteness of , a small enough value must exist for which no counterexample exists. It should also be noted that, in general, is not available in explicit form, and hence we must approximate it using numerical integration.
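The loop between the two NLPs can be sketched as follows. A penalized multistart Nelder-Mead search stands in for the nonconvex NLPs, and a plain multiplicative shrink stands in for the paper's update rule; the toy flow, the quadratic V, and all parameter names are assumptions made for illustration only:

```python
import numpy as np
from scipy.optimize import minimize

def falsify_and_shrink(rho_next, rho_init, flow, V, dim,
                       n_restarts=20, gamma=0.9, seed=0):
    """Analogue of iterating NLPs (5)-(6): a multistart local search looks for
    a counterexample x with V(x) <= rho whose image violates V(flow(x)) <=
    rho_next; on success rho is shrunk by gamma, otherwise rho is accepted."""
    rng = np.random.default_rng(seed)
    rho = rho_init
    while True:
        worst = 0.0
        for _ in range(n_restarts):
            x0 = rng.standard_normal(dim)
            x0 *= np.sqrt(rho) / np.linalg.norm(x0)   # start on the level set
            # maximize V(flow(x)) subject to V(x) <= rho via a quadratic penalty
            obj = lambda x: -V(flow(x)) + 1e6 * max(0.0, V(x) - rho) ** 2
            res = minimize(obj, x0, method="Nelder-Mead")
            worst = max(worst, V(flow(res.x)))
        if worst <= rho_next:
            return rho        # no counterexample found: accept the estimate
        rho *= gamma          # counterexample found: shrink and retry

# Toy check: for xdot = -x over dt = 0.1 with V(x) = |x|^2, the true optimum
# is rho = rho_next * exp(0.2); the loop stops at the first gamma-step below it.
rho = falsify_and_shrink(1.0, 2.0, lambda x: x * np.exp(-0.1),
                         lambda x: float(x @ x), dim=2)
```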
After the iteration between NLPs (6) and (5) has finished, we try to extend the funnel from to the whole time interval . For this, we use linear interpolation between and . Based on this, we would have to check condition (2) for all and all . However, as mentioned in [22, 24], this is not convenient to check due to the dependency on time . It is computationally far more efficient (both for the SOS relaxation and for our approach) to simply sample the time interval and check the condition discretely. Moreover, continuity arguments [24] show that, provided the sampling is fine enough, no counterexamples to condition (2) can exist.
Assume a sampling of interval , and denote by the linear interpolation between and used in [22, 24]. Then , and for , and all , we must ensure
(7) 
If is bounded from above, we can guarantee that these conditions can be met by choosing , and potentially also , small enough, making the right-hand side of the inequality arbitrarily large. Thus, the resulting algorithm again succeeds in finitely many iterations regardless of our choice of , though the time steps can vary during the run. If we ignore the fact that the time step must in general be variable to guarantee the existence of a funnel, we need to employ a line-search strategy on to obtain an optimal funnel that meets (7) with respect to a fixed time step .
We can check condition (7) by numerical optimization, namely by solving an NLP
subject to  (8)  
which is again a nonconvex problem. We again ensure the reliability of the check by solving the NLP repeatedly from random initial points until no more counterexamples appear. If a counterexample is found, we reduce using a multiplier .
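A discrete stand-in for this check, with a piecewise-linear interpolation of the level values and a time-invariant quadratic V, can be sketched as follows. The sampled-point interface and all names are assumptions; NLP (8) would instead optimize over the level set rather than testing a fixed list of points:

```python
import numpy as np

def sampled_condition_check(f, V_grad, ts, rhos, level_set_samples):
    """Check a discrete analogue of condition (7): at every sample time t_k and
    every provided point x on the corresponding level set, the derivative of V
    along the dynamics must not exceed the slope of the piecewise-linear rho."""
    for k in range(len(ts) - 1):
        rho_dot = (rhos[k + 1] - rhos[k]) / (ts[k + 1] - ts[k])
        for x in level_set_samples[k]:
            v_dot = float(np.dot(V_grad(x), f(ts[k], x)))
            if v_dot > rho_dot:
                return False, (ts[k], x)   # counterexample: shrink the funnel
    return True, None

# Toy check with V(x) = |x|^2 (so V_grad(x) = 2x) and stable dynamics xdot = -x:
ts, rhos = [0.0, 0.5, 1.0], [1.0, 1.0, 1.0]
pts = [[np.array([1.0, 0.0]), np.array([0.0, 1.0])]] * 2
ok, _ = sampled_condition_check(lambda t, x: -x, lambda x: 2 * x, ts, rhos, pts)
```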
Algorithm 1 summarizes the whole procedure. We estimate the initial value of as , where is the previously computed value. Note that should be chosen large enough to ensure that the first estimate always contains a counterexample, so that the first estimate is always an upper bound on the optimal funnel size.
5 Computational Experiments
In this section, we discuss the results of computational experiments with the method from the previous section. The implementation was done in MATLAB R2017b and run on a PC with an Intel Core i7-10700K (3.8 GHz) and 32 GB of RAM. We compare the method described in the previous section with the SOS method described in [24].
The NLP solver used for our method and for the generation of reference trajectories was implemented in CasADi [2] with Ipopt [26] as the internal NLP solver. The SOS method was implemented in YALMIP [7] with the SDP solvers SDPT3 [25] and SeDuMi [19].
5.1 Example 1: Inverted pendulum
Table 1: comparison of the falsifier-based and SOS-based methods on the inverted pendulum; the falsifier-based method needed 3, 4, 4, and 4 iterations for the four settings (remaining table entries omitted).
We start with a simple two-dimensional problem, an inverted pendulum, and continue with more involved examples later. The dynamics of the inverted pendulum are
(9) 
where we set . Assume the task of steering the inverted pendulum to its unstable equilibrium and stabilizing it there. First, we computed a stabilizing reference trajectory of length with step ; see Figure 0(a).
Next, we constructed an LQR tracking controller for the interpolated discrete trajectory (piecewise cubic in the states and piecewise linear in the control) by solving the Riccati equation for with a final cost-to-go matrix , using the RKF45 integrator with a maximum step and cubic interpolation; this yielded matrices for . We set as our target set.
Finally, we compute funnels. For the SOS method, we approximated the non-polynomial dynamics with Taylor polynomials of order 2 and set to be quadratic. We terminated the SOS algorithm once the total sum increased by less than between two subsequent iterations. In our method, we set , and , and the iteration bounds and . We used just one sample for each interval ( for the interval ) in evaluating derivatives for both methods.
The results can be seen in Figure 1(a) and Table 1. We can immediately see that our falsifier-based method is significantly faster. Moreover, notice that the SOS funnels are larger for the last few time intervals with time steps , which could be considered unexpected, since the SOS method should be the more conservative one due to its built-in SOS relaxation. These values, however, do not correspond to the actual funnel and are hence incorrect. The overestimation occurred because the derivatives were checked too sparsely in this example. The falsifier-based funnels do not suffer from this overestimation here, since the funnel sizes are also estimated from above by numerical integration, not just by the derivatives alone.
5.2 Example 2: Quadcopter
Let us consider a twelve-dimensional problem. We assume the quadcopter model [11]
(10)  
for and the trajectory from Figure 0(b). We again constructed an LQR tracking controller for the interpolated discrete trajectory (piecewise cubic in the states and piecewise linear in the control) by solving the Riccati equation for and with a final cost-to-go matrix , using the RKF45 integrator with a maximum step and cubic interpolation; this yielded matrices for . We set as our target set.
For the SOS method, we again approximated the non-polynomial dynamics with Taylor polynomials of order 2 and set to be quadratic. We terminated the SOS algorithm once the total sum increased by less than between two subsequent iterations. In our method, we again set , , and . We again considered just one sample for each interval ( for the interval ) in evaluating derivatives for both methods.
The results can be seen in Figure 1(b) and Table 2. The SOS-based funnels are overall slightly smaller, as expected. However, the first values at , which are arguably the most important, are between – smaller. In addition, the SOS method is significantly slower here ( – ). This difference is far more pronounced than in the low-dimensional inverted pendulum example, due to the poorer scalability of SOS programming in the problem dimension [10].
Table 2: comparison on the quadcopter example (subcolumn headers were not recoverable; the left block is the falsifier-based method with its iteration counts, the right block the SOS-based method).

Falsifier based              SOS based
         iter
0.901    3      219    141   0.845
1.075    3      444    282   0.913
1.224    3      931    637   1.040
1.310    3     2651   2232   1.128
5.3 Example 3: Pendulum revisited
Let us return to the pendulum example, where we explore our method on problems of higher dimension, parametric in . We assume a model of an n-link pendulum with , and we set the other parameters (all weights and lengths) to . The derivation of the equations of motion can be found in [8]. The equations can be written in manipulator form
(11) 
where we assume that is a control input. Next, we construct an LQR controller for a slightly simpler model
(12) 
linearized around the pendulum-upwards equilibrium . Consequently, we get a nonlinear stabilizing controller for the n-link pendulum
where
is an identity matrix and
is the gain matrix of the LQR controller.

We compute funnels for the stabilization of the n-link pendulum with the derived controller for and . We set a target
where , and is a cost-to-go matrix of the LQR controller, and where is chosen in such a way that the volume of is the same as the volume of a hypersphere with radius in dimensions. Notice that the whole problem is time-invariant, since the chosen system trajectory around which we construct the funnel is constant, as is the shape .
For our method, we again set , . Moreover, we used and . We again considered just one sample for each interval ( for the interval ) in evaluating derivatives.
We computed funnels of length with a time step for , i.e. for state dimensions up to . For comparison, we tested our method on both the original n-link pendulum model and its linearization. The results can be seen in Table 3. As the results show, the funnels were successfully computed for all . However, the required computational time increases steadily for the original model, by approximately one third for each added link. This is mostly because the system dynamics become more and more complex with each added link, which steadily increases the computational time required to evaluate the system dynamics and its first- and second-order derivatives.
It should be noted, however, that the computational time remained much more reasonable for the linearized model, where this increase in complexity naturally does not occur: it increased only roughly threefold from one link to twenty links. This shows that our method can work reasonably well even in high dimensions, provided that the model dynamics are not too complicated. To illustrate the comparison to the SOS method, we computed one iteration of the SOS method on the linearized examples. As can be seen in Table 3, its computational time increases dramatically even for the linearized model and becomes impractical with about links.
Table 3: time required for the first iteration of the SOS method on the linearized model, and the time required for the falsifier-based method on the original model and its linearization, for n = 1, …, 20 (columns: Original, Linear, SOS Linear; the timing values themselves are omitted here).
For the linearized model, we can compare the computed values of with the true optimal values. These can be computed directly for linear systems with an ellipsoidal funnel using the state transition matrix. Assume a linear system and an ellipsoid . Using an affine mapping with the matrix , this ellipsoid is transformed into the ellipsoid
after time . Hence, the optimal value for a funnel with a given shape in time that ends in the ellipsoid is the ellipsoid of maximal value for which
We computed the optimal values of with a time step , iteratively backwards in time. A comparison of the resulting values of can be seen in Table 4. The values computed for the linearized model are slightly lower than the optimal ones, and the difference grows with the number of links. This underestimation of the funnels is largely caused by the derivative check (8), which assumes a piecewise-linear , which the optimal is not. If the derivative check is skipped, the results of our method and the optimal values are nearly identical.
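The backwards computation via the state transition matrix can be sketched as follows for a linear system with a fixed ellipsoidal shape (function and variable names are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import expm, cholesky

def optimal_rho_backwards(A, P, rho_T, dt, n_steps):
    """For xdot = A x with shape V(x) = x^T P x, compute the largest levels
    rho(t_k) backwards from rho(T) = rho_T: with Phi = expm(A dt), requiring
    Phi E(P, rho_k) to lie within E(P, rho_{k+1}) gives rho_k = rho_{k+1}/mu,
    where mu is the worst-case one-step growth of V over the unit level set."""
    Phi = expm(A * dt)
    U = cholesky(P)                              # P = U^T U (upper triangular)
    M = np.linalg.solve(U.T, Phi.T @ P @ Phi) @ np.linalg.inv(U)
    mu = np.linalg.eigvalsh((M + M.T) / 2).max()
    rhos = [rho_T]
    for _ in range(n_steps):
        rhos.append(rhos[-1] / mu)
    return rhos[::-1]                            # rho(t_0), ..., rho(T)

# Toy check: for A = -I the flow contracts V by exp(-2 dt) per step, so the
# optimal level grows backwards by exp(2 dt) per step.
rhos = optimal_rho_backwards(-np.eye(2), np.eye(2), 1.0, 0.1, 5)
```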
Table 4: the computed value for the original model and its linearization (with and without the derivative check, DC), and the optimal value for the linearized model computed via the state transition matrix, for n = 1, …, 20 (columns: Original; Linear, DC; Linear, no DC; Linear, optimal; the values themselves are omitted here).
6 Conclusion
In this paper, we presented an algorithm that computes funnels along trajectories of systems of ordinary differential equations. Compared to related work based on SOS programming, the algorithm computed larger funnels in less time in our computational experiments. The algorithm does not itself perform formal verification, but its result can be verified a posteriori using a well-known palette of verification techniques that includes, in addition to SOS programming, computer algebra [4] and interval computation [14].
References
 [1] (2017) Modelling and control strategies in path tracking control for autonomous ground vehicles: a review of state of the art and challenges. Journal of Intelligent & Robotic Systems 86 (2), pp. 225–254. Cited by: §1.
 [2] (2018) CasADi: a software framework for nonlinear optimization and optimal control. Mathematical Programming Computation 11 (1). Cited by: §5.
 [3] (2019) Neural Lyapunov control. In Advances in Neural Information Processing Systems, Vol. 32, pp. 3245–3254. Cited by: §1.
 [4] (1991) Partial cylindrical algebraic decomposition for quantifier elimination. Journal of Symbolic Computation 12, pp. 299–328. Cited by: §6.
 [5] (2002) Nonlinear systems. 3rd edition, Prentice Hall. Cited by: §3, §4.
 [6] (2011) Calculus of variations and optimal control theory: a concise introduction. Princeton University Press, New Jersey. Cited by: §3.
 [7] (2004) YALMIP: a toolbox for modeling and optimization in MATLAB. In Proceedings of the CACSD Conference, Taipei, Taiwan. Cited by: §5.
 [8] (2015) Dynamics of the n-link pendulum: a fractional perspective. International Journal of Control 90 (6), pp. 1192–1200. Cited by: §5.3.
 [9] (2013) Control design along trajectories with sums of squares programming. IEEE International Conference on Robotics and Automation, pp. 4054–4061. Cited by: §1.

 [10] (2020) Recent scalability improvements for semidefinite programming with applications in machine learning, control, and robotics. Annual Review of Control, Robotics, and Autonomous Systems 3 (1), pp. 331–360. Cited by: §1, §1, §3, §5.2.
 [11] (2017) Funnel libraries for real-time robust feedback motion planning. The International Journal of Robotics Research 36 (8), pp. 947–982. Cited by: §1, §5.2.
 [12] (2012) Control synthesis and verification for a perching UAV using LQR-trees. In 2012 IEEE 51st IEEE Conference on Decision and Control (CDC), pp. 3707–3714. Cited by: §1.
 [13] (2003) Semidefinite programming relaxations for semialgebraic problems. Mathematical Programming 96, pp. 293–320. Cited by: §3.
 [14] (2006) Efficient solving of quantified inequality constraints over the real numbers. ACM Transactions on Computational Logic 7 (4), pp. 723–748. Cited by: §6.
 [15] (2018) Path-following through control funnel functions. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 401–408. Cited by: §1.
 [16] (2019) Learning control Lyapunov functions from counterexamples and demonstrations. Autonomous Robots 43 (2), pp. 275–307. Cited by: §1, §3.

 [17] (2018) The Lyapunov neural network: adaptive stability certification for safe learning of dynamical systems. In Proceedings of The 2nd Conference on Robot Learning, Proceedings of Machine Learning Research, vol. 87, pp. 466–476. Cited by: §1.
 [18] (2020) A survey of path following control strategies for UAVs focused on quadrotors. Journal of Intelligent & Robotic Systems 98 (2), pp. 241–265. Cited by: §1.
 [19] (1999) Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods and Software 11–12, pp. 625–653. Note: Version 1.05 available from http://fewcal.kub.nl/sturm Cited by: §5.
 [20] (2004) Searching for control Lyapunov functions using sums of squares programming. Allerton conference on communication, control and computing, pp. 210–219. Cited by: §1.
 [21] (2017) Invariant funnels for underactuated dynamic walking robots: new phase variable and experimental validation. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pp. 3497–3504. Cited by: §1.
 [22] (2010) LQR-trees: feedback motion planning via sums-of-squares verification. The International Journal of Robotics Research 29 (8), pp. 1038–1052. Cited by: §1, §1, §2, §3, §3, §3, §4, §4.
 [23] (2018) Control strategies for soft robotic manipulators: a survey. Soft Robotics 5 (2), pp. 149–163. Cited by: §1.
 [24] (2011) Invariant funnels around trajectories using sum-of-squares programming. IFAC Proceedings Volumes 44 (1), pp. 9218–9223. Cited by: §1, §1, §2, §2, §3, §3, §3, §4, §4, §5.
 [25] (1999) SDPT3—a Matlab software package for semidefinite programming. Optimization Methods and Software 11, pp. 545–581. Cited by: §5.
 [26] (2006) On the implementation of a primal-dual interior-point filter line-search algorithm for large-scale nonlinear programming. Mathematical Programming 106 (1), pp. 25–57. Cited by: §5.
 [27] (2017) Motion planning with invariant set trees. In 2017 IEEE Conference on Control Technology and Applications (CCTA), pp. 1625–1630. Cited by: §1.