A piece-wise polynomial representation of the trajectory is adopted. Any segment of the trajectory can be denoted as an $N$-order polynomial
$$p(t) = \mathbf{c}^{\mathrm{T}}\beta(t), \quad t \in [0, T],$$
where $\mathbf{c} \in \mathbb{R}^{(N+1)\times 3}$ is the coefficient matrix, $T$ is the duration and
$$\beta(t) = (1, t, t^2, \dots, t^N)^{\mathrm{T}}$$
is a basis function. It is worth noting that $N$ should be an odd number hereafter, which makes the mapping bijective between the coefficient matrix and its boundary condition.
Consider derivatives of $p(t)$ up to order $s = (N-1)/2$:
$$\mathbf{d}(t) = \left(p(t),\, \dot{p}(t),\, \dots,\, p^{(s)}(t)\right)^{\mathrm{T}} \in \mathbb{R}^{(s+1)\times 3}.$$
We denote $\mathbf{d}(0)$ and $\mathbf{d}(T)$ by $\mathbf{d}_0$ and $\mathbf{d}_T$, respectively. The boundary condition is described by the tuple $(\mathbf{d}_0, \mathbf{d}_T)$. The following mapping holds:
$$\begin{pmatrix}\mathbf{d}_0\\ \mathbf{d}_T\end{pmatrix} = \mathbf{A}(T)\,\mathbf{c}, \tag{6}$$
where $\mathbf{A}(T) \in \mathbb{R}^{(N+1)\times(N+1)}$ is the mapping matrix obtained by stacking $\beta^{(k)}(0)^{\mathrm{T}}$ and $\beta^{(k)}(T)^{\mathrm{T}}$ for $k = 0, \dots, s$. Since $N$ is an odd number, it is easy to know that $\mathbf{A}(T)$ is a non-singular square matrix. In other words, the mapping in Eq. 6 is bijective. Therefore, any segment of a trajectory can be equivalently expressed by the tuple $(\mathbf{c}, T)$ or the tuple $(\mathbf{d}_0, \mathbf{d}_T, T)$.
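To make the bijection concrete, the following minimal sketch (assuming the monomial basis above; the function names are illustrative, not from the paper) builds the mapping matrix by stacking basis derivatives of order $0, \dots, (N-1)/2$ at $t = 0$ and $t = T$, checks its non-singularity for an odd $N$, and recovers the coefficient matrix from a boundary condition:

```python
import math

import numpy as np

def beta_deriv(N, k, t):
    """k-th derivative of the monomial basis (1, t, ..., t^N) as a row vector."""
    row = np.zeros(N + 1)
    for j in range(k, N + 1):
        row[j] = math.factorial(j) / math.factorial(j - k) * t ** (j - k)
    return row

def mapping_matrix(N, T):
    """A(T): stacks beta^{(k)} at t = 0 and t = T for k = 0..(N-1)/2 (N odd)."""
    s = (N - 1) // 2
    rows = [beta_deriv(N, k, 0.0) for k in range(s + 1)]
    rows += [beta_deriv(N, k, T) for k in range(s + 1)]
    return np.array(rows)

N, T = 5, 2.0
A = mapping_matrix(N, T)
print(abs(np.linalg.det(A)) > 1e-9)   # True: A(T) is non-singular

# Bijection: pick a boundary condition (d0; dT) and recover the coefficients.
rng = np.random.default_rng(0)
d = rng.standard_normal((N + 1, 3))   # stacked (d0; dT) for a 3-D trajectory
c = np.linalg.solve(A, d)             # coefficient matrix of the segment
print(np.allclose(A @ c, d))          # True: boundary condition is reproduced
```

Note that for an even $N$ the stacked matrix could not even be square with the same derivative orders imposed at both ends, which is the intuition behind requiring $N$ odd.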
Consequently, we consider an $M$-segment trajectory parametrized by its time allocation $\mathbf{T} = (T_1, \dots, T_M)^{\mathrm{T}}$ as well as the boundary conditions of all segments. The trajectory is defined by
$$p(t) = p_i(t - t_{i-1}), \quad t \in [t_{i-1}, t_i],\ t_i = \sum_{j=1}^{i} T_j,\ t_0 = 0,$$
where $t$ lies in the $i$-th segment and $(\mathbf{d}_{i-1}, \mathbf{d}_i)$ is the boundary condition of the $i$-th segment. Normally, some entries in $\mathbf{D} = (\mathbf{d}_0, \mathbf{d}_1, \dots, \mathbf{d}_M)$ are fixed while the others are to be optimized. We split $\mathbf{D}$ into two parts, the fixed part $\mathbf{D}_F$, which is viewed as constant, and the free part $\mathbf{D}_P$, which is to be optimized. Then, the whole trajectory can be fully determined by the tuple $(\mathbf{D}_P, \mathbf{T})$.
2 Optimization Objective
The following time-regularized quadratic objective function is used:
$$J(\mathbf{D}_P, \mathbf{T}) = \int_{0}^{t_M} \sum_{d=d_l}^{d_h} w_d \left\|p^{(d)}(t)\right\|^2 \mathrm{d}t + \rho\,\|\mathbf{T}\|_1, \tag{10}$$
where $d_l$ and $d_h$ are the lowest and the highest orders of derivative to be penalized, respectively, $w_d$ is the weight of the $d$-order derivative and $\rho$ is the weight of the time regularization. When $d_h > (N-1)/2$, some derivatives on the boundary of each segment may not exist, hence we sum up the objectives on all segments instead, which have the form
$$J_i = \mathrm{Tr}\!\left(\mathbf{c}_i^{\mathrm{T}} Q(T_i)\,\mathbf{c}_i\right) + \rho\,T_i \tag{11}$$
for the $i$-th segment, where $Q(T_i)$ is a symmetric matrix consisting of high powers of $T_i$, and $\mathrm{Tr}(\cdot)$ is the trace operation. The overall objective is formulated as
$$J(\mathbf{D}_P, \mathbf{T}) = \mathrm{Tr}\!\left(\mathbf{C}^{\mathrm{T}}\,\mathbf{Q}(\mathbf{T})\,\mathbf{C}\right) + \rho\,\|\mathbf{T}\|_1, \tag{12}$$
where $\mathbf{C} = (\mathbf{c}_1^{\mathrm{T}}, \dots, \mathbf{c}_M^{\mathrm{T}})^{\mathrm{T}}$, $\mathbf{Q}(\mathbf{T}) = \mathrm{diag}(Q(T_1), \dots, Q(T_M))$ is the direct sum of its diagonal blocks, and a permutation matrix rearranges the stacked boundary conditions into the fixed part $\mathbf{D}_F$ and the free part $\mathbf{D}_P$.
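As a concrete instance of the segment cost, the following sketch (hypothetical helper names; a single penalized derivative order $d$ with $w_d = 1$ for brevity) builds $Q(T)$ from the closed-form integral $\int_0^{T}\beta^{(d)}(t)\,\beta^{(d)}(t)^{\mathrm{T}}\mathrm{d}t$, whose nonzero entries are indeed high powers of $T$, and cross-checks $\mathrm{Tr}(\mathbf{c}^{\mathrm{T}} Q\,\mathbf{c})$ against numerical quadrature:

```python
import math

import numpy as np

def q_matrix(N, d, T):
    """Closed form of ∫_0^T beta^{(d)}(t) beta^{(d)}(t)^T dt for the monomial basis."""
    Q = np.zeros((N + 1, N + 1))
    for j in range(d, N + 1):
        for k in range(d, N + 1):
            cj = math.factorial(j) / math.factorial(j - d)
            ck = math.factorial(k) / math.factorial(k - d)
            power = j + k - 2 * d + 1
            Q[j, k] = cj * ck * T ** power / power
    return Q

N, d, T = 5, 3, 1.5                      # e.g. penalize jerk of a quintic segment
rng = np.random.default_rng(1)
c = rng.standard_normal((N + 1, 3))      # coefficient matrix of the segment
Q = q_matrix(N, d, T)
cost = np.trace(c.T @ Q @ c)             # quadratic part of the segment cost

# Cross-check by trapezoidal integration of ||p^{(d)}(t)||^2 on a fine grid.
ts = np.linspace(0.0, T, 20001)
beta_d = np.stack([[math.factorial(j) / math.factorial(j - d) * t ** (j - d)
                    if j >= d else 0.0 for j in range(N + 1)] for t in ts])
integrand = np.sum((beta_d @ c) ** 2, axis=1)
numeric = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(ts)))
print(np.isclose(cost, numeric, rtol=1e-4))   # True
```

The full segment cost then adds the time-regularization term $\rho\,T_i$ on top of the quadratic part.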
In Eq. 12, $d_l$, $d_h$, $w_d$ and $\rho$ are all parameters that directly determine the structure of $J$. It is important to know that not all settings of these parameters are legal. Instead of restricting the parameters directly, we make the following assumption on the objective function such that the setting is meaningful.

Assumption 1. For any finite $\alpha > 0$, the corresponding $\alpha$-sublevel set of $J$,
$$\mathcal{L}_\alpha = \left\{(\mathbf{D}_P, \mathbf{T}) \;\middle|\; J(\mathbf{D}_P, \mathbf{T}) \le \alpha,\ \mathbf{T} > \mathbf{0}\right\},$$
is bounded and satisfies
$$\inf_{(\mathbf{D}_P, \mathbf{T}) \in \mathcal{L}_\alpha}\ \min_{1 \le i \le M} T_i > 0. \tag{15}$$
Intuitively, Assumption 1 forbids the objective from taking a meaningful value when the decision variables are extremely large or any duration is extremely small. For example, consecutive repeating waypoints with identical boundary conditions fixed in $\mathbf{D}_F$ are illegal, because the optimal duration on the corresponding segment becomes $0$, which violates condition (15). In other words, the segment should not exist if the objective is to be minimized. Another example is that $\rho \le 0$ is also illegal. A non-positive weight on the total duration means that the objective can be made sufficiently low by taking the duration on each segment large enough. In such a case, the boundedness condition is violated.
3 Unconstrained Optimization Algorithm
To optimize Eq. 12, Algorithm 1 is proposed. Initially, $\mathbf{D}_P$ is solved for any provided $\mathbf{T}$. After that, the minimization of the objective function is done through a two-phase process, in which only one of $\mathbf{D}_P$ and $\mathbf{T}$ is optimized while the other is fixed.
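The two-phase loop can be sketched on a deliberately simplified toy cost (not the actual trajectory objective; a scalar free boundary value $d$ shared by two segments with fixed endpoints $w_0$, $w_2$, and all constants illustrative), where each phase is an exact closed-form minimization, so the objective decreases monotonically:

```python
# Toy instance of the alternating scheme:
#   J(d, T1, T2) = ((d - w0)^2 + 1)/T1^3 + ((d - w2)^2 + 1)/T2^3 + rho*(T1 + T2)
w0, w2, rho = 0.0, 4.0, 1.0

def J(d, T1, T2):
    return ((d - w0)**2 + 1) / T1**3 + ((d - w2)**2 + 1) / T2**3 + rho * (T1 + T2)

d, T1, T2 = 1.0, 1.0, 1.0
history = [J(d, T1, T2)]
for _ in range(50):
    # Phase 1: exact minimization over the free boundary value (quadratic in d).
    d = (w0 / T1**3 + w2 / T2**3) / (1 / T1**3 + 1 / T2**3)
    # Phase 2: exact per-segment minimization over durations:
    #   dJ/dT_i = -3*c_i/T_i^4 + rho = 0  =>  T_i = (3*c_i/rho)^(1/4).
    T1 = (3 * ((d - w0)**2 + 1) / rho) ** 0.25
    T2 = (3 * ((d - w2)**2 + 1) / rho) ** 0.25
    history.append(J(d, T1, T2))

print(all(b <= a + 1e-12 for a, b in zip(history, history[1:])))  # True: monotone
```

In the actual algorithm the two closed-form updates are replaced by the unconstrained QP of phase one and the per-segment polynomial root finding of phase two.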
In the first phase, the sub-problem
$$\min_{\mathbf{D}_P} J(\mathbf{D}_P, \mathbf{T}) \tag{16}$$
is solved for each fixed $\mathbf{T}$. We employ the unconstrained QP formulation by Richter et al., which we briefly introduce here. The matrix $\mathbf{R}$ of the quadratic form of Eq. 12 in $(\mathbf{D}_F, \mathbf{D}_P)$ is partitioned as
$$\mathbf{R} = \begin{pmatrix}\mathbf{R}_{FF} & \mathbf{R}_{FP}\\ \mathbf{R}_{PF} & \mathbf{R}_{PP}\end{pmatrix},$$
then the solution is obtained analytically through
$$\mathbf{D}_P^{*} = -\mathbf{R}_{PP}^{-1}\mathbf{R}_{FP}^{\mathrm{T}}\mathbf{D}_F.$$
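The analytic phase-one solve can be sketched as follows (illustrative sizes; a random symmetric positive definite matrix stands in for the actual cost matrix $\mathbf{R}$): partition $\mathbf{R}$, solve for $\mathbf{D}_P^{*}$, and verify that the partial gradient vanishes at the solution:

```python
import numpy as np

rng = np.random.default_rng(2)
n_f, n_p = 4, 6                               # sizes of fixed / free parts
n = n_f + n_p
M = rng.standard_normal((n, n))
R = M @ M.T + n * np.eye(n)                   # symmetric positive definite stand-in

R_ff, R_fp = R[:n_f, :n_f], R[:n_f, n_f:]
R_pp = R[n_f:, n_f:]

D_f = rng.standard_normal((n_f, 3))           # fixed boundary conditions
D_p = -np.linalg.solve(R_pp, R_fp.T @ D_f)    # analytic phase-one solution

def J(D):
    """Quadratic cost Tr(X^T R X) with X the stacked (fixed, free) variables."""
    X = np.vstack([D_f, D])
    return np.trace(X.T @ R @ X)

grad = 2.0 * (R_fp.T @ D_f + R_pp @ D_p)      # partial gradient w.r.t. free part
print(np.allclose(grad, 0.0))                 # True: stationary
print(J(D_p) < J(D_p + 0.1))                  # True: perturbation increases cost
```

Since $\mathbf{R}_{PP}$ is positive definite here, the stationary point is the unique minimizer over the free part.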
In the second phase, the sub-problem
$$\min_{\mathbf{T}} J(\mathbf{D}_P, \mathbf{T}) \tag{19}$$
is solved for each fixed $\mathbf{D}_P$. In this phase, the scale of the sub-problem can be reduced to a single segment. Due to our representation of the trajectory, once $\mathbf{D}$ is fixed, the boundary conditions isolate each entry of $\mathbf{T}$ from the others. Therefore, each $T_i$ can be optimized individually to get all entries of $\mathbf{T}$. As for the $i$-th segment, its cost in (11) is indeed a rational function of $T_i$. We show the structure of $J_i$ and omit the trivial deduction for brevity:
$$J_i(T_i) = \frac{\sum_{k=0}^{n} a_k T_i^{k}}{T_i^{m}}, \tag{20}$$
where $n$ and $m$ are the orders of the numerator and the denominator, respectively. The coefficients $a_k$ are determined by the boundary condition $(\mathbf{d}_{i-1}, \mathbf{d}_i)$ and the weights of the objective. It is clear that $J_i$ is smooth on $T_i \in (0, +\infty)$. Due to the positiveness of $\rho$, we have $J_i \to +\infty$ as $T_i \to 0^{+}$ or $T_i \to +\infty$. Therefore, the minimizer $T_i^{*}$ exists for $J_i$.
To find all candidates, we compute the derivative of (20):
$$\frac{\mathrm{d}J_i}{\mathrm{d}T_i} = \frac{\sum_{k=0}^{n}(k - m)\,a_k T_i^{k}}{T_i^{m+1}}.$$
The minimum $T_i^{*}$ exists in the solution set of $\mathrm{d}J_i/\mathrm{d}T_i = 0$, whose numerator is a univariate polynomial, so it can be calculated through any modern univariate polynomial real-roots solver. The second phase is completed by updating every entry in $\mathbf{T}$.
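A sketch of this per-segment time optimization with made-up coefficients (standing in for the $a_k$ induced by a boundary condition; a production implementation would use a certified real-roots solver such as the one cited, while `numpy.roots` is used here only for illustration):

```python
import numpy as np

# J(T) = (a0 + a1*T + a2*T^2 + a4*T^4) / T^3, with a4 playing the role of rho.
a = np.array([2.0, 1.0, 3.0, 0.0, 0.5])      # a_k for k = 0..4, low-to-high
m = 3

def J(T):
    return sum(ak * T**k for k, ak in enumerate(a)) / T**m

# dJ/dT = (sum_k (k - m) a_k T^k) / T^(m+1); find roots of the numerator.
num = np.array([(k - m) * ak for k, ak in enumerate(a)])   # low-to-high order
roots = np.roots(num[::-1])                                # np.roots wants high-to-low
cands = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]
T_star = min(cands, key=J)                                 # best positive critical point

# Sanity check against a dense grid search over (0, 20].
grid = np.linspace(1e-3, 20.0, 200001)
print(J(T_star) <= np.min(J(grid)) + 1e-6)                 # True
```

Because $J_i \to +\infty$ at both ends of $(0, +\infty)$, the minimizer is guaranteed to be among the positive real critical points collected above.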
4 Convergence Analysis
We first explore some basic properties of $J$, which help a lot in the convergence analysis of Algorithm 1. We have already shown that the segment costs $J_i$ are rational functions of each entry in $\mathbf{T}$. As for the $\mathbf{D}_P$ part, the objective is indeed partially convex, which is given by the following lemma.
Lemma 2. $J(\mathbf{D}_P, \mathbf{T})$ is convex in $\mathbf{D}_P$ for any fixed $\mathbf{T} > \mathbf{0}$, provided that Assumption 1 holds.

Assumption 1 implies that $\rho > 0$ and $w_d \ge 0$ hold for all $d_l \le d \le d_h$, and that at least one $w_d$ is nonzero. Otherwise, the boundedness of the sublevel set or the positiveness of its time allocation is violated. Thus, for any $(\mathbf{D}_P, \mathbf{T})$, the objective function is always positive, which can be seen from (10). The non-negativity of the quadratic part implies the positive semidefiniteness of the symmetric matrix $\mathbf{Q}(\mathbf{T})$, and hence of $\mathbf{R}$. Since $\mathbf{R}_{PP}$ is a principal submatrix of $\mathbf{R}$, it is also positive semidefinite. We compute the Hessian matrix of $J$ with respect to $\mathbf{D}_P$:
$$\nabla_{\mathbf{D}_P}^{2} J = 2\,\mathbf{R}_{PP},$$
which means the Hessian is positive semidefinite. Therefore, $J$ is convex in $\mathbf{D}_P$. ∎
Lemma 3. For any convex function $f$, if the following inequality holds for any $x$, $y$:
$$\left\|\nabla f(x) - \nabla f(y)\right\|_F \le L\left\|x - y\right\|_F,$$
in which $L$ is a constant and $\|\cdot\|_F$ is the Frobenius norm, then
$$f(y) \ge f(x) + \left\langle \nabla f(x),\, y - x\right\rangle + \frac{1}{2L}\left\|\nabla f(y) - \nabla f(x)\right\|_F^{2}.$$

See Theorem 2.1.5 in Nesterov's Lectures on Convex Optimization. ∎
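The lower bound in the lemma above can be sanity-checked numerically on a simple convex quadratic $f(X) = \mathrm{Tr}(X^{\mathrm{T}} Q X)$ with $Q \succeq 0$, whose gradient $2QX$ is Lipschitz with constant $L = 2\sigma_{\max}(Q)$ (all data illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
M = rng.standard_normal((n, n))
Q = M @ M.T                                   # symmetric positive semidefinite

f = lambda X: np.trace(X.T @ Q @ X)           # convex quadratic
grad = lambda X: 2.0 * Q @ X
L = 2.0 * np.linalg.norm(Q, 2)                # Lipschitz constant of the gradient

ok = True
for _ in range(100):
    X = rng.standard_normal((n, 3))
    Y = rng.standard_normal((n, 3))
    lhs = f(Y)
    rhs = (f(X) + np.sum(grad(X) * (Y - X))
           + np.linalg.norm(grad(Y) - grad(X), 'fro')**2 / (2.0 * L))
    ok = ok and lhs >= rhs - 1e-6             # Lemma 3 lower bound holds
print(ok)                                     # True
```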
The gradient of $J$ with respect to $\mathbf{D}_P$ can be calculated as
$$\nabla_{\mathbf{D}_P} J = 2\left(\mathbf{R}_{FP}^{\mathrm{T}}\mathbf{D}_F + \mathbf{R}_{PP}\mathbf{D}_P\right).$$
The difference in gradient at $\mathbf{D}_P^{1}$ and $\mathbf{D}_P^{2}$ is
$$\nabla_{\mathbf{D}_P} J(\mathbf{D}_P^{1}) - \nabla_{\mathbf{D}_P} J(\mathbf{D}_P^{2}) = 2\,\mathbf{R}_{PP}\left(\mathbf{D}_P^{1} - \mathbf{D}_P^{2}\right).$$
Assumption 1 ensures that $\mathbf{R}_{PP}$ is a nonzero matrix, which means it has a largest singular value $\sigma_{\max}(\mathbf{R}_{PP}) > 0$ for any $\mathbf{T}$. According to the basic property of the spectral norm, we have
$$\left\|\nabla_{\mathbf{D}_P} J(\mathbf{D}_P^{1}) - \nabla_{\mathbf{D}_P} J(\mathbf{D}_P^{2})\right\|_F \le 2\,\sigma_{\max}(\mathbf{R}_{PP})\left\|\mathbf{D}_P^{1} - \mathbf{D}_P^{2}\right\|_F.$$
It is clear that the objective function is non-increasing in any iteration, i.e., for any $k \ge 0$, we have
$$J^{k+1} \le J^{k},$$
where $J^{k}$ denotes the objective value at the $k$-th iteration. Moreover, the objective function is non-negative, which means $J^{k} \ge 0$ for any $k$. Therefore,
$$\sum_{k=0}^{K}\left(J^{k} - J^{k+1}\right) = J^{0} - J^{K+1} \le J^{0}.$$
Since $J$ is convex in $\mathbf{D}_P$ with Lipschitz continuous partial gradient, the following condition holds by Lemma 3:
$$J\!\left(\mathbf{D}_P^{k}, \mathbf{T}^{k}\right) - J\!\left(\mathbf{D}_P^{k+1}, \mathbf{T}^{k}\right) \ge \frac{1}{2L_k}\left\|\nabla_{\mathbf{D}_P} J\!\left(\mathbf{D}_P^{k}, \mathbf{T}^{k}\right)\right\|_F^{2},$$
where $L_k = 2\sigma_{\max}(\mathbf{R}_{PP}(\mathbf{T}^{k}))$ and $\nabla_{\mathbf{D}_P} J(\mathbf{D}_P^{k+1}, \mathbf{T}^{k}) = \mathbf{0}$ at the exact minimizer of the first phase. Notice that $J^{k+1} \le J(\mathbf{D}_P^{k+1}, \mathbf{T}^{k})$ in each iteration, then
$$J^{k} - J^{k+1} \ge \frac{1}{2L_k}\left\|\nabla_{\mathbf{D}_P} J^{k}\right\|_F^{2}.$$
We simply let $L = \sup_{k} L_k$, then the above inequality holds with $L$ for all $k$. According to Assumption 1, $\mathcal{L}_{J^0}$ is bounded and satisfies condition (15). Then there exist positive constants $T_{\min}$ and $T_{\max}$ such that $T_{\min} \le T_i^{k} \le T_{\max}$ always holds for every segment and iteration. Consequently, $\sigma_{\max}(\mathbf{R}_{PP}(\mathbf{T}^{k}))$ is also upper bounded by a positive constant, so $L$ is finite. We have
$$J^{k} - J^{k+1} \ge \frac{1}{2L}\left\|\nabla_{\mathbf{D}_P} J^{k}\right\|_F^{2}.$$
We sum it up for all iterations:
$$\frac{1}{2L}\sum_{k=0}^{K}\left\|\nabla_{\mathbf{D}_P} J^{k}\right\|_F^{2} \le \sum_{k=0}^{K}\left(J^{k} - J^{k+1}\right).$$
Since the right-hand side is bounded by $J^{0}$, we have
$$\sum_{k=0}^{K}\left\|\nabla_{\mathbf{D}_P} J^{k}\right\|_F^{2} \le 2L\,J^{0}.$$
Taking the minimum on the left-hand side gives
$$(K+1)\min_{0 \le k \le K}\left\|\nabla_{\mathbf{D}_P} J^{k}\right\|_F^{2} \le 2L\,J^{0}.$$
Rearranging gives the result. ∎
Theorem 1 shows that, under no assumption on convexity, Algorithm 1 shares the same $O(1/K)$ global convergence rate as gradient descent with the best step-size chosen in each iteration. However, the best step-size is practically unavailable. In contrast, Algorithm 1 does not involve any step-size selection in each iteration. Both sub-problems (Eq. 16 and Eq. 19) can be solved exactly and efficiently due to their algebraic convenience. Therefore, Algorithm 1 is faster than gradient-based methods in practice.
Although only convergence to a stationary point is guaranteed, strict saddle points are theoretically and numerically unstable for Algorithm 1, which is indeed a first-order method. Moreover, when the stationary point is a strict local minimum, we show that the convergence rate is faster than the general one given in Theorem 1.
Lemma 4. Consider a positive sequence $\{a_k\}_{k \ge 0}$ satisfying
$$a_{k+1} \le a_k - \frac{a_k^{2}}{c},$$
where $c > 0$ is a constant. Then for all $k \ge 1$,
$$a_k \le \frac{c}{k}.$$

Dividing both sides of the recurrence by $a_k a_{k+1}$ gives
$$\frac{1}{a_{k+1}} \ge \frac{1}{a_k} + \frac{a_k}{c\,a_{k+1}} \ge \frac{1}{a_k} + \frac{1}{c},$$
where the last step uses $a_{k+1} \le a_k$. Summing this inequality over the first $k$ iterations yields $1/a_k \ge 1/a_0 + k/c \ge k/c$. Rearranging gives the result. ∎
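The $c/k$ bound of Lemma 4 can be checked numerically by iterating the extremal case of the recurrence, $a_{k+1} = a_k - a_k^{2}/c$ (illustrative constants, with $a_0 < c$ so the sequence stays positive):

```python
c = 10.0
a = 2.0                                      # a_0 < c keeps the sequence positive
ok = True
for k in range(1, 10001):
    a = a - a * a / c                        # extremal case of the recurrence
    ok = ok and a > 0 and a <= c / k         # Lemma 4: a_k <= c/k
print(ok)                                    # True
```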
Define the neighborhood of a strict local minimum $\bar{x} = (\bar{\mathbf{D}}_P, \bar{\mathbf{T}})$ as
$$\mathcal{B}_r = \left\{x \;\middle|\; \|x - \bar{x}\|_F \le r\right\},$$
where $x = (\mathbf{D}_P, \mathbf{T})$ collects all decision variables. A strict local minimum satisfies $\nabla J(\bar{x}) = \mathbf{0}$ and $\nabla^{2} J(\bar{x}) \succ \mathbf{0}$, so there exists $r > 0$ such that $J$ is locally convex in the domain $\mathcal{B}_r$. Moreover, there exists a positive integer $K_0$ such that $x^{k} \in \mathcal{B}_r$ holds for all $k \ge K_0$, so we only consider $k \ge K_0$ hereafter. Due to the local convexity, we have
$$J(x^{k}) - J(\bar{x}) \le \left\langle \nabla J(x^{k}),\, x^{k} - \bar{x}\right\rangle.$$
By applying the Cauchy-Schwarz inequality on the right-hand side, we have
$$J(x^{k}) - J(\bar{x}) \le \left\|\nabla J(x^{k})\right\|_F \left\|x^{k} - \bar{x}\right\|_F.$$
Notice that the distance between $x^{k}$ and the local minimum is upper-bounded by $r$, thus
$$J(x^{k}) - J(\bar{x}) \le r\left\|\nabla J(x^{k})\right\|_F,$$
where $r$ is the upper bound of $\|x^{k} - \bar{x}\|_F$. Combining these two conditions with the descent inequality of Theorem 1, we get
$$\left(J^{k} - J(\bar{x})\right)^{2} \le 2Lr^{2}\left(J^{k} - J^{k+1}\right).$$
We apply Lemma 4 by defining
$$a_k = J^{K_0 + k} - J(\bar{x}), \qquad c = 2Lr^{2},$$
then the result follows. ∎
When Algorithm 1 converges to a strict local minimum, the above theorem shows that the local convergence rate is $O(1/k)$ in the objective value. Note that it is possible to accelerate our method to attain the optimal rate of first-order methods, or to use higher-order methods to achieve an even faster rate.
-  Adam Bry, Charles Richter, Abraham Bachrach, and Nicholas Roy. Aggressive flight of fixed-wing and quadrotor aircraft in dense indoor environments. The International Journal of Robotics Research, 34(7):969–1002, 2015.
-  Jason D. Lee, Ioannis Panageas, Georgios Piliouras, Max Simchowitz, Michael I. Jordan, and Benjamin Recht. First-order methods almost always avoid strict saddle points. Mathematical Programming, 176:311–337, 2019.
-  Yurii Nesterov. Lectures on Convex Optimization. Springer, 2018.
-  Michael Sagraloff and Kurt Mehlhorn. Computing real roots of real polynomials. Journal of Symbolic Computation, 73:46–86, 2016.