Detailed Proofs of Alternating Minimization Based Trajectory Generation for Quadrotor Aggressive Flight

02/21/2020, by Zhepei Wang, et al.

This technical report provides a detailed theoretical analysis of the algorithm used in Alternating Minimization Based Trajectory Generation for Quadrotor Aggressive Flight. An assumption is provided to ensure that the settings for the objective function are meaningful. Moreover, we explore the structure of the optimization problem and analyze the global/local convergence rate of the employed algorithm.


1 Preliminaries

A piece-wise polynomial representation of the trajectory is adopted. Any segment of the trajectory can be denoted as an N-order polynomial

p(t) = c^T β(t),  t ∈ [0, T],   (1)

where c ∈ R^{(N+1)×3} is the coefficient matrix, T is the duration and

β(t) = (1, t, t^2, …, t^N)^T   (2)

is a basis function. It is worth noting that N

should be an odd number hereafter, which makes the mapping bijective between the coefficient matrix and its boundary condition.

Consider derivatives of p(t) up to order s := (N−1)/2, stacked as

d(t) := (p(t), p^{(1)}(t), …, p^{(s)}(t))^T ∈ R^{(s+1)×3};   (3)

we have

d(t) = B(t)^T c,   (4)

where

B(t) := (β(t), β^{(1)}(t), …, β^{(s)}(t)) ∈ R^{(N+1)×(s+1)}.   (5)

We denote d(0) and d(T) by d_0 and d_T, respectively. The boundary condition of a segment is described by the tuple (d_0, d_T). The following mapping holds:

(d_0^T, d_T^T)^T = A(T) c,   (6)

where

A(T) := (B(0), B(T))^T   (7)

is the mapping matrix. Since N is an odd number, it is easy to know that A(T) is a non-singular square matrix for any T > 0. In other words, the mapping in (6) is bijective. Therefore, any segment of a trajectory can be equivalently expressed by the tuple (c, T) or the tuple (d_0, d_T, T).
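As a concrete illustration of the mapping in (6)–(7), the following Python sketch (an illustrative aid, not part of the original report; the helper names are ours) builds β(t), B(t) and A(T) for an odd N and checks that A(T) is non-singular.

import numpy as np
from math import factorial

def beta_deriv(t, N, j):
    # j-th derivative of the basis beta(t) = (1, t, ..., t^N)^T, evaluated at t
    b = np.zeros(N + 1)
    for n in range(j, N + 1):
        b[n] = factorial(n) / factorial(n - j) * t ** (n - j)
    return b

def B_matrix(t, N, s):
    # B(t) = (beta(t), beta^(1)(t), ..., beta^(s)(t)), shape (N+1) x (s+1)
    return np.column_stack([beta_deriv(t, N, j) for j in range(s + 1)])

def mapping_matrix(T, N):
    # A(T) = (B(0), B(T))^T maps coefficients c to the stacked boundary derivatives
    s = (N - 1) // 2
    return np.vstack([B_matrix(0.0, N, s).T, B_matrix(T, N, s).T])

N, T = 5, 2.0                      # odd order, segment duration
A = mapping_matrix(T, N)           # (N+1) x (N+1)
print(np.linalg.matrix_rank(A))    # prints N+1, i.e. A(T) is non-singular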

Consequently, we consider an M-segment trajectory parametrized by the time allocation T = (T_1, T_2, …, T_M)^T as well as the boundary conditions of all segments. The trajectory is defined by

p(t) = c_i^T β(t − t_{i−1}),  t ∈ [t_{i−1}, t_i],  t_0 := 0,  t_i := Σ_{l=1}^{i} T_l,   (8)

where t lies in the i-th segment and d_i := (d(t_{i−1}), d(t_i)) is the boundary condition of the i-th segment. Normally, some entries in D := {d_1, …, d_M} are fixed while the others are to be optimized. We split D into two parts, the fixed part D_F which is viewed as constant, and the free part D_P which is to be optimized. Then, the whole trajectory can be fully determined by the tuple

(D_F, D_P, T).   (9)

2 Optimization Objective

The following time regularized quadratic objective function is used:

J(D_P, T) = ∫_0^{t_M} Σ_{j=j_min}^{j_max} ρ_j ‖p^{(j)}(t)‖^2 dt + ρ_T Σ_{i=1}^{M} T_i,   (10)

where j_min and j_max are the lowest and the highest order of derivative to be penalized respectively, ρ_j is the weight of the j-order derivative and ρ_T is the weight of time regularization. When j_max > s, some derivatives on the boundary of each segment may not exist, hence we sum up objectives on all segments instead, which have the form

J_i = Tr(c_i^T Q(T_i) c_i) + ρ_T T_i   (11)

for the i-th segment, where Q(T_i) is a symmetric matrix [1] consisting of high powers of T_i, and Tr(·) is the trace operation. The overall objective is formulated as

J(D_P, T) = Σ_{i=1}^{M} J_i   (12)
          = Tr([D_F; D_P]^T R(T) [D_F; D_P]) + ρ_T ‖T‖_1,  R(T) := K^T A(T)^{-T} Q(T) A(T)^{-1} K,   (13)

where [D_F; D_P] stacks the fixed and free boundary conditions, Q(T) = Q(T_1) ⊕ ⋯ ⊕ Q(T_M) and A(T) = A(T_1) ⊕ ⋯ ⊕ A(T_M) are each the direct sum of their diagonal blocks, and K is a permutation matrix that maps [D_F; D_P] to the per-segment stack (d_1, …, d_M).
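For concreteness, one way to assemble the per-segment cost matrix Q(T) of (11) is sketched below in Python. This is our own illustrative construction under the convention Q(T) = Σ_j ρ_j ∫_0^T β^{(j)}(t) β^{(j)}(t)^T dt implied by (10)–(11); the helper name is hypothetical.

import numpy as np
from math import factorial

def q_matrix(T, N, rho):
    # Q(T) = sum_j rho[j] * integral_0^T beta^(j)(t) beta^(j)(t)^T dt  (symmetric, powers of T)
    Q = np.zeros((N + 1, N + 1))
    for j, w in rho.items():               # rho: {derivative order j: weight rho_j}
        for m in range(j, N + 1):
            for n in range(j, N + 1):
                cm = factorial(m) / factorial(m - j)
                cn = factorial(n) / factorial(n - j)
                p = m + n - 2 * j + 1
                Q[m, n] += w * cm * cn * T ** p / p
    return Q

Q = q_matrix(2.0, 5, {3: 1.0})             # e.g. a minimum-jerk weight on the 3rd derivative
print(np.allclose(Q, Q.T), np.all(np.linalg.eigvalsh(Q) > -1e-9))   # symmetric and PSD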

In Eq. 12, the orders j_min, j_max and the weights ρ_j, ρ_T are all parameters that directly determine the structure of J. It is important to know that not all settings for these parameters are legal. Instead of restricting those parameters, we make the following assumption on the objective function such that the setting is meaningful.

Assumption 1.

For any finite α > 0, the corresponding α-sublevel set of J,

S_α := { (D_P, T) | J(D_P, T) ≤ α },   (14)

is bounded and satisfies

inf_{(D_P, T) ∈ S_α} min_{1 ≤ i ≤ M} T_i > 0.   (15)

Intuitively, Assumption 1 forbids the objective from remaining bounded when the decision variables grow extremely large or any duration becomes extremely small. For example, consecutive repeating waypoints with identical boundary conditions fixed in D_F are illegal, because the optimal duration on the corresponding segment becomes 0, which violates condition (15). In other words, the segment should not exist if the objective is to be minimized. Another example is that ρ_T ≤ 0 is also illegal. A non-positive weight on the total duration means that the objective can be made arbitrarily low when the duration of each segment is large enough. In such a case, the boundedness condition is violated.

3 Unconstrained Optimization Algorithm

Input: fixed boundary conditions D_F, initial time allocation T^0, tolerance ε > 0
Output: optimized free boundary conditions and time allocation (D_P^*, T^*)
begin
       k ← 0;
       D_P^0 ← argmin_{D_P} J(D_P, T^0);
       while true do
             T^{k+1} ← argmin_{T ≻ 0} J(D_P^k, T);
             D_P^{k+1} ← argmin_{D_P} J(D_P, T^{k+1});
             ΔJ ← J(D_P^k, T^k) − J(D_P^{k+1}, T^{k+1});
             if ΔJ < ε then
                   break
             k ← k + 1;

       (D_P^*, T^*) ← (D_P^{k+1}, T^{k+1});
       return (D_P^*, T^*);
Algorithm 1 Unconstrained Spatial-Temporal AM

To minimize Eq. 12, Algorithm 1 is proposed. Initially, D_P^0 is solved for the provided T^0. After that, the minimization of the objective function is done through a two-phase process, in which only one of D_P and T is optimized while the other is fixed.
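The control flow of this alternation can be summarized by the following Python-style skeleton. It is a sketch only: the callables spatial_phase, temporal_phase and total_cost stand for the sub-problem solvers described below and are supplied by the caller; none of these names come from the original paper.

def alternating_minimization(spatial_phase, temporal_phase, total_cost, T0, eps=1e-6, max_iter=100):
    # Alternate between the spatial sub-problem (optimal D_P for fixed T)
    # and the temporal sub-problem (optimal T for fixed D_P).
    T = T0
    D_P = spatial_phase(T)              # initial spatial solve for the provided allocation
    J_prev = total_cost(D_P, T)
    for _ in range(max_iter):
        T = temporal_phase(D_P, T)      # temporal phase: per-segment duration update
        D_P = spatial_phase(T)          # spatial phase: unconstrained QP in closed form
        J = total_cost(D_P, T)
        if J_prev - J < eps:            # the cost is non-increasing; stop on small improvement
            break
        J_prev = J
    return D_P, T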

In the spatial phase, the sub-problem

min_{D_P} J(D_P, T)   (16)

is solved for the current time allocation T. We employ the unconstrained QP formulation by Richter et al. [1], which we briefly introduce here. The matrix R(T) in (13) is partitioned according to the fixed and free parts of the boundary conditions as

R(T) = [ R_FF(T)  R_FP(T) ;
         R_PF(T)  R_PP(T) ];   (17)

then the solution is obtained analytically through

D_P^* = −R_PP(T)^{-1} R_FP(T)^T D_F.   (18)
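A minimal numpy sketch of the closed-form solve in (18) follows. The function name and the layout assumption (rows/columns of the fixed block ordered first) are ours; a linear solve replaces the explicit inverse.

import numpy as np

def solve_spatial_block(R, n_fixed, D_F):
    # R: the matrix R(T) = K^T A(T)^{-T} Q(T) A(T)^{-1} K of Eq. (13),
    # with the fixed boundary conditions occupying the first n_fixed rows/columns.
    R_FP = R[:n_fixed, n_fixed:]
    R_PP = R[n_fixed:, n_fixed:]
    # Eq. (18): D_P* = -R_PP^{-1} R_FP^T D_F
    return -np.linalg.solve(R_PP, R_FP.T @ D_F)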

In the temporal phase, the sub-problem

min_{T ≻ 0} J(D_P, T)   (19)

is solved for each T_i individually. In this phase, the scale of the sub-problem can be reduced to each segment. Due to our representation of the trajectory, once D is fixed, the boundary conditions isolate each entry of T from the others. Therefore, each T_i can be optimized individually to obtain all entries of T. As for the i-th segment, its cost J_i in (11) is indeed a rational function of T_i. We show the structure of J_i and omit the trivial deduction for brevity:

J_i(T_i) = ρ_T T_i + ( Σ_{l=0}^{n_u} α_l T_i^l ) / T_i^{n_v},   (20)

where n_u and n_v are the orders of the numerator and the denominator, respectively. Each coefficient α_l is determined by the weights ρ_j and the boundary condition of the i-th segment. It is clear that J_i is smooth on (0, +∞). Due to the positiveness of ρ_T and of the smoothness penalty, we have J_i(T_i) → +∞ as T_i → 0^+ or T_i → +∞. Therefore, the minimizer exists:

T_i^* = argmin_{T_i ∈ (0, +∞)} J_i(T_i).   (21)

To find all candidates, we compute the derivative of (20):

dJ_i(T_i)/dT_i = ρ_T + ( Σ_{l=0}^{n_u} (l − n_v) α_l T_i^l ) / T_i^{n_v + 1}.   (22)

The minimum lies in the solution set of dJ_i(T_i)/dT_i = 0, which reduces to the positive real roots of a univariate polynomial and can be calculated through any modern polynomial real-roots solver [4]. The temporal phase is completed by updating every entry of T.
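The per-segment minimization over T_i can be carried out as sketched below. This is a hypothetical helper of our own: it assumes the derivative (22) has been cleared of its denominator and is supplied as polynomial coefficients in descending powers, and numpy.roots stands in for the real-roots solver of [4].

import numpy as np

def best_duration(dJ_coeffs, J_of_T):
    # dJ_coeffs: coefficients of the numerator polynomial of dJ_i/dT_i, highest power first.
    # Candidate minimizers are its positive real roots; pick the one with the lowest cost.
    roots = np.roots(dJ_coeffs)
    candidates = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]
    assert candidates, "no positive stationary point: check the cost setting"
    return min(candidates, key=J_of_T)

# usage: T_i = best_duration(numerator_coeffs, lambda T: J_i(T))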

4 Convergence Analysis

We first explore some basic properties of J, which help a lot in the convergence analysis of Algorithm 1. We have already shown that J is a rational function of each entry of T. As for the D_P part, the objective is indeed partially convex, which is given by the following lemma.

Lemma 1.

J(D_P, T) is convex in D_P for any fixed T ≻ 0, provided that Assumption 1 holds.

Proof.

Assumption 1 implies that ρ_j ≥ 0 holds for all j_min ≤ j ≤ j_max and that at least one ρ_j is nonzero. Otherwise, the boundedness of S_α or the positiveness of its time allocation is violated. Thus, for any T ≻ 0, the objective function is always positive, which can be seen from (10). In particular, the non-negativity of the integral term Tr(c_i^T Q(T_i) c_i) for every coefficient matrix c_i implies the positive semidefiniteness of the symmetric matrix Q(T_i), and hence of the block-diagonal Q(T). Since R_PP(T) is a principal submatrix of R(T) = K^T A(T)^{-T} Q(T) A(T)^{-1} K, which is congruent to Q(T), it is also positive semidefinite. We compute the Hessian matrix of J with respect to D_P:

∇²_{D_P} J(D_P, T) = 2 R_PP(T),   (23)

which means the Hessian is positive semidefinite. Therefore, J is convex in D_P. ∎
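The positive semidefiniteness used above can be spot-checked numerically. The sketch below is our own check (not part of the report) and reuses the q_matrix and mapping_matrix helpers introduced in the earlier sketches; it verifies that a matrix of the form A^{-T} Q A^{-1} has no negative eigenvalues, so any principal submatrix such as R_PP is PSD as well.

import numpy as np

N, T = 5, 1.5
Q = q_matrix(T, N, {3: 1.0})               # PSD cost matrix of one segment
A = mapping_matrix(T, N)                   # non-singular mapping matrix
R = np.linalg.inv(A).T @ Q @ np.linalg.inv(A)
print(np.min(np.linalg.eigvalsh((R + R.T) / 2)) >= -1e-8)   # True: R is PSD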

Lemma 2.

For any convex differentiable function f, if the following inequality holds for any x and y in its domain,

‖∇f(x) − ∇f(y)‖_F ≤ L ‖x − y‖_F,   (24)

in which L > 0 is a constant and ‖·‖_F is the Frobenius norm, then

f(y) ≥ f(x) + ⟨∇f(x), y − x⟩ + (1/(2L)) ‖∇f(y) − ∇f(x)‖_F^2,   (25)

where ⟨·, ·⟩ denotes the inner product that induces ‖·‖_F.
Proof.

See Theorem 2.1.5 in [3]. ∎

Lemma 3.

Provided that Assumption 1 is satisfied, the following inequality holds for any T ≻ 0 and any pair D_P, D_P':

J(D_P, T) ≥ J(D_P', T) + ⟨∇_{D_P} J(D_P', T), D_P − D_P'⟩ + (1/(2 L(T))) ‖∇_{D_P} J(D_P, T) − ∇_{D_P} J(D_P', T)‖_F^2,   (26)

where

L(T) := 2 σ_max(R_PP(T))   (27)

and σ_max(R_PP(T))

is the largest singular value of

R_PP(T).

Proof.

The gradient of J with respect to D_P can be calculated as

∇_{D_P} J(D_P, T) = 2 ( R_PF(T) D_F + R_PP(T) D_P ).   (28)

The difference of the gradients at D_P^1 and D_P^2 is

∇_{D_P} J(D_P^1, T) − ∇_{D_P} J(D_P^2, T) = 2 R_PP(T) (D_P^1 − D_P^2).   (29)

Assumption 1 ensures that R_PP(T) is a nonzero matrix, which means it has a positive largest singular value σ_max(R_PP(T)) for any T ≻ 0. According to the basic property of the spectral norm, we have

‖2 R_PP(T) (D_P^1 − D_P^2)‖_F ≤ 2 σ_max(R_PP(T)) ‖D_P^1 − D_P^2‖_F.   (30)

Combining (29) and (30), we get

‖∇_{D_P} J(D_P^1, T) − ∇_{D_P} J(D_P^2, T)‖_F ≤ L(T) ‖D_P^1 − D_P^2‖_F.   (31)

According to Lemma 1 and Lemma 2, if we substitute the constant L by L(T), together with the fact that (27) implies L(T) > 0, the result follows. ∎

Theorem 1.

Consider the process in Algorithm 1 started with any (D_P^0, T^0). Provided that Assumption 1 is satisfied, the following inequality always holds for the K-th iteration:

min_{0 ≤ k ≤ K} ‖∇_{D_P} J(D_P^k, T^{k+1})‖_F ≤ √( 2 L_max J^0 / (K + 1) ),

where J^0 := J(D_P^0, T^0) and L_max (an upper bound of L(T^{k+1}), defined in the proof) are both constant.

Proof.

It is clear that the objective function is non-increasing in any iteration, i.e., for any k ≥ 0, we have

J(D_P^{k+1}, T^{k+1}) ≤ J(D_P^k, T^{k+1}) ≤ J(D_P^k, T^k).   (32)

Moreover, the objective function is non-negative, which means J(D_P^k, T^k) ≥ 0 for any k ≥ 0. Therefore,

Σ_{k=0}^{K} ( J(D_P^k, T^k) − J(D_P^{k+1}, T^{k+1}) ) ≤ J(D_P^0, T^0).   (33)

Since ∇_{D_P} J(D_P^{k+1}, T^{k+1}) = 0, the following condition holds by Lemma 3:

J(D_P^k, T^{k+1}) − J(D_P^{k+1}, T^{k+1}) ≥ (1/(2 L(T^{k+1}))) ‖∇_{D_P} J(D_P^k, T^{k+1})‖_F^2.   (34)

Notice that J(D_P^k, T^{k+1}) ≤ J(D_P^k, T^k) in each iteration, then

J(D_P^k, T^k) − J(D_P^{k+1}, T^{k+1}) ≥ (1/(2 L(T^{k+1}))) ‖∇_{D_P} J(D_P^k, T^{k+1})‖_F^2.   (35)

Therefore,

‖∇_{D_P} J(D_P^k, T^{k+1})‖_F^2 ≤ 2 L(T^{k+1}) ( J(D_P^k, T^k) − J(D_P^{k+1}, T^{k+1}) ).   (36)

We simply let J^0 := J(D_P^0, T^0), then J(D_P^k, T^k) ≤ J^0 holds for all k ≥ 0. According to Assumption 1, the sublevel set S_{J^0} is bounded and satisfies condition (15). Then there exist positive constants T_min and T_max such that T_min ≤ T_i^k ≤ T_max always holds for every duration generated by the algorithm. Consequently, L(T^{k+1}) is also upper bounded by a positive constant L_max. We have

‖∇_{D_P} J(D_P^k, T^{k+1})‖_F^2 ≤ 2 L_max ( J(D_P^k, T^k) − J(D_P^{k+1}, T^{k+1}) ).   (37)

We sum it up for all iterations up to K:

Σ_{k=0}^{K} ‖∇_{D_P} J(D_P^k, T^{k+1})‖_F^2 ≤ 2 L_max ( J^0 − J(D_P^{K+1}, T^{K+1}) ).   (38)

Since the right hand side is bounded by 2 L_max J^0, we have

Σ_{k=0}^{K} ‖∇_{D_P} J(D_P^k, T^{k+1})‖_F^2 ≤ 2 L_max J^0.   (39)

The minimum over all iterations is bounded by the average of the left hand side, which gives

min_{0 ≤ k ≤ K} ‖∇_{D_P} J(D_P^k, T^{k+1})‖_F^2 ≤ (1/(K+1)) Σ_{k=0}^{K} ‖∇_{D_P} J(D_P^k, T^{k+1})‖_F^2 ≤ (2 L_max J^0)/(K + 1).   (40)

Taking the square root on both sides gives the result. ∎

Theorem 1 shows that, under no convexity assumption, Algorithm 1 shares the same global convergence rate as gradient descent with the best step-size chosen in each iteration [3]. However, the best step-size is practically unavailable. By contrast, Algorithm 1 does not involve any step-size selection in each iteration. Both sub-problems (16) and (19) can be solved exactly and efficiently due to their algebraic convenience. Therefore, Algorithm 1 is faster than gradient-based methods in practice.

Although only convergence to a stationary point is guaranteed, strict saddle points are theoretically and numerically unstable [2] for Algorithm 1, which is indeed a first-order method. Moreover, when the stationary point is a strict local minimum, we show that the convergence rate is faster than in the general case of Theorem 1.

Lemma 4.

Consider a positive sequence {a_k} satisfying

a_{k+1} ≤ a_k − c a_k^2,   (41)

where c > 0 is a constant. Then for all k ≥ 1,

a_k ≤ a_0 / (1 + c a_0 k).   (42)
Proof.

Apparently,

1/a_{k+1} − 1/a_k = (a_k − a_{k+1}) / (a_k a_{k+1}) ≥ c a_k^2 / (a_k a_{k+1}) = c a_k / a_{k+1} ≥ c,   (43)

hence

1/a_k ≥ 1/a_0 + c k.   (44)

Rearranging gives the result. ∎
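A quick numerical illustration of Lemma 4 (our own sanity check, not from the report): iterating the worst case a_{k+1} = a_k − c a_k^2 never exceeds the bound a_0 / (1 + c a_0 k).

a0, c, K = 1.0, 0.1, 50
a = a0
for k in range(1, K + 1):
    a = a - c * a * a                    # worst-case recursion of Lemma 4
    bound = a0 / (1 + c * a0 * k)        # claimed O(1/k) bound
    assert a <= bound + 1e-12
print("bound holds for", K, "iterations")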

Theorem 2.

Provided that Assumption 1 is satisfied, let (D_P^*, T^*) denote any strict local minimum of J to which Algorithm 1 converges, then there exist a positive integer K_0 and a radius r_0 > 0, such that

J(D_P^k, T^{k+1}) − J(D_P^*, T^*) ≤ (2 L_max r_0^2) / (k − K_0)   (45)

for all k > K_0, where r_0 is the radius of a neighborhood of the local minimum defined in the proof.

Proof.

Define the neighborhood as

U(r) := { (D_P, T) : ‖D_P − D_P^*‖_F^2 + ‖T − T^*‖^2 ≤ r^2 }.   (46)

A strict local minimum satisfies ∇J(D_P^*, T^*) = 0 and ∇²J(D_P^*, T^*) ≻ 0, so there exists r_0 > 0 such that J is locally convex in the domain U(r_0). Moreover, there exists a positive integer K_0 such that (D_P^k, T^{k+1}) ∈ U(r_0) holds for all k ≥ K_0, so we only consider k ≥ K_0 hereafter. Due to the local convexity, we have

J(D_P^k, T^{k+1}) − J(D_P^*, T^*) ≤ ⟨∇J(D_P^k, T^{k+1}), (D_P^k, T^{k+1}) − (D_P^*, T^*)⟩.   (47)

By applying the Cauchy-Schwarz inequality on the right hand side, we have

J(D_P^k, T^{k+1}) − J(D_P^*, T^*) ≤ ‖∇J(D_P^k, T^{k+1})‖_F ‖(D_P^k, T^{k+1}) − (D_P^*, T^*)‖_F.   (48)

Notice that the distance between (D_P^k, T^{k+1}) and the local minimum is upper-bounded by r_0, and that ∇_T J(D_P^k, T^{k+1}) = 0 because T^{k+1} minimizes J(D_P^k, ·) in the interior of the feasible durations, thus

J(D_P^k, T^{k+1}) − J(D_P^*, T^*) ≤ r_0 ‖∇_{D_P} J(D_P^k, T^{k+1})‖_F.   (49)

According to inequality (34), together with the upper bound L(T^{k+1}) ≤ L_max deduced in the proof of Theorem 1, we have

J(D_P^k, T^{k+1}) − J(D_P^{k+1}, T^{k+1}) ≥ (1/(2 L_max)) ‖∇_{D_P} J(D_P^k, T^{k+1})‖_F^2,   (50)

where L_max is the upper bound of L(T^{k+1}). Combining this with (49) and the monotonicity J(D_P^{k+1}, T^{k+2}) ≤ J(D_P^{k+1}, T^{k+1}), we get

e_{k+1} ≤ e_k − (1/(2 L_max r_0^2)) e_k^2,  where e_k := J(D_P^k, T^{k+1}) − J(D_P^*, T^*).   (51)

We apply Lemma 4 by defining

a_j := e_{K_0 + j},  c := 1/(2 L_max r_0^2),   (52)

then the result follows. ∎

When Algorithm 1 converges to a strict local minimum, the above theorem shows that the local convergence rate is O(1/k). Note that it is possible to accelerate our method to attain the optimal rate of first-order methods, or to use higher-order methods to achieve an even faster local rate.

References

  • [1] Adam Bry, Charles Richter, Abraham Bachrach, and Nicholas Roy. Aggressive flight of fixed-wing and quadrotor aircraft in dense indoor environments. The International Journal of Robotics Research, 34(7):969–1002, 2015.
  • [2] Jason D. Lee, Ioannis Panageas, Georgios Piliouras, Max Simchowitz, Michael I. Jordan, and Benjamin Recht. First-order methods almost always avoid strict saddle points. Mathematical Programming, 176:311–337, 2019.
  • [3] Yurii Nesterov. Lectures on Convex Optimization. Springer, 2018.
  • [4] Michael Sagraloff and Kurt Mehlhorn. Computing real roots of real polynomials. J. Symb. Comput., 73:46–86, 2013.