Optimal time-complexity speed planning for robot manipulators

02/09/2018, by Luca Consolini et al.

We consider the speed planning problem for a robotic manipulator. In particular, we present an algorithm for finding the time-optimal speed law along an assigned path that satisfies velocity and acceleration constraints and respects the maximum forces and torques allowed by the actuators. The addressed optimization problem is a finite dimensional reformulation of the continuous-time speed optimization problem, obtained by discretizing the speed profile with N points. The proposed algorithm has linear complexity with respect to N and to the number of degrees of freedom. Such complexity is the best possible for this problem. Numerical tests show that the proposed algorithm is significantly faster than algorithms already existing in the literature.


1 Introduction

For robotic manipulators, the motion planning problem is often decomposed into two subproblems: path planning and speed planning LaValle (2006).

The first problem consists in finding a path (i.e., the curve followed by the joints) that joins assigned initial and final positions. The second problem consists in finding the time-optimal speed law along the path that satisfies assigned velocity and acceleration constraints and respects the maximum forces and torques allowed by the actuators. In this paper we consider only the second problem. Namely, given a path in the robot configuration space, we want to find the optimal speed law that allows following it while satisfying assigned kinematic and dynamic constraints. More specifically, we consider the problem

(1)  min t_f   subject to   M(q) q̈ + C(q, q̇) q̇ + g(q) = τ,   τ ∈ 𝒰,

where: t_f is the travel time; q is the generalized position; τ is the generalized force vector; M(q) is the mass matrix; C(q, q̇) is a matrix accounting for centrifugal and Coriolis effects; g(q) is an external force term (for instance gravity); 𝒰 is a set that represents the kinematic and dynamic limitations of the manipulator.

1.1 Related Works

There are mainly three different families of speed profile generation methods: Numerical Integration, Dynamic Programming, and Convex Optimization.

References Bobrow, Dubowsky, and Gibson (1985); Pfeiffer and Johanni (1987) are among the first works that study Problem 1 with the Numerical Integration approach. In particular, they find the time-optimal speed law as a function of arc length rather than as a function of time. This choice simplifies the mathematical structure of the resulting problem and has been adopted by most subsequent works. In Bobrow, Dubowsky, and Gibson (1985); Pfeiffer and Johanni (1987) the optimization problem is solved with iterative algorithms. In particular, reference Bobrow, Dubowsky, and Gibson (1985) finds the points at which the acceleration changes sign by numerically integrating the second-order differential equations representing the motions obtained with the maximum and minimum possible accelerations. Reference Pfeiffer and Johanni (1987) is based on geometrical considerations on the feasible set. However, this approach has some limitations: the determination of the switching points is its main source of failure (see Slotine and Yang (1989); Shiller and Lu (1992)). For recent results on Numerical Integration see Pham (2014); Pham, Caron, and Nakamura (2013); Pham and Stasse (2015). For instance, Pham and Stasse (2015) considers the case of redundant manipulators.

In the Dynamic Programming approach the problem is solved with a finite element approximation of the Hamilton-Jacobi-Bellman equation (see Shin and McKay (1986); Singh and Leu (1987); Oberherber, Gattringer, and Müller (2015)). The main difficulty with this approach is the high computational time due to the need to solve a problem with a large number of variables.

The Convex Optimization approach is based on the approximation of Problem (1) with a finite dimensional optimization problem obtained through spatial discretization. Reference Verscheure, Demeulenaere, Swevers, Schutter, and Diehl (2009) is one of the early works using this approach. It shows that Problem (1) becomes convex after a change of variables and that a discretized version of Problem (1) is a Second-Order Cone Programming (SOCP) problem. This approach has the advantage that the optimization problem can be tackled with available solvers (e.g., see Lipp and Boyd (2014); Gurobi Optimization (2016)). Moreover, differently from Numerical Integration, this approach allows considering other convex objective functions. However, the convex programming approach can be inappropriate for online motion planning (see for instance Pham (2014)), since the computational time grows rapidly (even if still polynomially) with the number of samples in the discretized problem. Subsequent works, starting from Verscheure, Demeulenaere, Swevers, Schutter, and Diehl (2009), extend the applicability of this approach to different scenarios (see Debrouwere, Van Loock, Pipeleers, Dinh, Diehl, De Schutter, and Swevers (2013); Csorvási, Nagy, and Vajk (2017)) and propose algorithms that reduce the computational time (see Hauser (2014); Nagy and Vajk (2018)). To reduce computational time, reference Hauser (2014) proposes an approach based on Sequential Linear Programming (SLP). Namely, the algorithm proposed in Hauser (2014) sequentially linearizes the objective function around the current point, while a trust region method ensures the convergence of the process. Further, Nagy and Vajk (2018) shows that, using a suitable discretization method, the time-optimal velocity profile can be obtained by Linear Programming (LP), with the benefit of a lower computation time with respect to convex solvers.

A very recent paper, closely related to our work, is Pham and Pham (2017). In Section 4.1, we briefly describe the approach proposed there and compare it with ours.

Our approach combines ideas that we previously proposed in two other works. Namely, in Consolini, Locatelli, Minari, and Piazzi (2017a) we proposed an exact linear-time forward-backward algorithm for the solution of a velocity planning problem for a vehicle over a given trajectory under velocity, normal and tangential acceleration bounds. In Csorvási, Nagy, and Vajk (2017), a method based on the sequential solution of two-dimensional subproblems was proposed for the solution of the so-called waiter motion problem; that method returns a feasible, though not necessarily optimal, solution. In the current paper we merge the ideas proposed in the two above-mentioned papers in order to derive an approach for the speed planning of robotic manipulators. This approach will be proved to return an optimal solution and to have linear time complexity both with respect to the number of discretization points and with respect to the number of degrees of freedom of the robotic manipulator.

1.2 Main results

The purpose of this paper is to provide a speed planning method for robotic manipulators with optimal time complexity. With respect to the existing literature, the new contributions of this work are the following ones.

  • We propose a new algorithm for solving a finite dimensional reformulation of Problem (1) obtained with n discretization points.

  • We show that if the constraint set in Problem (1) is defined by linear constraints, then the proposed algorithm has complexity O(np), where n is the number of discretization points and p is the number of degrees of freedom. Moreover, such complexity is optimal.

  • By numerical tests, we show that the proposed procedure is significantly faster than algorithms already existing in the literature.

1.3 Paper Organization

In Section 2, we present the time-optimal control problem for robotic manipulators in continuous time. In Section 3, we present a class of optimization problems and an exact solution algorithm. We prove the correctness of the algorithm and compute its time complexity, showing that such complexity is optimal in the case of linear constraints. In Section 4, we show that, by suitably discretizing the continuous-time problem, it is possible to obtain a finite dimensional problem with linear constraints that falls into the class defined in Section 3. Finally, we present an experiment on a 6-DOF industrial robotic manipulator and compare the performance of the proposed approach with that of existing solvers (see Lipp and Boyd (2014); Gurobi Optimization (2016); Nagy and Vajk (2018)).

1.4 Notation

We denote by ℝ₊ the set of nonnegative real numbers. For a vector v ∈ ℝᵖ, |v| denotes the component-wise absolute value of v, and we define the norms ‖v‖₁ = Σᵢ |vᵢ| and ‖v‖∞ = maxᵢ |vᵢ|.

For a < b, we denote by C¹([a, b], ℝ) the set of continuous functions from [a, b] to ℝ that have continuous first derivatives. For f ∈ C¹([a, b], ℝ), f' denotes the derivative, and the notation ḟ is used if f is a function of time. We say that f is bounded if there exists M > 0 such that ‖f‖∞ ≤ M.

Consider f, g : ℕ → ℝ₊. We say that f = O(g) if there exists a positive constant c such that, for all sufficiently large values of n, f(n) ≤ c g(n).

2 Problem formulation

Let Q be a smooth manifold of dimension p that represents the configuration space of a robotic manipulator with p degrees of freedom (p-DOF). Let Γ be a smooth curve whose image set represents the assigned path to be followed by the manipulator. We assume that there exist two open sets U ⊆ Q and V ⊆ ℝᵖ, and an invertible and smooth function φ : U → V. Function φ is a local chart that allows representing each configuration x ∈ U with the coordinate vector q = φ(x).

The coordinate vector q of a trajectory in Q satisfies the dynamic equation

(2)  M(q) q̈ + C(q, q̇) q̇ + g(q) = τ,

where q ∈ ℝᵖ is the generalized position vector, τ ∈ ℝᵖ is the generalized force vector, M(q) is the mass matrix, C(q, q̇) is the matrix accounting for centrifugal and Coriolis effects (assumed to be linear in q̇) and g(q) is the vector accounting for joint-position-dependent forces, including gravity. Note that we do not consider Coulomb friction forces.
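As a concrete instance of the dynamic model (2), the sketch below evaluates the generalized force for a standard planar two-link arm. This is a textbook model used only for illustration, not taken from the paper; all masses, lengths and inertias are made-up values.

```python
import math

def two_link_dynamics(q, dq, ddq,
                      m=(1.0, 1.0), l=(1.0, 1.0), lc=(0.5, 0.5),
                      I=(0.1, 0.1), grav=9.81):
    """Return tau = M(q) ddq + C(q, dq) dq + g(q) for a planar two-link arm.

    Standard textbook model; all parameters are illustrative, not the paper's.
    """
    q1, q2 = q
    c2, s2 = math.cos(q2), math.sin(q2)
    # Mass matrix M(q) (symmetric, positive definite)
    m11 = I[0] + I[1] + m[0] * lc[0]**2 + \
        m[1] * (l[0]**2 + lc[1]**2 + 2 * l[0] * lc[1] * c2)
    m12 = I[1] + m[1] * (lc[1]**2 + l[0] * lc[1] * c2)
    m22 = I[1] + m[1] * lc[1]**2
    M = [[m11, m12], [m12, m22]]
    # Coriolis/centrifugal matrix C(q, dq), linear in dq as assumed in the text
    h = m[1] * l[0] * lc[1] * s2
    C = [[-h * dq[1], -h * (dq[0] + dq[1])],
         [h * dq[0], 0.0]]
    # Gravity vector g(q)
    g1 = (m[0] * lc[0] + m[1] * l[0]) * grav * math.cos(q1) \
        + m[1] * lc[1] * grav * math.cos(q1 + q2)
    g2 = m[1] * lc[1] * grav * math.cos(q1 + q2)
    g = [g1, g2]
    return [sum(M[i][j] * ddq[j] + C[i][j] * dq[j] for j in range(2)) + g[i]
            for i in range(2)]
```

At rest (q̇ = q̈ = 0) the returned torque reduces to the gravity vector g(q), which is exactly the situation addressed by Assumption 2 below.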

Let γ : [0, s_f] → ℝᵖ be a smooth function such that its image set γ([0, s_f]) represents, in coordinates, the elements of the reference path, and such that (∀s ∈ [0, s_f]) ‖γ'(s)‖ = 1, so that s is the arc length along the path. In particular, γ(0) and γ(s_f) are the coordinates of the initial and final configurations. Define t_f as the time when the robot reaches the end of the path. Let λ : [0, t_f] → [0, s_f] be a differentiable monotone increasing function that represents the position of the robot as a function of time, and let v : [0, s_f] → ℝ₊ be such that (∀t ∈ [0, t_f]) λ̇(t) = v(λ(t)). Namely, v(s) is the velocity of the robot at position s. We impose (∀t ∈ [0, t_f]) q(t) = γ(λ(t)). For any t ∈ [0, t_f], using the chain rule, we obtain

(3)  q̇(t) = γ'(λ(t)) v(λ(t)),   q̈(t) = γ''(λ(t)) v(λ(t))² + γ'(λ(t)) v'(λ(t)) v(λ(t)).

Substituting (3) into the dynamic equation (2) and setting s = λ(t), we rewrite (2) as follows:

(4)  a(s) v'(s) v(s) + b(s) v(s)² + c(s) = τ(s),

where the parameters in (4) are defined as

(5)  a(s) = M(γ(s)) γ'(s),   b(s) = M(γ(s)) γ''(s) + C(γ(s), γ'(s)) γ'(s),   c(s) = g(γ(s)).

The objective function is given by the overall travel time, defined as

(6)  t_f = ∫_0^{s_f} 1/v(s) ds.
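The travel time is the integral of the reciprocal speed along the path; the sketch below approximates it with the trapezoidal rule on a made-up speed profile (for v(s) = 1 + s on [0, 1] the exact value is ln 2).

```python
def travel_time(v, s_f, n=10_000):
    """Approximate t_f, the integral of 1/v(s) over [0, s_f], by the
    trapezoidal rule; v must be a positive speed profile (Assumption 1)."""
    h = s_f / n
    total = 0.5 * (1.0 / v(0.0) + 1.0 / v(s_f))
    total += sum(1.0 / v(i * h) for i in range(1, n))
    return h * total

# Illustrative profile: v(s) = 1 + s on [0, 1] gives t_f = ln 2.
t = travel_time(lambda s: 1.0 + s, 1.0)
```

Note that a slow profile (small v) directly inflates t_f, which is why the optimization pushes the speed toward the constraint boundaries.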

Let be assigned bounded functions and consider the following minimum time problem:

Problem 1.
(7)
subject to
(8)
(9)
(10)
(11)
(12)
(13)
(14)
(15)

where (8) represents the robot dynamics, (9)-(10) represent the relation between the path and the generalized position shown in (3), (11) represents the bounds on generalized forces, and (12) and (13) represent the bounds on joint velocities and accelerations. Constraints (15) specify the interpolation conditions at the beginning and at the end of the path.

The following assumption is a basic requirement for fulfilling constraint (12).

Assumption 1.

We assume that is a positive continuous function, i.e., () with .

The next assumption requires that the maximum allowed generalized forces are able to counteract external forces (such as gravity) when the manipulator is fixed at each point of the path.

Assumption 2.

We assume that , such that .

In fact for condition (11) reduces to () .

Problem 1 is nonconvex, but it becomes convex after a simple change of variables (as previously noted in Verscheure, Demeulenaere, Swevers, Schutter, and Diehl (2009)). Indeed, set

(16)  w(s) = v(s)²,

and note that

(17)  w'(s) = 2 v'(s) v(s).

Then, Problem 1 becomes:

Problem 2.
(18)
subject to
(19)
(20)
(21)
(22)
(23)
(24)
(25)

where the squares of the two vectors in (23) are to be understood component-wise. Problem 2 is convex, since the objective function (18) is convex and the constraints (19)-(25) are linear.
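The change of variables w = v² can be checked numerically: the derivative satisfies w' = 2v'v, and the travel time computed from 1/v coincides with the one computed from 1/√w. The sketch below verifies both facts on a made-up speed profile.

```python
def check_change_of_variables(v, dv, s_f=1.0, n=2000):
    """Check that w = v^2 satisfies w' = 2 v' v (via central differences)
    and that the travel time is the same in the v and w variables.
    Illustrative numerical check, not part of the paper's algorithm."""
    h = s_f / n
    s = [i * h for i in range(n + 1)]
    w = [v(x) ** 2 for x in s]
    # central-difference w' versus the analytic 2 v' v at interior points
    max_err = max(abs((w[i + 1] - w[i - 1]) / (2 * h) - 2 * dv(s[i]) * v(s[i]))
                  for i in range(1, n))
    # travel time in both variables (trapezoidal rule)
    t_v = h * (sum(1.0 / v(x) for x in s) - 0.5 / v(s[0]) - 0.5 / v(s[-1]))
    t_w = h * (sum(wi ** -0.5 for wi in w)
               - 0.5 * w[0] ** -0.5 - 0.5 * w[-1] ** -0.5)
    return max_err, abs(t_v - t_w)

# Made-up profile v(s) = 1 + s, so v'(s) = 1 and w(s) = (1 + s)^2.
err_d, err_t = check_change_of_variables(lambda s: 1.0 + s, lambda s: 1.0)
```

Both errors vanish up to discretization and rounding, which is what makes the reformulation in w exact rather than an approximation.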

The following proposition (that will be proved in the appendix) shows that Problem 2 admits a solution.

Proposition 1.

Problem 2 admits an optimal solution , and moreover,

where is a constant depending on problem data.

We do not directly solve Problem 2; rather, we find an approximate solution based on a finite dimensional approximation. Namely, consider the following problem, obtained by uniformly sampling the interval [0, s_f] at n points:

Problem 3.
(26)
subject to (27)
(28)
(29)
(30)
(31)
(32)
(33)
(34)
(35)

where , , , , , , , , and , with .

Thanks to constraints (28)-(30), it is possible to eliminate some of the variables and keep a single set of decision variables. The feasible set of Problem 3 is non-empty, since the zero solution is feasible (in fact, the feasible set also has a nonempty interior).

Since Problem 3 is convex, we can easily find a solution with an interior point method (see Verscheure, Demeulenaere, Swevers, Schutter, and Diehl (2009)).

After solving Problem 3, it is possible to find an approximate solution of Problem 2. Indeed, by quadratic interpolation, we associate to a solution vector of Problem 3 a continuously differentiable function such that the following relations hold:

(36)

Namely, the interpolant matches the first and last entries of the solution vector at 0 and s_f, respectively, and the average values of consecutive entries at the midpoints of the discretization intervals. Moreover, the derivative of the interpolant at the midpoints of the discretization intervals corresponds to the finite differences of the solution vector.
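One way to realize an interpolant with these properties (a sketch under our own construction, not necessarily the paper's exact formulas) is to take the derivative as the piecewise-linear function matching the finite differences at the interval midpoints, and integrate it; the result is a C¹ piecewise-quadratic function that automatically matches the endpoint values and the midpoint averages.

```python
import bisect

def quad_spline(w, h):
    """Return a C^1 piecewise-quadratic interpolant v of the samples w
    on a uniform grid of step h, with
      v(0) = w[0],  v((n-1)h) = w[-1],
      v((i+1/2)h) = (w[i]+w[i+1])/2,  v'((i+1/2)h) = (w[i+1]-w[i])/h.
    Illustrative construction, not the paper's exact formulas."""
    n = len(w)
    mids = [h * (i + 0.5) for i in range(n - 1)]           # interval midpoints
    delta = [(w[i + 1] - w[i]) / h for i in range(n - 1)]  # finite differences
    V = [(w[i] + w[i + 1]) / 2.0 for i in range(n - 1)]    # midpoint values

    def v(s):
        if s <= mids[0]:    # first half-interval: constant slope delta[0]
            return w[0] + delta[0] * s
        if s >= mids[-1]:   # last half-interval: constant slope delta[-1]
            return V[-1] + delta[-1] * (s - mids[-1])
        # quadratic piece between consecutive midpoints
        j = bisect.bisect_right(mids, s) - 1
        t = s - mids[j]
        return V[j] + delta[j] * t + (delta[j + 1] - delta[j]) * t * t / (2 * h)

    return v
```

Since the derivative is continuous and piecewise linear, the interpolant is continuously differentiable and quadratic between breakpoints, as required.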

We define the class of quadratic splines as the subset of , such that, for , is a quadratic polynomial. For , let be such that:

and let be such that

and

For ease of notation, set, for ,

The following proposition, whose proof is presented in appendix, defines the interpolating quadratic spline fulfilling (36).

Proposition 2.

For any , there exists a unique element such that the following interpolation conditions hold

(37)

Note that is continuously differentiable and that (36) holds. Such element will be denoted by .

Note that by Proposition 2, there exists a unique function that interpolates the solution of Problem 3. Then and are computed from by using relations (17) and (19), namely we set

These functions are approximate solutions of Problem 2. Indeed, (26) and (30) are approximations of (18) and (21); moreover, the functions satisfy, by construction, constraints (19)-(25) at the sample points and, by continuity, (19)-(25) are also approximately satisfied between them. By increasing the number of samples, the solutions of Problem 3 become better approximations of the solutions of Problem 2. It is reasonable to suppose that, as the number of samples grows to infinity, the solutions of Problem 3 converge to the solutions of Problem 2. However, this convergence property is not proved in this paper, being outside its scope; it can be proved along the lines of Consolini, Laurini, Locatelli, and Cabassi (2017b), which presents a convergence result for a related speed planning problem for an autonomous vehicle.

3 Solution algorithms and complexity issues for the generalized problem

In this section we present an optimal time complexity algorithm that solves a specific class of optimization problems. In the subsequent section we will show that Problem 2 can be approximated by a finite dimensional problem that belongs to this class.

3.1 Exact algorithm for the solution of some special structured problems

The problems under consideration have the form

(38)

where we make the following assumptions.

Assumption 3.

We assume:

  • monotonic non increasing;

  • , concave, increasing and ,
    ;

  • , concave, increasing and ,
    .

The constraints in (38) can be rewritten in compact form as follows:

where:

Note that and are both concave and increasing over , since they are the minimum of a finite number of functions with the same properties. We prove that the same holds for .

Proposition 3.

is increasing and concave over .

Proof.

The fact that the function is increasing follows immediately from the fact that its components are increasing. As for concavity:

where the first inequality is a consequence of the concavity of and the fact that is increasing, while the second inequality comes from concavity of . ∎

It immediately follows that:

(39)

Then, there exists at most one point such that . Similarly, there exists at most one point such that . Note that are the positive fixed points of and , respectively. Alternatively, is also the optimal solution of the following two-dimensional convex problem:

The following result holds.

Proposition 4.

Under Assumption 3, the optimal solution of (38) is the component-wise maximum of its feasible region, i.e., if we denote by the feasible region, it is the point such that for all .

Proof.

See Consolini, Locatelli, Minari, and Piazzi (2017a); Nagy and Vajk (2018). ∎
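Propositions of this kind rely on the feasible region being closed under the componentwise maximum: if x and y are both feasible, so is their componentwise maximum. The sketch below checks this on a made-up instance with linear constraints of the form x_i ≤ min(u_i, x_{i-1} + a, x_{i+1} + a).

```python
def feasible(x, u, a):
    """Check x_i <= u_i and the coupling constraints x_i <= x_{i-1} + a
    and x_i <= x_{i+1} + a (equivalently |x_{i+1} - x_i| <= a).
    Made-up linear instance, used only for illustration."""
    n = len(x)
    if any(x[i] > u[i] for i in range(n)):
        return False
    return all(abs(x[i + 1] - x[i]) <= a for i in range(n - 1))

u = [0.0, 9.0, 4.0, 9.0, 1.0, 9.0, 0.0]   # upper bounds
a = 2.0                                    # coupling constant
x = [0.0, 2.0, 3.0, 2.5, 1.0, 2.0, 0.0]   # a feasible point
y = [0.0, 1.0, 3.0, 3.0, 1.0, 1.5, 0.0]   # another feasible point
z = [max(xi, yi) for xi, yi in zip(x, y)]  # componentwise maximum
```

Because the feasible set is closed and bounded and closed under the componentwise maximum, a unique componentwise-maximal point exists, and it minimizes any monotonically non-increasing objective.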

We consider Algorithm 1 for the solution of problem (38).

Data : ;
1 Set ;
2 /* Forward phase */
3;
4 foreach  do
5       Compute the nonnegative fixed points and for and , respectively (if they do not exist, set , );
6       Set ;
7       Set ;
8      
9 end foreach
10/* Backward phase */
11;
12 foreach  do
13       Set ;
14      
15 end foreach
Algorithm 1 Forward-Backward algorithm for the solution of the problem.
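In the special case in which each coupling constraint is linear, say of the form x_{i±1} + a for a constant a (an acceleration-like bound, made up for illustration), the fixed-point computations in the forward phase are trivial and Algorithm 1 reduces to the forward-backward sweep sketched below.

```python
def forward_backward(u, a):
    """Forward-backward sweep computing the componentwise maximum of
    { x : x_i <= u_i, x_i <= x_{i-1} + a, x_i <= x_{i+1} + a }.
    Linear special case of Algorithm 1; runs in O(n) time."""
    n = len(u)
    x = list(u)
    for i in range(1, n):             # forward phase
        x[i] = min(x[i], x[i - 1] + a)
    for i in range(n - 2, -1, -1):    # backward phase
        x[i] = min(x[i], x[i + 1] + a)
    return x

# Illustrative bounds; the zero endpoints mimic rest-to-rest conditions.
x = forward_backward([0.0, 9.0, 4.0, 9.0, 1.0, 9.0, 0.0], 2.0)
```

The result is the componentwise maximum of the feasible region (Proposition 4), obtained with a single forward pass and a single backward pass.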

The algorithm is correct, as stated in the following proposition.

Proposition 5.

Algorithm 1 returns the optimal solution of problem (38).

Proof.

We first remark that at each iteration holds. If a fixed point for exists, then after the backward propagation, , and , where denotes the upper bound for after the forward phase. We show that

(40)

If this is true for all , then the point at the end of the backward phase is a feasible solution of (38). Indeed, by definition of in the backward phase,

while by (40),

so that is feasible for (38). Since holds and is monotone non increasing, we have that is the optimal solution of (38). We only need to prove that (40) is true. Note that is the result of the first forward propagation, so that . Thus, we need to prove that with . In view of (39), if the fixed point for exists, it is the unique nonnegative root of and

from which the result is proved. Otherwise, if no fixed point exists,

from which the result is still proved. ∎

Under a given condition, Algorithm 1 can be further simplified.

Remark 1.

If and , , fulfill the so-called superiority condition, i.e.

then and the forward phase can be reduced to

3.2 Solving the subproblems in the forward phase and complexity results

We first remark that

defined in the forward phase of Algorithm 1 are the solution of the two-dimensional convex optimization problem

(41)

Alternatively, can also be detected as fixed points of and , respectively, where

Although any convex optimization or any fixed point solver could be exploited for detecting these values, we propose the simple Algorithm 2, which turns out to be quite effective in practice. We denote by

Note that increasing and concave implies that is increasing and convex and, consequently, is increasing and convex .

1Let ;
2 Set , , ;
3 while  do
4       Set and ;
5       Set and ;
6       Let be the solution of the one-dimensional equation ;
7       Set ;
8      
9 end while
10return ;
Algorithm 2 Algorithm for the computation of the new values and .
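The core step of Algorithm 2 replaces the concave increasing functions by their linearizations at the current point and solves the resulting one-dimensional equation. The sketch below shows this mechanism in a one-dimensional analogue with a made-up function ψ(x) = √(x + 2); this is Newton's method applied to ψ(x) − x and is illustrative only, not the paper's exact procedure.

```python
def fixed_point_by_linearization(psi, dpsi, x0=0.0, tol=1e-12, max_iter=100):
    """Find x = psi(x) for a concave increasing psi with slope < 1 by
    iteratively solving the linearized equation
        psi(x_k) + psi'(x_k) (x - x_k) = x,
    i.e. Newton's method on psi(x) - x. Illustrative 1-D analogue of the
    linearization step of Algorithm 2."""
    x = x0
    for _ in range(max_iter):
        slope = dpsi(x)
        # closed-form fixed point of the tangent line at x
        x_new = (psi(x) - slope * x) / (1.0 - slope)
        if abs(x_new - x) <= tol:
            return x_new
        x = x_new
    return x

# Example: psi(x) = sqrt(x + 2) has the positive fixed point x* = 2.
x_star = fixed_point_by_linearization(lambda x: (x + 2.0) ** 0.5,
                                      lambda x: 0.5 * (x + 2.0) ** -0.5)
```

Because ψ is concave with slope below one on the region of interest, the tangent fixed points converge rapidly (quadratically, as in Newton's method) to the true fixed point.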

We illustrate how the algorithm works through an example.

Example 1.

Let

and

Moreover, let . Then, we initially set .
In the first iteration we have

with , and

with . Then, is the solution of the equation , i.e., (see also Figure 1).

Figure 1: The first step of Algorithm 2. The red lines represent the linear function while the blue ones are the functions . The green line represents the solution of the one dimensional equation .

In the second iteration we have

with , and

with . Then, is the solution of the equation , i.e., (see also Figure 2).

Figure 2: The second step of Algorithm 2.

In the third iteration we have